[
{
"msg_contents": ">OTOH, when I execute ALTER SUBSCRIPTION ... SET (slot_name=''), it doesn't\ncomplain. However,\n>SELECT select pg_create_logical_replication_slot('', 'pgoutput') complains\nslot name is too\n>short. Although, the slot will be created at publisher, and validate the\nslot name, IMO, we\n>can also validate the slot_name in parse_subscription_options() to get the\nerror early.\n>Attached fixes it. Any thoughts?\nI think this fix is better placed after the check for whether the name equals\n\"none\", since most of the time it will be \"none\".\nWhile at it, reducing the strlen overhead in\nReplicationSlotValidateName might be worth it.\n\nFor convenience, I have attached a new version.\n\nregards,\nRanier Vilela",
"msg_date": "Wed, 7 Jul 2021 14:32:15 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "re: Why ALTER SUBSCRIPTION ... SET (slot_name='none') requires subscription disabled?"
},
{
"msg_contents": "\nOn Thu, 08 Jul 2021 at 01:32, Ranier Vilela <ranier.vf@gmail.com> wrote:\n>>OTOH, when I execute ALTER SUBSCRIPTION ... SET (slot_name=''), it doesn't\n> complain. However,\n>>SELECT select pg_create_logical_replication_slot('', 'pgoutput') complains\n> slot name is too\n>>short. Although, the slot will be created at publisher, and validate the\n> slot name, IMO, we\n>>can also validate the slot_name in parse_subscription_options() to get the\n> error early.\n>>Attached fixes it. Any thoughts?\n> I think this fix is better placed after the check for whether the name equals\n> \"none\", since most of the time it will be \"none\".\n> While at it, reducing the strlen overhead in\n> ReplicationSlotValidateName might be worth it.\n>\n> For convenience, I have attached a new version.\n>\n\nThanks for your review! LGTM.\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Thu, 08 Jul 2021 10:30:22 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Why ALTER SUBSCRIPTION ... SET (slot_name='none') requires subscription disabled?"
}
] |
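The early-validation idea in the thread above can be sketched in C. This is a hypothetical stand-in, not the actual PostgreSQL code: `validate_slot_name` mimics the ordering Ranier suggests — test the common "none" case first, before paying for strlen() on the general validation path that `ReplicationSlotValidateName` would perform.

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/*
 * Hypothetical sketch of the early validation discussed above; this is NOT
 * the actual PostgreSQL code.  parse_subscription_options() treats the
 * special value "none" as "no slot", so that common case is tested first.
 */
static bool
validate_slot_name(const char *name)
{
	/* "none" clears the slot name; nothing more to validate here. */
	if (strcmp(name, "none") == 0)
		return true;

	/* An empty name would otherwise fail with "slot name is too short". */
	if (strlen(name) == 0)
		return false;

	/* The real ReplicationSlotValidateName also checks length and charset. */
	return true;
}
```

Checking "none" first means the strlen() call is only reached for names that actually need validating.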
[
{
"msg_contents": "Hello,\n\n(I hope it's okay to ask general internals questions here; if this list is\nstrictly for development, I'll keep my questions on -general but since I'm\nasking about internal behavior, this seemed more appropriate.)\n\nI was playing around with inspecting TOAST tables in order to understand\nthe mechanism better, and I ran across a strange issue: I've created a\ntable that has a text column, inserted and then deleted some data, and the\nTOAST table still has some entries even though the owning table is now\nempty:\n\nmaciek=# SELECT reltoastrelid::regclass FROM pg_class WHERE relname =\n'users';\n reltoastrelid\n---------------------------\n pg_toast.pg_toast_4398034\n(1 row)\n\nmaciek=# select chunk_id, chunk_seq from pg_toast.pg_toast_4398034;\n chunk_id | chunk_seq\n----------+-----------\n 4721665 | 0\n 4721665 | 1\n(2 rows)\n\nmaciek=# select * from users;\n id | created_at | is_admin | username\n----+------------+----------+----------\n(0 rows)\n\nI've tried to reproduce this with a new table in the same database, but\ncould not see the same behavior (the TOAST table entries are deleted when I\ndelete rows from the main table). This is 11.12. Is this expected?\n\nIn case it's relevant, this table originally had the first three columns. I\ninserted one row, then added the text column, set its STORAGE to EXTERNAL,\nand set toast_tuple_target to the minimum of 128. I inserted a few more\nrows until I found one large enough to go in the TOAST table (It looks like\nPostgres attempts to compress values and store them inline first even when\nSTORAGE is EXTERNAL? I don't recall the exact size, but I had to use a\nvalue much larger than 128 before it hit the TOAST table. The TOAST docs\nallude to this behavior but just making sure I understand correctly.), then\nI deleted the rows with non-NULL values in the text column, and noticed the\nTOAST table entries were still there. 
So I deleted everything in the users\ntable and still see the two rows above in the TOAST table. I've tried this\nsequence of steps again with a new table and could not reproduce the issue.\n\nThanks,\nMaciek",
"msg_date": "Wed, 7 Jul 2021 11:52:55 -0700",
"msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>",
"msg_from_op": true,
"msg_subject": "TOAST questions"
}
] |
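The two chunk_seq entries observed in the thread above follow from how TOAST splits an out-of-line value into fixed-size chunks stored as rows keyed by (chunk_id, chunk_seq). A minimal sketch of the chunk-count arithmetic, with an assumed chunk payload size (the real chunk size is derived from the block size, not this constant):

```c
#include <assert.h>
#include <stddef.h>

/*
 * Illustrative sketch, not PostgreSQL source: an out-of-line TOAST value is
 * split into fixed-size chunks stored as rows keyed by (chunk_id, chunk_seq).
 * The payload size below is an assumption for the sketch only.
 */
#define TOAST_CHUNK_SIZE 1996

/* Number of chunk_seq rows needed for a value of the given length. */
static int
toast_chunk_count(size_t value_len)
{
	/* ceiling division */
	return (int) ((value_len + TOAST_CHUNK_SIZE - 1) / TOAST_CHUNK_SIZE);
}
```

A value that fits in one chunk produces a single row; a slightly larger value produces the chunk_seq 0 and 1 pair seen in pg_toast_4398034 above.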
[
{
"msg_contents": "Hi hackers,\n\nWe've had node-casting versions of the list extraction macros since\n2017, but several cases of the written-out version have been added since\nthen (even by Tom, who added the l*_node() macros).\n\nHere's a patch to convert the remaining ones. The macros were\nback-patched to all supported branches, so this won't create any\nnew back-patching hazards.\n\n- ilmari",
"msg_date": "Wed, 07 Jul 2021 20:12:20 +0100",
"msg_from": "ilmari@ilmari.org (Dagfinn Ilmari Mannsåker)",
"msg_from_op": true,
"msg_subject": "Replace remaining castNode(…, lfirst(…)) and friends calls with l*_node()"
},
{
"msg_contents": "> On 7 Jul 2021, at 21:12, Dagfinn Ilmari Mannsåker <ilmari@ilmari.org> wrote:\n\n> Here's a patch to convert the remaining ones.\n\nI haven't tested it yet, but +1 on the idea of cleaning these up making the\ncodebase consistent.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Wed, 7 Jul 2021 23:32:59 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Replace remaining castNode(…, lfirst(…)) and friends calls with l*_node()"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n\n>> On 7 Jul 2021, at 21:12, Dagfinn Ilmari Mannsåker <ilmari@ilmari.org> wrote:\n>\n>> Here's a patch to convert the remaining ones.\n>\n> I haven't tested it yet, but +1 on the idea of cleaning these up making the\n> codebase consistent.\n\nFWIW, it passes `make check-world` on an assert- and TAP-enabled Linux build.\n\n- ilmari\n\n\n",
"msg_date": "Thu, 08 Jul 2021 17:23:51 +0100",
"msg_from": "ilmari@ilmari.org (Dagfinn Ilmari Mannsåker)",
"msg_from_op": true,
"msg_subject": "Re: Replace remaining castNode(…, lfirst(…)) and friends calls with l*_node()"
},
{
"msg_contents": "+1 on the idea. On a quick glance, all changes looks good.\n\nOn 2021-Jul-07, Dagfinn Ilmari Mannsåker wrote:\n\n> diff --git a/src/backend/rewrite/rewriteSearchCycle.c b/src/backend/rewrite/rewriteSearchCycle.c\n> index 599fe8e735..c50ebdba24 100644\n> --- a/src/backend/rewrite/rewriteSearchCycle.c\n> +++ b/src/backend/rewrite/rewriteSearchCycle.c\n> @@ -307,8 +307,8 @@ rewriteSearchAndCycle(CommonTableExpr *cte)\n> \t\t\t\t\t list_nth_oid(cte->ctecolcollations, i),\n> \t\t\t\t\t 0);\n> \t\ttle = makeTargetEntry((Expr *) var, i + 1, strVal(list_nth(cte->ctecolnames, i)), false);\n> -\t\ttle->resorigtbl = castNode(TargetEntry, list_nth(rte1->subquery->targetList, i))->resorigtbl;\n> -\t\ttle->resorigcol = castNode(TargetEntry, list_nth(rte1->subquery->targetList, i))->resorigcol;\n> +\t\ttle->resorigtbl = list_nth_node(TargetEntry, rte1->subquery->targetList, i)->resorigtbl;\n> +\t\ttle->resorigcol = list_nth_node(TargetEntry, rte1->subquery->targetList, i)->resorigcol;\n\nThis seems a bit surprising to me. I mean, clearly we trust our List\nimplementation to be so good that we can just fetch the same node twice,\none for each member of the same struct, and the compiler will optimize\neverything so that it's a single access to the n'th list entry. Is this\ntrue? I would have expected there to be a single fetch of the struct,\nfollowed by an access of each of the two struct members.\n\n> @@ -11990,7 +11990,7 @@ get_range_partbound_string(List *bound_datums)\n> \tforeach(cell, bound_datums)\n> \t{\n> \t\tPartitionRangeDatum *datum =\n> -\t\tcastNode(PartitionRangeDatum, lfirst(cell));\n> +\t\tlfirst_node(PartitionRangeDatum, cell);\n\nThis is pretty personal and subjective, but stylistically I dislike\ninitializations that indent to the same level as the variable\ndeclarations they follow; they still require a second line of code and\ndon't look good when in between other declarations. 
The style is at\nodds with what pgindent does when the assignment is not an\ninitialization. We seldom use this style. In this particular case it\nis possible to split more cleanly while ending up with exactly the same\nline count by removing the initialization and making it a straight\nassignment, \n\n \tforeach(cell, bound_datums)\n \t{\n \t\tPartitionRangeDatum *datum;\n\n \t\tdatum = lfirst_node(PartitionRangeDatum, cell);\n \t\tappendStringInfoString(buf, sep);\n \t\tif (datum->kind == PARTITION_RANGE_DATUM_MINVALUE)\n\nThis doesn't bother me enough to change on its own, but if we're\nmodifying the same line here, we may as well clean this up.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"No deja de ser humillante para una persona de ingenio saber\nque no hay tonto que no le pueda enseñar algo.\" (Jean B. Say)\n\n\n",
"msg_date": "Thu, 8 Jul 2021 14:17:41 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Replace remaining castNode(…, lfirst(…)) and friends calls with l*_node()"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2021-Jul-07, Dagfinn Ilmari Mannsåker wrote:\n>> PartitionRangeDatum *datum =\n>> -\t\tcastNode(PartitionRangeDatum, lfirst(cell));\n>> +\t\tlfirst_node(PartitionRangeDatum, cell);\n\n> This is pretty personal and subjective, but stylistically I dislike\n> initializations that indent to the same level as the variable\n> declarations they follow; they still require a second line of code and\n> don't look good when in between other declarations.\n\nYeah, this seems like a pgindent bug to me, but I've not mustered the\nenergy to try to fix it. As you say, it can be worked around by not\ntrying to lay out the code that way.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 08 Jul 2021 14:26:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Replace remaining castNode(…, lfirst(…)) and friends calls with l*_node()"
},
{
"msg_contents": "\nOn 08.07.21 20:17, Alvaro Herrera wrote:\n>> diff --git a/src/backend/rewrite/rewriteSearchCycle.c b/src/backend/rewrite/rewriteSearchCycle.c\n>> index 599fe8e735..c50ebdba24 100644\n>> --- a/src/backend/rewrite/rewriteSearchCycle.c\n>> +++ b/src/backend/rewrite/rewriteSearchCycle.c\n>> @@ -307,8 +307,8 @@ rewriteSearchAndCycle(CommonTableExpr *cte)\n>> \t\t\t\t\t list_nth_oid(cte->ctecolcollations, i),\n>> \t\t\t\t\t 0);\n>> \t\ttle = makeTargetEntry((Expr *) var, i + 1, strVal(list_nth(cte->ctecolnames, i)), false);\n>> -\t\ttle->resorigtbl = castNode(TargetEntry, list_nth(rte1->subquery->targetList, i))->resorigtbl;\n>> -\t\ttle->resorigcol = castNode(TargetEntry, list_nth(rte1->subquery->targetList, i))->resorigcol;\n>> +\t\ttle->resorigtbl = list_nth_node(TargetEntry, rte1->subquery->targetList, i)->resorigtbl;\n>> +\t\ttle->resorigcol = list_nth_node(TargetEntry, rte1->subquery->targetList, i)->resorigcol;\n> This seems a bit surprising to me. I mean, clearly we trust our List\n> implementation to be so good that we can just fetch the same node twice,\n> one for each member of the same struct, and the compiler will optimize\n> everything so that it's a single access to the n'th list entry. Is this\n> true? I would have expected there to be a single fetch of the struct,\n> followed by an access of each of the two struct members.\n\nLists are arrays now internally, so accessing an element by number is \npretty cheap.\n\n\n",
"msg_date": "Tue, 13 Jul 2021 15:55:48 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Replace remaining castNode(…, lfirst(…)) and friends calls with l*_node()"
},
{
"msg_contents": "ilmari@ilmari.org (Dagfinn Ilmari Mannsåker) writes:\n\n> Hi hackers,\n>\n> We've had node-casting versions of the list extraction macros since\n> 2017, but several cases of the written-out version have been added since\n> then (even by Tom, who added the l*_node() macros).\n>\n> Here's a patch to convert the remaining ones. The macros were\n> back-patched to all supported branches, so this won't create any\n> new back-patching hazards.\n\nAdded to the 2021-09 commit fest: https://commitfest.postgresql.org/34/3253/\n\n- ilmari\n\n\n",
"msg_date": "Thu, 15 Jul 2021 12:12:48 +0100",
"msg_from": "ilmari@ilmari.org (Dagfinn Ilmari Mannsåker)",
"msg_from_op": true,
"msg_subject": "Re: Replace remaining castNode(…, lfirst(…)) and friends calls with l*_node()"
},
{
"msg_contents": "On 15.07.21 13:12, Dagfinn Ilmari Mannsåker wrote:\n> ilmari@ilmari.org (Dagfinn Ilmari Mannsåker) writes:\n> \n>> Hi hackers,\n>>\n>> We've had node-casting versions of the list extraction macros since\n>> 2017, but several cases of the written-out version have been added since\n>> then (even by Tom, who added the l*_node() macros).\n>>\n>> Here's a patch to convert the remaining ones. The macros were\n>> back-patched to all supported branches, so this won't create any\n>> new back-patching hazards.\n> \n> Added to the 2021-09 commit fest: https://commitfest.postgresql.org/34/3253/\n\ncommitted\n\n\n",
"msg_date": "Mon, 19 Jul 2021 08:29:42 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Replace remaining castNode(…, lfirst(…)) and friends calls with l*_node()"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n\n> On 15.07.21 13:12, Dagfinn Ilmari Mannsåker wrote:\n>> ilmari@ilmari.org (Dagfinn Ilmari Mannsåker) writes:\n>>\n>>> Hi hackers,\n>>>\n>>> We've had node-casting versions of the list extraction macros since\n>>> 2017, but several cases of the written-out version have been added since\n>>> then (even by Tom, who added the l*_node() macros).\n>>>\n>>> Here's a patch to convert the remaining ones. The macros were\n>>> back-patched to all supported branches, so this won't create any\n>>> new back-patching hazards.\n>>\n>> Added to the 2021-09 commit fest: https://commitfest.postgresql.org/34/3253/\n>\n> committed\n\nThanks!\n\n- ilmari\n\n\n",
"msg_date": "Mon, 19 Jul 2021 10:33:03 +0100",
"msg_from": "ilmari@ilmari.org (Dagfinn Ilmari Mannsåker)",
"msg_from_op": true,
"msg_subject": "Re: Replace remaining castNode(…, lfirst(…)) and friends calls with l*_node()"
}
] |
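The conversion discussed in the thread above is purely mechanical, because the l*_node() macros are defined in terms of castNode(). A simplified sketch of that relationship (the real PostgreSQL definitions also verify the node tag — castNode asserts IsA(nodeptr, type) — and read a union member inside ListCell):

```c
#include <assert.h>

/*
 * Simplified sketch of the macro relationship behind this patch; NOT the
 * actual PostgreSQL headers.  ListCell and TargetEntry are cut down to the
 * minimum needed to show the equivalence.
 */
typedef struct ListCell
{
	void	   *ptr_value;
} ListCell;

typedef struct TargetEntry
{
	int			resorigtbl;
	int			resorigcol;
} TargetEntry;

#define lfirst(lc) ((lc)->ptr_value)
#define castNode(type, nodeptr) ((type *) (nodeptr))
/* lfirst_node(type, lc) is literally castNode(type, lfirst(lc)) */
#define lfirst_node(type, lc) castNode(type, lfirst(lc))
```

Since both spellings expand to the same expression, the patch changes no behavior, which is also why lists-as-arrays makes the repeated list_nth_node() fetches in rewriteSearchCycle.c cheap.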
[
{
"msg_contents": "Spotted a few random typos in comments while reading code, will apply these\ntomorrow or so unless there are objections.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/",
"msg_date": "Thu, 8 Jul 2021 00:12:19 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "A few assorted typos in comments"
},
{
"msg_contents": "On Thu, Jul 8, 2021 at 10:12 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n> Spotted a few random typos in comments while reading code, will apply these\n> tomorrow or so unless there are objections.\n\nLGTM.\n\n\n",
"msg_date": "Thu, 8 Jul 2021 10:19:00 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: A few assorted typos in comments"
},
{
"msg_contents": "On Thu, Jul 08, 2021 at 10:19:00AM +1200, Thomas Munro wrote:\n> LGTM.\n\n+1.\n--\nMichael",
"msg_date": "Thu, 8 Jul 2021 16:37:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: A few assorted typos in comments"
},
{
"msg_contents": "> On 8 Jul 2021, at 09:37, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Thu, Jul 08, 2021 at 10:19:00AM +1200, Thomas Munro wrote:\n>> LGTM.\n> \n> +1.\n\nPushed (387925893e), thanks.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Thu, 8 Jul 2021 13:17:58 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: A few assorted typos in comments"
}
] |
[
{
"msg_contents": "This is a re-posting of my original mail from here [1]\nCreated this new discussion thread for it as suggested here [2]\n\n[1] https://www.postgresql.org/message-id/CAHut%2BPtT0--Tf%3DK_YOmoyB3RtakUOYKeCs76aaOqO2Y%2BJt38kQ%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CAA4eK1L0GT5-RJrya4q9ve%3DVi8hQM_SeHhJekmWMnOqsCh3KbQ%40mail.gmail.com\n\n===========================================\n\nOn Tue, Jul 6, 2021 at 6:21 PM Amit Kapila\n<amit(dot)kapila16(at)gmail(dot)com> wrote:\n>\n> On Fri, Jul 2, 2021 at 12:36 PM Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:\n> >\n> > On Fri, Jul 2, 2021 at 8:35 AM Alvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org> wrote:\n> > >\n> > > > The latest patch sent by Bharath looks good to me. Would you like to\n> > > > commit it or shall I take care of it?\n> > >\n> > > Please, go ahead.\n> > >\n> >\n> > Okay, I'll push it early next week (by Tuesday) unless there are more\n> > comments or suggestions. Thanks!\n> >\n>\n> Pushed!\n\nYesterday, I needed to refactor a lot of code due to this push [1].\n\nThe refactoring exercise caused me to study these v11 changes much more deeply.\n\nIMO there are a few improvements that should be made:\n\n//////\n\n1. Zap 'opts' up-front\n\n+ *\n+ * Caller is expected to have cleared 'opts'.\n\nThis comment is putting the onus on the caller to \"do the right thing\".\n\nI think that hopeful expectations about input should be removed - the\nfunction should just be responsible itself just to zap the SubOpts\nup-front. It makes the code more readable, and it removes any\npotential risk of garbage unintentionally passed in that struct.\n\n/* Start out with cleared opts. */\nmemset(opts, 0, sizeof(SubOpts));\n\nAlternatively, at least there should be an assertion for some sanity check.\n\nAssert(opt->specified_opts == 0);\n\n----\n\n2. Remove redundant conditions\n\n/* Check for incompatible options from the user. 
*/\n- if (enabled && *enabled_given && *enabled)\n+ if (opts->enabled &&\n+ IsSet(supported_opts, SUBOPT_ENABLED) &&\n+ IsSet(opts->specified_opts, SUBOPT_ENABLED))\nereport(ERROR,\n(errcode(ERRCODE_SYNTAX_ERROR),\n/*- translator: both %s are strings of the form \"option = value\" */\nerrmsg(\"%s and %s are mutually exclusive options\",\n\"connect = false\", \"enabled = true\")));\n\n- if (create_slot && create_slot_given && *create_slot)\n+ if (opts->create_slot &&\n+ IsSet(supported_opts, SUBOPT_CREATE_SLOT) &&\n+ IsSet(opts->specified_opts, SUBOPT_CREATE_SLOT))\nereport(ERROR,\n(errcode(ERRCODE_SYNTAX_ERROR),\nerrmsg(\"%s and %s are mutually exclusive options\",\n\"connect = false\", \"create_slot = true\")));\n\n- if (copy_data && copy_data_given && *copy_data)\n+ if (opts->copy_data &&\n+ IsSet(supported_opts, SUBOPT_COPY_DATA) &&\n+ IsSet(opts->specified_opts, SUBOPT_COPY_DATA))\nereport(ERROR,\n(errcode(ERRCODE_SYNTAX_ERROR),\nerrmsg(\"%s and %s are mutually exclusive options\",\n\"connect = false\", \"copy_data = true\")));\n\nBy definition, this function only allows any option to be\n\"specified_opts\" if that option is also \"supported_opts\". So, there is\nreally no need in the above code to re-check again that it is\nsupported.\n\nIt can be simplified like this:\n\n/* Check for incompatible options from the user. 
*/\n- if (enabled && *enabled_given && *enabled)\n+ if (opts->enabled &&\n+ IsSet(opts->specified_opts, SUBOPT_ENABLED))\nereport(ERROR,\n(errcode(ERRCODE_SYNTAX_ERROR),\n/*- translator: both %s are strings of the form \"option = value\" */\nerrmsg(\"%s and %s are mutually exclusive options\",\n\"connect = false\", \"enabled = true\")));\n\n- if (create_slot && create_slot_given && *create_slot)\n+ if (opts->create_slot &&\n+ IsSet(opts->specified_opts, SUBOPT_CREATE_SLOT))\nereport(ERROR,\n(errcode(ERRCODE_SYNTAX_ERROR),\nerrmsg(\"%s and %s are mutually exclusive options\",\n\"connect = false\", \"create_slot = true\")));\n\n- if (copy_data && copy_data_given && *copy_data)\n+ if (opts->copy_data &&\n+ IsSet(opts->specified_opts, SUBOPT_COPY_DATA))\nereport(ERROR,\n(errcode(ERRCODE_SYNTAX_ERROR),\nerrmsg(\"%s and %s are mutually exclusive options\",\n\"connect = false\", \"copy_data = true\")));\n\n-----\n\n3. Remove redundant conditions\n\nSame as 2. Here are more examples of conditions where the redundant\nchecking of \"supported_opts\" can be removed.\n\n/*\n* Do additional checking for disallowed combination when slot_name = NONE\n* was used.\n*/\n- if (slot_name && *slot_name_given && !*slot_name)\n+ if (!opts->slot_name &&\n+ IsSet(supported_opts, SUBOPT_SLOT_NAME) &&\n+ IsSet(opts->specified_opts, SUBOPT_SLOT_NAME))\n{\n- if (enabled && *enabled_given && *enabled)\n+ if (opts->enabled &&\n+ IsSet(supported_opts, SUBOPT_ENABLED) &&\n+ IsSet(opts->specified_opts, SUBOPT_ENABLED))\nereport(ERROR,\n(errcode(ERRCODE_SYNTAX_ERROR),\n/*- translator: both %s are strings of the form \"option = value\" */\nerrmsg(\"%s and %s are mutually exclusive options\",\n\"slot_name = NONE\", \"enabled = true\")));\n\n- if (create_slot && create_slot_given && *create_slot)\n+ if (opts->create_slot &&\n+ IsSet(supported_opts, SUBOPT_CREATE_SLOT) &&\n+ IsSet(opts->specified_opts, SUBOPT_CREATE_SLOT))\nereport(ERROR,\n(errcode(ERRCODE_SYNTAX_ERROR),\n+ /*- translator: both %s are 
strings of the form \"option = value\" */\nerrmsg(\"%s and %s are mutually exclusive options\",\n\"slot_name = NONE\", \"create_slot = true\")));\n\nIt can be simplified like this:\n\n/*\n* Do additional checking for disallowed combination when slot_name = NONE\n* was used.\n*/\n- if (slot_name && *slot_name_given && !*slot_name)\n+ if (!opts->slot_name &&\n+ IsSet(opts->specified_opts, SUBOPT_SLOT_NAME))\n{\n- if (enabled && *enabled_given && *enabled)\n+ if (opts->enabled &&\n+ IsSet(opts->specified_opts, SUBOPT_ENABLED))\nereport(ERROR,\n(errcode(ERRCODE_SYNTAX_ERROR),\n/*- translator: both %s are strings of the form \"option = value\" */\nerrmsg(\"%s and %s are mutually exclusive options\",\n\"slot_name = NONE\", \"enabled = true\")));\n\n- if (create_slot && create_slot_given && *create_slot)\n+ if (opts->create_slot &&\n+ IsSet(opts->specified_opts, SUBOPT_CREATE_SLOT))\nereport(ERROR,\n(errcode(ERRCODE_SYNTAX_ERROR),\n+ /*- translator: both %s are strings of the form \"option = value\" */\nerrmsg(\"%s and %s are mutually exclusive options\",\n\"slot_name = NONE\", \"create_slot = true\")));\n\n------\n\n4. 
Remove redundant conditions\n\n- if (enabled && !*enabled_given && *enabled)\n+ if (opts->enabled &&\n+ IsSet(supported_opts, SUBOPT_ENABLED) &&\n+ !IsSet(opts->specified_opts, SUBOPT_ENABLED))\nereport(ERROR,\n(errcode(ERRCODE_SYNTAX_ERROR),\n/*- translator: both %s are strings of the form \"option = value\" */\nerrmsg(\"subscription with %s must also set %s\",\n\"slot_name = NONE\", \"enabled = false\")));\n\n- if (create_slot && !create_slot_given && *create_slot)\n+ if (opts->create_slot &&\n+ IsSet(supported_opts, SUBOPT_CREATE_SLOT) &&\n+ !IsSet(opts->specified_opts, SUBOPT_CREATE_SLOT))\nereport(ERROR,\n(errcode(ERRCODE_SYNTAX_ERROR),\n+ /*- translator: both %s are strings of the form \"option = value\" */\nerrmsg(\"subscription with %s must also set %s\",\n\"slot_name = NONE\", \"create_slot = false\")));\n\nThis code can be simplified even more than the others mentioned,\nbecause here the \"specified_opts\" checks were already done in the code\nthat precedes this.\n\nIt can be simplified like this:\n\n- if (enabled && !*enabled_given && *enabled)\n+ if (opts->enabled)\nereport(ERROR,\n(errcode(ERRCODE_SYNTAX_ERROR),\n/*- translator: both %s are strings of the form \"option = value\" */\nerrmsg(\"subscription with %s must also set %s\",\n\"slot_name = NONE\", \"enabled = false\")));\n\n- if (create_slot && !create_slot_given && *create_slot)\n+ if (opts->create_slot)\nereport(ERROR,\n(errcode(ERRCODE_SYNTAX_ERROR),\n+ /*- translator: both %s are strings of the form \"option = value\" */\nerrmsg(\"subscription with %s must also set %s\",\n\"slot_name = NONE\", \"create_slot = false\")));\n\n//////\n\nPSA my patch which includes all the fixes mentioned above.\n\n(Make check, and TAP subscription tests are tested and pass OK)\n\n------\n[1] https://github.com/postgres/postgres/commit/8aafb02616753f5c6c90bbc567636b73c0cbb9d4\n\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Thu, 8 Jul 2021 10:47:28 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "parse_subscription_options - suggested improvements"
},
{
"msg_contents": "v11 -> v12\n\n* A rebase was needed due to some recent pgindent changes on HEAD.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Sun, 8 Aug 2021 16:54:12 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: parse_subscription_options - suggested improvements"
},
{
"msg_contents": "On 8/8/21, 11:54 PM, \"Peter Smith\" <smithpb2250@gmail.com> wrote:\n> v11 -> v12\n>\n> * A rebase was needed due to some recent pgindent changes on HEAD.\n\nThe patch looks correct to me. I have a couple of small comments.\n\n+\t/* Start out with cleared opts. */\n+\tmemset(opts, 0, sizeof(SubOpts));\n\nShould we stop initializing opts in the callers?\n\n-\t\tif (opts->enabled &&\n-\t\t\tIsSet(supported_opts, SUBOPT_ENABLED) &&\n-\t\t\t!IsSet(opts->specified_opts, SUBOPT_ENABLED))\n+\t\tif (opts->enabled)\n \t\t\tereport(ERROR,\n \t\t\t\t\t(errcode(ERRCODE_SYNTAX_ERROR),\n \t\t\t/*- translator: both %s are strings of the form \"option = value\" */\n \t\t\t\t\t errmsg(\"subscription with %s must also set %s\",\n \t\t\t\t\t\t\t\"slot_name = NONE\", \"enabled = false\")));\n\nIMO the way these 'if' statements are written isn't super readable.\nRight now, it's written like this:\n\n if (opt && IsSet(someopt))\n ereport(ERROR, ...);\n\n if (otheropt && IsSet(someotheropt))\n ereport(ERROR, ...);\n\n if (opt)\n ereport(ERROR, ...);\n\n if (otheropt)\n ereport(ERROR, ...);\n\nI think it would be easier to understand if it was written more like\nthis:\n\n if (opt)\n {\n if (IsSet(someopt))\n ereport(ERROR, ...);\n else\n ereport(ERROR, ...);\n }\n\n if (otheropt)\n {\n if (IsSet(someotheropt))\n ereport(ERROR, ...);\n else\n ereport(ERROR, ...);\n }\n\nOf course, this does result in a small behavior change because the\norder of the checks is different, but I'm not sure that's a big deal.\nUltimately, it would probably be nice to report all the errors instead\nof just the first one that is hit, but again, I don't know if that's\nworth the effort.\n\nI attached a new version of the patch with my suggestions. However, I\nthink v12 is committable.\n\nNathan",
"msg_date": "Fri, 3 Dec 2021 00:02:34 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: parse_subscription_options - suggested improvements"
},
{
"msg_contents": "On Fri, Dec 3, 2021 at 11:02 AM Bossart, Nathan <bossartn@amazon.com> wrote:\n>\n> On 8/8/21, 11:54 PM, \"Peter Smith\" <smithpb2250@gmail.com> wrote:\n> > v11 -> v12\n> >\n> > * A rebase was needed due to some recent pgindent changes on HEAD.\n>\n> The patch looks correct to me. I have a couple of small comments.\n\nThank you for taking an interest in my patch and moving it to a\n\"Ready\" state in the CF.\n\n>\n> + /* Start out with cleared opts. */\n> + memset(opts, 0, sizeof(SubOpts));\n>\n> Should we stop initializing opts in the callers?\n\nFor the initialization of opts I put memset within the function to\nmake it explicit that the bit-masks will work as intended without\nhaving to look back at calling code for the initial values. In any\ncase, I think the caller declarations of SubOpts are trivial, (e.g.\nSubOpts opts = {0};) so I felt caller initializations don't need to be\nchanged regardless of the memset.\n\n>\n> - if (opts->enabled &&\n> - IsSet(supported_opts, SUBOPT_ENABLED) &&\n> - !IsSet(opts->specified_opts, SUBOPT_ENABLED))\n> + if (opts->enabled)\n> ereport(ERROR,\n> (errcode(ERRCODE_SYNTAX_ERROR),\n> /*- translator: both %s are strings of the form \"option = value\" */\n> errmsg(\"subscription with %s must also set %s\",\n> \"slot_name = NONE\", \"enabled = false\")));\n>\n> IMO the way these 'if' statements are written isn't super readable.\n> Right now, it's written like this:\n>\n> if (opt && IsSet(someopt))\n> ereport(ERROR, ...);\n>\n> if (otheropt && IsSet(someotheropt))\n> ereport(ERROR, ...);\n>\n> if (opt)\n> ereport(ERROR, ...);\n>\n> if (otheropt)\n> ereport(ERROR, ...);\n>\n> I think it would be easier to understand if it was written more like\n> this:\n>\n> if (opt)\n> {\n> if (IsSet(someopt))\n> ereport(ERROR, ...);\n> else\n> ereport(ERROR, ...);\n> }\n>\n> if (otheropt)\n> {\n> if (IsSet(someotheropt))\n> ereport(ERROR, ...);\n> else\n> ereport(ERROR, ...);\n> }\n>\n> Of course, this does result in a small 
behaviour change because the\n> order of the checks is different, but I'm not sure that's a big deal.\n> Ultimately, it would probably be nice to report all the errors instead\n> of just the first one that is hit, but again, I don't know if that's\n> worth the effort.\n>\n\nMy patch was meant only to remove all the redundant conditions of the\nHEAD code, so I did not rearrange any of the logic at all. Personally,\nI also think your v13 is better and easier to read, but those subtle\nbehaviour differences were something I'd deliberately avoided in v12.\nHowever, if the committer thinks it does not matter then your v13 is\nfine by me.\n\n> I attached a new version of the patch with my suggestions. However, I\n> think v12 is committable.\n>\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n",
"msg_date": "Mon, 6 Dec 2021 11:28:12 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: parse_subscription_options - suggested improvements"
},
{
"msg_contents": "On Mon, Dec 06, 2021 at 11:28:12AM +1100, Peter Smith wrote:\n> For the initialization of opts I put memset within the function to\n> make it explicit that the bit-masks will work as intended without\n> having to look back at calling code for the initial values. In any\n> case, I think the caller declarations of SubOpts are trivial, (e.g.\n> SubOpts opts = {0};) so I felt caller initializations don't need to be\n> changed regardless of the memset.\n\nIt seems to me that not initializing these may cause some compilation\nwarnings. memset(0) at the beginning of parse_subscription_options()\nis an improvement.\n\n> My patch was meant only to remove all the redundant conditions of the\n> HEAD code, so I did not rearrange any of the logic at all. Personally,\n> I also think your v13 is better and easier to read, but those subtle\n> behaviour differences were something I'd deliberately avoided in v12.\n> However, if the committer thinks it does not matter then your v13 is\n> fine by me.\n\nWell, there is always the argument that it could be confusing as a\ndifferent combination of options generates a slightly-different error,\nbut the user would get warned about each one of his/her mistakes at\nthe end, so the result is the same.\n\n- if (opts->enabled &&\n- IsSet(supported_opts, SUBOPT_ENABLED) &&\n- !IsSet(opts->specified_opts, SUBOPT_ENABLED))\n+ if (opts->enabled)\n\nI see. The last condition on the specified options in the last two\nchecks is removed thanks to the first two checks. As a matter of\nconsistency with those error strings, keeping each !IsSet() would be\ncleaner. But I agree that v13 is better than that, without removing\nthe two initializations.\n--\nMichael",
"msg_date": "Mon, 6 Dec 2021 14:20:34 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: parse_subscription_options - suggested improvements"
},
{
"msg_contents": "On 12/5/21, 9:21 PM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\r\n> On Mon, Dec 06, 2021 at 11:28:12AM +1100, Peter Smith wrote:\r\n>> For the initialization of opts I put memset within the function to\r\n>> make it explicit that the bit-masks will work as intended without\r\n>> having to look back at calling code for the initial values. In any\r\n>> case, I think the caller declarations of SubOpts are trivial, (e.g.\r\n>> SubOpts opts = {0};) so I felt caller initializations don't need to be\r\n>> changed regardless of the memset.\r\n>\r\n> It seems to me that not initializing these may cause some compilation\r\n> warnings. memset(0) at the beginning of parse_subscription_options()\r\n> is an improvement.\r\n\r\nI'll admit I was surprised that my compiler didn't complain about\r\nthis, but I wouldn't be surprised at all if others did. I agree that\r\nthere is no strong need to remove the initializations from the calling\r\nfunctions.\r\n\r\n>> My patch was meant only to remove all the redundant conditions of the\r\n>> HEAD code, so I did not rearrange any of the logic at all. Personally,\r\n>> I also think your v13 is better and easier to read, but those subtle\r\n>> behaviour differences were something I'd deliberately avoided in v12.\r\n>> However, if the committer thinks it does not matter then your v13 is\r\n>> fine by me.\r\n>\r\n> Well, there is always the argument that it could be confusing as a\r\n> different combination of options generates a slightly-different error,\r\n> but the user would get warned about each one of his/her mistakes at\r\n> the end, so the result is the same.\r\n>\r\n> - if (opts->enabled &&\r\n> - IsSet(supported_opts, SUBOPT_ENABLED) &&\r\n> - !IsSet(opts->specified_opts, SUBOPT_ENABLED))\r\n> + if (opts->enabled)\r\n>\r\n> I see. The last condition on the specified options in the last two\r\n> checks is removed thanks to the first two checks. 
As a matter of\r\n> consistency with those error strings, keeping each !IsSet() would be\r\n> cleaner. But I agree that v13 is better than that, without removing\r\n> the two initializations.\r\n\r\nAttached a v14 with the initializations added back.\r\n\r\nNathan",
"msg_date": "Mon, 6 Dec 2021 19:07:09 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: parse_subscription_options - suggested improvements"
},
{
"msg_contents": "On Tue, Dec 7, 2021 at 6:07 AM Bossart, Nathan <bossartn@amazon.com> wrote:\n>\n> On 12/5/21, 9:21 PM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\n> > On Mon, Dec 06, 2021 at 11:28:12AM +1100, Peter Smith wrote:\n> >> For the initialization of opts I put memset within the function to\n> >> make it explicit that the bit-masks will work as intended without\n> >> having to look back at calling code for the initial values. In any\n> >> case, I think the caller declarations of SubOpts are trivial, (e.g.\n> >> SubOpts opts = {0};) so I felt caller initializations don't need to be\n> >> changed regardless of the memset.\n> >\n> > It seems to me that not initializing these may cause some compilation\n> > warnings. memset(0) at the beginning of parse_subscription_options()\n> > is an improvement.\n>\n> I'll admit I was surprised that my compiler didn't complain about\n> this, but I wouldn't be surprised at all if others did. I agree that\n> there is no strong need to remove the initializations from the calling\n> functions.\n>\n> >> My patch was meant only to remove all the redundant conditions of the\n> >> HEAD code, so I did not rearrange any of the logic at all. Personally,\n> >> I also think your v13 is better and easier to read, but those subtle\n> >> behaviour differences were something I'd deliberately avoided in v12.\n> >> However, if the committer thinks it does not matter then your v13 is\n> >> fine by me.\n> >\n> > Well, there is always the argument that it could be confusing as a\n> > different combination of options generates a slightly-different error,\n> > but the user would get warned about each one of his/her mistakes at\n> > the end, so the result is the same.\n> >\n> > - if (opts->enabled &&\n> > - IsSet(supported_opts, SUBOPT_ENABLED) &&\n> > - !IsSet(opts->specified_opts, SUBOPT_ENABLED))\n> > + if (opts->enabled)\n> >\n> > I see. 
The last condition on the specified options in the last two\n> > checks is removed thanks to the first two checks. As a matter of\n> > consistency with those error strings, keeping each !IsSet() would be\n> > cleaner. But I agree that v13 is better than that, without removing\n> > the two initializations.\n>\n> Attached a v14 with the initializations added back.\n>\n\nLGTM.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 7 Dec 2021 08:12:59 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: parse_subscription_options - suggested improvements"
},
{
"msg_contents": "On Tue, Dec 07, 2021 at 08:12:59AM +1100, Peter Smith wrote:\n> On Tue, Dec 7, 2021 at 6:07 AM Bossart, Nathan <bossartn@amazon.com> wrote:\n>> Attached a v14 with the initializations added back.\n> \n> LGTM.\n\nAll the code paths previously covered still are, so applied this one.\nThanks!\n--\nMichael",
"msg_date": "Wed, 8 Dec 2021 12:51:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: parse_subscription_options - suggested improvements"
},
{
"msg_contents": "On Wed, Dec 8, 2021 at 2:51 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Dec 07, 2021 at 08:12:59AM +1100, Peter Smith wrote:\n> > On Tue, Dec 7, 2021 at 6:07 AM Bossart, Nathan <bossartn@amazon.com> wrote:\n> >> Attached a v14 with the initializations added back.\n> >\n> > LGTM.\n>\n> All the code paths previously covered still are, so applied this one.\n> Thanks!\n\nThanks for pushing!\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 8 Dec 2021 15:09:02 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: parse_subscription_options - suggested improvements"
}
] |
[
{
"msg_contents": "If the block size is 32k, the function page_header of the pageinspect \nmodule returns negative numbers:\n\npostgres=# select * from page_header(get_raw_page('t1',0));\n lsn | checksum | flags | lower | upper | special | pagesize | \nversion | prune_xid\n-----------+----------+-------+-------+-------+---------+----------+---------+-----------\n 0/174CF58 | 0 | 0 | 28 | 32736 | -32768 | -32768 | \n 4 | 0\n(1 row)\n\n\nThis patch changes the output parameters lower, upper, special and \npagesize to int32.\n\npostgres=# select * from page_header(get_raw_page('t1',0));\n lsn | checksum | flags | lower | upper | special | pagesize | \nversion | prune_xid\n-----------+----------+-------+-------+-------+---------+----------+---------+-----------\n 0/19EA640 | 0 | 0 | 28 | 32736 | 32768 | 32768 | \n 4 | 0\n(1 row)\n\n\n--\nQuan Zongliang",
"msg_date": "Thu, 8 Jul 2021 08:56:19 +0800",
"msg_from": "Quan Zongliang <quanzongliang@yeah.net>",
"msg_from_op": true,
"msg_subject": "bugfix: when the blocksize is 32k, the function page_header of\n pageinspect returns negative numbers."
},
{
"msg_contents": "On Thu, Jul 8, 2021 at 6:26 AM Quan Zongliang <quanzongliang@yeah.net> wrote:\n>\n> If the block size is 32k, the function page_header of the pageinspect\n> module returns negative numbers:\n>\n> postgres=# select * from page_header(get_raw_page('t1',0));\n> lsn | checksum | flags | lower | upper | special | pagesize |\n> version | prune_xid\n> -----------+----------+-------+-------+-------+---------+----------+---------+-----------\n> 0/174CF58 | 0 | 0 | 28 | 32736 | -32768 | -32768 |\n> 4 | 0\n> (1 row)\n>\n>\n> This patch changes the output parameters lower, upper, special and\n> pagesize to int32.\n>\n> postgres=# select * from page_header(get_raw_page('t1',0));\n> lsn | checksum | flags | lower | upper | special | pagesize |\n> version | prune_xid\n> -----------+----------+-------+-------+-------+---------+----------+---------+-----------\n> 0/19EA640 | 0 | 0 | 28 | 32736 | 32768 | 32768 |\n> 4 | 0\n> (1 row)\n\n+1. int32 makes sense because the maximum allowed block size is 32768\nand smallint with range -32768 to +32767 can't hold it. Internally,\nlower, upper, special are treated as unit16. I looked at the patch,\nhow about using \"int4\" instead of just \"int\", just for readability?\nAnd, do we need to change in pageinspect--1.1--1.2.sql and\npageinspect--1.0--1.1.sql along with pageinspect--1.5.sql?\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Thu, 8 Jul 2021 09:12:26 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: bugfix: when the blocksize is 32k, the function page_header of\n pageinspect returns negative numbers."
},
{
"msg_contents": "On Thu, Jul 08, 2021 at 09:12:26AM +0530, Bharath Rupireddy wrote:\n> +1. int32 makes sense because the maximum allowed block size is 32768\n> and smallint with range -32768 to +32767 can't hold it. Internally,\n> lower, upper, special are treated as unit16. I looked at the patch,\n> how about using \"int4\" instead of just \"int\", just for readability?\n> And, do we need to change in pageinspect--1.1--1.2.sql and\n> pageinspect--1.0--1.1.sql along with pageinspect--1.5.sql?\n\nChanges in the object set of an extension requires a new SQL script\nthat changes the objects to reflect the change. So, in this case,\nwhat you need to do is to create pageinspect--1.9--1.10.sql, assuming\nthat the new extension version is 1.10 and change page_header()\naccordingly.\n\nYou also need to be careful about compatibility with past versions of\nthis extension, as the code you are changing to use int8 could be used\nat runtime by older versions of pageinspect where int4 is used. I\nwould suggest to test that with some new extra tests in\noldextversions.sql.\n\nThe patch, and your suggestions, are incorrect on those aspects.\n--\nMichael",
"msg_date": "Thu, 8 Jul 2021 13:15:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: bugfix: when the blocksize is 32k, the function page_header of\n pageinspect returns negative numbers."
},
{
"msg_contents": "On Thu, Jul 08, 2021 at 01:15:12PM +0900, Michael Paquier wrote:\n> On Thu, Jul 08, 2021 at 09:12:26AM +0530, Bharath Rupireddy wrote:\n> > +1. int32 makes sense because the maximum allowed block size is 32768\n> > and smallint with range -32768 to +32767 can't hold it. Internally,\n> > lower, upper, special are treated as unit16. I looked at the patch,\n> > how about using \"int4\" instead of just \"int\", just for readability?\n> > And, do we need to change in pageinspect--1.1--1.2.sql and\n> > pageinspect--1.0--1.1.sql along with pageinspect--1.5.sql?\n> \n> Changes in the object set of an extension requires a new SQL script\n> that changes the objects to reflect the change. So, in this case,\n> what you need to do is to create pageinspect--1.9--1.10.sql, assuming\n> that the new extension version is 1.10 and change page_header()\n> accordingly.\n\nI think it's common (and preferred) for changes in extension versions to be\n\"folded\" together within a major release of postgres. Since v1.9 is new in\npg14 (commit 756ab2912), this change can be included in the same, new version.\n\n> You also need to be careful about compatibility with past versions of\n> this extension, as the code you are changing to use int8 could be used\n> at runtime by older versions of pageinspect where int4 is used. I\n> would suggest to test that with some new extra tests in\n> oldextversions.sql.\n\nI think you can refer to this prior commit for guidance.\n\ncommit f18aa1b203930ed28cfe42e82d3418ae6277576d\nAuthor: Peter Eisentraut <peter@eisentraut.org>\nDate: Tue Jan 19 10:28:05 2021 +0100\n\n pageinspect: Change block number arguments to bigint\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 7 Jul 2021 23:28:06 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: bugfix: when the blocksize is 32k, the function page_header of\n pageinspect returns negative numbers."
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Thu, Jul 08, 2021 at 01:15:12PM +0900, Michael Paquier wrote:\n>> Changes in the object set of an extension requires a new SQL script\n>> that changes the objects to reflect the change. So, in this case,\n>> what you need to do is to create pageinspect--1.9--1.10.sql, assuming\n>> that the new extension version is 1.10 and change page_header()\n>> accordingly.\n\n> I think it's common (and preferred) for changes in extension versions to be\n> \"folded\" together within a major release of postgres. Since v1.9 is new in\n> pg14 (commit 756ab2912), this change can be included in the same, new version.\n\nSince we're already past beta2, I'm not sure that's a good idea. We\ncan't really treat pageinspect 1.9 as something that the world has\nnever seen.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 08 Jul 2021 00:49:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: bugfix: when the blocksize is 32k,\n the function page_header of pageinspect returns negative numbers."
},
{
"msg_contents": "On Thu, Jul 08, 2021 at 12:49:25AM -0400, Tom Lane wrote:\n> Since we're already past beta2, I'm not sure that's a good idea. We\n> can't really treat pageinspect 1.9 as something that the world has\n> never seen.\n\nYeah, that's why I would object to new changes in 1.9 and\nREL_14_STABLE. So my take would be to just have 1.10, and do those\nchanges on HEAD.\n--\nMichael",
"msg_date": "Thu, 8 Jul 2021 14:01:05 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: bugfix: when the blocksize is 32k, the function page_header of\n pageinspect returns negative numbers."
},
{
"msg_contents": "> On 8 Jul 2021, at 07:01, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Thu, Jul 08, 2021 at 12:49:25AM -0400, Tom Lane wrote:\n>> Since we're already past beta2, I'm not sure that's a good idea. We\n>> can't really treat pageinspect 1.9 as something that the world has\n>> never seen.\n> \n> Yeah, that's why I would object to new changes in 1.9 and\n> REL_14_STABLE. So my take would be to just have 1.10, and do those\n> changes on HEAD.\n\n+1, this should go into a 1.10 version.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Thu, 8 Jul 2021 08:23:58 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: bugfix: when the blocksize is 32k, the function page_header of\n pageinspect returns negative numbers."
},
{
"msg_contents": "On Wed, Jul 07, 2021 at 11:28:06PM -0500, Justin Pryzby wrote:\n> I think you can refer to this prior commit for guidance.\n> \n> commit f18aa1b203930ed28cfe42e82d3418ae6277576d\n> Author: Peter Eisentraut <peter@eisentraut.org>\n> Date: Tue Jan 19 10:28:05 2021 +0100\n> \n> pageinspect: Change block number arguments to bigint\n\nYes, thanks. Peter's recent work is what I had in mind. I agree that\nwe could improve the situation, and the change is not complicated once\nyou know what needs to be done. It needs to be done as follows:\n- Create a new pageinspect--1.9--1.10.sql.\n- Provide a proper implementation for older extension version based on\nthe output parameter types, with a lookup at the TupleDesc for the\nfunction to adapt.\n- Add tests to sql/oldextversions.sql to stress the old function based\non smallint results.\n- Bump pageinspect.control.\n\nQuan, would you like to produce a patch? That's a good exercise to\nunderstand how the maintenance of extensions is done.\n--\nMichael",
"msg_date": "Thu, 8 Jul 2021 16:54:49 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: bugfix: when the blocksize is 32k, the function page_header of\n pageinspect returns negative numbers."
},
{
"msg_contents": "\n\nOn 2021/7/8 3:54 下午, Michael Paquier wrote:\n> On Wed, Jul 07, 2021 at 11:28:06PM -0500, Justin Pryzby wrote:\n>> I think you can refer to this prior commit for guidance.\n>>\n>> commit f18aa1b203930ed28cfe42e82d3418ae6277576d\n>> Author: Peter Eisentraut <peter@eisentraut.org>\n>> Date: Tue Jan 19 10:28:05 2021 +0100\n>>\n>> pageinspect: Change block number arguments to bigint\n> \n> Yes, thanks. Peter's recent work is what I had in mind. I agree that\n> we could improve the situation, and the change is not complicated once\n> you know what needs to be done. It needs to be done as follows:\n> - Create a new pageinspect--1.9--1.10.sql.\n> - Provide a proper implementation for older extension version based on\n> the output parameter types, with a lookup at the TupleDesc for the\n> function to adapt.\n> - Add tests to sql/oldextversions.sql to stress the old function based\n> on smallint results.\n> - Bump pageinspect.control.\n> \n> Quan, would you like to produce a patch? That's a good exercise to\n> understand how the maintenance of extensions is done.\n\nOk. I'll do it.\n\n> --\n> Michael\n> \n\n\n\n",
"msg_date": "Thu, 8 Jul 2021 18:02:03 +0800",
"msg_from": "Quan Zongliang <quanzongliang@yeah.net>",
"msg_from_op": true,
"msg_subject": "Re: bugfix: when the blocksize is 32k, the function page_header of\n pageinspect returns negative numbers."
},
{
"msg_contents": "On 2021/7/8 3:54 下午, Michael Paquier wrote:\n> On Wed, Jul 07, 2021 at 11:28:06PM -0500, Justin Pryzby wrote:\n>> I think you can refer to this prior commit for guidance.\n>>\n>> commit f18aa1b203930ed28cfe42e82d3418ae6277576d\n>> Author: Peter Eisentraut <peter@eisentraut.org>\n>> Date: Tue Jan 19 10:28:05 2021 +0100\n>>\n>> pageinspect: Change block number arguments to bigint\n> \n> Yes, thanks. Peter's recent work is what I had in mind. I agree that\n> we could improve the situation, and the change is not complicated once\n> you know what needs to be done. It needs to be done as follows:\n> - Create a new pageinspect--1.9--1.10.sql.\n> - Provide a proper implementation for older extension version based on\n> the output parameter types, with a lookup at the TupleDesc for the\n> function to adapt.\n> - Add tests to sql/oldextversions.sql to stress the old function based\n> on smallint results.\n> - Bump pageinspect.control.\n> \n> Quan, would you like to produce a patch? That's a good exercise to\n> understand how the maintenance of extensions is done.\n\nnew patch attached\n\n> --\n> Michael\n>",
"msg_date": "Fri, 9 Jul 2021 09:26:37 +0800",
"msg_from": "Quan Zongliang <quanzongliang@yeah.net>",
"msg_from_op": true,
"msg_subject": "Re: bugfix: when the blocksize is 32k, the function page_header of\n pageinspect returns negative numbers."
},
{
"msg_contents": "On Fri, Jul 09, 2021 at 09:26:37AM +0800, Quan Zongliang wrote:\n> new patch attached\n\nThat's mostly fine at quick glance. Here are some comments.\n\nPlease add pageinspect--1.9--1.10.sql within the patch. Using git,\nyou can do that with a simple \"git add\". With the current shape of\nthe patch, one has to manually copy pageinspect--1.9--1.10.sql into\ncontrib/pageinspect/ to be able to test it.\n\n+SELECT lower, upper, special, pagesize, version FROM\npage_header(get_raw_page('test1', 0));\n+ lower | upper | special | pagesize | version\n+-------+-------+---------+----------+---------\n+ 28 | 8152 | 8192 | 8192 | 4\n+(1 row)\nI would not test all the fields, just pagesize and version perhaps?\n\n+ if (TupleDescAttr(tupdesc, 3)->atttypid == INT2OID)\n+ values[3] = UInt16GetDatum(page->pd_lower);\n+ else\n+ values[3] = Int32GetDatum(page->pd_lower);\nLet's make the style more consistent with brinfuncs.c, by grouping all\nthe fields together in a switch/case, like that:\nswitch ((TupleDescAttr(tupdesc, 3)->atttypid)):\n{\n case INT2OID:\n /* fill in values with UInt16GetDatum() */\n\tbreak;\n case INT4OID:\n /* fill in values with Int32GetDatum() */\n\tbreak;\n default:\n elog(ERROR, \"blah\");\n}\n\n+ALTER EXTENSION pageinspect UPDATE TO '1.10';\n+\\df page_header\n+SELECT lower, upper, special, pagesize, version FROM page_header(get_raw_page('test1', 0));\nNo need for this test as page.sql already stresses page_header() for\nthe latest version. Perhaps we could just UPDATE TO '1.9' before\nrunning your query to show that it is the last version of pageinspect\nwhere the older page_header() appeared.\n\nThe documentation of pageinspect does not require an update, as far as\nI can see. So we are good there.\n--\nMichael",
"msg_date": "Fri, 9 Jul 2021 10:50:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: bugfix: when the blocksize is 32k, the function page_header of\n pageinspect returns negative numbers."
},
{
"msg_contents": "On 2021/7/9 9:50 上午, Michael Paquier wrote:\n> On Fri, Jul 09, 2021 at 09:26:37AM +0800, Quan Zongliang wrote:\n>> new patch attached\n> \n> That's mostly fine at quick glance. Here are some comments.\n> \n> Please add pageinspect--1.9--1.10.sql within the patch. Using git,\n> you can do that with a simple \"git add\". With the current shape of\n> the patch, one has to manually copy pageinspect--1.9--1.10.sql into\n> contrib/pageinspect/ to be able to test it.\n> \n> +SELECT lower, upper, special, pagesize, version FROM\n> page_header(get_raw_page('test1', 0));\n> + lower | upper | special | pagesize | version\n> +-------+-------+---------+----------+---------\n> + 28 | 8152 | 8192 | 8192 | 4\n> +(1 row)\n> I would not test all the fields, just pagesize and version perhaps?\n> \n> + if (TupleDescAttr(tupdesc, 3)->atttypid == INT2OID)\n> + values[3] = UInt16GetDatum(page->pd_lower);\n> + else\n> + values[3] = Int32GetDatum(page->pd_lower);\n> Let's make the style more consistent with brinfuncs.c, by grouping all\n> the fields together in a switch/case, like that:\n> switch ((TupleDescAttr(tupdesc, 3)->atttypid)):\n> {\n> case INT2OID:\n> /* fill in values with UInt16GetDatum() */\n> \tbreak;\n> case INT4OID:\n> /* fill in values with Int32GetDatum() */\n> \tbreak;\n> default:\n> elog(ERROR, \"blah\");\n> }\n> \n> +ALTER EXTENSION pageinspect UPDATE TO '1.10';\n> +\\df page_header\n> +SELECT lower, upper, special, pagesize, version FROM page_header(get_raw_page('test1', 0));\n> No need for this test as page.sql already stresses page_header() for\n> the latest version. Perhaps we could just UPDATE TO '1.9' before\n> running your query to show that it is the last version of pageinspect\n> where the older page_header() appeared.\n> \n> The documentation of pageinspect does not require an update, as far as\n> I can see. So we are good there.\n> --\n> Michael\n> \n\nThanks for the comments.\nDone",
"msg_date": "Fri, 9 Jul 2021 11:11:46 +0800",
"msg_from": "Quan Zongliang <quanzongliang@yeah.net>",
"msg_from_op": true,
"msg_subject": "Re: bugfix: when the blocksize is 32k, the function page_header of\n pageinspect returns negative numbers."
},
{
"msg_contents": "On Fri, Jul 09, 2021 at 11:11:46AM +0800, Quan Zongliang wrote:\n> Thanks for the comments.\n\nThanks. Having four switches is a bit repetitive so I would just use\none of these, and perhaps complete with some assertions to make sure\nthat atttypid matches to what is expected. That's a minor comment\nthough, this looks rather fine to me reading through.\n--\nMichael",
"msg_date": "Fri, 9 Jul 2021 14:16:54 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: bugfix: when the blocksize is 32k, the function page_header of\n pageinspect returns negative numbers."
},
{
"msg_contents": "On Fri, Jul 9, 2021 at 8:43 AM Quan Zongliang <quanzongliang@yeah.net> wrote:\n> Thanks for the comments.\n> Done\n\nThanks for the patch. Few comments:\n\n1) How about just adding a comment /* support for old extension\nversion */ before INT2OID handling?\n+ case INT2OID:\n+ values[3] = UInt16GetDatum(page->pd_lower);\n+ break;\n\n2) Do we ever reach the error statement elog(ERROR, \"incorrect output\ntypes\");? We have the function either defined with smallint or int, I\ndon't think so we ever reach it. Instead, an assertion would work as\nsuggested by Micheal.\n\n3) Isn't this test case unstable when regression tests are run with a\ndifferent BLCKSZ setting? Or is it okay that some of the other tests\nfor pageinspect already outputs page_size, hash_page_stats.\n+SELECT pagesize, version FROM page_header(get_raw_page('test1', 0));\n+ pagesize | version\n+----------+---------\n+ 8192 | 4\n\n4) Can we arrange pageinspect--1.8--1.9.sql into the first line itself?\n+DATA = pageinspect--1.9--1.10.sql \\\n+ pageinspect--1.8--1.9.sql \\\n\n5) How about using \"int4\" instead of just \"int\", just for readability?\n\n6) How about something like below instead of repetitive switch statements?\nstatic inline Datum\nget_page_header_attr(TupleDesc desc, int attno, int val)\n{\nOid atttypid;\nDatum datum;\n\natttypid = TupleDescAttr(desc, attno)->atttypid;\nAssert(atttypid == INT2OID || atttypid == INT4OID);\n\nif (atttypid == INT2OID)\ndatum = UInt16GetDatum(val);\nelse if (atttypid == INT4OID)\ndatum = Int32GetDatum(val);\n\nreturn datum;\n}\n\nvalues[3] = get_page_header_attr(tupdesc, 3, page->pd_lower);\nvalues[4] = get_page_header_attr(tupdesc, 4, page->pd_upper);\nvalues[5] = get_page_header_attr(tupdesc, 5, page->pd_special);\nvalues[6] = get_page_header_attr(tupdesc, 6, PageGetPageSize(page));\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Fri, 9 Jul 2021 17:26:37 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: bugfix: when the blocksize is 32k, the function page_header of\n pageinspect returns negative numbers."
},
{
"msg_contents": "On Fri, Jul 09, 2021 at 05:26:37PM +0530, Bharath Rupireddy wrote:\n> 1) How about just adding a comment /* support for old extension\n> version */ before INT2OID handling?\n> + case INT2OID:\n> + values[3] = UInt16GetDatum(page->pd_lower);\n> + break;\n\nYes, having a comment to document from which version this is done\nwould be nice. This is more consistent with the surroundings.\n\n> 2) Do we ever reach the error statement elog(ERROR, \"incorrect output\n> types\");? We have the function either defined with smallint or int, I\n> don't think so we ever reach it. Instead, an assertion would work as\n> suggested by Micheal.\n\nI would keep an elog() here for the default case. I was referring to\nthe use of assertions if changing the code into a single switch/case,\nwith assertions checking that the other arguments have the expected\ntype.\n\n> 3) Isn't this test case unstable when regression tests are run with a\n> different BLCKSZ setting? Or is it okay that some of the other tests\n> for pageinspect already outputs page_size, hash_page_stats.\n> +SELECT pagesize, version FROM page_header(get_raw_page('test1', 0));\n> + pagesize | version\n> +----------+---------\n> + 8192 | 4\n\nI don't think it matters much, most of the tests of pageinspect\nalready rely on 8k pages. So let's keep it as-is.\n\n> 4) Can we arrange pageinspect--1.8--1.9.sql into the first line itself?\n> +DATA = pageinspect--1.9--1.10.sql \\\n> + pageinspect--1.8--1.9.sql \\\n\nThat's a nit, but why not.\n\n> 5) How about using \"int4\" instead of just \"int\", just for readability?\n\nAny way is fine, I'd stick with \"int\" as the other fields used\n\"smallint\".\n\n> 6) How about something like below instead of repetitive switch statements?\n> static inline Datum\n> get_page_header_attr(TupleDesc desc, int attno, int val)\n> {\n> Oid atttypid;\n> Datum datum;\n\nNah. 
It does not seem like an improvement to me in terms of\nreadability.\n\nSo I would finish with the attached, close enough to what Quan has\nsent upthread. \n--\nMichael",
"msg_date": "Sat, 10 Jul 2021 20:29:51 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: bugfix: when the blocksize is 32k, the function page_header of\n pageinspect returns negative numbers."
},
{
"msg_contents": "On Sat, Jul 10, 2021 at 4:59 PM Michael Paquier <michael@paquier.xyz> wrote:\n> So I would finish with the attached, close enough to what Quan has\n> sent upthread.\n\nThanks. The patch looks good to me, except a minor comment - isn't it\n\"int2 for these fields\" as the fields still exist? + /* pageinspect >=\n1.10 uses int4 instead of int2 for those fields */\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Sat, 10 Jul 2021 19:09:10 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: bugfix: when the blocksize is 32k, the function page_header of\n pageinspect returns negative numbers."
},
{
"msg_contents": "On Sat, Jul 10, 2021 at 07:09:10PM +0530, Bharath Rupireddy wrote:\n> Thanks. The patch looks good to me, except a minor comment - isn't it\n> \"int2 for these fields\" as the fields still exist? + /* pageinspect >=\n> 1.10 uses int4 instead of int2 for those fields */\n\nThis comment looks fine to me as-is, so applied on HEAD.\n--\nMichael",
"msg_date": "Mon, 12 Jul 2021 11:16:10 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: bugfix: when the blocksize is 32k, the function page_header of\n pageinspect returns negative numbers."
}
] |
[
{
"msg_contents": "Hello, hackers\n\n\nWhen the current HEAD fails during logical decoding, the failure\nincrements txns count in pg_stat_replication_slots - [1] and adds\nthe transaction size to the sum of bytes in the same repeatedly\non the publisher, until the problem is solved.\nOne of the good examples is duplication error on the subscriber side\nand this applies to both streaming and spill cases as well.\n\nThis update prevents users from grasping the exact number and size of\nsuccessful and unsuccessful transactions. Accordingly, we need to\nhave new columns of failed transactions that will work to differentiate\nboth of them for all types, which means spill, streaming and normal\ntransactions. This will help users to measure the exact status of\nlogical replication.\n\nAttached file is the POC patch for this.\nCurrent design is to save failed stats data in the ReplicationSlot struct.\nThis is because after the error, I'm not able to access the ReorderBuffer object.\nThus, I chose the object where I can interact with at the ReplicationSlotRelease timing.\n\nBelow is one example that I can get on the publisher,\nafter the duplication error on the subscriber caused by insert is solved.\n\npostgres=# select * from pg_stat_replication_slots;\n-[ RECORD 1 ]-------+------\nslot_name | mysub\nspill_txns | 0\nspill_count | 0\nspill_bytes | 0\nfailed_spill_txns | 0\nfailed_spill_bytes | 0\nstream_txns | 0\nstream_count | 0\nstream_bytes | 0\nfailed_stream_txns | 0\nfailed_stream_bytes | 0\ntotal_txns | 4\ntotal_bytes | 528\nfailed_total_txns | 3\nfailed_total_bytes | 396\nstats_reset | \n\n\nAny ideas and comments are welcome.\n\n[1] - https://www.postgresql.org/docs/devel/monitoring-stats.html#MONITORING-PG-STAT-REPLICATION-SLOTS-VIEW\n\n\nBest Regards,\n\tTakamichi Osumi",
"msg_date": "Thu, 8 Jul 2021 06:54:45 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Thu, Jul 8, 2021 at 12:25 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> Hello, hackers\n>\n>\n> When the current HEAD fails during logical decoding, the failure\n> increments txns count in pg_stat_replication_slots - [1] and adds\n> the transaction size to the sum of bytes in the same repeatedly\n> on the publisher, until the problem is solved.\n> One of the good examples is duplication error on the subscriber side\n> and this applies to both streaming and spill cases as well.\n>\n> This update prevents users from grasping the exact number and size of\n> successful and unsuccessful transactions. Accordingly, we need to\n> have new columns of failed transactions that will work to differentiate\n> both of them for all types, which means spill, streaming and normal\n> transactions. This will help users to measure the exact status of\n> logical replication.\n>\n> Attached file is the POC patch for this.\n> Current design is to save failed stats data in the ReplicationSlot struct.\n> This is because after the error, I'm not able to access the ReorderBuffer object.\n> Thus, I chose the object where I can interact with at the ReplicationSlotRelease timing.\n>\n> Below is one example that I can get on the publisher,\n> after the duplication error on the subscriber caused by insert is solved.\n>\n> postgres=# select * from pg_stat_replication_slots;\n> -[ RECORD 1 ]-------+------\n> slot_name | mysub\n> spill_txns | 0\n> spill_count | 0\n> spill_bytes | 0\n> failed_spill_txns | 0\n> failed_spill_bytes | 0\n> stream_txns | 0\n> stream_count | 0\n> stream_bytes | 0\n> failed_stream_txns | 0\n> failed_stream_bytes | 0\n> total_txns | 4\n> total_bytes | 528\n> failed_total_txns | 3\n> failed_total_bytes | 396\n> stats_reset |\n>\n>\n> Any ideas and comments are welcome.\n\n+1 for having logical replication failed statistics. Currently if\nthere is any transaction failure in the subscriber after sending the\ndecoded data to the subscriber like constraint violation, object not\nexist, the statistics will include the failed decoded transaction info\nand there is no way to identify the actual successful transaction\ndata. This patch will help in measuring the actual decoded transaction\ndata.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 13 Jul 2021 11:20:13 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Tuesday, July 13, 2021 2:50 PM vignesh C <vignesh21@gmail.com> wrote:\r\n> > When the current HEAD fails during logical decoding, the failure\r\n> > increments txns count in pg_stat_replication_slots - [1] and adds the\r\n> > transaction size to the sum of bytes in the same repeatedly on the\r\n> > publisher, until the problem is solved.\r\n> > One of the good examples is duplication error on the subscriber side\r\n> > and this applies to both streaming and spill cases as well.\r\n> >\r\n> > This update prevents users from grasping the exact number and size of\r\n> > successful and unsuccessful transactions. Accordingly, we need to have\r\n> > new columns of failed transactions that will work to differentiate\r\n> > both of them for all types, which means spill, streaming and normal\r\n> > transactions. This will help users to measure the exact status of\r\n> > logical replication.\r\n> >\r\n> > Attached file is the POC patch for this.\r\n> > Current design is to save failed stats data in the ReplicationSlot struct.\r\n> > This is because after the error, I'm not able to access the ReorderBuffer\r\n> object.\r\n> > Thus, I chose the object where I can interact with at the\r\n> ReplicationSlotRelease timing.\r\n> > Any ideas and comments are welcome.\r\n...\r\n> +1 for having logical replication failed statistics. Currently if\r\n> there is any transaction failure in the subscriber after sending the decoded\r\n> data to the subscriber like constraint violation, object not exist, the statistics\r\n> will include the failed decoded transaction info and there is no way to identify\r\n> the actual successful transaction data. This patch will help in measuring the\r\n> actual decoded transaction data.\r\nYeah, we can apply this improvement to other error cases.\r\nThank you for sharing ideas to make this enhancement more persuasive.\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Tue, 13 Jul 2021 06:59:56 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Thu, Jul 8, 2021 at 4:55 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n\n> Attached file is the POC patch for this.\n> Current design is to save failed stats data in the ReplicationSlot struct.\n> This is because after the error, I'm not able to access the ReorderBuffer object.\n> Thus, I chose the object where I can interact with at the ReplicationSlotRelease timing.\n\nI think this is a good idea to capture the failed replication stats.\nBut I'm wondering how you are deciding if\nthe replication failed or not? Not all cases of ReplicationSlotRelease\nare due to a failure. It could also be due to a planned dropping\nof subscription or disable of subscription. I have not tested this but\nwon't the failed stats be updated in this case as well? Is that\ncorrect?\n\nregards,\nAjin Cherian\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 27 Jul 2021 16:59:14 +1000",
"msg_from": "Ajin Cherian <itsajin@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Tuesday, July 27, 2021 3:59 PM Ajin Cherian <itsajin@gmail.com> wrote:\r\n> On Thu, Jul 8, 2021 at 4:55 PM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> \r\n> > Attached file is the POC patch for this.\r\n> > Current design is to save failed stats data in the ReplicationSlot struct.\r\n> > This is because after the error, I'm not able to access the ReorderBuffer\r\n> object.\r\n> > Thus, I chose the object where I can interact with at the\r\n> ReplicationSlotRelease timing.\r\n> \r\n> I think this is a good idea to capture the failed replication stats.\r\n> But I'm wondering how you are deciding if the replication failed or not? Not all\r\n> cases of ReplicationSLotRelease are due to a failure. It could also be due to a\r\n> planned dropping of subscription or disable of subscription. I have not tested\r\n> this but won't the failed stats be updated in this case as well? Is that correct?\r\nYes, what you said is true. Currently, when I run DROP SUBSCRIPTION or\r\nALTER SUBSCRIPTION DISABLE, failed stats values are added\r\nto pg_stat_replication_slots unintentionally, if they have some left values.\r\nThis is because all those commands, like the subscriber apply failure\r\nby duplication error, have the publisher get 'X' message at ProcessRepliesIfAny()\r\nand go into the path to call ReplicationSlotRelease().\r\n\r\nAlso, other opportunities like server stop call the same in the end,\r\nwhich leads to a situation that after the server restart,\r\nthe value of failed stats catch up with the (successful) existing stats values.\r\nAccordingly, I need to change the patch to adjust those situations.\r\nThank you.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Wed, 28 Jul 2021 12:56:52 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Thu, Jul 8, 2021 at 3:55 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> Hello, hackers\n>\n>\n> When the current HEAD fails during logical decoding, the failure\n> increments txns count in pg_stat_replication_slots - [1] and adds\n> the transaction size to the sum of bytes in the same repeatedly\n> on the publisher, until the problem is solved.\n> One of the good examples is duplication error on the subscriber side\n> and this applies to both streaming and spill cases as well.\n>\n> This update prevents users from grasping the exact number and size of\n> successful and unsuccessful transactions. Accordingly, we need to\n> have new columns of failed transactions that will work to differentiate\n> both of them for all types, which means spill, streaming and normal\n> transactions. This will help users to measure the exact status of\n> logical replication.\n\nCould you please elaborate on use cases of the proposed statistics?\nFor example, the current statistics on pg_replication_slots can be\nused for tuning logical_decoding_work_mem as well as inferring the\ntotal amount of bytes passed to the output plugin. How will the user\nuse those statistics?\n\nAlso, if we want the stats of successful transactions why don't we\nshow the stats of successful transactions in the view instead of ones\nof failed transactions?\n\n>\n> Attached file is the POC patch for this.\n> Current design is to save failed stats data in the ReplicationSlot struct.\n> This is because after the error, I'm not able to access the ReorderBuffer object.\n> Thus, I chose the object where I can interact with at the ReplicationSlotRelease timing.\n\nWhen discussing the pg_stat_replication_slots view, there was an idea\nto store the slot statistics on ReplicationSlot struct. But the idea\nwas rejected mainly because the struct is on the shared buffer[1]. If\nwe store those counts on ReplicationSlot struct it increases the usage\nof shared memory. And those statistics are used only by logical slots\nand don’t necessarily need to be shared among the server processes.\nMoreover, if we want to add more statistics on the view in the future,\nit further increases the usage of shared memory. If we want to track\nthe stats of successful transactions, I think it's easier to track\nthem on the subscriber side rather than the publisher side. We can\nincrease counters when applying [stream]commit/abort logical changes\non the subscriber.\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CAA4eK1Kuj%2B3G59hh3wu86f4mmpQLpah_mGv2-wfAPyn%2BzT%3DP4A%40mail.gmail.com\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 29 Jul 2021 10:49:55 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Thursday, July 29, 2021 10:50 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> On Thu, Jul 8, 2021 at 3:55 PM osumi.takamichi@fujitsu.com \r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> > When the current HEAD fails during logical decoding, the failure \r\n> > increments txns count in pg_stat_replication_slots - [1] and adds \r\n> > the transaction size to the sum of bytes in the same repeatedly on \r\n> > the publisher, until the problem is solved.\r\n> > One of the good examples is duplication error on the subscriber side \r\n> > and this applies to both streaming and spill cases as well.\r\n> >\r\n> > This update prevents users from grasping the exact number and size \r\n> > of successful and unsuccessful transactions. Accordingly, we need to \r\n> > have new columns of failed transactions that will work to \r\n> > differentiate both of them for all types, which means spill, \r\n> > streaming and normal transactions. This will help users to measure \r\n> > the exact status of logical replication.\r\n> \r\n> Could you please elaborate on use cases of the proposed statistics?\r\n> For example, the current statistics on pg_replication_slots can be \r\n> used for tuning logical_decoding_work_mem as well as inferring the \r\n> total amount of bytes passed to the output plugin. How will the user use those statistics?\r\n> \r\n> Also, if we want the stats of successful transactions why don't we \r\n> show the stats of successful transactions in the view instead of ones \r\n> of failed transactions?\r\nIt works to show the ratio of successful and unsuccessful transactions,\r\nwhich should be helpful in terms of administrator perspective.\r\nFYI, the POC patch added the columns where I prefixed 'failed' to those names.\r\nBut, substantially, it meant the ratio when user compared normal columns and\r\nnewly introduced columns by this POC in the pg_stat_replication_slots.\r\n\r\n\r\n> > Attached file is the POC patch for this.\r\n> > Current design is to save failed stats data in the ReplicationSlot struct.\r\n> > This is because after the error, I'm not able to access the \r\n> > ReorderBuffer\r\n> object.\r\n> > Thus, I chose the object where I can interact with at the\r\n> ReplicationSlotRelease timing.\r\n> \r\n> When discussing the pg_stat_replication_slots view, there was an idea \r\n> to store the slot statistics on ReplicationSlot struct. But the idea \r\n> was rejected mainly because the struct is on the shared buffer[1]. If \r\n> we store those counts on ReplicationSlot struct it increases the usage \r\n> of shared memory. And those statistics are used only by logical slots \r\n> and don’t necessarily need to be shared among the server processes.\r\nYes, I was aware of this.\r\nI was not sure if this design will be expected or not for the enhancement,\r\nI thought of changing the design accordingly once the idea gets accepted by the community.\r\n\r\n> Moreover, if we want to add more statistics on the view in the future, \r\n> it further increases the usage of shared memory. If we want to track \r\n> the stats of successful transactions, I think it's easier to track \r\n> them on the subscriber side rather than the publisher side. We can \r\n> increase counters when applying [stream]commit/abort logical changes on the subscriber.\r\nIt's true that to track the stats of successful and unsuccessful transactions on the *sub*\r\nis easier than on the pub. After some survey, it turned out that I cannot distinguish\r\nthe protocol messages between the cases of any failure (e.g. duplication error on the sub)\r\nfrom user intentional and successful operations(e.g. DROP SUBSCRIPTION and ALTER SUBSCRIPTION DISABLE) on the pub.\r\n\r\nIf we truly want to achieve this change on the publisher side,\r\nprotocol change requires in order to make above cases distinguishable,\r\nnow I feel that it is better to do this in the subscriber side. \r\n\r\nAccordingly, I'm thinking to have unsuccessful and successful stats on the sub side.\r\nSawada-san is now implementing a new view in [1].\r\nDo you think that I should write a patch to introduce a new separate view\r\nor write a patch to add more columns to the new view \"pg_stat_subscription_errors\" that is added at [1] ?\r\n\r\n\r\n[1] - https://www.postgresql.org/message-id/CAD21AoDeScrsHhLyEPYqN3sydg6PxAPVBboK%3D30xJfUVihNZDA%40mail.gmail.com\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Mon, 2 Aug 2021 05:52:27 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Mon, Aug 2, 2021 at 2:52 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Thursday, July 29, 2021 10:50 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > On Thu, Jul 8, 2021 at 3:55 PM osumi.takamichi@fujitsu.com\n> > <osumi.takamichi@fujitsu.com> wrote:\n> > > When the current HEAD fails during logical decoding, the failure\n> > > increments txns count in pg_stat_replication_slots - [1] and adds\n> > > the transaction size to the sum of bytes in the same repeatedly on\n> > > the publisher, until the problem is solved.\n> > > One of the good examples is duplication error on the subscriber side\n> > > and this applies to both streaming and spill cases as well.\n> > >\n> > > This update prevents users from grasping the exact number and size\n> > > of successful and unsuccessful transactions. Accordingly, we need to\n> > > have new columns of failed transactions that will work to\n> > > differentiate both of them for all types, which means spill,\n> > > streaming and normal transactions. This will help users to measure\n> > > the exact status of logical replication.\n> >\n> > Could you please elaborate on use cases of the proposed statistics?\n> > For example, the current statistics on pg_replication_slots can be\n> > used for tuning logical_decoding_work_mem as well as inferring the\n> > total amount of bytes passed to the output plugin. How will the user use those statistics?\n> >\n> > Also, if we want the stats of successful transactions why don't we\n> > show the stats of successful transactions in the view instead of ones\n> > of failed transactions?\n> It works to show the ratio of successful and unsuccessful transactions,\n> which should be helpful in terms of administrator perspective.\n> FYI, the POC patch added the columns where I prefixed 'failed' to those names.\n> But, substantially, it meant the ratio when user compared normal columns and\n> newly introduced columns by this POC in the pg_stat_replication_slots.\n\nWhat can the administrator use the ratio of successful and\nunsuccessful logical replication transactions for? For example, IIUC\nif a conflict happens on the subscriber as you mentioned, the\nsuccessful transaction ratio calculated by those statistics is getting\nlow, perhaps meaning the logical replication stopped. But it can be\nchecked also by checking pg_stat_replication view or\npg_replication_slots view (or error counts of\npg_stat_subscription_errors view I’m proposing[1]). Do you have other\nuse cases?\n\n> > Moreover, if we want to add more statistics on the view in the future,\n> > it further increases the usage of shared memory. If we want to track\n> > the stats of successful transactions, I think it's easier to track\n> > them on the subscriber side rather than the publisher side. We can\n> > increase counters when applying [stream]commit/abort logical changes on the subscriber.\n> It's true that to track the stats of successful and unsuccessful transactions on the *sub*\n> is easier than on the pub. After some survey, it turned out that I cannot distinguish\n> the protocol messages between the cases of any failure (e.g. duplication error on the sub)\n> from user intentional and successful operations(e.g. DROP SUBSCRIPTION and ALTER SUBSCRIPTION DISABLE) on the pub.\n>\n> If we truly want to achieve this change on the publisher side,\n> protocol change requires in order to make above cases distinguishable,\n> now I feel that it is better to do this in the subscriber side.\n>\n> Accordingly, I'm thinking to have unsuccessful and successful stats on the sub side.\n> Sawada-san is now implementing a new view in [1].\n> Do you think that I should write a patch to introduce a new separate view\n> or write a patch to add more columns to the new view \"pg_stat_subscription_errors\" that is added at [1] ?\n\npg_stat_subscriptions_errors view I'm proposing is a view showing the\ndetails of error happening during logical replication. So I think a\nseparate view or pg_stat_subscription view would be a more appropriate\nplace.\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/flat/CAD21AoDeScrsHhLyEPYqN3sydg6PxAPVBboK=30xJfUVihNZDA@mail.gmail.com\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 2 Aug 2021 16:42:53 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Mon, Aug 2, 2021 at 1:13 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Aug 2, 2021 at 2:52 PM osumi.takamichi@fujitsu.com\n> <osumi.takamichi@fujitsu.com> wrote:\n> >\n> >\n> > Accordingly, I'm thinking to have unsuccessful and successful stats on the sub side.\n> > Sawada-san is now implementing a new view in [1].\n> > Do you think that I should write a patch to introduce a new separate view\n> > or write a patch to add more columns to the new view \"pg_stat_subscription_errors\" that is added at [1] ?\n>\n> pg_stat_subscriptions_errors view I'm proposing is a view showing the\n> details of error happening during logical replication. So I think a\n> separate view or pg_stat_subscription view would be a more appropriate\n> place.\n>\n\n+1 for having these stats in pg_stat_subscription. Do we want to add\ntwo columns (xact_commit: number of transactions successfully applied\nin this subscription, xact_rollback: number of transactions that have\nbeen rolled back in this subscription) or do you guys have something\nelse in mind?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 3 Aug 2021 08:17:39 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Tue, Aug 3, 2021 at 11:47 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Aug 2, 2021 at 1:13 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Mon, Aug 2, 2021 at 2:52 PM osumi.takamichi@fujitsu.com\n> > <osumi.takamichi@fujitsu.com> wrote:\n> > >\n> > >\n> > > Accordingly, I'm thinking to have unsuccessful and successful stats on the sub side.\n> > > Sawada-san is now implementing a new view in [1].\n> > > Do you think that I should write a patch to introduce a new separate view\n> > > or write a patch to add more columns to the new view \"pg_stat_subscription_errors\" that is added at [1] ?\n> >\n> > pg_stat_subscriptions_errors view I'm proposing is a view showing the\n> > details of error happening during logical replication. So I think a\n> > separate view or pg_stat_subscription view would be a more appropriate\n> > place.\n> >\n>\n> +1 for having these stats in pg_stat_subscription. Do we want to add\n> two columns (xact_commit: number of transactions successfully applied\n> in this subscription, xact_rollback: number of transactions that have\n> been rolled back in this subscription)\n\nSounds good. We might want to have separate counters for the number of\ntransactions failed due to an error and transactions rolled back by\nstream_abort.\n\npg_stat_subscription currently shows logical replication worker stats\non the shared memory. I think xact_commit and xact_rollback should\nsurvive beyond the server restarts, so it would be better to be\ncollected by the stats collector. My skipping transaction patch adds a\nhash table of which entry represents a subscription stats. I guess we\ncan use the hash table so that one subscription stats entry has both\ntransaction stats and errors.\n\n> or do you guys have something else in mind?\n\nOsumi-san was originally concerned that there is no way to grasp the\nexact number and size of successful and unsuccessful transactions. The\nabove idea covers only the number of successful and unsuccessful\ntransactions but not the size. What do you think, Osumi-san?\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 3 Aug 2021 14:28:38 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Mon, Aug 2, 2021 at 1:13 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Aug 2, 2021 at 2:52 PM osumi.takamichi@fujitsu.com\n> <osumi.takamichi@fujitsu.com> wrote:\n> >\n> > On Thursday, July 29, 2021 10:50 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > On Thu, Jul 8, 2021 at 3:55 PM osumi.takamichi@fujitsu.com\n> > > <osumi.takamichi@fujitsu.com> wrote:\n> > > > When the current HEAD fails during logical decoding, the failure\n> > > > increments txns count in pg_stat_replication_slots - [1] and adds\n> > > > the transaction size to the sum of bytes in the same repeatedly on\n> > > > the publisher, until the problem is solved.\n> > > > One of the good examples is duplication error on the subscriber side\n> > > > and this applies to both streaming and spill cases as well.\n> > > >\n> > > > This update prevents users from grasping the exact number and size\n> > > > of successful and unsuccessful transactions. Accordingly, we need to\n> > > > have new columns of failed transactions that will work to\n> > > > differentiate both of them for all types, which means spill,\n> > > > streaming and normal transactions. This will help users to measure\n> > > > the exact status of logical replication.\n> > >\n> > > Could you please elaborate on use cases of the proposed statistics?\n> > > For example, the current statistics on pg_replication_slots can be\n> > > used for tuning logical_decoding_work_mem as well as inferring the\n> > > total amount of bytes passed to the output plugin. How will the user use those statistics?\n> > >\n> > > Also, if we want the stats of successful transactions why don't we\n> > > show the stats of successful transactions in the view instead of ones\n> > > of failed transactions?\n> > It works to show the ratio of successful and unsuccessful transactions,\n> > which should be helpful in terms of administrator perspective.\n> > FYI, the POC patch added the columns where I prefixed 'failed' to those names.\n> > But, substantially, it meant the ratio when user compared normal columns and\n> > newly introduced columns by this POC in the pg_stat_replication_slots.\n>\n> What can the administrator use the ratio of successful and\n> unsuccessful logical replication transactions for? For example, IIUC\n> if a conflict happens on the subscriber as you mentioned, the\n> successful transaction ratio calculated by those statistics is getting\n> low, perhaps meaning the logical replication stopped. But it can be\n> checked also by checking pg_stat_replication view or\n> pg_replication_slots view (or error counts of\n> pg_stat_subscription_errors view I’m proposing[1]). Do you have other\n> use cases?\n\nWe could also include failed_data_size, this will help us to identify\nthe actual bandwidth consumed by the failed transaction. It will help\nthe DBA's to understand the network consumption in a better way.\nCurrently only total transaction and total data will be available but\nwhen there is a failure, the failed transaction data will be sent\nrepeatedly, if the DBA does not solve the actual cause of failure,\nthere can be significant consumption of the network due to failure\ntransaction being sent repeatedly. DBA will not be able to understand\nwhy there is so much network bandwidth consumption. If we give the\nfailed transaction information, the DBA might not get alarmed in this\ncase and understand that the network consumption is genuine. Also it\nwill help monitoring tools to project this value.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 3 Aug 2021 10:59:05 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Tuesday, August 3, 2021 2:29 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> On Tue, Aug 3, 2021 at 11:47 AM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> >\r\n> > On Mon, Aug 2, 2021 at 1:13 PM Masahiko Sawada\r\n> <sawada.mshk@gmail.com> wrote:\r\n> > >\r\n> > > On Mon, Aug 2, 2021 at 2:52 PM osumi.takamichi@fujitsu.com\r\n> > > <osumi.takamichi@fujitsu.com> wrote:\r\n> > > >\r\n> > > >\r\n> > > > Accordingly, I'm thinking to have unsuccessful and successful stats on the\r\n> sub side.\r\n> > > > Sawada-san is now implementing a new view in [1].\r\n> > > > Do you think that I should write a patch to introduce a new\r\n> > > > separate view or write a patch to add more columns to the new view\r\n> \"pg_stat_subscription_errors\" that is added at [1] ?\r\n> > >\r\n> > > pg_stat_subscriptions_errors view I'm proposing is a view showing\r\n> > > the details of error happening during logical replication. So I\r\n> > > think a separate view or pg_stat_subscription view would be a more\r\n> > > appropriate place.\r\n> > >\r\n> >\r\n> > +1 for having these stats in pg_stat_subscription. Do we want to add\r\n> > two columns (xact_commit: number of transactions successfully applied\r\n> > in this subscription, xact_rollback: number of transactions that have\r\n> > been rolled back in this subscription)\r\n> \r\n> Sounds good. We might want to have separate counters for the number of\r\n> transactions failed due to an error and transactions rolled back by stream_abort.\r\nOkay. I wanna make those separate as well for this feature.\r\n\r\n\r\n> pg_stat_subscription currently shows logical replication worker stats on the\r\n> shared memory. I think xact_commit and xact_rollback should survive beyond\r\n> the server restarts, so it would be better to be collected by the stats collector.\r\n> My skipping transaction patch adds a hash table of which entry represents a\r\n> subscription stats. I guess we can use the hash table so that one subscription\r\n> stats entry has both transaction stats and errors.\r\n> \r\n> > or do you guys have something else in mind?\r\n> \r\n> Osumi-san was originally concerned that there is no way to grasp the exact\r\n> number and size of successful and unsuccessful transactions. The above idea\r\n> covers only the number of successful and unsuccessful transactions but not the\r\n> size. What do you think, Osumi-san?\r\nYeah, I think tracking sizes of failed transactions and roll-backed transactions\r\nis helpful to identify the genuine network consumption,\r\nas mentioned by Vignesh in another mail.\r\nI'd like to include those also and post a patch for this.\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Tue, 3 Aug 2021 06:09:26 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Tue, Aug 3, 2021 at 10:59 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Aug 3, 2021 at 11:47 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Aug 2, 2021 at 1:13 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Mon, Aug 2, 2021 at 2:52 PM osumi.takamichi@fujitsu.com\n> > > <osumi.takamichi@fujitsu.com> wrote:\n> > > >\n> > > >\n> > > > Accordingly, I'm thinking to have unsuccessful and successful stats on the sub side.\n> > > > Sawada-san is now implementing a new view in [1].\n> > > > Do you think that I should write a patch to introduce a new separate view\n> > > > or write a patch to add more columns to the new view \"pg_stat_subscription_errors\" that is added at [1] ?\n> > >\n> > > pg_stat_subscriptions_errors view I'm proposing is a view showing the\n> > > details of error happening during logical replication. So I think a\n> > > separate view or pg_stat_subscription view would be a more appropriate\n> > > place.\n> > >\n> >\n> > +1 for having these stats in pg_stat_subscription. Do we want to add\n> > two columns (xact_commit: number of transactions successfully applied\n> > in this subscription, xact_rollback: number of transactions that have\n> > been rolled back in this subscription)\n>\n> Sounds good. We might want to have separate counters for the number of\n> transactions failed due to an error and transactions rolled back by\n> stream_abort.\n>\n\nI was trying to think based on similar counters in pg_stat_database\nbut if you think there is a value in showing errored and actual\nrollbacked transactions separately then we can do that but how do you\nthink one can make use of it?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 3 Aug 2021 14:41:12 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Tue, Aug 3, 2021 at 6:11 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Aug 3, 2021 at 10:59 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Tue, Aug 3, 2021 at 11:47 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Mon, Aug 2, 2021 at 1:13 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Mon, Aug 2, 2021 at 2:52 PM osumi.takamichi@fujitsu.com\n> > > > <osumi.takamichi@fujitsu.com> wrote:\n> > > > >\n> > > > >\n> > > > > Accordingly, I'm thinking to have unsuccessful and successful stats on the sub side.\n> > > > > Sawada-san is now implementing a new view in [1].\n> > > > > Do you think that I should write a patch to introduce a new separate view\n> > > > > or write a patch to add more columns to the new view \"pg_stat_subscription_errors\" that is added at [1] ?\n> > > >\n> > > > pg_stat_subscriptions_errors view I'm proposing is a view showing the\n> > > > details of error happening during logical replication. So I think a\n> > > > separate view or pg_stat_subscription view would be a more appropriate\n> > > > place.\n> > > >\n> > >\n> > > +1 for having these stats in pg_stat_subscription. Do we want to add\n> > > two columns (xact_commit: number of transactions successfully applied\n> > > in this subscription, xact_rollback: number of transactions that have\n> > > been rolled back in this subscription)\n> >\n> > Sounds good. 
We might want to have separate counters for the number of\n> > transactions failed due to an error and transactions rolled back by\n> > stream_abort.\n> >\n>\n> I was trying to think based on similar counters in pg_stat_database\n> but if you think there is a value in showing errored and actual\n> rollbacked transactions separately then we can do that but how do you\n> think one can make use of it?\n\nI'm concerned that the value that includes both errored and actual\nrollbacked transactions doesn't make sense in practice since the\nnumber of errored transactions can easily get increased once a\nconflict happens. IMO the errored transaction count is not necessarily\nneeded since the number of (successive) errors that happened on the\nsubscription is tracked by pg_stat_subscription_errors view. So it\nmight be enough to have actual rollbacked transactions. If this value\nis high, it's likely that many rollbacked transactions are streamed,\nunnecessarily consuming network bandwidth. So the user might want to\nincrease logical_decoding_work_mem to suppress transactions getting\nstreamed.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 4 Aug 2021 09:48:30 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Wed, Aug 4, 2021 at 6:19 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Aug 3, 2021 at 6:11 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > I was trying to think based on similar counters in pg_stat_database\n> > but if you think there is a value in showing errored and actual\n> > rollbacked transactions separately then we can do that but how do you\n> > think one can make use of it?\n>\n> I'm concerned that the value that includes both errored and actual\n> rollbacked transactions doesn't make sense in practice since the\n> number of errored transactions can easily get increased once a\n> conflict happens. IMO the errored transaction doesn’t not necessarily\n> necessary since the number of (successive) errors that happened on the\n> subscription is tracked by pg_stat_subscription_errors view.\n>\n\nIt sounds awkward to display two of the xact (xact_commit,\nxact_rollback) counters in one view and then the other similar counter\n(xact_error or something like that) in another view. Isn't it better\nto display all of them together possibly in pg_stat_subscription? I\nguess it might be a bit tricky to track counters for tablesync workers\nseparately but we can track them in the corresponding subscription.\n\n> So it\n> might be enough to have actual rollbacked transactions. If this value\n> is high, it's likely that many rollbacked transactions are streamed,\n> unnecessarily consuming network bandwidth. So the user might want to\n> increase logical_decoding_work_mem to suppress transactions getting\n> streamed.\n>\n\nOkay, we might want to probably document such a use case.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 4 Aug 2021 08:47:30 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Wed, Aug 4, 2021 at 12:17 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Aug 4, 2021 at 6:19 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Tue, Aug 3, 2021 at 6:11 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > I was trying to think based on similar counters in pg_stat_database\n> > > but if you think there is a value in showing errored and actual\n> > > rollbacked transactions separately then we can do that but how do you\n> > > think one can make use of it?\n> >\n> > I'm concerned that the value that includes both errored and actual\n> > rollbacked transactions doesn't make sense in practice since the\n> > number of errored transactions can easily get increased once a\n> > conflict happens. IMO the errored transaction doesn’t not necessarily\n> > necessary since the number of (successive) errors that happened on the\n> > subscription is tracked by pg_stat_subscription_errors view.\n> >\n>\n> It sounds awkward to display two of the xact (xact_commit,\n> xact_rollback) counters in one view and then the other similar counter\n> (xact_error or something like that) in another view. Isn't it better\n> to display all of them together possibly in pg_stat_subscription? I\n> guess it might be a bit tricky to track counters for tablesync workers\n> separately but we can track them in the corresponding subscription.\n\nI meant that the number of rolled back transactions due to an error\nseems not to be necessary since pg_stat_subscription_errors has a\nsimilar value. So what I imagined is that we have xact_commit and\nxact_rollback (counting only actual rollbacked transaction) counters\nin pg_stat_subscription. What do you think of this idea? 
Or did you\nmean the number of errored transactions also has value so should be\nincluded in pg_stat_subscription along with xact_commit and\nxact_rollback?\n\nOriginally I thought your proposal of having the number of rollback\ntransactions includes both errored transactions and actual rolled back\ntransactions so my point was it's better to separate them and include\nonly the actual rolled-back transaction.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 4 Aug 2021 16:04:39 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Wed, Aug 4, 2021 at 12:35 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Aug 4, 2021 at 12:17 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Aug 4, 2021 at 6:19 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Tue, Aug 3, 2021 at 6:11 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > I was trying to think based on similar counters in pg_stat_database\n> > > > but if you think there is a value in showing errored and actual\n> > > > rollbacked transactions separately then we can do that but how do you\n> > > > think one can make use of it?\n> > >\n> > > I'm concerned that the value that includes both errored and actual\n> > > rollbacked transactions doesn't make sense in practice since the\n> > > number of errored transactions can easily get increased once a\n> > > conflict happens. IMO the errored transaction doesn’t not necessarily\n> > > necessary since the number of (successive) errors that happened on the\n> > > subscription is tracked by pg_stat_subscription_errors view.\n> > >\n> >\n> > It sounds awkward to display two of the xact (xact_commit,\n> > xact_rollback) counters in one view and then the other similar counter\n> > (xact_error or something like that) in another view. Isn't it better\n> > to display all of them together possibly in pg_stat_subscription? I\n> > guess it might be a bit tricky to track counters for tablesync workers\n> > separately but we can track them in the corresponding subscription.\n>\n> I meant that the number of rolled back transactions due to an error\n> seems not to be necessary since pg_stat_subscription_errors has a\n> similar value.\n>\n\nI got that point.\n\n> So what I imagined is that we have xact_commit and\n> xact_rollback (counting only actual rollbacked transaction) counters\n> in pg_stat_subscription. What do you think of this idea? 
Or did you\n> mean the number of errored transactions also has value so should be\n> included in pg_stat_subscription along with xact_commit and\n> xact_rollback?\n>\n\nI meant the latter one. I think it might be better to display all three\n(xact_commit, xact_abort, xact_error) in one place. Earlier it made\nsense to me to display it in pg_stat_subscription_errors but not sure\nafter this proposal. Won't it be better for users to see all the\ncounters in one view?\n\n> Originally I thought your proposal of having the number of rollback\n> transactions includes both errored transactions and actual rolled back\n> transactions so my point was it's better to separate them and include\n> only the actual rolled-back transaction.\n>\n\nI am fine with your proposal to separate the actual rollback and error\ntransactions counter but I thought it would be better to display them\nin one view.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 4 Aug 2021 18:28:38 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "Hi\r\n\r\n\r\nI've made a new patch to extend pg_stat_subscription\r\nas suggested in [1] to have columns\r\nxact_commit, xact_error and independent xact_abort mentioned in [2].\r\nAlso, during discussion, we touched a topic if we should\r\ninclude data sizes for each column above and concluded that it's\r\nbetter to have ones. Accordingly, I've implemented corresponding\r\ncolumns to show the data sizes as well.\r\n\r\nNote that this patch depends on v12 patchset of apply error callback\r\nprovided in [3]. Therfore, applying the patchset first is required,\r\nif you would like to test my patch.\r\n\r\n[1] - https://www.postgresql.org/message-id/CAA4eK1%2BtOV-%2BssGjj1zq%2BnAL8a9LfPsxbtyupZGvZ0U7nV0A7g%40mail.gmail.com\r\n[2] - https://www.postgresql.org/message-id/CAA4eK1KMT8biciVqTBoZ9gYV-Gf297JFeNhJaxZNmFrZL8m2jA%40mail.gmail.com\r\n[3] - https://www.postgresql.org/message-id/CAD21AoA5HrhXqwbYLpSobGzV6rWoJmH3-NB9J3YarKDwARBj4w%40mail.gmail.com\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi",
"msg_date": "Thu, 2 Sep 2021 02:22:42 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "Hi\r\n\r\nOn Thursday, September 2, 2021 11:23 AM I wrote:\r\n> I've made a new patch to extend pg_stat_subscription as suggested in [1] to\r\n> have columns xact_commit, xact_error and independent xact_abort mentioned\r\n> in [2].\r\n> Also, during discussion, we touched a topic if we should include data sizes for\r\n> each column above and concluded that it's better to have ones. Accordingly,\r\n> I've implemented corresponding columns to show the data sizes as well.\r\nI've updated my previous patch of subscriber's stats.\r\nThe main change of this version is\r\nthe bytes calculation that are exported by pg_stat_subscription.\r\nI prepared a new function which is equivalent to ReorderBufferChangeSize\r\non the subscriber side to calculate the resource consumption.\r\nIt's because this is in line with actual process of the subscriber.\r\n\r\nOther changes are minor and cosmetic.\r\n\r\nAlso, this patch is based on the v12 patch-set of skip xid,\r\nas described in my previous email.\r\nNote that you need to use past commit-id if you want to apply v12 set.\r\nI'll update mine as well when the latest patch-set v13 is shared on hackers.\r\n\r\nBest Regards,\r\n\tTakamichi Osumi",
"msg_date": "Thu, 9 Sep 2021 08:03:32 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "Hi\r\n\r\nOn Thursday, September 9, 2021 5:04 PM I wrote:\r\n> Also, this patch is based on the v12 patch-set of skip xid, as described in my\r\n> previous email.\r\n> Note that you need to use past commit-id if you want to apply v12 set.\r\n> I'll update mine as well when the latest patch-set v13 is shared on hackers.\r\nRebased, using v13 patch-set in [1].\r\n\r\n\r\n[1 ] - https://www.postgresql.org/message-id/CAD21AoBUXM4ODfPFa%3Dh7M6vSKwOKysapUce3tS4rs9mfVMm%2BcQ%40mail.gmail.com\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi",
"msg_date": "Tue, 14 Sep 2021 03:11:02 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "Hello\r\n\r\n\r\nJust conducted some cosmetic changes\r\nand rebased my patch, using v14 patch-set in [1].\r\n\r\n[1] - https://www.postgresql.org/message-id/CAD21AoCO_ZYWZEBw7ziiYoX7Zm1P0L9%3Dd7Jj9YsGEGsT9o6wmw%40mail.gmail.com\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi",
"msg_date": "Wed, 22 Sep 2021 04:39:58 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Wed, Sep 22, 2021 at 10:10 AM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> Just conducted some cosmetic changes\n> and rebased my patch, using v14 patch-set in [1].\n>\n\nIIUC, this proposal will allow new xact stats for subscriptions via\npg_stat_subscription. One thing that is not clear to me in this patch\nis that why you choose a different way to store these stats than the\nexisting stats in that view? AFAICS, the other existing stats are\nstored in-memory in LogicalRepWorker whereas these new stats are\nstored/fetched via stats collector means these will persist. Isn't it\nbetter to be consistent here? I am not sure which is a more\nappropriate way to store these stats and I would like to hear your and\nother's thoughts on that matter but it appears a bit awkward to me\nthat some of the stats in the same view are persistent and others are\nin-memory.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 27 Sep 2021 10:12:01 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Monday, September 27, 2021 1:42 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Wed, Sep 22, 2021 at 10:10 AM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> >\r\n> > Just conducted some cosmetic changes\r\n> > and rebased my patch, using v14 patch-set in [1].\r\n> >\r\n> \r\n> IIUC, this proposal will allow new xact stats for subscriptions via\r\n> pg_stat_subscription. One thing that is not clear to me in this patch is that why\r\n> you choose a different way to store these stats than the existing stats in that\r\n> view? AFAICS, the other existing stats are stored in-memory in\r\n> LogicalRepWorker whereas these new stats are stored/fetched via stats\r\n> collector means these will persist. Isn't it better to be consistent here? I am not\r\n> sure which is a more appropriate way to store these stats and I would like to\r\n> hear your and other's thoughts on that matter but it appears a bit awkward to\r\n> me that some of the stats in the same view are persistent and others are\r\n> in-memory.\r\nYeah, existing stats values of pg_stat_subscription are in-memory.\r\nI thought xact stats should survive over the restart,\r\nto summarize and show all accumulative transaction values\r\non one subscription for user. But, your pointing out is reasonable,\r\nmixing two types can be awkward and lack of consistency.\r\n\r\nThen, if, we proceed in this direction,\r\nthe place to implement those stats\r\nwould be on the LogicalRepWorker struct, instead ?\r\n\r\nOn one hand, what confuses me is that\r\nin another thread of feature to skip xid,\r\nI wondered if Sawada-san has started to take\r\nthose xact stats into account (probably in his patch-set),\r\nbecause the stats values this thread is taking care of are listed up in the thread.\r\nIf that's true, its thread and this thread are getting really close.\r\nSo, IIUC, we have to discuss this point as well.\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Tue, 28 Sep 2021 01:55:39 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Tue, Sep 28, 2021 at 7:25 AM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Monday, September 27, 2021 1:42 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Wed, Sep 22, 2021 at 10:10 AM osumi.takamichi@fujitsu.com\n> > <osumi.takamichi@fujitsu.com> wrote:\n> > >\n> > > Just conducted some cosmetic changes\n> > > and rebased my patch, using v14 patch-set in [1].\n> > >\n> >\n> > IIUC, this proposal will allow new xact stats for subscriptions via\n> > pg_stat_subscription. One thing that is not clear to me in this patch is that why\n> > you choose a different way to store these stats than the existing stats in that\n> > view? AFAICS, the other existing stats are stored in-memory in\n> > LogicalRepWorker whereas these new stats are stored/fetched via stats\n> > collector means these will persist. Isn't it better to be consistent here? I am not\n> > sure which is a more appropriate way to store these stats and I would like to\n> > hear your and other's thoughts on that matter but it appears a bit awkward to\n> > me that some of the stats in the same view are persistent and others are\n> > in-memory.\n> Yeah, existing stats values of pg_stat_subscription are in-memory.\n> I thought xact stats should survive over the restart,\n> to summarize and show all accumulative transaction values\n> on one subscription for user. But, your pointing out is reasonable,\n> mixing two types can be awkward and lack of consistency.\n>\n> Then, if, we proceed in this direction,\n> the place to implement those stats\n> would be on the LogicalRepWorker struct, instead ?\n>\n\nOr, we can make existing stats persistent and then add these stats on\ntop of it. 
Sawada-San, do you have any thoughts on this matter?\n\n> On one hand, what confuses me is that\n> in another thread of feature to skip xid,\n> I wondered if Sawada-san has started to take\n> those xact stats into account (probably in his patch-set),\n> because the stats values this thread is taking care of are listed up in the thread.\n>\n\nI don't think the Skip Xid patch is going to take care of these\nadditional stats but Sawada-San can confirm.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 28 Sep 2021 10:24:02 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Tue, Sep 28, 2021 at 1:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Sep 28, 2021 at 7:25 AM osumi.takamichi@fujitsu.com\n> <osumi.takamichi@fujitsu.com> wrote:\n> >\n> > On Monday, September 27, 2021 1:42 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > On Wed, Sep 22, 2021 at 10:10 AM osumi.takamichi@fujitsu.com\n> > > <osumi.takamichi@fujitsu.com> wrote:\n> > > >\n> > > > Just conducted some cosmetic changes\n> > > > and rebased my patch, using v14 patch-set in [1].\n> > > >\n> > >\n> > > IIUC, this proposal will allow new xact stats for subscriptions via\n> > > pg_stat_subscription. One thing that is not clear to me in this patch is that why\n> > > you choose a different way to store these stats than the existing stats in that\n> > > view? AFAICS, the other existing stats are stored in-memory in\n> > > LogicalRepWorker whereas these new stats are stored/fetched via stats\n> > > collector means these will persist. Isn't it better to be consistent here? I am not\n> > > sure which is a more appropriate way to store these stats and I would like to\n> > > hear your and other's thoughts on that matter but it appears a bit awkward to\n> > > me that some of the stats in the same view are persistent and others are\n> > > in-memory.\n> > Yeah, existing stats values of pg_stat_subscription are in-memory.\n> > I thought xact stats should survive over the restart,\n> > to summarize and show all accumulative transaction values\n> > on one subscription for user. But, your pointing out is reasonable,\n> > mixing two types can be awkward and lack of consistency.\n\nI remembered that we have discussed a similar thing when discussing\npg_stat_replication_slots view[1]. 
The slot statistics such as\nspill_txn, spill_count, and spill_bytes were originally shown in\npg_stat_replication, mixing cumulative counters and dynamic counters.\nBut we concluded to separate these cumulative values from\npg_stat_replication view and introduce a new pg_stat_replication_slots\nview.\n\n> >\n> > Then, if, we proceed in this direction,\n> > the place to implement those stats\n> > would be on the LogicalRepWorker struct, instead ?\n> >\n>\n> Or, we can make existing stats persistent and then add these stats on\n> top of it. Sawada-San, do you have any thoughts on this matter?\n\nI think that making existing stats including received_lsn and\nlast_msg_receipt_time persistent by using stats collector could cause\nmassive reporting messages. We can report these messages with a\ncertain interval to reduce the amount of messages but we will end up\nseeing old stats on the view. Another idea could be to have a separate\nview, say pg_stat_subscription_xact but I'm not sure it's a better\nidea.\n\n>\n> > On one hand, what confuses me is that\n> > in another thread of feature to skip xid,\n> > I wondered if Sawada-san has started to take\n> > those xact stats into account (probably in his patch-set),\n> > because the stats values this thread is taking care of are listed up in the thread.\n> >\n>\n> I don't think the Skip Xid patch is going to take care of these\n> additional stats but Sawada-San can confirm.\n\nYes, I don't take care of these additional stats discussed on this\nthread. The reason why I recently mentioned these statistics on the\nskip_xid thread is that since I thought that this patch will be built\non top of the subscription error reporting patch that I'm proposing I\ndiscussed the extensibility of my patch.\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CABUevEwayASgA2GHAQx%3DVtCp6OQh5PVfem6JDQ97gFHka%3D6n1w%40mail.gmail.com\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 28 Sep 2021 15:04:54 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Tue, Sep 28, 2021 at 11:35 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Sep 28, 2021 at 1:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > >\n> > > Then, if, we proceed in this direction,\n> > > the place to implement those stats\n> > > would be on the LogicalRepWorker struct, instead ?\n> > >\n> >\n> > Or, we can make existing stats persistent and then add these stats on\n> > top of it. Sawada-San, do you have any thoughts on this matter?\n>\n> I think that making existing stats including received_lsn and\n> last_msg_receipt_time persistent by using stats collector could cause\n> massive reporting messages. We can report these messages with a\n> certain interval to reduce the amount of messages but we will end up\n> seeing old stats on the view.\n>\n\nCan't we keep the current and new stats both in-memory and persist on\ndisk? So, the persistent stats data will be used to fill the in-memory\ncounters after restarting of workers, otherwise, we will always refer\nto in-memory values.\n\n> Another idea could be to have a separate\n> view, say pg_stat_subscription_xact but I'm not sure it's a better\n> idea.\n>\n\nYeah, that is another idea but I am afraid that having three different\nviews for subscription stats will be too much. I think it would be\nbetter if we can display these additional stats via the existing view\npg_stat_subscription or the new view pg_stat_subscription_errors (or\nwhatever name we want to give it).\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 28 Sep 2021 15:34:35 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Tue, Sep 28, 2021 at 8:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > Another idea could be to have a separate\n> > view, say pg_stat_subscription_xact but I'm not sure it's a better\n> > idea.\n> >\n>\n> Yeah, that is another idea but I am afraid that having three different\n> views for subscription stats will be too much. I think it would be\n> better if we can display these additional stats via the existing view\n> pg_stat_subscription or the new view pg_stat_subscription_errors (or\n> whatever name we want to give it).\n>\n\n\nIt seems that we have come full-circle in the discussion of how these\nnew stats could be best added.\nOther than adding a new \"pg_stats_subscription_xact\" view (which Amit\nthought would result in too many views) I don't see any clear solution\nhere, because the new xact-related cumulative stats don't really\nconsistently integrate with the existing in-memory stats in\npg_stat_subscription, and IMHO the new stats wouldn't integrate well\ninto the existing error-related stats in the\n\"pg_stat_subscription_errors\" view (even if its name was changed, that\nview in any case maintains lowel-level error details and the way error\nentries are removed doesn't allow the \"success\" stats to be maintained\nfor the subscription and doesn't fit well with the added\n\"pg_stat_reset_subscription_error\" function either).\nIf we can't really add another view like \"pg_stats_subscription_xact\",\nit seems we need to find some way the new stats fit more consistently\ninto the existing \"pg_stat_subscription\" view.\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 29 Sep 2021 15:59:20 +1000",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "Hi,\r\n\r\n\r\nThank you, Amit-san and Sawada-san for the discussion.\r\nOn Tuesday, September 28, 2021 7:05 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> > Another idea could be to have a separate view, say\r\n> > pg_stat_subscription_xact but I'm not sure it's a better idea.\r\n> >\r\n> \r\n> Yeah, that is another idea but I am afraid that having three different\r\n> views for subscription stats will be too much. I think it would be\r\n> better if we can display these additional stats via the existing view\r\n> pg_stat_subscription or the new view pg_stat_subscription_errors (or\r\n> whatever name we want to give it).\r\npg_stat_subscription_errors specializes in showing an error record.\r\nSo, it would be awkward to combine it with other normal xact stats.\r\n\r\n\r\n> > > > Then, if, we proceed in this direction, the place to implement\r\n> > > > those stats would be on the LogicalRepWorker struct, instead ?\r\n> > > >\r\n> > >\r\n> > > Or, we can make existing stats persistent and then add these stats\r\n> > > on top of it. Sawada-San, do you have any thoughts on this matter?\r\n> >\r\n> > I think that making existing stats including received_lsn and\r\n> > last_msg_receipt_time persistent by using stats collector could cause\r\n> > massive reporting messages. We can report these messages with a\r\n> > certain interval to reduce the amount of messages but we will end up\r\n> > seeing old stats on the view.\r\n> >\r\n> \r\n> Can't we keep the current and new stats both in-memory and persist on disk?\r\n> So, the persistent stats data will be used to fill the in-memory counters after\r\n> restarting of workers, otherwise, we will always refer to in-memory values.\r\nI felt this isn't impossible.\r\nWhen we have to update the values of the xact stats is\r\nthe end of message apply for COMMIT, COMMIT PREPARED, STREAM_ABORT and etc\r\nor the time when an error happens during apply. 
Then, if we want,\r\nwe can update xact stats values at such moments accordingly.\r\nI'm thinking that we will have a hash table whose key is a pair of subid + relid\r\nand entry is a proposed stats structure and update the entry,\r\ndepending on the above timings.\r\n\r\nHere, one thing a bit unclear to me is\r\nwhether we should move existing stats of pg_stat_subscription\r\n(such as last_lsn and reply_lsn) to the hash entry or not.\r\nNow, in pg_stat_get_subscription() for pg_stat_subscription view,\r\ncurrent stats values are referenced directly from (a copy of)\r\nexisting LogicalRepCtx->workers. I felt that we need to avoid\r\na situation where some existing data are fetched from LogicalRepWorker\r\nand other new xact stats are from the hash in the function,\r\nin order to keep the alignment of the function. Was this correct?\r\n\r\nAnother thing we need to discuss is where we put a new file\r\nof contents of pg_stat_subscription. I'm thinking that\r\nit's pg_logical/, because the above idea does not interact with\r\nstats collector any more.\r\n\r\nLet me know if I miss something.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Wed, 29 Sep 2021 06:05:45 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Wednesday, September 29, 2021 2:59 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\r\n> On Tue, Sep 28, 2021 at 8:04 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> >\r\n> > > Another idea could be to have a separate view, say\r\n> > > pg_stat_subscription_xact but I'm not sure it's a better idea.\r\n> > >\r\n> >\r\n> > Yeah, that is another idea but I am afraid that having three different\r\n> > views for subscription stats will be too much. I think it would be\r\n> > better if we can display these additional stats via the existing view\r\n> > pg_stat_subscription or the new view pg_stat_subscription_errors (or\r\n> > whatever name we want to give it).\r\n> \r\n> It seems that we have come full-circle in the discussion of how these new stats\r\n> could be best added.\r\n> Other than adding a new \"pg_stats_subscription_xact\" view (which Amit\r\n> thought would result in too many views) I don't see any clear solution here,\r\n> because the new xact-related cumulative stats don't really consistently\r\n> integrate with the existing in-memory stats in pg_stat_subscription, and IMHO\r\n> the new stats wouldn't integrate well into the existing error-related stats in the\r\n> \"pg_stat_subscription_errors\" view (even if its name was changed, that view in\r\n> any case maintains lowel-level error details and the way error entries are\r\n> removed doesn't allow the \"success\" stats to be maintained for the\r\n> subscription and doesn't fit well with the added\r\n> \"pg_stat_reset_subscription_error\" function either).\r\n> If we can't really add another view like \"pg_stats_subscription_xact\", it seems\r\n> we need to find some way the new stats fit more consistently into the existing\r\n> \"pg_stat_subscription\" view.\r\nYeah, I agree with your conclusion.\r\nI feel we cannot avoid changing some base of pg_stat_subscription\r\nfor the xact stats.\r\n\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Wed, 29 Sep 2021 06:27:15 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Wed, Sep 29, 2021 at 11:35 AM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> Hi,\n>\n>\n> Thank you, Amit-san and Sawada-san for the discussion.\n> On Tuesday, September 28, 2021 7:05 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > Another idea could be to have a separate view, say\n> > > pg_stat_subscription_xact but I'm not sure it's a better idea.\n> > >\n> >\n> > Yeah, that is another idea but I am afraid that having three different\n> > views for subscription stats will be too much. I think it would be\n> > better if we can display these additional stats via the existing view\n> > pg_stat_subscription or the new view pg_stat_subscription_errors (or\n> > whatever name we want to give it).\n> pg_stat_subscription_errors specializes in showing an error record.\n> So, it would be awkward to combine it with other normal xact stats.\n>\n>\n> > > > > Then, if, we proceed in this direction, the place to implement\n> > > > > those stats would be on the LogicalRepWorker struct, instead ?\n> > > > >\n> > > >\n> > > > Or, we can make existing stats persistent and then add these stats\n> > > > on top of it. Sawada-San, do you have any thoughts on this matter?\n> > >\n> > > I think that making existing stats including received_lsn and\n> > > last_msg_receipt_time persistent by using stats collector could cause\n> > > massive reporting messages. 
We can report these messages with a\n> > > certain interval to reduce the amount of messages but we will end up\n> > > seeing old stats on the view.\n> > >\n> >\n> > Can't we keep the current and new stats both in-memory and persist on disk?\n> > So, the persistent stats data will be used to fill the in-memory counters after\n> > restarting of workers, otherwise, we will always refer to in-memory values.\n> I felt this isn't impossible.\n> When we have to update the values of the xact stats is\n> the end of message apply for COMMIT, COMMIT PREPARED, STREAM_ABORT and etc\n> or the time when an error happens during apply. Then, if we want,\n> we can update xact stats values at such moments accordingly.\n> I'm thinking that we will have a hash table whose key is a pair of subid + relid\n> and entry is a proposed stats structure and update the entry,\n> depending on the above timings.\n>\n\nAre you thinking of a separate hash table then what we are going to\ncreate for Sawada-San's patch related to error stats? Isn't it\npossible to have stats in the same hash table and same file?\n\n> Here, one thing a bit unclear to me is\n> whether we should move existing stats of pg_stat_subscription\n> (such as last_lsn and reply_lsn) to the hash entry or not.\n>\n\nI think we should move it to hash entry. I think that is an\nimprovement over what we have now because now after restart those\nstats gets lost.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 29 Sep 2021 16:21:21 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Tue, Sep 28, 2021 at 7:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Sep 28, 2021 at 11:35 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Tue, Sep 28, 2021 at 1:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > >\n> > > > Then, if, we proceed in this direction,\n> > > > the place to implement those stats\n> > > > would be on the LogicalRepWorker struct, instead ?\n> > > >\n> > >\n> > > Or, we can make existing stats persistent and then add these stats on\n> > > top of it. Sawada-San, do you have any thoughts on this matter?\n> >\n> > I think that making existing stats including received_lsn and\n> > last_msg_receipt_time persistent by using stats collector could cause\n> > massive reporting messages. We can report these messages with a\n> > certain interval to reduce the amount of messages but we will end up\n> > seeing old stats on the view.\n> >\n>\n> Can't we keep the current and new stats both in-memory and persist on\n> disk? So, the persistent stats data will be used to fill the in-memory\n> counters after restarting of workers, otherwise, we will always refer\n> to in-memory values.\n\nInteresting. Probably we can have apply workers and table sync workers\nsend their statistics to the stats collector at exit (before the stats\ncollector shutting down)? And the startup process will restore them at\nrestart?\n\n>\n> > Another idea could be to have a separate\n> > view, say pg_stat_subscription_xact but I'm not sure it's a better\n> > idea.\n> >\n>\n> Yeah, that is another idea but I am afraid that having three different\n> views for subscription stats will be too much.\n\nYeah, I have the same feeling.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 29 Sep 2021 20:18:58 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Wed, Sep 29, 2021 at 4:49 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Sep 28, 2021 at 7:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Sep 28, 2021 at 11:35 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Tue, Sep 28, 2021 at 1:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > >\n> > > > > Then, if, we proceed in this direction,\n> > > > > the place to implement those stats\n> > > > > would be on the LogicalRepWorker struct, instead ?\n> > > > >\n> > > >\n> > > > Or, we can make existing stats persistent and then add these stats on\n> > > > top of it. Sawada-San, do you have any thoughts on this matter?\n> > >\n> > > I think that making existing stats including received_lsn and\n> > > last_msg_receipt_time persistent by using stats collector could cause\n> > > massive reporting messages. We can report these messages with a\n> > > certain interval to reduce the amount of messages but we will end up\n> > > seeing old stats on the view.\n> > >\n> >\n> > Can't we keep the current and new stats both in-memory and persist on\n> > disk? So, the persistent stats data will be used to fill the in-memory\n> > counters after restarting of workers, otherwise, we will always refer\n> > to in-memory values.\n>\n> Interesting. Probably we can have apply workers and table sync workers\n> send their statistics to the stats collector at exit (before the stats\n> collector shutting down)? And the startup process will restore them at\n> restart?\n>\n\nI think we do need to send at the exit but we should probably send at\nsome other regular interval as well to avoid losing all the stats\nafter the crash-restart situation.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 29 Sep 2021 17:25:07 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Wednesday, September 29, 2021 7:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Wed, Sep 29, 2021 at 11:35 AM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> > Thank you, Amit-san and Sawada-san for the discussion.\r\n> > On Tuesday, September 28, 2021 7:05 PM Amit Kapila\r\n> <amit.kapila16@gmail.com> wrote:\r\n> > > > Another idea could be to have a separate view, say\r\n> > > > pg_stat_subscription_xact but I'm not sure it's a better idea.\r\n> > > >\r\n> > >\r\n> > > Yeah, that is another idea but I am afraid that having three\r\n> > > different views for subscription stats will be too much. I think it\r\n> > > would be better if we can display these additional stats via the\r\n> > > existing view pg_stat_subscription or the new view\r\n> > > pg_stat_subscription_errors (or whatever name we want to give it).\r\n> > pg_stat_subscription_errors specializes in showing an error record.\r\n> > So, it would be awkward to combine it with other normal xact stats.\r\n> >\r\n> >\r\n> > > > > > Then, if, we proceed in this direction, the place to implement\r\n> > > > > > those stats would be on the LogicalRepWorker struct, instead ?\r\n> > > > > >\r\n> > > > >\r\n> > > > > Or, we can make existing stats persistent and then add these\r\n> > > > > stats on top of it. Sawada-San, do you have any thoughts on this\r\n> matter?\r\n> > > >\r\n> > > > I think that making existing stats including received_lsn and\r\n> > > > last_msg_receipt_time persistent by using stats collector could\r\n> > > > cause massive reporting messages. 
We can report these messages\r\n> > > > with a certain interval to reduce the amount of messages but we\r\n> > > > will end up seeing old stats on the view.\r\n> > > >\r\n> > >\r\n> > > Can't we keep the current and new stats both in-memory and persist on\r\n> disk?\r\n> > > So, the persistent stats data will be used to fill the in-memory\r\n> > > counters after restarting of workers, otherwise, we will always refer to\r\n> in-memory values.\r\n> > I felt this isn't impossible.\r\n> > When we have to update the values of the xact stats is the end of\r\n> > message apply for COMMIT, COMMIT PREPARED, STREAM_ABORT and etc\r\n> or the\r\n> > time when an error happens during apply. Then, if we want, we can\r\n> > update xact stats values at such moments accordingly.\r\n> > I'm thinking that we will have a hash table whose key is a pair of\r\n> > subid + relid and entry is a proposed stats structure and update the\r\n> > entry, depending on the above timings.\r\n> >\r\n> \r\n> Are you thinking of a separate hash table then what we are going to create for\r\n> Sawada-San's patch related to error stats? Isn't it possible to have stats in the\r\n> same hash table and same file?\r\nIIUC, this would be possible.\r\n\r\nAt the beginning, I thought we don't use stats collector at all for the xact stats\r\nwith the existing stats like received_lsn and last_msg_receipt_time,\r\nconsidering the concern of too many reporting messages for them\r\nto keep them always updated. 
But, when we send\r\nmessages to stats collector only strictly limited times,\r\nlike successful exit and some regular interval as you explained in your other email,\r\nthis would disappear and I thought those new xact stats + moved stats can coexist in the hash\r\nproposed by you in [1] and we can write those new stats in the same file.\r\nDoes everyone agree ?\r\n\r\n\r\n> > Here, one thing a bit unclear to me is whether we should move existing\r\n> > stats of pg_stat_subscription (such as last_lsn and reply_lsn) to the\r\n> > hash entry or not.\r\n> >\r\n> \r\n> I think we should move it to hash entry. I think that is an improvement over what\r\n> we have now because now after restart those stats gets lost.\r\nOkay !\r\n\r\n\r\n[1] - https://www.postgresql.org/message-id/CAA4eK1JRCQ-bYnbkwUrvcVcbLURjtiW%2BirFVvXzeG%2Bj%3Dy6jVgA%40mail.gmail.com\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n\r\n",
"msg_date": "Thu, 30 Sep 2021 02:23:54 +0000",
"msg_from": "=?utf-8?B?T3N1bWksIFRha2FtaWNoaS/lpKfloqgg5piC6YGT?=\n\t<osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Wed, Sep 29, 2021 at 8:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Sep 29, 2021 at 4:49 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Tue, Sep 28, 2021 at 7:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Tue, Sep 28, 2021 at 11:35 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Tue, Sep 28, 2021 at 1:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > > >\n> > > > > > Then, if, we proceed in this direction,\n> > > > > > the place to implement those stats\n> > > > > > would be on the LogicalRepWorker struct, instead ?\n> > > > > >\n> > > > >\n> > > > > Or, we can make existing stats persistent and then add these stats on\n> > > > > top of it. Sawada-San, do you have any thoughts on this matter?\n> > > >\n> > > > I think that making existing stats including received_lsn and\n> > > > last_msg_receipt_time persistent by using stats collector could cause\n> > > > massive reporting messages. We can report these messages with a\n> > > > certain interval to reduce the amount of messages but we will end up\n> > > > seeing old stats on the view.\n> > > >\n> > >\n> > > Can't we keep the current and new stats both in-memory and persist on\n> > > disk? So, the persistent stats data will be used to fill the in-memory\n> > > counters after restarting of workers, otherwise, we will always refer\n> > > to in-memory values.\n> >\n> > Interesting. Probably we can have apply workers and table sync workers\n> > send their statistics to the stats collector at exit (before the stats\n> > collector shutting down)? And the startup process will restore them at\n> > restart?\n> >\n>\n> I think we do need to send at the exit but we should probably send at\n> some other regular interval as well to avoid losing all the stats\n> after the crash-restart situation.\n\nBut we clear all statistics collected by the stats collector during\ncrash recovery. 
No?\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 30 Sep 2021 11:47:38 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Tues, Sep 28, 2021 6:05 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Tue, Sep 28, 2021 at 11:35 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> >\r\n> > On Tue, Sep 28, 2021 at 1:54 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > >\r\n> > > >\r\n> > > > Then, if, we proceed in this direction,\r\n> > > > the place to implement those stats\r\n> > > > would be on the LogicalRepWorker struct, instead ?\r\n> > > >\r\n> > >\r\n> > > Or, we can make existing stats persistent and then add these stats on\r\n> > > top of it. Sawada-San, do you have any thoughts on this matter?\r\n> >\r\n> > I think that making existing stats including received_lsn and\r\n> > last_msg_receipt_time persistent by using stats collector could cause\r\n> > massive reporting messages. We can report these messages with a\r\n> > certain interval to reduce the amount of messages but we will end up\r\n> > seeing old stats on the view.\r\n> >\r\n> \r\n> Can't we keep the current and new stats both in-memory and persist on\r\n> disk? So, the persistent stats data will be used to fill the in-memory\r\n> counters after restarting of workers, otherwise, we will always refer\r\n> to in-memory values.\r\n\r\nI think this approach works, but I have another concern about it.\r\n\r\nThe current pg_stat_subscription view is listed as \"Dynamic Statistics Views\" in\r\nthe document, the data in it seems about the worker process, and the view data\r\nshows what the current worker did. But if we keep the new xact stat persist,\r\nthen it's not what the current worker did, it looks more related to the\r\nsubscription historic data.\r\n\r\nAdding a new view seems resonalble, but it will bring another subscription\r\nrelated view which might be too much. 
OTOH, I can see there are already some\r\ndifferent views[1] including xact stat, maybe adding another one is acceptable?\r\n\r\n[1]\r\npg_stat_xact_all_tables\r\npg_stat_xact_sys_tables\r\npg_stat_xact_user_tables\r\npg_stat_xact_user_functions\r\n\r\nBest regards,\r\nHou zj\r\n",
"msg_date": "Thu, 30 Sep 2021 02:52:54 +0000",
"msg_from": "=?utf-8?B?SG91LCBaaGlqaWUv5L6vIOW/l+adsA==?= <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Thu, Sep 30, 2021 at 8:18 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Sep 29, 2021 at 8:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Sep 29, 2021 at 4:49 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Tue, Sep 28, 2021 at 7:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Tue, Sep 28, 2021 at 11:35 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > >\n> > > > > On Tue, Sep 28, 2021 at 1:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > >\n> > > > > > >\n> > > > > > > Then, if, we proceed in this direction,\n> > > > > > > the place to implement those stats\n> > > > > > > would be on the LogicalRepWorker struct, instead ?\n> > > > > > >\n> > > > > >\n> > > > > > Or, we can make existing stats persistent and then add these stats on\n> > > > > > top of it. Sawada-San, do you have any thoughts on this matter?\n> > > > >\n> > > > > I think that making existing stats including received_lsn and\n> > > > > last_msg_receipt_time persistent by using stats collector could cause\n> > > > > massive reporting messages. We can report these messages with a\n> > > > > certain interval to reduce the amount of messages but we will end up\n> > > > > seeing old stats on the view.\n> > > > >\n> > > >\n> > > > Can't we keep the current and new stats both in-memory and persist on\n> > > > disk? So, the persistent stats data will be used to fill the in-memory\n> > > > counters after restarting of workers, otherwise, we will always refer\n> > > > to in-memory values.\n> > >\n> > > Interesting. Probably we can have apply workers and table sync workers\n> > > send their statistics to the stats collector at exit (before the stats\n> > > collector shutting down)? 
And the startup process will restore them at\n> > > restart?\n> > >\n> >\n> > I think we do need to send at the exit but we should probably send at\n> > some other regular interval as well to avoid losing all the stats\n> > after the crash-restart situation.\n>\n> But we clear all statistics collected by the stats collector during\n> crash recovery.\n>\n\nRight. So, maybe doing what you are suggesting is sufficient.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 30 Sep 2021 08:42:26 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Thursday, September 30, 2021 11:53 AM Hou, Zhijie/侯 志杰 <houzj.fnst@fujitsu.com> wrote:\r\n> On Tues, Sep 28, 2021 6:05 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> > On Tue, Sep 28, 2021 at 11:35 AM Masahiko Sawada\r\n> <sawada.mshk@gmail.com> wrote:\r\n> > >\r\n> > > On Tue, Sep 28, 2021 at 1:54 PM Amit Kapila\r\n> > > <amit.kapila16@gmail.com>\r\n> > wrote:\r\n> > > >\r\n> > > > >\r\n> > > > > Then, if, we proceed in this direction, the place to implement\r\n> > > > > those stats would be on the LogicalRepWorker struct, instead ?\r\n> > > > >\r\n> > > >\r\n> > > > Or, we can make existing stats persistent and then add these stats\r\n> > > > on top of it. Sawada-San, do you have any thoughts on this matter?\r\n> > >\r\n> > > I think that making existing stats including received_lsn and\r\n> > > last_msg_receipt_time persistent by using stats collector could\r\n> > > cause massive reporting messages. We can report these messages with\r\n> > > a certain interval to reduce the amount of messages but we will end\r\n> > > up seeing old stats on the view.\r\n> > >\r\n> >\r\n> > Can't we keep the current and new stats both in-memory and persist on\r\n> > disk? So, the persistent stats data will be used to fill the in-memory\r\n> > counters after restarting of workers, otherwise, we will always refer\r\n> > to in-memory values.\r\n> \r\n> I think this approach works, but I have another concern about it.\r\n> \r\n> The current pg_stat_subscription view is listed as \"Dynamic Statistics Views\"\r\n> in the document, the data in it seems about the worker process, and the view\r\n> data shows what the current worker did. But if we keep the new xact stat\r\n> persist, then it's not what the current worker did, it looks more related to the\r\n> subscription historic data.\r\n> \r\n> Adding a new view seems resonalble, but it will bring another subscription\r\n> related view which might be too much. 
OTOH, I can see there are already some\r\n> different views[1] including xact stat, maybe adding another one is\r\n> accepatble ?\r\nI think we'll try to suppress the increment of the numbers\r\nof subscription related stats, if the possibility is not denied.\r\n\r\nIn terms of the document you mentioned,\r\nI feel I'd need some modifications to it in the patch,\r\nbased on the change.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n\r\n",
"msg_date": "Thu, 30 Sep 2021 04:06:31 +0000",
"msg_from": "=?utf-8?B?T3N1bWksIFRha2FtaWNoaS/lpKfloqgg5piC6YGT?=\n\t<osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Thu, Sep 30, 2021 at 8:22 AM Hou, Zhijie/侯 志杰 <houzj.fnst@fujitsu.com> wrote:\n>\n> On Tues, Sep 28, 2021 6:05 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > Can't we keep the current and new stats both in-memory and persist on\n> > disk? So, the persistent stats data will be used to fill the in-memory\n> > counters after restarting of workers, otherwise, we will always refer\n> > to in-memory values.\n>\n> I think this approach works, but I have another concern about it.\n>\n> The current pg_stat_subscription view is listed as \"Dynamic Statistics Views\" in\n> the document, the data in it seems about the worker process, and the view data\n> shows what the current worker did. But if we keep the new xact stat persist,\n> then it's not what the current worker did, it looks more related to the\n> subscription historic data.\n>\n\nI see your point.\n\n> Adding a new view seems resonalble, but it will bring another subscription\n> related view which might be too much. OTOH, I can see there are already some\n> different views[1] including xact stat, maybe adding another one is accepatble ?\n>\n\nThese all views are related to untransmitted to the collector but what\nwe really need is a view similar to pg_stat_archiver or\npg_stat_bgwriter which gives information about background workers.\nNow, the problem as I see is if we go that route then\npg_stat_subscription will no longer remain dynamic view and one might\nconsider that as a compatibility break. The other idea I have shared\nis that we display these stats under the new view introduced by\nSawada-San's patch [1] and probably rename that view as\npg_stat_subscription_worker where all the stats (xact info and last\nfailure information) about each worker will be displayed. 
Do you have\nany opinion on that idea or do you see any problem with it?\n\nSure, we can introduce a new view but I want to avoid it if possible.\n\n[1] - https://www.postgresql.org/message-id/CAD21AoDeScrsHhLyEPYqN3sydg6PxAPVBboK%3D30xJfUVihNZDA%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 30 Sep 2021 09:44:43 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Thursday, September 30, 2021 1:15 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Thu, Sep 30, 2021 at 8:22 AM Hou, Zhijie/侯 志杰\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > On Tues, Sep 28, 2021 6:05 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > Adding a new view seems resonalble, but it will bring another\r\n> > subscription related view which might be too much. OTOH, I can see\r\n> > there are already some different views[1] including xact stat, maybe adding\r\n> another one is accepatble ?\r\n> These all views are related to untransmitted to the collector but what we really\r\n> need is a view similar to pg_stat_archiver or pg_stat_bgwriter which gives\r\n> information about background workers.\r\n> Now, the problem as I see is if we go that route then pg_stat_subscription will\r\n> no longer remain dynamic view and one might consider that as a compatibility\r\n> break. The other idea I have shared is that we display these stats under the new\r\n> view introduced by Sawada-San's patch [1] and probably rename that view as\r\n> pg_stat_subscription_worker where all the stats (xact info and last failure\r\n> information) about each worker will be displayed.\r\nSorry, all the stats in pg_stat_subscription_worker view ?\r\n\r\nThere was a discussion that\r\nthe xact info should be displayed from pg_stat_subscription\r\nwith existing stats in the same (which will be changed to persist),\r\nbut when your above proposal comes true,\r\nthe list of pg_stat_subscription_worker's columns\r\nwill be something like below (when I list up major columns).\r\n\r\n- subid, subrelid and some other relation attributes required\r\n- 5 stats values moved from pg_stat_subscription\r\n received_lsn, last_msg_send_time, last_msg_receipt_time,\r\n latest_end_lsn, latest_end_time\r\n- xact stats\r\n xact_commit, xact_commit_bytes,\r\n xact_error, xact_error_bytes,\r\n xact_abort, xact_abort_bytes,\r\n- error stats\r\n datname, command, xid, 
failure_source,\r\n failure_count, last_failure, last_failure_message, etc\r\n\r\nIf this is what you imagined,\r\nwhat we can do with the values left in pg_stat_subscription\r\nwould be only finding the subscription worker's process id.\r\nIs it OK and is this view's alignment acceptable?\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Thu, 30 Sep 2021 07:32:14 +0000",
"msg_from": "=?utf-8?B?T3N1bWksIFRha2FtaWNoaS/lpKfloqgg5piC6YGT?=\n\t<osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Thu, Sep 30, 2021 at 1:02 PM Osumi, Takamichi/大墨 昂道\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Thursday, September 30, 2021 1:15 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Thu, Sep 30, 2021 at 8:22 AM Hou, Zhijie/侯 志杰\n> > <houzj.fnst@fujitsu.com> wrote:\n> > >\n> > > On Tues, Sep 28, 2021 6:05 PM Amit Kapila <amit.kapila16@gmail.com>\n> > wrote:\n> > > Adding a new view seems resonalble, but it will bring another\n> > > subscription related view which might be too much. OTOH, I can see\n> > > there are already some different views[1] including xact stat, maybe adding\n> > another one is accepatble ?\n> > These all views are related to untransmitted to the collector but what we really\n> > need is a view similar to pg_stat_archiver or pg_stat_bgwriter which gives\n> > information about background workers.\n> > Now, the problem as I see is if we go that route then pg_stat_subscription will\n> > no longer remain dynamic view and one might consider that as a compatibility\n> > break. 
The other idea I have shared is that we display these stats under the new\n> > view introduced by Sawada-San's patch [1] and probably rename that view as\n> > pg_stat_subscription_worker where all the stats (xact info and last failure\n> > information) about each worker will be displayed.\n> Sorry, all the stats in pg_stat_subscription_worker view ?\n>\n> There was a discussion that\n> the xact info should be displayed from pg_stat_subscription\n> with existing stats in the same (which will be changed to persist),\n> but when your above proposal comes true,\n> the list of pg_stat_subscription_worker's columns\n> will be something like below (when I list up major columns).\n>\n> - subid, subrelid and some other relation attributes required\n> - 5 stats values moved from pg_stat_subscription\n> received_lsn, last_msg_send_time, last_msg_receipt_time,\n> latest_end_lsn, latest_end_time\n>\n\nIf we go with the view as pg_stat_subscription_worker, we don't need\nto move the existing stats of pg_stat_subscription into it. There is a\nclear difference between them, all the stats displayed via\npg_stat_subscription are dynamic and won't persist whereas all the\nstats corresponding to pg_stat_subscription_worker view will persist.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 30 Sep 2021 16:43:13 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Thursday, September 30, 2021 8:13 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Thu, Sep 30, 2021 at 1:02 PM Osumi, Takamichi/大墨 昂道\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> >\r\n> > On Thursday, September 30, 2021 1:15 PM Amit Kapila\r\n> <amit.kapila16@gmail.com> wrote:\r\n> > > On Thu, Sep 30, 2021 at 8:22 AM Hou, Zhijie/侯 志杰\r\n> > > <houzj.fnst@fujitsu.com> wrote:\r\n> > > >\r\n> > > > On Tues, Sep 28, 2021 6:05 PM Amit Kapila\r\n> > > > <amit.kapila16@gmail.com>\r\n> > > wrote:\r\n> > > > Adding a new view seems resonalble, but it will bring another\r\n> > > > subscription related view which might be too much. OTOH, I can see\r\n> > > > there are already some different views[1] including xact stat,\r\n> > > > maybe adding\r\n> > > another one is accepatble ?\r\n> > > These all views are related to untransmitted to the collector but\r\n> > > what we really need is a view similar to pg_stat_archiver or\r\n> > > pg_stat_bgwriter which gives information about background workers.\r\n> > > Now, the problem as I see is if we go that route then\r\n> > > pg_stat_subscription will no longer remain dynamic view and one\r\n> > > might consider that as a compatibility break. 
The other idea I have\r\n> > > shared is that we display these stats under the new view introduced\r\n> > > by Sawada-San's patch [1] and probably rename that view as\r\n> > > pg_stat_subscription_worker where all the stats (xact info and last\r\n> > > failure\r\n> > > information) about each worker will be displayed.\r\n> > Sorry, all the stats in pg_stat_subscription_worker view ?\r\n> >\r\n> > There was a discussion that\r\n> > the xact info should be displayed from pg_stat_subscription with\r\n> > existing stats in the same (which will be changed to persist), but\r\n> > when your above proposal comes true, the list of\r\n> > pg_stat_subscription_worker's columns will be something like below\r\n> > (when I list up major columns).\r\n> >\r\n> > - subid, subrelid and some other relation attributes required\r\n> > - 5 stats values moved from pg_stat_subscription\r\n> > received_lsn, last_msg_send_time, last_msg_receipt_time,\r\n> > latest_end_lsn, latest_end_time\r\n> >\r\n> \r\n> If we go with the view as pg_stat_subscription_worker, we don't need to move\r\n> the existing stats of pg_stat_subscription into it.\r\nThank you for clarifying this point. I understand.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Fri, 1 Oct 2021 14:52:24 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Thursday, September 30, 2021 12:15 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> On Thu, Sep 30, 2021 at 8:22 AM Hou, Zhijie/侯 志杰 <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > On Tues, Sep 28, 2021 6:05 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> > >\r\n> > > Can't we keep the current and new stats both in-memory and persist on\r\n> > > disk? So, the persistent stats data will be used to fill the in-memory\r\n> > > counters after restarting of workers, otherwise, we will always refer\r\n> > > to in-memory values.\r\n> >\r\n> > I think this approach works, but I have another concern about it.\r\n> >\r\n> > The current pg_stat_subscription view is listed as \"Dynamic Statistics Views\"\r\n> in\r\n> > the document, the data in it seems about the worker process, and the view\r\n> data\r\n> > shows what the current worker did. But if we keep the new xact stat persist,\r\n> > then it's not what the current worker did, it looks more related to the\r\n> > subscription historic data.\r\n> >\r\n> \r\n> I see your point.\r\n> \r\n> > Adding a new view seems resonalble, but it will bring another subscription\r\n> > related view which might be too much. OTOH, I can see there are already\r\n> some\r\n> > different views[1] including xact stat, maybe adding another one is\r\n> accepatble ?\r\n> >\r\n> \r\n> These all views are related to untransmitted to the collector but what\r\n> we really need is a view similar to pg_stat_archiver or\r\n> pg_stat_bgwriter which gives information about background workers.\r\n> Now, the problem as I see is if we go that route then\r\n> pg_stat_subscription will no longer remain dynamic view and one might\r\n> consider that as a compatibility break. 
The other idea I have shared\r\n> is that we display these stats under the new view introduced by\r\n> Sawada-San's patch [1] and probably rename that view as\r\n> pg_stat_subscription_worker where all the stats (xact info and last\r\n> failure information) about each worker will be displayed. Do you have\r\n> any opinion on that idea or do you see any problem with it?\r\n\r\nPersonally, I think it seems reasonable to merge the xact stat into the view from\r\nsawada-san's patch.\r\n\r\nOne problem I noticed is that pg_stat_subscription_error\r\ncurrently have a 'count' column which show how many times the last error\r\nhappened. The xact stat here also have a similar value 'xact_error'. I think we\r\nmight need to rename it or merge them into one in some way.\r\n\r\nBesides, if we decide to merge xact stat into pg_stat_subscription_error, some column\r\nseems need to be renamed. Maybe like:\r\nerror_message => Last_error_message, command=> last_error_command..\r\n\r\nBest regards,\r\nHou zj\r\n",
"msg_date": "Thu, 14 Oct 2021 03:53:49 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Thursday, October 14, 2021 12:54 PM Hou, Zhijie/侯 志杰 <houzj.fnst@fujitsu.com> wrote:\r\n> On Thursday, September 30, 2021 12:15 PM Amit Kapila\r\n> <amit.kapila16@gmail.com>\r\n> > On Thu, Sep 30, 2021 at 8:22 AM Hou, Zhijie/侯 志杰\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> > >\r\n> > > On Tues, Sep 28, 2021 6:05 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > > >\r\n> > > > Can't we keep the current and new stats both in-memory and persist\r\n> > > > on disk? So, the persistent stats data will be used to fill the\r\n> > > > in-memory counters after restarting of workers, otherwise, we will\r\n> > > > always refer to in-memory values.\r\n> > >\r\n> > > I think this approach works, but I have another concern about it.\r\n> > >\r\n> > > The current pg_stat_subscription view is listed as \"Dynamic Statistics\r\n> Views\"\r\n> > in\r\n> > > the document, the data in it seems about the worker process, and the\r\n> > > view\r\n> > data\r\n> > > shows what the current worker did. But if we keep the new xact stat\r\n> > > persist, then it's not what the current worker did, it looks more\r\n> > > related to the subscription historic data.\r\n> > >\r\n> >\r\n> > I see your point.\r\n> >\r\n> > > Adding a new view seems resonalble, but it will bring another\r\n> > > subscription related view which might be too much. OTOH, I can see\r\n> > > there are already\r\n> > some\r\n> > > different views[1] including xact stat, maybe adding another one is\r\n> > accepatble ?\r\n> > >\r\n> >\r\n> > These all views are related to untransmitted to the collector but what\r\n> > we really need is a view similar to pg_stat_archiver or\r\n> > pg_stat_bgwriter which gives information about background workers.\r\n> > Now, the problem as I see is if we go that route then\r\n> > pg_stat_subscription will no longer remain dynamic view and one might\r\n> > consider that as a compatibility break. 
The other idea I have shared\r\n> > is that we display these stats under the new view introduced by\r\n> > Sawada-San's patch [1] and probably rename that view as\r\n> > pg_stat_subscription_worker where all the stats (xact info and last\r\n> > failure information) about each worker will be displayed. Do you have\r\n> > any opinion on that idea or do you see any problem with it?\r\n> \r\n> Personally, I think it seems reasonable to merge the xact stat into the view from\r\n> sawada-san's patch.\r\n> \r\n> One problem I noticed is that pg_stat_subscription_error currently have a\r\n> 'count' column which show how many times the last error happened. The xact\r\n> stat here also have a similar value 'xact_error'. I think we might need to rename\r\n> it or merge them into one in some way.\r\n> \r\n> Besides, if we decide to merge xact stat into pg_stat_subscription_error, some\r\n> column seems need to be renamed. Maybe like:\r\n> error_message => Last_error_message, command=> last_error_command..\r\nYeah, we must make them distinguished clearly.\r\n\r\nI guessed that you are concerned about\r\namount of renaming codes that could be a bit large\r\nor you come up with a necessity to consider\r\nthe all column names of the pg_stat_subscription_worker\r\ntogether all at once in advance.\r\n\r\nIt's because my instant impression is,\r\nwhen we go with the current xact stats column definitions\r\n(xact_commit, xact_commit_bytes, xact_error, xact_error_bytes,\r\nxact_abort, xact_abort_bytes), the renaming problem can be solved\r\nif I write one additional patch or extend the main patch of\r\nxact stats to handle renaming.\r\n(This can work to keep both threads independent\r\nfrom each other).\r\n\r\nDid you have some concern that cannot be handled by this way ?\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Thu, 14 Oct 2021 06:13:18 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Thursday, October 14, 2021 2:13 PM Osumi, Takamichi wrote:\r\n> On Thursday, October 14, 2021 12:54 PM Hou, Zhijie<houzj.fnst@fujitsu.com> wrote:\r\n> > On Thursday, September 30, 2021 12:15 PM Amit Kapila\r\n> > > On Thu, Sep 30, 2021 at 8:22 AM Hou, Zhijie/侯 志杰wrote:\r\n> > > >\r\n> > > > On Tues, Sep 28, 2021 6:05 PM Amit Kapila\r\n> > > > <amit.kapila16@gmail.com>\r\n> > wrote:\r\n> > > > >\r\n> > > > > Can't we keep the current and new stats both in-memory and\r\n> > > > > persist on disk? So, the persistent stats data will be used to\r\n> > > > > fill the in-memory counters after restarting of workers,\r\n> > > > > otherwise, we will always refer to in-memory values.\r\n> > > >\r\n> > > > I think this approach works, but I have another concern about it.\r\n> > > >\r\n> > > > The current pg_stat_subscription view is listed as \"Dynamic\r\n> > > > Statistics Views\" in\r\n> > > > the document, the data in it seems about the worker process, and\r\n> > > > the view data\r\n> > > > shows what the current worker did. But if we keep the new xact\r\n> > > > stat persist, then it's not what the current worker did, it looks\r\n> > > > more related to the subscription historic data.\r\n> > > >\r\n> > >\r\n> > > I see your point.\r\n> > >\r\n> > > > Adding a new view seems resonalble, but it will bring another\r\n> > > > subscription related view which might be too much. OTOH, I can see\r\n> > > > there are already some\r\n> > > > different views[1] including xact stat, maybe adding another one\r\n> > > > is accepatble ?\r\n> > > >\r\n> > >\r\n> > > These all views are related to untransmitted to the collector but\r\n> > > what we really need is a view similar to pg_stat_archiver or\r\n> > > pg_stat_bgwriter which gives information about background workers.\r\n> > > Now, the problem as I see is if we go that route then\r\n> > > pg_stat_subscription will no longer remain dynamic view and one\r\n> > > might consider that as a compatibility break. 
The other idea I have\r\n> > > shared is that we display these stats under the new view introduced\r\n> > > by Sawada-San's patch [1] and probably rename that view as\r\n> > > pg_stat_subscription_worker where all the stats (xact info and last\r\n> > > failure information) about each worker will be displayed. Do you\r\n> > > have any opinion on that idea or do you see any problem with it?\r\n> >\r\n> > Personally, I think it seems reasonable to merge the xact stat into\r\n> > the view from sawada-san's patch.\r\n> >\r\n> > One problem I noticed is that pg_stat_subscription_error currently\r\n> > have a 'count' column which show how many times the last error\r\n> > happened. The xact stat here also have a similar value 'xact_error'. I\r\n> > think we might need to rename it or merge them into one in some way.\r\n> >\r\n> > Besides, if we decide to merge xact stat into\r\n> > pg_stat_subscription_error, some column seems need to be renamed.\r\n> Maybe like:\r\n> > error_message => Last_error_message, command=> last_error_command..\r\n> Yeah, we must make them distinguished clearly.\r\n> \r\n> I guessed that you are concerned about\r\n> amount of renaming codes that could be a bit large or you come up with a\r\n> necessity to consider the all column names of the pg_stat_subscription_worker\r\n> together all at once in advance.\r\n> \r\n> It's because my instant impression is,\r\n> when we go with the current xact stats column definitions (xact_commit,\r\n> xact_commit_bytes, xact_error, xact_error_bytes, xact_abort,\r\n> xact_abort_bytes), the renaming problem can be solved if I write one\r\n> additional patch or extend the main patch of xact stats to handle renaming.\r\n> (This can work to keep both threads independent from each other).\r\n> \r\n> Did you have some concern that cannot be handled by this way ?\r\nHi,\r\n\r\nCurrently, I don't find some unsolvable issues in this approach.\r\n\r\nBest regards,\r\nHou zj\r\n\r\n",
"msg_date": "Mon, 18 Oct 2021 02:51:43 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Thu, Oct 14, 2021 at 9:23 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Thursday, September 30, 2021 12:15 PM Amit Kapila <amit.kapila16@gmail.com>\n> >\n> > These all views are related to untransmitted to the collector but what\n> > we really need is a view similar to pg_stat_archiver or\n> > pg_stat_bgwriter which gives information about background workers.\n> > Now, the problem as I see is if we go that route then\n> > pg_stat_subscription will no longer remain dynamic view and one might\n> > consider that as a compatibility break. The other idea I have shared\n> > is that we display these stats under the new view introduced by\n> > Sawada-San's patch [1] and probably rename that view as\n> > pg_stat_subscription_worker where all the stats (xact info and last\n> > failure information) about each worker will be displayed. Do you have\n> > any opinion on that idea or do you see any problem with it?\n>\n> Personally, I think it seems reasonable to merge the xact stat into the view from\n> sawada-san's patch.\n>\n> One problem I noticed is that pg_stat_subscription_error\n> currently have a 'count' column which show how many times the last error\n> happened. The xact stat here also have a similar value 'xact_error'. I think we\n> might need to rename it or merge them into one in some way.\n>\n> Besides, if we decide to merge xact stat into pg_stat_subscription_error, some column\n> seems need to be renamed. Maybe like:\n> error_message => Last_error_message, command=> last_error_command..\n>\n\nDon't you think that keeping the view name as\npg_stat_subscription_error would be a bit confusing if it has to\ndisplay xact_info? Isn't it better to change it to\npg_stat_subscription_worker or some other worker-specific generic\nname?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 18 Oct 2021 15:33:42 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Monday, October 18, 2021 6:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Thu, Oct 14, 2021 at 9:23 AM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > On Thursday, September 30, 2021 12:15 PM Amit Kapila\r\n> > <amit.kapila16@gmail.com>\r\n> > >\r\n> > > These all views are related to untransmitted to the collector but\r\n> > > what we really need is a view similar to pg_stat_archiver or\r\n> > > pg_stat_bgwriter which gives information about background workers.\r\n> > > Now, the problem as I see is if we go that route then\r\n> > > pg_stat_subscription will no longer remain dynamic view and one\r\n> > > might consider that as a compatibility break. The other idea I have\r\n> > > shared is that we display these stats under the new view introduced\r\n> > > by Sawada-San's patch [1] and probably rename that view as\r\n> > > pg_stat_subscription_worker where all the stats (xact info and last\r\n> > > failure information) about each worker will be displayed. Do you\r\n> > > have any opinion on that idea or do you see any problem with it?\r\n> >\r\n> > Personally, I think it seems reasonable to merge the xact stat into\r\n> > the view from sawada-san's patch.\r\n> >\r\n> > One problem I noticed is that pg_stat_subscription_error currently\r\n> > have a 'count' column which show how many times the last error\r\n> > happened. The xact stat here also have a similar value 'xact_error'. I\r\n> > think we might need to rename it or merge them into one in some way.\r\n> >\r\n> > Besides, if we decide to merge xact stat into\r\n> > pg_stat_subscription_error, some column seems need to be renamed.\r\n> Maybe like:\r\n> > error_message => Last_error_message, command=> last_error_command..\r\n> >\r\n> \r\n> Don't you think that keeping the view name as pg_stat_subscription_error\r\n> would be a bit confusing if it has to display xact_info? 
Isn't it better to change it\r\n> to pg_stat_subscription_worker or some other worker-specific generic name?\r\n\r\nYes, I agreed that rename the view to pg_stat_subscription_worker or some\r\nother worker-specific generic name is better if we decide to move forward\r\nwith this approach.\r\n\r\nBest regards,\r\nHou zj\r\n",
"msg_date": "Tue, 19 Oct 2021 01:41:37 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Monday, October 18, 2021 11:52 AM Hou, Zhijie/侯 志杰 <houzj.fnst@fujitsu.com> wrote:\r\n> On Thursday, October 14, 2021 2:13 PM Osumi, Takamichi wrote:\r\n> > On Thursday, October 14, 2021 12:54 PM Hou,\r\n> Zhijie<houzj.fnst@fujitsu.com> wrote:\r\n> > > On Thursday, September 30, 2021 12:15 PM Amit Kapila\r\n> > > > On Thu, Sep 30, 2021 at 8:22 AM Hou, Zhijie/侯 志杰wrote:\r\n> > > > >\r\n> > > > > On Tues, Sep 28, 2021 6:05 PM Amit Kapila\r\n> > > > > <amit.kapila16@gmail.com>\r\n> > > wrote:\r\n> > > > > >\r\n> > > > > > Can't we keep the current and new stats both in-memory and\r\n> > > > > > persist on disk? So, the persistent stats data will be used to\r\n> > > > > > fill the in-memory counters after restarting of workers,\r\n> > > > > > otherwise, we will always refer to in-memory values.\r\n> > > > >\r\n> > > > > I think this approach works, but I have another concern about it.\r\n> > > > >\r\n> > > > > The current pg_stat_subscription view is listed as \"Dynamic\r\n> > > > > Statistics Views\" in the document, the data in it seems about\r\n> > > > > the worker process, and the view data shows what the current\r\n> > > > > worker did. But if we keep the new xact stat persist, then it's\r\n> > > > > not what the current worker did, it looks more related to the\r\n> > > > > subscription historic data.\r\n> > > > >\r\n> > > >\r\n> > > > I see your point.\r\n> > > >\r\n> > > > > Adding a new view seems resonalble, but it will bring another\r\n> > > > > subscription related view which might be too much. 
OTOH, I can\r\n> > > > > see there are already some different views[1] including xact\r\n> > > > > stat, maybe adding another one is accepatble ?\r\n> > > > >\r\n> > > >\r\n> > > > These all views are related to untransmitted to the collector but\r\n> > > > what we really need is a view similar to pg_stat_archiver or\r\n> > > > pg_stat_bgwriter which gives information about background workers.\r\n> > > > Now, the problem as I see is if we go that route then\r\n> > > > pg_stat_subscription will no longer remain dynamic view and one\r\n> > > > might consider that as a compatibility break. The other idea I\r\n> > > > have shared is that we display these stats under the new view\r\n> > > > introduced by Sawada-San's patch [1] and probably rename that view\r\n> > > > as pg_stat_subscription_worker where all the stats (xact info and\r\n> > > > last failure information) about each worker will be displayed. Do\r\n> > > > you have any opinion on that idea or do you see any problem with it?\r\n> > >\r\n> > > Personally, I think it seems reasonable to merge the xact stat into\r\n> > > the view from sawada-san's patch.\r\n> > >\r\n> > > One problem I noticed is that pg_stat_subscription_error currently\r\n> > > have a 'count' column which show how many times the last error\r\n> > > happened. 
The xact stat here also have a similar value 'xact_error'.\r\n> > > I think we might need to rename it or merge them into one in some way.\r\n> > >\r\n> > > Besides, if we decide to merge xact stat into\r\n> > > pg_stat_subscription_error, some column seems need to be renamed.\r\n> > Maybe like:\r\n> > > error_message => Last_error_message, command=>\r\n> last_error_command..\r\n> > Yeah, we must make them distinguished clearly.\r\n> >\r\n> > I guessed that you are concerned about amount of renaming codes that\r\n> > could be a bit large or you come up with a necessity to consider the\r\n> > all column names of the pg_stat_subscription_worker together all at\r\n> > once in advance.\r\n> >\r\n> > It's because my instant impression is, when we go with the current\r\n> > xact stats column definitions (xact_commit, xact_commit_bytes,\r\n> > xact_error, xact_error_bytes, xact_abort, xact_abort_bytes), the\r\n> > renaming problem can be solved if I write one additional patch or\r\n> > extend the main patch of xact stats to handle renaming.\r\n> > (This can work to keep both threads independent from each other).\r\n> >\r\n> > Did you have some concern that cannot be handled by this way ?\r\n> Hi,\r\n> \r\n> Currently, I don't find some unsolvable issues in this approach.\r\nOkay. Glad to hear that.\r\nThen, I can restart my implementation with this direction.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n ",
"msg_date": "Tue, 19 Oct 2021 02:51:10 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Mon, Oct 18, 2021 at 7:03 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Oct 14, 2021 at 9:23 AM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n> > On Thursday, September 30, 2021 12:15 PM Amit Kapila <amit.kapila16@gmail.com>\n> > >\n> > > These all views are related to untransmitted to the collector but what\n> > > we really need is a view similar to pg_stat_archiver or\n> > > pg_stat_bgwriter which gives information about background workers.\n> > > Now, the problem as I see is if we go that route then\n> > > pg_stat_subscription will no longer remain dynamic view and one might\n> > > consider that as a compatibility break. The other idea I have shared\n> > > is that we display these stats under the new view introduced by\n> > > Sawada-San's patch [1] and probably rename that view as\n> > > pg_stat_subscription_worker where all the stats (xact info and last\n> > > failure information) about each worker will be displayed. Do you have\n> > > any opinion on that idea or do you see any problem with it?\n> >\n> > Personally, I think it seems reasonable to merge the xact stat into the view from\n> > sawada-san's patch.\n> >\n> > One problem I noticed is that pg_stat_subscription_error\n> > currently have a 'count' column which show how many times the last error\n> > happened. The xact stat here also have a similar value 'xact_error'. I think we\n> > might need to rename it or merge them into one in some way.\n> >\n> > Besides, if we decide to merge xact stat into pg_stat_subscription_error, some column\n> > seems need to be renamed. Maybe like:\n> > error_message => Last_error_message, command=> last_error_command..\n> >\n>\n> Don't you think that keeping the view name as\n> pg_stat_subscription_error would be a bit confusing if it has to\n> display xact_info? 
Isn't it better to change it to\n> pg_stat_subscription_worker or some other worker-specific generic\n> name?\n\nI agree that it'd be better to include xact info to\npg_stat_subscription_errors view rather than making\npg_stat_subscription a cumulative view. It would be more simple and\nconsistent.\n\nThe user might want to reset only either error stats or xact stats but\nwe can do that by passing a value to the reset function. For example,\npg_stat_reset_subscription_worker(10, 20, ‘error') for resetting only\nerror stats whereas pg_stat_reset_subscription_worker(10, 20, ‘xact’)\nfor resetting only xact stats etc (passing NULL or omitting the third\nargument means resetting all stats). I'll change the view name in the\nnext version patch.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 21 Oct 2021 10:18:35 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Thu, Oct 21, 2021 at 6:49 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Oct 18, 2021 at 7:03 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Oct 14, 2021 at 9:23 AM houzj.fnst@fujitsu.com\n> > <houzj.fnst@fujitsu.com> wrote:\n> > >\n> > > On Thursday, September 30, 2021 12:15 PM Amit Kapila <amit.kapila16@gmail.com>\n> > > >\n> > > > These all views are related to untransmitted to the collector but what\n> > > > we really need is a view similar to pg_stat_archiver or\n> > > > pg_stat_bgwriter which gives information about background workers.\n> > > > Now, the problem as I see is if we go that route then\n> > > > pg_stat_subscription will no longer remain dynamic view and one might\n> > > > consider that as a compatibility break. The other idea I have shared\n> > > > is that we display these stats under the new view introduced by\n> > > > Sawada-San's patch [1] and probably rename that view as\n> > > > pg_stat_subscription_worker where all the stats (xact info and last\n> > > > failure information) about each worker will be displayed. Do you have\n> > > > any opinion on that idea or do you see any problem with it?\n> > >\n> > > Personally, I think it seems reasonable to merge the xact stat into the view from\n> > > sawada-san's patch.\n> > >\n> > > One problem I noticed is that pg_stat_subscription_error\n> > > currently have a 'count' column which show how many times the last error\n> > > happened. The xact stat here also have a similar value 'xact_error'. I think we\n> > > might need to rename it or merge them into one in some way.\n> > >\n> > > Besides, if we decide to merge xact stat into pg_stat_subscription_error, some column\n> > > seems need to be renamed. Maybe like:\n> > > error_message => Last_error_message, command=> last_error_command..\n> > >\n> >\n> > Don't you think that keeping the view name as\n> > pg_stat_subscription_error would be a bit confusing if it has to\n> > display xact_info? 
Isn't it better to change it to\n> > pg_stat_subscription_worker or some other worker-specific generic\n> > name?\n>\n> I agree that it'd be better to include xact info to\n> pg_stat_subscription_errors view rather than making\n> pg_stat_subscription a cumulative view. It would be more simple and\n> consistent.\n>\n> The user might want to reset only either error stats or xact stats but\n> we can do that by passing a value to the reset function. For example,\n> pg_stat_reset_subscription_worker(10, 20, ‘error') for resetting only\n> error stats whereas pg_stat_reset_subscription_worker(10, 20, ‘xact’)\n> for resetting only xact stats etc (passing NULL or omitting the third\n> argument means resetting all stats).\n>\n\nSounds reasonable.\n\n> I'll change the view name in the\n> next version patch.\n>\n\nThanks, that will be helpful.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 21 Oct 2021 14:04:45 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Thursday, October 21, 2021 10:19 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> On Mon, Oct 18, 2021 at 7:03 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> >\r\n> > On Thu, Oct 14, 2021 at 9:23 AM houzj.fnst@fujitsu.com\r\n> > <houzj.fnst@fujitsu.com> wrote:\r\n> > >\r\n> > > On Thursday, September 30, 2021 12:15 PM Amit Kapila\r\n> > > <amit.kapila16@gmail.com>\r\n> > > >\r\n> > > > These all views are related to untransmitted to the collector but\r\n> > > > what we really need is a view similar to pg_stat_archiver or\r\n> > > > pg_stat_bgwriter which gives information about background workers.\r\n> > > > Now, the problem as I see is if we go that route then\r\n> > > > pg_stat_subscription will no longer remain dynamic view and one\r\n> > > > might consider that as a compatibility break. The other idea I\r\n> > > > have shared is that we display these stats under the new view\r\n> > > > introduced by Sawada-San's patch [1] and probably rename that view\r\n> > > > as pg_stat_subscription_worker where all the stats (xact info and\r\n> > > > last failure information) about each worker will be displayed. Do\r\n> > > > you have any opinion on that idea or do you see any problem with it?\r\n> > >\r\n> > > Personally, I think it seems reasonable to merge the xact stat into\r\n> > > the view from sawada-san's patch.\r\n> > >\r\n> > > One problem I noticed is that pg_stat_subscription_error currently\r\n> > > have a 'count' column which show how many times the last error\r\n> > > happened. 
The xact stat here also have a similar value 'xact_error'.\r\n> > > I think we might need to rename it or merge them into one in some way.\r\n> > >\r\n> > > Besides, if we decide to merge xact stat into\r\n> > > pg_stat_subscription_error, some column seems need to be renamed.\r\n> Maybe like:\r\n> > > error_message => Last_error_message, command=>\r\n> last_error_command..\r\n> > >\r\n> >\r\n> > Don't you think that keeping the view name as\r\n> > pg_stat_subscription_error would be a bit confusing if it has to\r\n> > display xact_info? Isn't it better to change it to\r\n> > pg_stat_subscription_worker or some other worker-specific generic\r\n> > name?\r\n> \r\n> I agree that it'd be better to include xact info to pg_stat_subscription_errors\r\n> view rather than making pg_stat_subscription a cumulative view. It would be\r\n> more simple and consistent.\r\n... \r\n>I'll change the view name in the next version patch.\r\nThanks a lot, Sawasa-san.\r\n\r\nI've created a new patch that extends pg_stat_subscription_workers\r\nto include other transaction statistics.\r\n\r\nNote that this patch depends on v18 patch-set in [1]\r\nand needs to be after the perl modules' namespace changes\r\nconducted recently by commit b3b4d8e68ae83f432f43f035c7eb481ef93e1583.\r\n\r\nThere are other several major changes compared to the previous version.\r\n\r\n(1)\r\nAddressing several streaming transactions running in parallel on the pub that\r\ncan send unexpected order of partial data demarcated stream start and stream stop.\r\nEven if one of them aborts, now bytes calculation should be correct.\r\n\r\n(2)\r\nUpdates of stats on the view at either commit prepared or rollback prepared time.\r\nThis means we don't lost prepared transaction size even after server restart\r\nand user can see the results of two phase operation at those timings.\r\n\r\n(3)\r\nAddition of TAP tests.\r\n\r\n(4)\r\nRenames of other existing columns so that those meanings are more apparent\r\nin align with 
other new stats.\r\n\r\nSome important details are written in the comments of the attached patch.\r\n\r\n[1] - https://www.postgresql.org/message-id/CAD21AoCTtQgfy57AxB4q8KUOpRH8rkHN%3Ds_9p9Pvno_XoBK5wg%40mail.gmail.com\r\n\r\nBest Regards,\r\n\tTakamichi Osumi",
"msg_date": "Thu, 28 Oct 2021 14:18:47 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Thursday, October 28, 2021 11:19 PM I wrote:\r\n> I've created a new patch that extends pg_stat_subscription_workers to include\r\n> other transaction statistics.\r\n> \r\n> Note that this patch depends on v18 patch-set in [1]...\r\nRebased based on the v19 in [1].\r\nAlso, fixed documentation a little and made tests tidy.\r\nFYI, the newly included TAP test(027_worker_xact_stats.pl) is stable\r\nbecause I checked that 100 times of its execution in a tight loop all passed.\r\n\r\n\r\n[1] - https://www.postgresql.org/message-id/CAD21AoDY-9_x819F_m1_wfCVXXFJrGiSmR2MfC9Nw4nW8Om0qA%40mail.gmail.com\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi",
"msg_date": "Mon, 1 Nov 2021 13:18:18 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Monday, November 1, 2021 10:18 PM I wrote:\r\n> On Thursday, October 28, 2021 11:19 PM I wrote:\r\n> > I've created a new patch that extends pg_stat_subscription_workers to\r\n> > include other transaction statistics.\r\n> >\r\n> > Note that this patch depends on v18 patch-set in [1]...\r\n> Rebased based on the v19 in [1].\r\nI'd like to add one more explanation about the latest patches for review.\r\n\r\n\r\nOn Thursday, October 28, 2021 11:19 PM I wrote:\r\n> (2)\r\n> Updates of stats on the view at either commit prepared or rollback prepared\r\n> time.\r\n> This means we don't lost prepared transaction size even after server restart\r\n> and user can see the results of two phase operation at those timings.\r\nThere is another reason I had to treat 'prepare' message like above.\r\nAny reviewer might think that sending prepared bytes to stats collector\r\nand making the bytes survive over the restart is too much.\r\n\r\nBut, we don't know if the prepared transaction is processed\r\nby commit prepared or rollback prepared at 'prepare' time.\r\nAn execution of commit prepared needs to update the column of xact_commit and\r\nxact_commit_bytes while rollback prepared does the column of xact_abort\r\nand xact_abort_bytes where this patch is introducing.\r\n\r\nTherefore, even when we can calculate the bytes of prepared txn at prepare time,\r\nit's *not* possible to add the transaction bytes to either stats column of\r\nbytes sizes and clean up the bytes. Then, there was a premise\r\nthat we should not lose the prepared txn bytes by the shutdown\r\nso my patch has become the implementation described above.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Tue, 2 Nov 2021 12:07:01 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Tue, Nov 2, 2021 at 12:18 AM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Thursday, October 28, 2021 11:19 PM I wrote:\n> > I've created a new patch that extends pg_stat_subscription_workers to include\n> > other transaction statistics.\n> >\n> > Note that this patch depends on v18 patch-set in [1]...\n> Rebased based on the v19 in [1].\n> Also, fixed documentation a little and made tests tidy.\n> FYI, the newly included TAP test(027_worker_xact_stats.pl) is stable\n> because I checked that 100 times of its execution in a tight loop all passed.\n>\n\nI have done some basic testing of the patch and have some initial\nreview comments:\n\n(1) I think this patch needs to be in \"git format-patch\" format, with\na proper commit message that describes the purpose of the patch and\nthe functionality it adds, and any high-level design points (something\nlike the overview given in the initial post, and accounting for the\nsubsequent discussion points and updated functionality).\n\n(2) doc/src/sgml/monitoring.sgml\nThere are some grammatical issues in the current description. I\nsuggest changing it to something like:\n\nBEFORE:\n+ <entry>At least one row per subscription, showing about\ntransaction statistics and error summary that\nAFTER:\n+ <entry>At least one row per subscription, showing transaction\nstatistics and information about errors that\n\n(2) doc/src/sgml/monitoring.sgml\nThe current description seems a little confusing.\nPer subscription, it shows the transaction statistics and any last\nerror info from tablesync/apply workers? 
If this is the case, I'd\nsuggest the following change:\n\nBEFORE:\n+ one row per subscription for transaction statistics and summary of the last\n+ error reported by workers applying logical replication changes and workers\n+ handling the initial data copy of the subscribed tables.\nAFTER:\n+ one row per subscription, showing corresponding transaction statistics and\n+ information about the last error reported by workers applying\nlogical replication\n+ changes or by workers handling the initial data copy of the\nsubscribed tables.\n\n(3) xact_commit\nI think that the \"xact_commit\" column should be named\n\"xact_commit_count\" or \"xact_commits\".\nSimilarly, I think \"xact_error\" should be named \"xact_error_count\" or\n\"xact_errors\", and \"xact_aborts\" should be named \"xact_abort_count\" or\n\"xact_aborts\".\n\n(4) xact_commit_bytes\n\n+ Amount of transactions data successfully applied in this subscription.\n+ Consumed memory for xact_commit is displayed.\n\nI find this description a bit confusing. 
\"Consumed memory for\nxact_commit\" seems different to \"transactions data\".\nCould the description be something like: Amount of data (in bytes)\nsuccessfully applied in this subscription, across \"xact_commit_count\"\ntransactions.\n\n(5)\nI'd suggest some minor rewording for the following:\n\nBEFORE:\n+ Number of transactions failed to be applied and caught by table\n+ sync worker or main apply worker in this subscription.\nAFTER:\n+ Number of transactions that failed to be applied by the table\n+ sync worker or main apply worker in this subscription.\n\n(6) xact_error_bytes\nAgain, it's a little confusing referring to \"consumed memory\" here.\nHow about rewording this, something like:\n\nBEFORE:\n+ Amount of transactions data unsuccessfully applied in this subscription.\n+ Consumed memory that past failed transaction used is displayed.\nAFTER:\n+ Amount of data (in bytes) unsuccessfully applied in this\nsubscription by the last failed transaction.\n\n(7)\nThe additional information provided for \"xact_abort_bytes\" needs some\nrewording, something like:\n\nBEFORE:\n+ Increase <literal>logical_decoding_work_mem</literal> on the publisher\n+ so that it exceeds the size of whole streamed transaction\n+ to suppress unnecessary consumed network bandwidth in addition to change\n+ in memory of the subscriber, if unexpected amount of streamed\ntransactions\n+ are aborted.\nAFTER:\n+ In order to suppress unnecessary consumed network bandwidth, increase\n+ <literal>logical_decoding_work_mem</literal> on the publisher so that it\n+ exceeds the size of the whole streamed transaction, and\nadditionally increase\n+ the available subscriber memory, if an unexpected amount of\nstreamed transactions\n+ are aborted.\n\n(8)\nSuggested update:\n\nBEFORE:\n+ * Tell the collector that worker transaction has finished without problem.\nAFTER:\n+ * Tell the collector that the worker transaction has successfully completed.\n\n(9) src/backend/postmaster/pgstat.c\nI think that the GID 
copying is unnecessarily copying the whole GID\nbuffer or using an additional strlen().\nIt should be changed to use strlcpy() to match other code:\n\nBEFORE:\n+ /* get the gid for this two phase operation */\n+ if (command == LOGICAL_REP_MSG_PREPARE ||\n+ command == LOGICAL_REP_MSG_STREAM_PREPARE)\n+ memcpy(msg.m_gid, prepare_data->gid, GIDSIZE);\n+ else if (command == LOGICAL_REP_MSG_COMMIT_PREPARED)\n+ memcpy(msg.m_gid, commit_data->gid, GIDSIZE);\n+ else /* rollback prepared */\n+ memcpy(msg.m_gid, rollback_data->gid, GIDSIZE);\nAFTER:\n+ /* get the gid for this two phase operation */\n+ if (command == LOGICAL_REP_MSG_PREPARE ||\n+ command == LOGICAL_REP_MSG_STREAM_PREPARE)\n+ strlcpy(msg.m_gid, prepare_data->gid, sizeof(msg.m_gid));\n+ else if (command == LOGICAL_REP_MSG_COMMIT_PREPARED)\n+ strlcpy(msg.m_gid, commit_data->gid, sizeof(msg.m_gid));\n+ else /* rollback prepared */\n+ strlcpy(msg.m_gid, rollback_data->gid, sizeof(msg.m_gid));\n\n\nBEFORE:\n+ strlcpy(prepared_txn->gid, msg->m_gid, strlen(msg->m_gid) + 1);\nAFTER:\n+ strlcpy(prepared_txn->gid, msg->m_gid, sizeof(prepared_txn->gid));\n\nBEFORE:\n+ memcpy(key.gid, msg->m_gid, strlen(msg->m_gid));\nAFTER:\n+ strlcpy(key.gid, msg->m_gid, sizeof(key.gid));\n\nBEFORE:\n+ memcpy(key.gid, gid, strlen(gid));\nAFTER:\n+ strlcpy(key.gid, gid, sizeof(key.gid));\n\n(10) src/backend/replication/logical/worker.c\nSome suggested rewording:\n\nBEFORE:\n+ * size of streaming transaction resources because it have used the\nAFTER:\n+ * size of streaming transaction resources because it has used the\n\n\nBEFORE:\n+ * tradeoff should not be good. Also, add multiple values\n+ * at once in order to reduce the number of this function call.\nAFTER:\n+ * tradeoff would not be good. 
Also, add multiple values\n+ * at once in order to reduce the number of calls to this function.\n\n(11) update_apply_change_size()\nShouldn't this function be declared static?\n\n(12) stream_write_change()\n\n+ streamed_entry->xact_size = streamed_entry->xact_size + total_len;\n/* update */\n\ncould be simply written as:\n\n+ streamed_entry->xact_size += total_len; /* update */\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 4 Nov 2021 11:53:47 +1100",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Thursday, November 4, 2021 9:54 AM Greg Nancarrow <gregn4422@gmail.com> wrote:\r\n> On Tue, Nov 2, 2021 at 12:18 AM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> >\r\n> > On Thursday, October 28, 2021 11:19 PM I wrote:\r\n> > > I've created a new patch that extends pg_stat_subscription_workers\r\n> > > to include other transaction statistics.\r\n> > >\r\n> > > Note that this patch depends on v18 patch-set in [1]...\r\n> > Rebased based on the v19 in [1].\r\n> > Also, fixed documentation a little and made tests tidy.\r\n> > FYI, the newly included TAP test(027_worker_xact_stats.pl) is stable\r\n> > because I checked that 100 times of its execution in a tight loop all passed.\r\n> >\r\n> \r\n> I have done some basic testing of the patch and have some initial review\r\n> comments:\r\nThanks for your review !\r\n\r\n> (1) I think this patch needs to be in \"git format-patch\" format, with a proper\r\n> commit message that describes the purpose of the patch and the functionality\r\n> it adds, and any high-level design points (something like the overview given in\r\n> the initial post, and accounting for the subsequent discussion points and\r\n> updated functionality).\r\nFixed.\r\n\r\n> (2) doc/src/sgml/monitoring.sgml\r\n> There are some grammatical issues in the current description. I suggest\r\n> changing it to something like:\r\n> BEFORE:\r\n> + <entry>At least one row per subscription, showing about\r\n> transaction statistics and error summary that\r\n> AFTER:\r\n> + <entry>At least one row per subscription, showing transaction\r\n> statistics and information about errors that\r\nFixed.\r\n\r\n> (2) doc/src/sgml/monitoring.sgml\r\n> The current description seems a little confusing.\r\n> Per subscription, it shows the transaction statistics and any last error info from\r\n> tablesync/apply workers? 
If this is the case, I'd suggest the following change:\r\n> \r\n> BEFORE:\r\n> + one row per subscription for transaction statistics and summary of the\r\n> last\r\n> + error reported by workers applying logical replication changes and\r\n> workers\r\n> + handling the initial data copy of the subscribed tables.\r\n> AFTER:\r\n> + one row per subscription, showing corresponding transaction statistics\r\n> and\r\n> + information about the last error reported by workers applying\r\n> logical replication\r\n> + changes or by workers handling the initial data copy of the\r\n> subscribed tables.\r\nFixed.\r\n\r\n> (3) xact_commit\r\n> I think that the \"xact_commit\" column should be named \"xact_commit_count\"\r\n> or \"xact_commits\".\r\n> Similarly, I think \"xact_error\" should be named \"xact_error_count\" or\r\n> \"xact_errors\", and \"xact_aborts\" should be named \"xact_abort_count\" or\r\n> \"xact_aborts\".\r\nI prefered *_count. Renamed.\r\n\r\n> (4) xact_commit_bytes\r\n> \r\n> + Amount of transactions data successfully applied in this subscription.\r\n> + Consumed memory for xact_commit is displayed.\r\n> \r\n> I find this description a bit confusing. 
\"Consumed memory for xact_commit\"\r\n> seems different to \"transactions data\".\r\n> Could the description be something like: Amount of data (in bytes) successfully\r\n> applied in this subscription, across \"xact_commit_count\"\r\n> transactions.\r\nFixed.\r\n\r\n> (5)\r\n> I'd suggest some minor rewording for the following:\r\n> \r\n> BEFORE:\r\n> + Number of transactions failed to be applied and caught by table\r\n> + sync worker or main apply worker in this subscription.\r\n> AFTER:\r\n> + Number of transactions that failed to be applied by the table\r\n> + sync worker or main apply worker in this subscription.\r\nFixed.\r\n\r\n> (6) xact_error_bytes\r\n> Again, it's a little confusing referring to \"consumed memory\" here.\r\n> How about rewording this, something like:\r\n> \r\n> BEFORE:\r\n> + Amount of transactions data unsuccessfully applied in this\r\n> subscription.\r\n> + Consumed memory that past failed transaction used is displayed.\r\n> AFTER:\r\n> + Amount of data (in bytes) unsuccessfully applied in this\r\n> subscription by the last failed transaction.\r\nxact_error_bytes (and other bytes columns as well) is cumulative\r\nso when a new error happens, the size of this new bytes would be\r\nadded to the same. 
So here we shouldn't mention just the last error.\r\nI simply applied your previous comments of 'xact_commit_bytes'\r\nto 'xact_error_bytes' description.\r\n\r\n> (7)\r\n> The additional information provided for \"xact_abort_bytes\" needs some\r\n> rewording, something like:\r\n> \r\n> BEFORE:\r\n> + Increase <literal>logical_decoding_work_mem</literal> on the\r\n> publisher\r\n> + so that it exceeds the size of whole streamed transaction\r\n> + to suppress unnecessary consumed network bandwidth in addition to\r\n> change\r\n> + in memory of the subscriber, if unexpected amount of streamed\r\n> transactions\r\n> + are aborted.\r\n> AFTER:\r\n> + In order to suppress unnecessary consumed network bandwidth,\r\n> increase\r\n> + <literal>logical_decoding_work_mem</literal> on the publisher so\r\n> that it\r\n> + exceeds the size of the whole streamed transaction, and\r\n> additionally increase\r\n> + the available subscriber memory, if an unexpected amount of\r\n> streamed transactions\r\n> + are aborted.\r\n\r\nI'm not sure about the last part.\r\n> additionally increase the available subscriber memory,\r\nWhich GUC parameter did you mean by this ?\r\nCould we point out and enalrge the memory size only for\r\nsubscriber's apply processing intentionally ?\r\nI incorporated (7) except for this last part.\r\nWill revise according to your reply.\r\n\r\nI also added the explanation about\r\nxact_abort_bytes itself to align with other bytes columns.\r\n\r\n> (8)\r\n> Suggested update:\r\n> \r\n> BEFORE:\r\n> + * Tell the collector that worker transaction has finished without problem.\r\n> AFTER:\r\n> + * Tell the collector that the worker transaction has successfully completed.\r\nFixed.\r\n\r\n> (9) src/backend/postmaster/pgstat.c\r\n> I think that the GID copying is unnecessarily copying the whole GID buffer or\r\n> using an additional strlen().\r\n> It should be changed to use strlcpy() to match other code:\r\n> \r\n> BEFORE:\r\n> + /* get the gid for this two phase 
operation */ if (command ==\r\n> + LOGICAL_REP_MSG_PREPARE ||\r\n> + command == LOGICAL_REP_MSG_STREAM_PREPARE)\r\n> + memcpy(msg.m_gid, prepare_data->gid, GIDSIZE); else if (command ==\r\n> + LOGICAL_REP_MSG_COMMIT_PREPARED)\r\n> + memcpy(msg.m_gid, commit_data->gid, GIDSIZE); else /* rollback\r\n> + prepared */\r\n> + memcpy(msg.m_gid, rollback_data->gid, GIDSIZE);\r\n> AFTER:\r\n> + /* get the gid for this two phase operation */ if (command ==\r\n> + LOGICAL_REP_MSG_PREPARE ||\r\n> + command == LOGICAL_REP_MSG_STREAM_PREPARE)\r\n> + strlcpy(msg.m_gid, prepare_data->gid, sizeof(msg.m_gid)); else if\r\n> + (command == LOGICAL_REP_MSG_COMMIT_PREPARED)\r\n> + strlcpy(msg.m_gid, commit_data->gid, sizeof(msg.m_gid)); else /*\r\n> + rollback prepared */\r\n> + strlcpy(msg.m_gid, rollback_data->gid, sizeof(msg.m_gid));\r\nFixed.\r\n\r\n> \r\n> BEFORE:\r\n> + strlcpy(prepared_txn->gid, msg->m_gid, strlen(msg->m_gid) + 1);\r\n> AFTER:\r\n> + strlcpy(prepared_txn->gid, msg->m_gid, sizeof(prepared_txn->gid));\r\n> \r\n> BEFORE:\r\n> + memcpy(key.gid, msg->m_gid, strlen(msg->m_gid));\r\n> AFTER:\r\n> + strlcpy(key.gid, msg->m_gid, sizeof(key.gid));\r\n> \r\n> BEFORE:\r\n> + memcpy(key.gid, gid, strlen(gid));\r\n> AFTER:\r\n> + strlcpy(key.gid, gid, sizeof(key.gid));\r\nFixed.\r\n\r\n> (10) src/backend/replication/logical/worker.c\r\n> Some suggested rewording:\r\n> \r\n> BEFORE:\r\n> + * size of streaming transaction resources because it have used the\r\n> AFTER:\r\n> + * size of streaming transaction resources because it has used the\r\nFixed.\r\n \r\n> BEFORE:\r\n> + * tradeoff should not be good. Also, add multiple values\r\n> + * at once in order to reduce the number of this function call.\r\n> AFTER:\r\n> + * tradeoff would not be good. 
Also, add multiple values\r\n> + * at once in order to reduce the number of calls to this function.\r\nFixed.\r\n\r\n> (11) update_apply_change_size()\r\n> Shouldn't this function be declared static?\r\nFixed.\r\n\r\n> (12) stream_write_change()\r\n> \r\n> + streamed_entry->xact_size = streamed_entry->xact_size + total_len;\r\n> /* update */\r\n> \r\n> could be simply written as:\r\n> \r\n> + streamed_entry->xact_size += total_len; /* update */\r\nFixed.\r\n\r\nLastly, I removed one unnecessary test that\r\nchecked publisher's stats in the TAP tests.\r\nAlso I introduced ApplyTxnExtraData structure to\r\nremove void* argument of update_apply_change_size\r\nthat might worsen the readability of codes\r\nin the previous version.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi",
"msg_date": "Fri, 5 Nov 2021 08:11:38 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Fri, Nov 5, 2021 at 1:42 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Thursday, November 4, 2021 9:54 AM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> > On Tue, Nov 2, 2021 at 12:18 AM osumi.takamichi@fujitsu.com\n> > <osumi.takamichi@fujitsu.com> wrote:\n> > >\n> > > On Thursday, October 28, 2021 11:19 PM I wrote:\n> > > > I've created a new patch that extends pg_stat_subscription_workers\n> > > > to include other transaction statistics.\n> > > >\n> > > > Note that this patch depends on v18 patch-set in [1]...\n> > > Rebased based on the v19 in [1].\n> > > Also, fixed documentation a little and made tests tidy.\n> > > FYI, the newly included TAP test(027_worker_xact_stats.pl) is stable\n> > > because I checked that 100 times of its execution in a tight loop all passed.\n> > >\n> >\n> > I have done some basic testing of the patch and have some initial review\n> > comments:\n> Thanks for your review !\n>\n> > (1) I think this patch needs to be in \"git format-patch\" format, with a proper\n> > commit message that describes the purpose of the patch and the functionality\n> > it adds, and any high-level design points (something like the overview given in\n> > the initial post, and accounting for the subsequent discussion points and\n> > updated functionality).\n> Fixed.\n>\n> > (2) doc/src/sgml/monitoring.sgml\n> > There are some grammatical issues in the current description. I suggest\n> > changing it to something like:\n> > BEFORE:\n> > + <entry>At least one row per subscription, showing about\n> > transaction statistics and error summary that\n> > AFTER:\n> > + <entry>At least one row per subscription, showing transaction\n> > statistics and information about errors that\n> Fixed.\n>\n> > (2) doc/src/sgml/monitoring.sgml\n> > The current description seems a little confusing.\n> > Per subscription, it shows the transaction statistics and any last error info from\n> > tablesync/apply workers? 
If this is the case, I'd suggest the following change:\n> >\n> > BEFORE:\n> > + one row per subscription for transaction statistics and summary of the\n> > last\n> > + error reported by workers applying logical replication changes and\n> > workers\n> > + handling the initial data copy of the subscribed tables.\n> > AFTER:\n> > + one row per subscription, showing corresponding transaction statistics\n> > and\n> > + information about the last error reported by workers applying\n> > logical replication\n> > + changes or by workers handling the initial data copy of the\n> > subscribed tables.\n> Fixed.\n>\n> > (3) xact_commit\n> > I think that the \"xact_commit\" column should be named \"xact_commit_count\"\n> > or \"xact_commits\".\n> > Similarly, I think \"xact_error\" should be named \"xact_error_count\" or\n> > \"xact_errors\", and \"xact_aborts\" should be named \"xact_abort_count\" or\n> > \"xact_aborts\".\n> I prefered *_count. Renamed.\n>\n> > (4) xact_commit_bytes\n> >\n> > + Amount of transactions data successfully applied in this subscription.\n> > + Consumed memory for xact_commit is displayed.\n> >\n> > I find this description a bit confusing. 
\"Consumed memory for xact_commit\"\n> > seems different to \"transactions data\".\n> > Could the description be something like: Amount of data (in bytes) successfully\n> > applied in this subscription, across \"xact_commit_count\"\n> > transactions.\n> Fixed.\n>\n> > (5)\n> > I'd suggest some minor rewording for the following:\n> >\n> > BEFORE:\n> > + Number of transactions failed to be applied and caught by table\n> > + sync worker or main apply worker in this subscription.\n> > AFTER:\n> > + Number of transactions that failed to be applied by the table\n> > + sync worker or main apply worker in this subscription.\n> Fixed.\n>\n> > (6) xact_error_bytes\n> > Again, it's a little confusing referring to \"consumed memory\" here.\n> > How about rewording this, something like:\n> >\n> > BEFORE:\n> > + Amount of transactions data unsuccessfully applied in this\n> > subscription.\n> > + Consumed memory that past failed transaction used is displayed.\n> > AFTER:\n> > + Amount of data (in bytes) unsuccessfully applied in this\n> > subscription by the last failed transaction.\n> xact_error_bytes (and other bytes columns as well) is cumulative\n> so when a new error happens, the size of this new bytes would be\n> added to the same. 
So here we shouldn't mention just the last error.\n> I simply applied your previous comments of 'xact_commit_bytes'\n> to 'xact_error_bytes' description.\n>\n> > (7)\n> > The additional information provided for \"xact_abort_bytes\" needs some\n> > rewording, something like:\n> >\n> > BEFORE:\n> > + Increase <literal>logical_decoding_work_mem</literal> on the\n> > publisher\n> > + so that it exceeds the size of whole streamed transaction\n> > + to suppress unnecessary consumed network bandwidth in addition to\n> > change\n> > + in memory of the subscriber, if unexpected amount of streamed\n> > transactions\n> > + are aborted.\n> > AFTER:\n> > + In order to suppress unnecessary consumed network bandwidth,\n> > increase\n> > + <literal>logical_decoding_work_mem</literal> on the publisher so\n> > that it\n> > + exceeds the size of the whole streamed transaction, and\n> > additionally increase\n> > + the available subscriber memory, if an unexpected amount of\n> > streamed transactions\n> > + are aborted.\n>\n> I'm not sure about the last part.\n> > additionally increase the available subscriber memory,\n> Which GUC parameter did you mean by this ?\n> Could we point out and enalrge the memory size only for\n> subscriber's apply processing intentionally ?\n> I incorporated (7) except for this last part.\n> Will revise according to your reply.\n>\n> I also added the explanation about\n> xact_abort_bytes itself to align with other bytes columns.\n>\n> > (8)\n> > Suggested update:\n> >\n> > BEFORE:\n> > + * Tell the collector that worker transaction has finished without problem.\n> > AFTER:\n> > + * Tell the collector that the worker transaction has successfully completed.\n> Fixed.\n>\n> > (9) src/backend/postmaster/pgstat.c\n> > I think that the GID copying is unnecessarily copying the whole GID buffer or\n> > using an additional strlen().\n> > It should be changed to use strlcpy() to match other code:\n> >\n> > BEFORE:\n> > + /* get the gid for this two phase operation 
*/ if (command ==\n> > + LOGICAL_REP_MSG_PREPARE ||\n> > + command == LOGICAL_REP_MSG_STREAM_PREPARE)\n> > + memcpy(msg.m_gid, prepare_data->gid, GIDSIZE); else if (command ==\n> > + LOGICAL_REP_MSG_COMMIT_PREPARED)\n> > + memcpy(msg.m_gid, commit_data->gid, GIDSIZE); else /* rollback\n> > + prepared */\n> > + memcpy(msg.m_gid, rollback_data->gid, GIDSIZE);\n> > AFTER:\n> > + /* get the gid for this two phase operation */ if (command ==\n> > + LOGICAL_REP_MSG_PREPARE ||\n> > + command == LOGICAL_REP_MSG_STREAM_PREPARE)\n> > + strlcpy(msg.m_gid, prepare_data->gid, sizeof(msg.m_gid)); else if\n> > + (command == LOGICAL_REP_MSG_COMMIT_PREPARED)\n> > + strlcpy(msg.m_gid, commit_data->gid, sizeof(msg.m_gid)); else /*\n> > + rollback prepared */\n> > + strlcpy(msg.m_gid, rollback_data->gid, sizeof(msg.m_gid));\n> Fixed.\n>\n> >\n> > BEFORE:\n> > + strlcpy(prepared_txn->gid, msg->m_gid, strlen(msg->m_gid) + 1);\n> > AFTER:\n> > + strlcpy(prepared_txn->gid, msg->m_gid, sizeof(prepared_txn->gid));\n> >\n> > BEFORE:\n> > + memcpy(key.gid, msg->m_gid, strlen(msg->m_gid));\n> > AFTER:\n> > + strlcpy(key.gid, msg->m_gid, sizeof(key.gid));\n> >\n> > BEFORE:\n> > + memcpy(key.gid, gid, strlen(gid));\n> > AFTER:\n> > + strlcpy(key.gid, gid, sizeof(key.gid));\n> Fixed.\n>\n> > (10) src/backend/replication/logical/worker.c\n> > Some suggested rewording:\n> >\n> > BEFORE:\n> > + * size of streaming transaction resources because it have used the\n> > AFTER:\n> > + * size of streaming transaction resources because it has used the\n> Fixed.\n>\n> > BEFORE:\n> > + * tradeoff should not be good. Also, add multiple values\n> > + * at once in order to reduce the number of this function call.\n> > AFTER:\n> > + * tradeoff would not be good. 
Also, add multiple values\n> > + * at once in order to reduce the number of calls to this function.\n> Fixed.\n>\n> > (11) update_apply_change_size()\n> > Shouldn't this function be declared static?\n> Fixed.\n>\n> > (12) stream_write_change()\n> >\n> > + streamed_entry->xact_size = streamed_entry->xact_size + total_len;\n> > /* update */\n> >\n> > could be simply written as:\n> >\n> > + streamed_entry->xact_size += total_len; /* update */\n> Fixed.\n>\n> Lastly, I removed one unnecessary test that\n> checked publisher's stats in the TAP tests.\n> Also I introduced ApplyTxnExtraData structure to\n> remove void* argument of update_apply_change_size\n> that might worsen the readability of codes\n> in the previous version.\n\nThanks for the updated patch.\nFew comments:\n1) You could remove LogicalRepPreparedTxnData,\nLogicalRepCommitPreparedTxnData & LogicalRepRollbackPreparedTxnData\nand change it to char *gid to reduce the function parameter and\nsimiplify the assignment:\n+ */\n+void\n+pgstat_report_subworker_twophase_xact(Oid subid, LogicalRepMsgType command,\n+\n PgStat_Counter xact_size,\n+\n LogicalRepPreparedTxnData *prepare_data,\n+\n LogicalRepCommitPreparedTxnData *commit_data,\n+\n LogicalRepRollbackPreparedTxnData *rollback_data)\n\n\n2) Shouldn't this change be part of skip xid patch?\n- TupleDescInitEntry(tupdesc, (AttrNumber) 4, \"command\",\n+ TupleDescInitEntry(tupdesc, (AttrNumber) 10, \"last_error_command\",\n TEXTOID, -1, 0);\n- TupleDescInitEntry(tupdesc, (AttrNumber) 5, \"xid\",\n+ TupleDescInitEntry(tupdesc, (AttrNumber) 11, \"last_error_xid\",\n XIDOID, -1, 0);\n- TupleDescInitEntry(tupdesc, (AttrNumber) 6, \"error_count\",\n+ TupleDescInitEntry(tupdesc, (AttrNumber) 12, \"last_error_count\",\n INT8OID, -1, 0);\n- TupleDescInitEntry(tupdesc, (AttrNumber) 7, \"error_message\",\n+ TupleDescInitEntry(tupdesc, (AttrNumber) 13, \"last_error_message\",\n TEXTOID, -1, 0);\n- TupleDescInitEntry(tupdesc, (AttrNumber) 8, \"last_error_time\",\n+ 
TupleDescInitEntry(tupdesc, (AttrNumber) 14, \"last_error_time\",\n\n3) This newly added structures should be added to typedefs.list:\nApplyTxnExtraData\nXactSizeEntry\nPgStat_MsgSubWorkerXactEnd\nPgStat_MsgSubWorkerTwophaseXact\nPgStat_StatSubWorkerPreparedXact\nPgStat_StatSubWorkerPreparedXactSize\n\n4) We are not sending the transaction size in case of table sync, is\nthis intentional, if so I felt we should document this in\npg_stat_subscription_workers\n+ /* Report the success of table sync. */\n+ pgstat_report_subworker_xact_end(MyLogicalRepWorker->subid,\n+\n MyLogicalRepWorker->relid,\n+\n 0 /* no logical message type */,\n+\n 0 /* xact size */);\n+\n\n5) pg_stat_subscription_workers has a lot of columns, if we can reduce\nthe column size the readability will improve, like xact_commit_count\nto commit_count, xact_commit_bytes to commit_bytes, etc\n+ w.xact_commit_count,\n+ w.xact_commit_bytes,\n+ w.xact_error_count,\n+ w.xact_error_bytes,\n+ w.xact_abort_count,\n+ w.xact_abort_bytes,\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Mon, 8 Nov 2021 11:42:27 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Fri, Nov 5, 2021 at 7:11 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> I'm not sure about the last part.\n> > additionally increase the available subscriber memory,\n> Which GUC parameter did you mean by this ?\n> Could we point out and enalrge the memory size only for\n> subscriber's apply processing intentionally ?\n> I incorporated (7) except for this last part.\n> Will revise according to your reply.\n>\nI might have misinterpreted your original description, so I'll\nre-review that in your latest patch.\n\nAs a newer version (v20) of the prerequisite patch was posted a day\nago, it looks like your patch needs to be rebased against that (as it\ncurrently applies on top of the v19 version only).\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 9 Nov 2021 10:50:03 +1100",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Fri, Nov 5, 2021 at 7:11 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n\nI did a quick scan through the latest v8 patch and noticed the following things:\n\nsrc/backend/postmaster/pgstat.c\n\n(1) pgstat_recv_subworker_twophase_xact()\nThe copying from msg->m_gid to key.gid does not seem to be correct.\nstrlen() is being called on a junk value, since key.gid has not been\nassigned yet.\nIt should be changed as follows:\n\nBEFORE:\n+ strlcpy(key.gid, msg->m_gid, strlen(key.gid));\nAFTER:\n+ strlcpy(key.gid, msg->m_gid, sizeof(key.gid));\n\n\n(2) pgstat_get_subworker_prepared_txn()\nSimilar to above, strlen() usage is not correct, and should use\nstrlcpy() instead of memcpy().\n\nBEFORE:\n+ memcpy(key.gid, gid, strlen(key.gid));\nAFTER:\n+ strlcpy(key.gid, gid, sizeof(key.gid));\n\n(3) stats_reset\nNote that the \"stats_reset\" column has been removed from the\npg_stat_subscription_workers view in the underlying latest v20 patch.\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 9 Nov 2021 14:07:56 +1100",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Tuesday, November 9, 2021 12:08 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\r\n> On Fri, Nov 5, 2021 at 7:11 PM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> >\r\n> \r\n> I did a quick scan through the latest v8 patch and noticed the following things:\r\nI appreciate your review !\r\n\r\n> src/backend/postmaster/pgstat.c\r\n> \r\n> (1) pgstat_recv_subworker_twophase_xact()\r\n> The copying from msg->m_gid to key.gid does not seem to be correct.\r\n> strlen() is being called on a junk value, since key.gid has not been assigned yet.\r\n> It should be changed as follows:\r\n> \r\n> BEFORE:\r\n> + strlcpy(key.gid, msg->m_gid, strlen(key.gid));\r\n> AFTER:\r\n> + strlcpy(key.gid, msg->m_gid, sizeof(key.gid));\r\nFixed.\r\n \r\n> (2) pgstat_get_subworker_prepared_txn()\r\n> Similar to above, strlen() usage is not correct, and should use\r\n> strlcpy() instead of memcpy().\r\n> \r\n> BEFORE:\r\n> + memcpy(key.gid, gid, strlen(key.gid));\r\n> AFTER:\r\n> + strlcpy(key.gid, gid, sizeof(key.gid));\r\nFixed.\r\n\r\n \r\n> (3) stats_reset\r\n> Note that the \"stats_reset\" column has been removed from the\r\n> pg_stat_subscription_workers view in the underlying latest v20 patch.\r\nYes. I've rebased and updated the patch, paying attention to this point.\r\nAttached the updated version.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi",
"msg_date": "Tue, 9 Nov 2021 11:35:24 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Monday, November 8, 2021 3:12 PM vignesh C <vignesh21@gmail.com> wrote:\r\n> On Fri, Nov 5, 2021 at 1:42 PM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> > Lastly, I removed one unnecessary test that checked publisher's stats\r\n> > in the TAP tests.\r\n> > Also I introduced ApplyTxnExtraData structure to remove void* argument\r\n> > of update_apply_change_size that might worsen the readability of codes\r\n> > in the previous version.\r\n> \r\n> Thanks for the updated patch.\r\nThanks you for checking my patch !\r\n\r\n\r\n> Few comments:\r\n> 1) You could remove LogicalRepPreparedTxnData,\r\n> LogicalRepCommitPreparedTxnData & LogicalRepRollbackPreparedTxnData\r\n> and change it to char *gid to reduce the function parameter and simiplify the\r\n> assignment:\r\n> + */\r\n> +void\r\n> +pgstat_report_subworker_twophase_xact(Oid subid, LogicalRepMsgType\r\n> +command,\r\n> +\r\n> PgStat_Counter xact_size,\r\n> +\r\n> LogicalRepPreparedTxnData *prepare_data,\r\n> +\r\n> LogicalRepCommitPreparedTxnData *commit_data,\r\n> +\r\n> LogicalRepRollbackPreparedTxnData *rollback_data)\r\nFixed. 
\r\n\r\n \r\n> 2) Shouldn't this change be part of skip xid patch?\r\n> - TupleDescInitEntry(tupdesc, (AttrNumber) 4, \"command\",\r\n> + TupleDescInitEntry(tupdesc, (AttrNumber) 10,\r\n> + \"last_error_command\",\r\n> TEXTOID, -1, 0);\r\n> - TupleDescInitEntry(tupdesc, (AttrNumber) 5, \"xid\",\r\n> + TupleDescInitEntry(tupdesc, (AttrNumber) 11, \"last_error_xid\",\r\n> XIDOID, -1, 0);\r\n> - TupleDescInitEntry(tupdesc, (AttrNumber) 6, \"error_count\",\r\n> + TupleDescInitEntry(tupdesc, (AttrNumber) 12, \"last_error_count\",\r\n> INT8OID, -1, 0);\r\n> - TupleDescInitEntry(tupdesc, (AttrNumber) 7, \"error_message\",\r\n> + TupleDescInitEntry(tupdesc, (AttrNumber) 13,\r\n> + \"last_error_message\",\r\n> TEXTOID, -1, 0);\r\n> - TupleDescInitEntry(tupdesc, (AttrNumber) 8, \"last_error_time\",\r\n> + TupleDescInitEntry(tupdesc, (AttrNumber) 14, \"last_error_time\",\r\nHmm, I didn't think so. Those renames are necessary\r\nto make exisiting columns of skip xid separate from newly-introduced xact stats.\r\nThat means, original names of skip xid columns in v20 by itself are fine\r\nand the renames are needed only when this patch gets committed.\r\nAt present, we cannot guarantee that this patch will be committed\r\nso I'd like to take care of those renames.\r\n\r\n \r\n> 3) This newly added structures should be added to typedefs.list:\r\n> ApplyTxnExtraData\r\n> XactSizeEntry\r\n> PgStat_MsgSubWorkerXactEnd\r\n> PgStat_MsgSubWorkerTwophaseXact\r\n> PgStat_StatSubWorkerPreparedXact\r\n> PgStat_StatSubWorkerPreparedXactSize\r\nAdded.\r\n\r\n> 4) We are not sending the transaction size in case of table sync, is this\r\n> intentional, if so I felt we should document this in\r\n> pg_stat_subscription_workers\r\n> + /* Report the success of table sync. */\r\n> + pgstat_report_subworker_xact_end(MyLogicalRepWorker->subid,\r\n> +\r\n> MyLogicalRepWorker->relid,\r\n> +\r\n> 0 /* no logical message type */,\r\n> +\r\n> 0 /* xact size */);\r\nRight. 
Updated the doc description.\r\nI added a note that the bytes\r\nstats are collected only for the apply worker.\r\n\r\n \r\n> 5) pg_stat_subscription_workers has a lot of columns, if we can reduce the\r\n> column size the readability will improve, like xact_commit_count to\r\n> commit_count, xact_commit_bytes to commit_bytes, etc\r\n> + w.xact_commit_count,\r\n> + w.xact_commit_bytes,\r\n> + w.xact_error_count,\r\n> + w.xact_error_bytes,\r\n> + w.xact_abort_count,\r\n> + w.xact_abort_bytes,\r\nThat makes sense; those prefixes are somewhat redundant.\r\n\r\nTentatively, I renamed only the column names exported to users.\r\nThis is because also changing the internal data structures (e.g. removing\r\nthe prefixes from PgStat_StatSubWorkerEntry) would cause duplicate\r\n'error_count' member names, and changing such internal structures\r\nof the skip xid part would have a huge impact on other parts.\r\nImagine, for example, adding a 'last_' prefix to\r\nall of the structure's error statistics members.\r\nIf you aren't satisfied with this change, please let me know.\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Tue, 9 Nov 2021 11:39:32 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Tuesday, November 9, 2021 8:35 PM I wrote:\r\n> Yes. I've rebased and updated the patch, paying attention to this point.\r\n> Attached the updated version.\r\nForgot to note one thing.\r\nThis is based on the skip xid v20 shared in [1]\r\n\r\n[1] - https://www.postgresql.org/message-id/CAD21AoAT42mhcqeB1jPfRL1%2BEUHbZk8MMY_fBgsyZvJeKNpG%2Bw%40mail.gmail.com\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Tue, 9 Nov 2021 11:55:33 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Tue, Nov 9, 2021 at 5:05 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Tuesday, November 9, 2021 12:08 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> > On Fri, Nov 5, 2021 at 7:11 PM osumi.takamichi@fujitsu.com\n> > <osumi.takamichi@fujitsu.com> wrote:\n> > >\n> >\n> > I did a quick scan through the latest v8 patch and noticed the following things:\n> I appreciate your review !\n>\n\nI have reviewed part of the patch and have a few comments:\n\n1.\n+ <row>\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>error_count</structfield> <type>bigint</type>\n+ </para>\n+ <para>\n+ Number of transactions that failed to be applied by the table\n+ sync worker or main apply worker in this subscription.\n+ </para></entry>\n+ </row>\n\nShould error_count be the number of transactions that failed to be applied, or\nshould it be the number of errors?\n\n2.\n+ <row>\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>error_bytes</structfield> <type>bigint</type>\n+ </para>\n\nHow is error_bytes different from abort_bytes?\n\n3.\n+ {\n+ size += *extra_data->stream_write_len;\n+ add_apply_error_context_xact_size(size);\n+ return;\n+ }\n\n From apply_handle_insert(), we are calling update_apply_change_size(),\nand inside this function we are dereferencing\n*extra_data->stream_write_len. Basically, stream_write_len is an\ninteger pointer, the caller hasn't allocated memory for it, and\ninside update_apply_change_size we are directly dereferencing the\npointer; how can this be correct? I also see that in the whole\npatch stream_write_len is never used as an lvalue, so without storing\nanything into it, why are we trying to use it as an rvalue here? This\nis clearly an issue.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 10 Nov 2021 12:13:11 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Wednesday, November 10, 2021 3:43 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\r\n> On Tue, Nov 9, 2021 at 5:05 PM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> > On Tuesday, November 9, 2021 12:08 PM Greg Nancarrow\r\n> <gregn4422@gmail.com> wrote:\r\n> > > On Fri, Nov 5, 2021 at 7:11 PM osumi.takamichi@fujitsu.com\r\n> > > <osumi.takamichi@fujitsu.com> wrote:\r\n> > > >\r\n> > >\r\n> > > I did a quick scan through the latest v8 patch and noticed the following\r\n> things:\r\n> > I appreciate your review !\r\n> I have reviewed some part of the patch and I have a few comments\r\nI really appreciate your attention and review.\r\n\r\n> 1.\r\n> + <row>\r\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\r\n> + <structfield>error_count</structfield> <type>bigint</type>\r\n> + </para>\r\n> + <para>\r\n> + Number of transactions that failed to be applied by the table\r\n> + sync worker or main apply worker in this subscription.\r\n> + </para></entry>\r\n> + </row>\r\n> \r\n> The error_count, should be number of transaction failed to applied? or it should\r\n> be number of error?\r\nI thought those were the same; currently it gets incremented when an apply error occurs.\r\nSo it equals the total number of errors. Could you show me a case where we\r\nwould get different values for those two? I may be missing something.\r\n\r\n> 2.\r\n> + <row>\r\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\r\n> + <structfield>error_bytes</structfield> <type>bigint</type>\r\n> + </para>\r\n> \r\n> How different is error_bytes from the abort_bytes?\r\nWith error_bytes, you can see the resources that\r\nwere consumed during apply before the apply processing was stopped by some error.\r\nOn the other hand, abort_bytes displays the bytes used for ROLLBACK PREPARED\r\nand stream_abort processing. 
That's what I intended.\r\n\r\n\r\n> 3.\r\n> + {\r\n> + size += *extra_data->stream_write_len;\r\n> + add_apply_error_context_xact_size(size);\r\n> + return;\r\n> + }\r\n> \r\n> From apply_handle_insert(), we are calling update_apply_change_size(), and\r\n> inside this function we are dereferencing *extra_data->stream_write_len.\r\n> Basically, stream_write_len is in integer pointer and the caller hasn't allocated\r\n> memory for that and inside update_apply_change_size, we are directly\r\n> dereferencing the pointer, how this can be correct. \r\nI'm sorry for the confusion.\r\n\r\nI'll just delete the top part that handles the streaming\r\nbytes calculation in update_apply_change_size().\r\nThis is because there is now a specific structure that tracks each streaming xid\r\nand saves the transaction size, which makes the part in question useless.\r\n\r\n> And I also see that in the\r\n> whole patch stream_write_len, is never used as lvalue so without storing\r\n> anything into this why are we trying to use this as rvalue here? This is clearly\r\n> an issue.\r\nAs described above, I'll fix this part and the related code,\r\nmainly the streaming-related code, in the next version.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Wed, 10 Nov 2021 09:12:34 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Tue, Nov 9, 2021 at 5:05 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n> Yes. I've rebased and updated the patch, paying attention to this point.\n> Attached the updated version.\n\nThanks for the updated patch, a few comments:\n1) You could rename PgStat_StatSubWorkerPreparedXact to\nPgStat_SW_PreparedXactKey or a simpler name which includes key and\nsimilarly change PgStat_StatSubWorkerPreparedXactSize to\nPgStat_SW_PreparedXactEntry\n\n+/* prepared transaction */\n+typedef struct PgStat_StatSubWorkerPreparedXact\n+{\n+ Oid subid;\n+ char gid[GIDSIZE];\n+} PgStat_StatSubWorkerPreparedXact;\n+\n+typedef struct PgStat_StatSubWorkerPreparedXactSize\n+{\n+ PgStat_StatSubWorkerPreparedXact key; /* hash key */\n+\n+ Oid subid;\n+ char gid[GIDSIZE];\n+ PgStat_Counter xact_size;\n+} PgStat_StatSubWorkerPreparedXactSize;\n+\n\n2) You can change prepared_size to sw_prepared_xact_entry or\nprepared_xact_entry since it is a hash entry with few fields\n+ if (subWorkerPreparedXactSizeHash)\n+ {\n+ PgStat_StatSubWorkerPreparedXactSize *prepared_size;\n+\n+ hash_seq_init(&hstat, subWorkerPreparedXactSizeHash);\n+ while((prepared_size =\n(PgStat_StatSubWorkerPreparedXactSize *) hash_seq_search(&hstat)) !=\nNULL)\n+ {\n+ fputc('P', fpout);\n+ rc = fwrite(prepared_size,\nsizeof(PgStat_StatSubWorkerPreparedXactSize), 1, fpout);\n+ (void) rc; /* we'll check\nfor error with ferror */\n+ }\n\n3) This needs to be indented\n- w.relid,\n- w.command,\n- w.xid,\n- w.error_count,\n- w.error_message,\n- w.last_error_time\n+ w.commit_count,\n+ w.commit_bytes,\n+ w.error_count,\n+ w.error_bytes,\n+ w.abort_count,\n+ w.abort_bytes,\n+ w.last_error_relid,\n+ w.last_error_command,\n+ w.last_error_xid,\n+ w.last_error_count,\n+ w.last_error_message,\n+ w.last_error_time\n\n4) Instead of adding a function to calculate the size, can we move\nPartitionTupleRouting from the c file to the header file and use sizeof in\nthe caller function?\n+/*\n+ * 
PartitionTupleRoutingSize - exported to calculate total data size\n+ * of logical replication mesage apply, because this is one of the\n+ * ApplyExecutionData struct members.\n+ */\n+size_t\n+PartitionTupleRoutingSize(void)\n+{\n+ return sizeof(PartitionTupleRouting);\n+}\n\n5) You could run pgindent and pgperltidy for the code and test code to\nfix the indent issues.\n+\nsubWorkerPreparedXactSizeHash = hash_create(\"Subscription worker stats\nof prepared txn\",\n+\n PGSTAT_SUBWORKER_HASH_SIZE,\n+\n &hash_ctl,\n+\n HASH_ELEM | HASH_STRINGS |\nHASH_CONTEXT);\n\n+# There's no entry at the beginning\n+my $result = $node_subscriber->safe_psql('postgres',\n+\"SELECT count(*) FROM pg_stat_subscription_workers;\");\n+is($result, q(0), 'no entry for transaction stats yet');\n\n6) Few places you have used strlcpy and few places you have used\nmemcpy, you can keep it consistent:\n+ msg.m_command = command;\n+ strlcpy(msg.m_gid, gid, sizeof(msg.m_gid));\n+ msg.m_xact_bytes = xact_size;\n\n+ key.subid = subid;\n+ memcpy(key.gid, gid, sizeof(key.gid));\n+ action = (create ? HASH_ENTER : HASH_FIND);\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 10 Nov 2021 15:43:20 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Wed, Nov 10, 2021 at 3:43 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Tue, Nov 9, 2021 at 5:05 PM osumi.takamichi@fujitsu.com\n> <osumi.takamichi@fujitsu.com> wrote:\n> > Yes. I've rebased and updated the patch, paying attention to this point.\n> > Attached the updated version.\n>\n> Thanks for the updated patch, few comments:\n> 6) Few places you have used strlcpy and few places you have used\n> memcpy, you can keep it consistent:\n> + msg.m_command = command;\n> + strlcpy(msg.m_gid, gid, sizeof(msg.m_gid));\n> + msg.m_xact_bytes = xact_size;\n>\n> + key.subid = subid;\n> + memcpy(key.gid, gid, sizeof(key.gid));\n> + action = (create ? HASH_ENTER : HASH_FIND);\n\nFew more comments:\n1) Here the tuple length is not considered in the calculation, else it\nwill always show the fixed size for any size tuple. Ex varchar insert\nwith 1 byte or varchar insert with 100's of bytes. So I feel we should\ninclude the tuple length in the calculation.\n+ case LOGICAL_REP_MSG_INSERT:\n+ case LOGICAL_REP_MSG_UPDATE:\n+ case LOGICAL_REP_MSG_DELETE:\n+ Assert(extra_data != NULL);\n+\n+ /*\n+ * Compute size based on ApplyExecutionData.\n+ * The size of LogicalRepRelMapEntry can be\nskipped because\n+ * it is obtained from hash_search in\nlogicalrep_rel_open.\n+ */\n+ size += sizeof(ApplyExecutionData) + sizeof(EState) +\n+ sizeof(ResultRelInfo) + sizeof(ResultRelInfo);\n+\n+ /*\n+ * Add some extra size if the target relation\nis partitioned.\n+ * PartitionTupleRouting isn't exported.\nTherefore, call the\n+ * function that returns its size instead.\n+ */\n+ if\n(extra_data->relmapentry->localrel->rd_rel->relkind ==\nRELKIND_PARTITIONED_TABLE)\n+ size += sizeof(ModifyTableState) +\nPartitionTupleRoutingSize();\n+ break;\n\n2) Can this be part of PgStat_StatDBEntry, similar to tables,\nfunctions and subworkers. 
It might be more appropriate to have it\nthere instead of having another global variable.\n+ * Stats of prepared transactions should be displayed\n+ * at either commit prepared or rollback prepared time, even when it's\n+ * after the server restart. We have the apply worker send those statistics\n+ * to the stats collector at prepare time and the startup process restore\n+ * those at restart if necessary.\n+ */\n+static HTAB *subWorkerPreparedXactSizeHash = NULL;\n+\n+/*\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Thu, 11 Nov 2021 18:17:13 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Thursday, November 11, 2021 9:47 PM vignesh C <vignesh21@gmail.com> wrote:\r\n> Few more comments:\r\n> 1) Here the tuple length is not considered in the calculation, else it will always\r\n> show the fixed size for any size tuple. Ex varchar insert with 1 byte or varchar\r\n> insert with 100's of bytes. So I feel we should include the tuple length in the\r\n> calculation.\r\n> + case LOGICAL_REP_MSG_INSERT:\r\n> + case LOGICAL_REP_MSG_UPDATE:\r\n> + case LOGICAL_REP_MSG_DELETE:\r\n> + Assert(extra_data != NULL);\r\n> +\r\n> + /*\r\n> + * Compute size based on ApplyExecutionData.\r\n> + * The size of LogicalRepRelMapEntry can be\r\n> skipped because\r\n> + * it is obtained from hash_search in\r\n> logicalrep_rel_open.\r\n> + */\r\n> + size += sizeof(ApplyExecutionData) + sizeof(EState)\r\n> +\r\n> + sizeof(ResultRelInfo) +\r\n> + sizeof(ResultRelInfo);\r\n> +\r\n> + /*\r\n> + * Add some extra size if the target relation\r\n> is partitioned.\r\n> + * PartitionTupleRouting isn't exported.\r\n> Therefore, call the\r\n> + * function that returns its size instead.\r\n> + */\r\n> + if\r\n> (extra_data->relmapentry->localrel->rd_rel->relkind ==\r\n> RELKIND_PARTITIONED_TABLE)\r\n> + size += sizeof(ModifyTableState) +\r\n> PartitionTupleRoutingSize();\r\n> + break;\r\nThanks a lot ! Fixed.\r\n\r\n\r\n> 2) Can this be part of PgStat_StatDBEntry, similar to tables, functions and\r\n> subworkers. It might be more appropriate to have it there instead of having\r\n> another global variable.\r\n> + * Stats of prepared transactions should be displayed\r\n> + * at either commit prepared or rollback prepared time, even when it's\r\n> + * after the server restart. We have the apply worker send those\r\n> +statistics\r\n> + * to the stats collector at prepare time and the startup process\r\n> +restore\r\n> + * those at restart if necessary.\r\n> + */\r\n> +static HTAB *subWorkerPreparedXactSizeHash = NULL;\r\n> +\r\n> +/*\r\nFixed. 
Also, its name was too long\r\nwhen aligned with the other PgStat_StatDBEntry members.\r\nThus I renamed it to subworkers_preparedsizes.\r\n\r\nThis depends on v21 in [1].\r\n\r\n[1] - https://www.postgresql.org/message-id/CAD21AoAkd4YSoQUUFfpcrYOtkPRbninaw3sD0qc77nLW6Q89gg%40mail.gmail.com\r\n\r\nBest Regards,\r\n\tTakamichi Osumi",
"msg_date": "Mon, 15 Nov 2021 09:27:51 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Wednesday, November 10, 2021 7:13 PM vignesh C <vignesh21@gmail.com> wrote:\r\n> On Tue, Nov 9, 2021 at 5:05 PM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> > Yes. I've rebased and updated the patch, paying attention to this point.\r\n> > Attached the updated version.\r\n> \r\n> Thanks for the updated patch, few comments:\r\n> 1) you could rename PgStat_StatSubWorkerPreparedXact to\r\n> PgStat_SW_PreparedXactKey or a simpler name which includes key and\r\n> similarly change PgStat_StatSubWorkerPreparedXactSize to\r\n> PgStat_SW_PreparedXactEntry\r\n> \r\n> +/* prepared transaction */\r\n> +typedef struct PgStat_StatSubWorkerPreparedXact {\r\n> + Oid subid;\r\n> + char gid[GIDSIZE];\r\n> +} PgStat_StatSubWorkerPreparedXact;\r\n> +\r\n> +typedef struct PgStat_StatSubWorkerPreparedXactSize\r\n> +{\r\n> + PgStat_StatSubWorkerPreparedXact key; /* hash key */\r\n> +\r\n> + Oid subid;\r\n> + char gid[GIDSIZE];\r\n> + PgStat_Counter xact_size;\r\n> +} PgStat_StatSubWorkerPreparedXactSize;\r\n> +\r\nFixed. Adopted your suggested names.\r\n\r\n\r\n> 2) You can change prepared_size to sw_prepared_xact_entry or\r\n> prepared_xact_entry since it is a hash entry with few fields\r\n> + if (subWorkerPreparedXactSizeHash)\r\n> + {\r\n> + PgStat_StatSubWorkerPreparedXactSize *prepared_size;\r\n> +\r\n> + hash_seq_init(&hstat, subWorkerPreparedXactSizeHash);\r\n> + while((prepared_size =\r\n> (PgStat_StatSubWorkerPreparedXactSize *) hash_seq_search(&hstat)) !=\r\n> NULL)\r\n> + {\r\n> + fputc('P', fpout);\r\n> + rc = fwrite(prepared_size,\r\n> sizeof(PgStat_StatSubWorkerPreparedXactSize), 1, fpout);\r\n> + (void) rc; /* we'll check\r\n> for error with ferror */\r\n> + }\r\nI preferred prepared_xact_entry. 
Fixed.\r\n\r\n\r\n> 3) This need to be indented\r\n> - w.relid,\r\n> - w.command,\r\n> - w.xid,\r\n> - w.error_count,\r\n> - w.error_message,\r\n> - w.last_error_time\r\n> + w.commit_count,\r\n> + w.commit_bytes,\r\n> + w.error_count,\r\n> + w.error_bytes,\r\n> + w.abort_count,\r\n> + w.abort_bytes,\r\n> + w.last_error_relid,\r\n> + w.last_error_command,\r\n> + w.last_error_xid,\r\n> + w.last_error_count,\r\n> + w.last_error_message,\r\n> + w.last_error_time\r\nFixed.\r\n\r\n\r\n> 4) Instead of adding a function to calculate the size, can we move\r\n> PartitionTupleRouting from c file to the header file and use sizeof at the caller\r\n> function?\r\n> +/*\r\n> + * PartitionTupleRoutingSize - exported to calculate total data size\r\n> + * of logical replication mesage apply, because this is one of the\r\n> + * ApplyExecutionData struct members.\r\n> + */\r\n> +size_t\r\n> +PartitionTupleRoutingSize(void)\r\n> +{\r\n> + return sizeof(PartitionTupleRouting); }\r\nFixed.\r\n\r\n\r\n> 5) You could run pgindent and pgperltidy for the code and test code to fix the\r\n> indent issues.\r\n> +\r\n> subWorkerPreparedXactSizeHash = hash_create(\"Subscription worker stats\r\n> of prepared txn\",\r\n> +\r\n> \r\n> PGSTAT_SUBWORKER_HASH_SIZE,\r\n> +\r\n> &hash_ctl,\r\n> +\r\n> HASH_ELEM | HASH_STRINGS |\r\n> HASH_CONTEXT);\r\n> +# There's no entry at the beginning\r\n> +my $result = $node_subscriber->safe_psql('postgres',\r\n> +\"SELECT count(*) FROM pg_stat_subscription_workers;\"); is($result,\r\n> +q(0), 'no entry for transaction stats yet');\r\nConducted pgindent and pgperltidy.\r\n\r\n\r\n> 6) Few places you have used strlcpy and few places you have used memcpy,\r\n> you can keep it consistent:\r\n> + msg.m_command = command;\r\n> + strlcpy(msg.m_gid, gid, sizeof(msg.m_gid));\r\n> + msg.m_xact_bytes = xact_size;\r\n> \r\n> + key.subid = subid;\r\n> + memcpy(key.gid, gid, sizeof(key.gid));\r\n> + action = (create ? HASH_ENTER : HASH_FIND);\r\nFixed. 
I used strlcpy in the new functions I added.\r\nAn exception is pgstat_read_db_statsfile().\r\nIn that function, memcpy() is already used consistently elsewhere.\r\n\r\nPlease have a look at [1].\r\n\r\n[1] - https://www.postgresql.org/message-id/TYCPR01MB8373FEB287F733C81C1E4D42ED989%40TYCPR01MB8373.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Mon, 15 Nov 2021 09:31:20 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Wednesday, November 10, 2021 6:13 PM I wrote:\r\n> On Wednesday, November 10, 2021 3:43 PM Dilip Kumar\r\n> <dilipbalaut@gmail.com> wrote:\r\n> > On Tue, Nov 9, 2021 at 5:05 PM osumi.takamichi@fujitsu.com\r\n> > <osumi.takamichi@fujitsu.com> wrote:\r\n> > > On Tuesday, November 9, 2021 12:08 PM Greg Nancarrow\r\n> > <gregn4422@gmail.com> wrote:\r\n> > > > On Fri, Nov 5, 2021 at 7:11 PM osumi.takamichi@fujitsu.com\r\n> > > > <osumi.takamichi@fujitsu.com> wrote:\r\n> > > > >\r\n> > > >\r\n> > > > I did a quick scan through the latest v8 patch and noticed the\r\n> > > > following\r\n> > things:\r\n> > > I appreciate your review !\r\n> > I have reviewed some part of the patch and I have a few comments\r\n> I really appreciate your attention and review.\r\n...\r\n> > 3.\r\n> > + {\r\n> > + size += *extra_data->stream_write_len;\r\n> > + add_apply_error_context_xact_size(size);\r\n> > + return;\r\n> > + }\r\n> >\r\n> > From apply_handle_insert(), we are calling update_apply_change_size(),\r\n> > and inside this function we are dereferencing\r\n> *extra_data->stream_write_len.\r\n> > Basically, stream_write_len is in integer pointer and the caller\r\n> > hasn't allocated memory for that and inside update_apply_change_size,\r\n> > we are directly dereferencing the pointer, how this can be correct.\r\n...\r\n> I'll just delete the top part that handles streaming bytes calculation in the\r\n> update_apply_change_size().\r\n> It's because now that there is a specific structure to recognize each streaming\r\n> xid and save transaction size there, which makes the top part in question\r\n> useless.\r\n> \r\n> > And I also see that in the\r\n> > whole patch stream_write_len, is never used as lvalue so without\r\n> > storing anything into this why are we trying to use this as rvalue\r\n> > here? 
This is clearly an issue.\r\n> As described above, I'll fix this part and related codes mainly streaming related\r\n> codes in the next version.\r\nRemoved this dead code.\r\n\r\nPlease have a look at [1].\r\n\r\n[1] - https://www.postgresql.org/message-id/TYCPR01MB8373FEB287F733C81C1E4D42ED989%40TYCPR01MB8373.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Mon, 15 Nov 2021 09:32:47 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Monday, November 15, 2021 6:28 PM I wrote:\r\n> On Thursday, November 11, 2021 9:47 PM vignesh C <vignesh21@gmail.com>\r\n> wrote:\r\n> > Few more comments:\r\n...\r\n> Fixed. Also, its name was too long\r\n> when aligned with other PgStat_StatDBEntry memebers.\r\n> Thus I renamed it as subworkers_preparedsizes.\r\n> \r\n> This depends on v21 in [1]\r\nHi\r\n\r\nThere was a minor conflict caused by the v22 change in [1].\r\nI've updated the patch to resolve it.\r\n(The rebased part is only C code, checked by pgindent.)\r\n\r\n[1] - https://www.postgresql.org/message-id/CAD21AoCE72vKXJp99f4xRw7Mh5ve-Z2roe21gP8Y82_CxXKvbg%40mail.gmail.com\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi",
"msg_date": "Mon, 15 Nov 2021 12:13:35 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Monday, November 15, 2021 9:14 PM I wrote:\r\n> I've conducted some update for this.\r\n> (The rebased part is only C code and checked by pgindent)\r\nI've updated my patches, since a new skip xid patch\r\nhas been shared in [1].\r\n\r\nThis version includes some minor renames of functions\r\nthat are related to transaction sizes.\r\n\r\n[1] - https://www.postgresql.org/message-id/CAD21AoA5jupM6O%3DpYsyfaxQ1aMX-en8%3DQNgpW6KfXsg7_CS0CQ%40mail.gmail.com\r\n\r\nBest Regards,\r\n\tTakamichi Osumi",
"msg_date": "Tue, 16 Nov 2021 12:34:11 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Tue, Nov 16, 2021 at 9:34 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Monday, November 15, 2021 9:14 PM I wrote:\n> > I've conducted some update for this.\n> > (The rebased part is only C code and checked by pgindent)\n> I'll update my patches since a new skip xid patch\n> has been shared in [1].\n>\n> This version includes some minor renames of functions\n> that are related to transaction sizes.\n\nI've looked at the v12-0001 patch. Here are some comments:\n\n- TupleDescInitEntry(tupdesc, (AttrNumber) 3, \"relid\",\n+ TupleDescInitEntry(tupdesc, (AttrNumber) 3, \"last_error_relid\",\n OIDOID, -1, 0);\n- TupleDescInitEntry(tupdesc, (AttrNumber) 4, \"command\",\n+ TupleDescInitEntry(tupdesc, (AttrNumber) 4, \"last_error_command\",\n TEXTOID, -1, 0);\n- TupleDescInitEntry(tupdesc, (AttrNumber) 5, \"xid\",\n+ TupleDescInitEntry(tupdesc, (AttrNumber) 5, \"last_error_xid\",\n XIDOID, -1, 0);\n- TupleDescInitEntry(tupdesc, (AttrNumber) 6, \"error_count\",\n+ TupleDescInitEntry(tupdesc, (AttrNumber) 6, \"last_error_count\",\n INT8OID, -1, 0);\n- TupleDescInitEntry(tupdesc, (AttrNumber) 7, \"error_message\",\n+ TupleDescInitEntry(tupdesc, (AttrNumber) 7, \"last_error_message\",\n\nIf renaming the columns clarifies their meanings, shouldn't the above changes\nbe included in my patch that introduces the\npg_stat_subscription_workers view?\n\n---\nI think that exporting PartitionTupleRouting should not be done in the\nsame patch as renaming the view columns. There is no\nrelevance between them at all. If it's used by the v12-0002 patch, I think\nit should be included in that patch or in a separate patch.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 17 Nov 2021 12:18:56 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Wednesday, November 17, 2021 12:19 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> On Tue, Nov 16, 2021 at 9:34 PM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> >\r\n> > On Monday, November 15, 2021 9:14 PM I wrote:\r\n> > > I've conducted some update for this.\r\n> > > (The rebased part is only C code and checked by pgindent)\r\n> > I'll update my patches since a new skip xid patch has been shared in\r\n> > [1].\r\n> >\r\n> > This version includes some minor renames of functions that are related\r\n> > to transaction sizes.\r\n> \r\n> I've looked at v12-0001 patch. Here are some comments:\r\nThank you for paying attention to this thread!\r\n\r\n\r\n> - TupleDescInitEntry(tupdesc, (AttrNumber) 3, \"relid\",\r\n> + TupleDescInitEntry(tupdesc, (AttrNumber) 3, \"last_error_relid\",\r\n> OIDOID, -1, 0);\r\n> - TupleDescInitEntry(tupdesc, (AttrNumber) 4, \"command\",\r\n> + TupleDescInitEntry(tupdesc, (AttrNumber) 4, \"last_error_command\",\r\n> TEXTOID, -1, 0);\r\n> - TupleDescInitEntry(tupdesc, (AttrNumber) 5, \"xid\",\r\n> + TupleDescInitEntry(tupdesc, (AttrNumber) 5, \"last_error_xid\",\r\n> XIDOID, -1, 0);\r\n> - TupleDescInitEntry(tupdesc, (AttrNumber) 6, \"error_count\",\r\n> + TupleDescInitEntry(tupdesc, (AttrNumber) 6, \"last_error_count\",\r\n> INT8OID, -1, 0);\r\n> - TupleDescInitEntry(tupdesc, (AttrNumber) 7, \"error_message\",\r\n> + TupleDescInitEntry(tupdesc, (AttrNumber) 7, \"last_error_message\",\r\n> \r\n> If renaming column names clarifies those meanings, the above changes should\r\n> be included into my patch that introduces pg_stat_subscription_workers\r\n> view?\r\nFirst of all, your column names for pg_stat_subscription_workers look totally OK to me by themselves,\r\nand I thought I would take care of those renames when my stats patches are committed.\r\nBut if you agree with the new names above and fixing your patch doesn't\r\nbother you, I'd appreciate your help!\r\n\r\n> I think that 
exporting PartitionTupleRouting should not be done in the one\r\n> patch together with renaming the view columns. There is not relevance\r\n> between them at all. If it's used by v12-0002 patch, I think it should be included\r\n> in that patch or in another separate patch.\r\nYes, it's used by v12-0002.\r\n\r\nYou are absolutely right. When you update your patch as described above,\r\nI will make this part independent.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n\r\n",
"msg_date": "Wed, 17 Nov 2021 04:14:37 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Wed, Nov 17, 2021 at 9:44 AM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Wednesday, November 17, 2021 12:19 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > On Tue, Nov 16, 2021 at 9:34 PM osumi.takamichi@fujitsu.com\n> > <osumi.takamichi@fujitsu.com> wrote:\n> > >\n> > > On Monday, November 15, 2021 9:14 PM I wrote:\n> > > > I've conducted some update for this.\n> > > > (The rebased part is only C code and checked by pgindent)\n> > > I'll update my patches since a new skip xid patch has been shared in\n> > > [1].\n> > >\n> > > This version includes some minor renames of functions that are related\n> > > to transaction sizes.\n> >\n> > I've looked at v12-0001 patch. Here are some comments:\n> Thank you for paying attention to this thread !\n>\n>\n> > - TupleDescInitEntry(tupdesc, (AttrNumber) 3, \"relid\",\n> > + TupleDescInitEntry(tupdesc, (AttrNumber) 3, \"last_error_relid\",\n> > OIDOID, -1, 0);\n> > - TupleDescInitEntry(tupdesc, (AttrNumber) 4, \"command\",\n> > + TupleDescInitEntry(tupdesc, (AttrNumber) 4, \"last_error_command\",\n> > TEXTOID, -1, 0);\n> > - TupleDescInitEntry(tupdesc, (AttrNumber) 5, \"xid\",\n> > + TupleDescInitEntry(tupdesc, (AttrNumber) 5, \"last_error_xid\",\n> > XIDOID, -1, 0);\n> > - TupleDescInitEntry(tupdesc, (AttrNumber) 6, \"error_count\",\n> > + TupleDescInitEntry(tupdesc, (AttrNumber) 6, \"last_error_count\",\n> > INT8OID, -1, 0);\n> > - TupleDescInitEntry(tupdesc, (AttrNumber) 7, \"error_message\",\n> > + TupleDescInitEntry(tupdesc, (AttrNumber) 7, \"last_error_message\",\n> >\n> > If renaming column names clarifies those meanings, the above changes should\n> > be included into my patch that introduces pg_stat_subscription_workers\n> > view?\n\nRight.\n\n> At first, your column names of pg_stat_subscription_workers look totally OK to me by itself\n> and I thought I should take care of those renaming at the commit timing of my stats patches.\n>\n\nCan you please tell us why 
you think the names in your proposed patch\nare better than the existing names proposed in Sawada-San's patch? Is\nit because those fields always contain the information of the last or\nlatest error that occurred in the corresponding subscription worker?\nIf so, I am not very sure if that is a good reason to increase the\nlength of most of the column names but if you and others feel that is\nhelpful then it is better to do it as part of Sawada-San's patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 17 Nov 2021 18:30:04 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Wednesday, November 17, 2021 10:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Wed, Nov 17, 2021 at 9:44 AM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> >\r\n> > On Wednesday, November 17, 2021 12:19 PM Masahiko Sawada\r\n> <sawada.mshk@gmail.com> wrote:\r\n> > > On Tue, Nov 16, 2021 at 9:34 PM osumi.takamichi@fujitsu.com\r\n> > > <osumi.takamichi@fujitsu.com> wrote:\r\n> > > >\r\n> > > > On Monday, November 15, 2021 9:14 PM I wrote:\r\n> > > > > I've conducted some update for this.\r\n> > > > > (The rebased part is only C code and checked by pgindent)\r\n> > > > I'll update my patches since a new skip xid patch has been shared\r\n> > > > in [1].\r\n> > > >\r\n> > > > This version includes some minor renames of functions that are\r\n> > > > related to transaction sizes.\r\n> > >\r\n> > > I've looked at v12-0001 patch. Here are some comments:\r\n> > Thank you for paying attention to this thread !\r\n> >\r\n> >\r\n> > > - TupleDescInitEntry(tupdesc, (AttrNumber) 3, \"relid\",\r\n> > > + TupleDescInitEntry(tupdesc, (AttrNumber) 3, \"last_error_relid\",\r\n> > > OIDOID, -1, 0);\r\n> > > - TupleDescInitEntry(tupdesc, (AttrNumber) 4, \"command\",\r\n> > > + TupleDescInitEntry(tupdesc, (AttrNumber) 4,\r\n> > > + \"last_error_command\",\r\n> > > TEXTOID, -1, 0);\r\n> > > - TupleDescInitEntry(tupdesc, (AttrNumber) 5, \"xid\",\r\n> > > + TupleDescInitEntry(tupdesc, (AttrNumber) 5, \"last_error_xid\",\r\n> > > XIDOID, -1, 0);\r\n> > > - TupleDescInitEntry(tupdesc, (AttrNumber) 6, \"error_count\",\r\n> > > + TupleDescInitEntry(tupdesc, (AttrNumber) 6, \"last_error_count\",\r\n> > > INT8OID, -1, 0);\r\n> > > - TupleDescInitEntry(tupdesc, (AttrNumber) 7, \"error_message\",\r\n> > > + TupleDescInitEntry(tupdesc, (AttrNumber) 7,\r\n> > > + \"last_error_message\",\r\n> > >\r\n> > > If renaming column names clarifies those meanings, the above changes\r\n> > > should be included into my patch that introduces\r\n> > > 
pg_stat_subscription_workers view?\r\n> \r\n> Right.\r\n> \r\n> > At first, your column names of pg_stat_subscription_workers look\r\n> > totally OK to me by itself and I thought I should take care of those renaming at\r\n> the commit timing of my stats patches.\r\n> >\r\n> \r\n> Can you please tell us why you think the names in your proposed patch are\r\n> better than the existing names proposed in Sawada-San's patch? Is it because\r\n> those fields always contain the information of the last or latest error that\r\n> occurred in the corresponding subscription worker?\r\nThis is one reason.\r\n\r\nAnother big reason comes from the final alignment when we list up all columns of both patches.\r\nThe patches in this thread are trying to introduce a column that indicates a\r\ncumulative count of errors, showing all error counts that the worker got in the past.\r\nIn this thread, after two or three improvements, this column name has been reduced to\r\na simple one, 'error_count' (aligned with 'commit_count' and 'abort_count').\r\nThen we need to differentiate what this thread's patch is trying to introduce and\r\nwhat the skip xid patch is introducing.\r\n\r\n> If so, I am not very sure if that is a good reason to increase the length of most of\r\n> the column names but if you and others feel that is helpful then it is better to do\r\n> it as part of Sawada-San's patch.\r\nYes. On this point, it was a mistake that I handled those changes.\r\nIt'd be better that Sawada-san takes care of them once the names have been fixed.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Wed, 17 Nov 2021 13:42:19 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Wed, Nov 17, 2021 at 7:12 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Wednesday, November 17, 2021 10:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > Can you please tell us why you think the names in your proposed patch are\n> > better than the existing names proposed in Sawada-San's patch? Is it because\n> > those fields always contain the information of the last or latest error that\n> > occurred in the corresponding subscription worker?\n> This is one reason.\n>\n> Another big reason comes from the final alignment when we list up all columns of both patches.\n> The patches in this thread is trying to introduce a column that indicates\n> cumulative count of error to show all error counts that the worker got in the past.\n>\n\nOkay, I see your point and it makes sense to rename columns after\nthese other stats. I am not able to come up with any better names than\nwhat is being used here. Sawada-San, do you agree with this, or do let\nus know if you have any better ideas?\n\nBTW, I think the way you are computing error_count in\npgstat_recv_subworker_error() doesn't seem correct to me because it\nwill accumulate the counter/bytes for the same error again and again.\nYou might want to update these counters after we have checked that the\nreceived error is not the same as the previous one.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 18 Nov 2021 08:56:07 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Tue, Nov 16, 2021 at 6:04 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Monday, November 15, 2021 9:14 PM I wrote:\n> > I've conducted some update for this.\n> > (The rebased part is only C code and checked by pgindent)\n> I'll update my patches since a new skip xid patch\n> has been shared in [1].\n>\n> This version includes some minor renames of functions\n> that are related to transaction sizes.\n\nThanks for the updated patch, Few comments:\n1) since pgstat_get_subworker_prepared_txn is called from only one\nplace and create is passed as true, we can remove create function\nparameter or the function could be removed.\n+ * Return subscription worker entry with the given subscription OID and\n+ * gid.\n+ * ----------\n+ */\n+static PgStat_SW_PreparedXactEntry *\n+pgstat_get_subworker_prepared_txn(Oid databaseid, Oid subid,\n+ char\n*gid, bool create)\n+{\n+ PgStat_StatDBEntry *dbentry;\n+ PgStat_SW_PreparedXactKey key;\n\n 2) Include subworker prepared transactions also\n /*\n* Don't create tables/functions/subworkers hashtables for\n* uninteresting databases.\n*/\nif (onlydb != InvalidOid)\n{\nif (dbbuf.databaseid != onlydb &&\ndbbuf.databaseid != InvalidOid)\nbreak;\n}\n\n3) Similarly it should be mentioned in:\nreset_dbentry_counters function header, pgstat_read_db_statsfile\nfunction header and pgstat_get_db_entry function comments.\n\n4) I felt we can remove \"COMMIT of streaming transaction\", since only\ncommit and commit prepared are the user operations. 
Shall we change it\nto \"COMMIT and COMMIT PREPARED will increment this counter.\"\n+ <structfield>commit_count</structfield> <type>bigint</type>\n+ </para>\n+ <para>\n+ Number of transactions successfully applied in this subscription.\n+ COMMIT, COMMIT of streaming transaction and COMMIT PREPARED increments\n+ this counter.\n+ </para></entry>\n+ </row>\n\n5) PgStat_SW_PreparedXactEntry should be before PgStat_SW_PreparedXactKey\n PgStat_StatSubWorkerEntry\n PgStat_StatSubWorkerKey\n+PgStat_SW_PreparedXactKey\n+PgStat_SW_PreparedXactEntry\n PgStat_StatTabEntry\n PgStat_SubXactStatus\n\n6) This change is not required\n@@ -293,6 +306,7 @@ static inline void cleanup_subxact_info(void);\n static void stream_cleanup_files(Oid subid, TransactionId xid);\n static void stream_open_file(Oid subid, TransactionId xid, bool first);\n static void stream_write_change(char action, StringInfo s);\n+\n static void stream_close_file(void);\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Thu, 18 Nov 2021 17:04:38 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Thursday, November 18, 2021 8:35 PM vignesh C <vignesh21@gmail.com> wrote:\r\n> On Tue, Nov 16, 2021 at 6:04 PM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> >\r\n> > On Monday, November 15, 2021 9:14 PM I wrote:\r\n> > > I've conducted some update for this.\r\n> > > (The rebased part is only C code and checked by pgindent)\r\n> > I'll update my patches since a new skip xid patch has been shared in\r\n> > [1].\r\n> >\r\n> > This version includes some minor renames of functions that are related\r\n> > to transaction sizes.\r\n> \r\n> Thanks for the updated patch, Few comments:\r\nThank you for checking the patches !\r\n\r\n\r\n> 1) since pgstat_get_subworker_prepared_txn is called from only one place and\r\n> create is passed as true, we can remove create function parameter or the\r\n> function could be removed.\r\n> + * Return subscription worker entry with the given subscription OID and\r\n> + * gid.\r\n> + * ----------\r\n> + */\r\n> +static PgStat_SW_PreparedXactEntry *\r\n> +pgstat_get_subworker_prepared_txn(Oid databaseid, Oid subid,\r\n> + char\r\n> *gid, bool create)\r\n> +{\r\n> + PgStat_StatDBEntry *dbentry;\r\n> + PgStat_SW_PreparedXactKey key;\r\nRemoved the parameter.\r\n\r\n\r\n> 2) Include subworker prepared transactions also\r\n> /*\r\n> * Don't create tables/functions/subworkers hashtables for\r\n> * uninteresting databases.\r\n> */\r\n> if (onlydb != InvalidOid)\r\n> {\r\n> if (dbbuf.databaseid != onlydb &&\r\n> dbbuf.databaseid != InvalidOid)\r\n> break;\r\n> }\r\nFixed.\r\n\r\n\r\n> 3) Similarly it should be mentioned in:\r\n> reset_dbentry_counters function header, pgstat_read_db_statsfile function\r\n> header and pgstat_get_db_entry function comments.\r\nFixed.\r\n\r\n \r\n> 4) I felt we can remove \"COMMIT of streaming transaction\", since only commit\r\n> and commit prepared are the user operations. 
Shall we change it to \"COMMIT\r\n> and COMMIT PREPARED will increment this counter.\"\r\n> + <structfield>commit_count</structfield> <type>bigint</type>\r\n> + </para>\r\n> + <para>\r\n> + Number of transactions successfully applied in this subscription.\r\n> + COMMIT, COMMIT of streaming transaction and COMMIT\r\n> PREPARED increments\r\n> + this counter.\r\n> + </para></entry>\r\n> + </row>\r\nYou are right ! Fixed.\r\n\r\n \r\n> 5) PgStat_SW_PreparedXactEntry should be before\r\n> PgStat_SW_PreparedXactKey PgStat_StatSubWorkerEntry\r\n> PgStat_StatSubWorkerKey\r\n> +PgStat_SW_PreparedXactKey\r\n> +PgStat_SW_PreparedXactEntry\r\n> PgStat_StatTabEntry\r\n> PgStat_SubXactStatus\r\nFixed.\r\n\r\n> 6) This change is not required\r\n> @@ -293,6 +306,7 @@ static inline void cleanup_subxact_info(void); static\r\n> void stream_cleanup_files(Oid subid, TransactionId xid); static void\r\n> stream_open_file(Oid subid, TransactionId xid, bool first); static void\r\n> stream_write_change(char action, StringInfo s);\r\n> +\r\n> static void stream_close_file(void);\r\nRemoved.\r\n\r\n\r\nOther changes are\r\n1. refined the commit message of v13-0003*.\r\n2. made the additional comment for ApplyErrorCallbackArg simple.\r\n3. wrote more explanations about update_apply_change_size() as comment.\r\n4. changed the behavior of pgstat_recv_subworker_error so that\r\n it can store stats info only once per error.\r\n5. added one simple test for PREPARE and COMMIT PREPARED.\r\n\r\nThis used v23 skip xid patch [1].\r\n(I will remove v13-0001* when the column names are fixed\r\nand Sawada-san starts to take care of the column name definitions)\r\n\r\n[1] - https://www.postgresql.org/message-id/CAD21AoA5jupM6O%3DpYsyfaxQ1aMX-en8%3DQNgpW6KfXsg7_CS0CQ%40mail.gmail.com\r\n\r\nBest Regards,\r\n\tTakamichi Osumi",
"msg_date": "Thu, 18 Nov 2021 14:39:11 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Thursday, November 18, 2021 12:26 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> BTW, I think the way you are computing error_count in\r\n> pgstat_recv_subworker_error() doesn't seem correct to me because it will\r\n> accumulate the counter/bytes for the same error again and again.\r\n> You might want to update these counters after we have checked that the\r\n> received error is not the same as the previous one.\r\nThank you for your comments !\r\nThis is addressed by v13 patchset [1]\r\n\r\n\r\n[1] - https://www.postgresql.org/message-id/TYCPR01MB8373533A5C24BDDA516DA7E1ED9B9%40TYCPR01MB8373.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Thu, 18 Nov 2021 14:44:45 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Thu, Nov 18, 2021 at 12:26 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Nov 17, 2021 at 7:12 PM osumi.takamichi@fujitsu.com\n> <osumi.takamichi@fujitsu.com> wrote:\n> >\n> > On Wednesday, November 17, 2021 10:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > Can you please tell us why you think the names in your proposed patch are\n> > > better than the existing names proposed in Sawada-San's patch? Is it because\n> > > those fields always contain the information of the last or latest error that\n> > > occurred in the corresponding subscription worker?\n> > This is one reason.\n> >\n> > Another big reason comes from the final alignment when we list up all columns of both patches.\n> > The patches in this thread is trying to introduce a column that indicates\n> > cumulative count of error to show all error counts that the worker got in the past.\n> >\n>\n> Okay, I see your point and it makes sense to rename columns after\n> these other stats. I am not able to come up with any better names than\n> what is being used here. Sawada-San, do you agree with this, or do let\n> us know if you have any better ideas?\n>\n\nI'm concerned that these new names will introduce confusion; if we\nhave last_error_relid, last_error_command, last_error_message,\nlast_error_time, and last_error_xid, I think users might think that\nfirst_error_time is the timestamp at which an error occurred for the\nfirst time in the subscription worker. Also, last_error_count is not\nclear to me (it looks like a count of something, but only for the\n\"last\" one?). An alternative idea would be to add\ntotal_error_count by this patch, resulting in having both error_count\nand total_error_count. Regarding commit_count and abort_count, I\npersonally think xact_commit and xact_rollback would be better since\nthey’re more consistent with pg_stat_database view, although we might\nalready have discussed that.\n\nBesides that, I’m not sure how useful commit_bytes, abort_bytes, and\nerror_bytes are. I originally thought these statistics track the size\nof received data, i.e., how much data is transferred from the\npublisher and processed on the subscriber. But what the view currently\nhas is how much memory is used in the subscription worker. The\nsubscription worker emulates ReorderBufferChangeSize() on the\nsubscriber side but, as the comment of update_apply_change_size()\nmentions, the size in the view is not accurate:\n\n+ * The byte size of transaction on the publisher is calculated by\n+ * ReorderBufferChangeSize() based on the ReorderBufferChange structure.\n+ * But on the subscriber, consumed resources are not same as the\n+ * publisher's decoding processsing and required to be computed in\n+ * different way. Therefore, the exact same byte size is not restored on\n+ * the subscriber usually.\n\nAlso, it seems to take into account the size of FlushPosition that is\nnot taken into account by ReorderBufferChangeSize().\n\nI guess that the purpose of these values is to compare them to\ntotal_bytes, stream_byte, and spill_bytes, but if the calculation is\nnot accurate, does it mean that the more the stats are updated, the\nmore inaccurate they will get?\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 19 Nov 2021 23:10:57 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Fri, Nov 19, 2021 at 7:41 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Nov 18, 2021 at 12:26 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Nov 17, 2021 at 7:12 PM osumi.takamichi@fujitsu.com\n> > <osumi.takamichi@fujitsu.com> wrote:\n> > >\n> > > On Wednesday, November 17, 2021 10:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > Can you please tell us why you think the names in your proposed patch are\n> > > > better than the existing names proposed in Sawada-San's patch? Is it because\n> > > > those fields always contain the information of the last or latest error that\n> > > > occurred in the corresponding subscription worker?\n> > > This is one reason.\n> > >\n> > > Another big reason comes from the final alignment when we list up all columns of both patches.\n> > > The patches in this thread is trying to introduce a column that indicates\n> > > cumulative count of error to show all error counts that the worker got in the past.\n> > >\n> >\n> > Okay, I see your point and it makes sense to rename columns after\n> > these other stats. I am not able to come up with any better names than\n> > what is being used here. 
Sawada-San, do you agree with this, or do let\n> > us know if you have any better ideas?\n> >\n>\n> I'm concerned that these new names will introduce confusion; if we\n> have last_error_relid, last_error_command, last_error_message,\n> last_error_time, and last_error_xid, I think users might think that\n> first_error_time is the timestamp at which an error occurred for the\n> first time in the subscription worker.\n>\n\nDoesn't that confusion already exist to some extent because of the\nlast_error_time column?\n\n> Also, I'm not sure\n> last_error_count is not clear to me (it looks like showing something\n> count but the only \"last\" one?).\n>\n\nI feel that if all the error-related columns have \"last_error_\" as a prefix\nthen it should not be that confusing?\n\n> An alternative idea would be to add\n> total_error_count by this patch, resulting in having both error_count\n> and total_error_count. Regarding commit_count and abort_count, I\n> personally think xact_commit and xact_rollback would be better since\n> they’re more consistent with pg_stat_database view, although we might\n> already have discussed that.\n>\n\nEven if we decide to change the column names to\nxact_commit/xact_rollback, I think with additional non-error columns\nit will be clear to add 'error' in column names corresponding to error\ncolumns, and last_error_* seems to be consistent with what we have in\npg_stat_archiver (last_failed_wal, last_failed_time). Your point\nrelated to first_error_time has merit and I don't have a better answer\nfor it. I think it is just a convenience column and we are not sure\nwhether that will be required in practice so maybe we can drop that\ncolumn and come back to it later once we get some field feedback on\nthis view?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 20 Nov 2021 11:55:16 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Fri, Nov 19, 2021 at 7:41 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Nov 18, 2021 at 12:26 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Nov 17, 2021 at 7:12 PM osumi.takamichi@fujitsu.com\n> > <osumi.takamichi@fujitsu.com> wrote:\n> > >\n> > > On Wednesday, November 17, 2021 10:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > Can you please tell us why you think the names in your proposed patch are\n> > > > better than the existing names proposed in Sawada-San's patch? Is it because\n> > > > those fields always contain the information of the last or latest error that\n> > > > occurred in the corresponding subscription worker?\n> > > This is one reason.\n> > >\n> > > Another big reason comes from the final alignment when we list up all columns of both patches.\n> > > The patches in this thread is trying to introduce a column that indicates\n> > > cumulative count of error to show all error counts that the worker got in the past.\n> > >\n> >\n> > Okay, I see your point and it makes sense to rename columns after\n> > these other stats. I am not able to come up with any better names than\n> > what is being used here. Sawada-San, do you agree with this, or do let\n> > us know if you have any better ideas?\n> >\n>\n> I'm concerned that these new names will introduce confusion; if we\n> have last_error_relid, last_error_command, last_error_message,\n> last_error_time, and last_error_xid, I think users might think that\n> first_error_time is the timestamp at which an error occurred for the\n> first time in the subscription worker. Also, I'm not sure\n> last_error_count is not clear to me (it looks like showing something\n> count but the only \"last\" one?). An alternative idea would be to add\n> total_error_count by this patch, resulting in having both error_count\n> and total_error_count. 
Regarding commit_count and abort_count, I\n> personally think xact_commit and xact_rollback would be better since\n> they’re more consistent with pg_stat_database view, although we might\n> already have discussed that.\n>\n> Besides that, I’m not sure how useful commit_bytes, abort_bytes, and\n> error_bytes are. I originally thought these statistics track the size\n> of received data, i.g., how much data is transferred from the\n> publisher and processed on the subscriber. But what the view currently\n> has is how much memory is used in the subscription worker. The\n> subscription worker emulates ReorderBufferChangeSize() on the\n> subscriber side but, as the comment of update_apply_change_size()\n> mentions, the size in the view is not accurate:\n>\n> + * The byte size of transaction on the publisher is calculated by\n> + * ReorderBufferChangeSize() based on the ReorderBufferChange structure.\n> + * But on the subscriber, consumed resources are not same as the\n> + * publisher's decoding processsing and required to be computed in\n> + * different way. Therefore, the exact same byte size is not restored on\n> + * the subscriber usually.\n>\n> Also, it seems to take into account the size of FlushPosition that is\n> not taken into account by ReorderBufferChangeSize().\n\nLet's keep the size calculation similar to the publisher side to avoid\nany confusion, we can try to keep it the same as\nReorderBufferChangeSize wherever possible.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Mon, 22 Nov 2021 09:58:59 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Sat, Nov 20, 2021 at 1:11 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> I'm concerned that these new names will introduce confusion; if we\n> have last_error_relid, last_error_command, last_error_message,\n> last_error_time, and last_error_xid, I think users might think that\n> first_error_time is the timestamp at which an error occurred for the\n> first time in the subscription worker.\n\nYou mean you think users might think \"first_error_time\" is the\ntimestamp at which the last_error first occurred (rather than the\ntimestamp of the first of any type of error that occurred) on that\nworker?\n\n> ... Also, I'm not sure\n> last_error_count is not clear to me (it looks like showing something\n> count but the only \"last\" one?).\n\nIt's the number of times that the last_error has occurred.\nUnless it's some kind of transient error that might get resolved\nwithout intervention, logical replication will get stuck in a loop\nretrying, and the last error will occur again and again, hence the\ncount of how many times that has happened.\nMaybe there's not much benefit in counting different errors prior to\nthe last error?\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 23 Nov 2021 17:21:29 +1100",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Sat, Nov 20, 2021 at 3:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Nov 19, 2021 at 7:41 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Thu, Nov 18, 2021 at 12:26 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Wed, Nov 17, 2021 at 7:12 PM osumi.takamichi@fujitsu.com\n> > > <osumi.takamichi@fujitsu.com> wrote:\n> > > >\n> > > > On Wednesday, November 17, 2021 10:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > > Can you please tell us why you think the names in your proposed patch are\n> > > > > better than the existing names proposed in Sawada-San's patch? Is it because\n> > > > > those fields always contain the information of the last or latest error that\n> > > > > occurred in the corresponding subscription worker?\n> > > > This is one reason.\n> > > >\n> > > > Another big reason comes from the final alignment when we list up all columns of both patches.\n> > > > The patches in this thread is trying to introduce a column that indicates\n> > > > cumulative count of error to show all error counts that the worker got in the past.\n> > > >\n> > >\n> > > Okay, I see your point and it makes sense to rename columns after\n> > > these other stats. I am not able to come up with any better names than\n> > > what is being used here. 
Sawada-San, do you agree with this, or do let\n> > > us know if you have any better ideas?\n> > >\n> >\n> > I'm concerned that these new names will introduce confusion; if we\n> > have last_error_relid, last_error_command, last_error_message,\n> > last_error_time, and last_error_xid, I think users might think that\n> > first_error_time is the timestamp at which an error occurred for the\n> > first time in the subscription worker.\n> >\n>\n> Isn't to some extent that confusion already exists because of\n> last_error_time column?\n>\n> > Also, I'm not sure\n> > last_error_count is not clear to me (it looks like showing something\n> > count but the only \"last\" one?).\n> >\n>\n> I feel if all the error related columns have \"last_error_\" as a prefix\n> then it should not be that confusing?\n>\n> > An alternative idea would be to add\n> > total_error_count by this patch, resulting in having both error_count\n> > and total_error_count. Regarding commit_count and abort_count, I\n> > personally think xact_commit and xact_rollback would be better since\n> > they’re more consistent with pg_stat_database view, although we might\n> > already have discussed that.\n> >\n>\n> Even if we decide to change the column names to\n> xact_commit/xact_rollback, I think with additional non-error columns\n> it will be clear to add 'error' in column names corresponding to error\n> columns, and last_error_* seems to be consistent with what we have in\n> pg_stat_archiver (last_failed_wal, last_failed_time).\n\nOkay, I agree that last_error_* columns will be consistent.\n\n> Your point\n> related to first_error_time has merit and I don't have a better answer\n> for it. I think it is just a convenience column and we are not sure\n> whether that will be required in practice so maybe we can drop that\n> column and come back to it later once we get some field feedback on\n> this view?\n\n+1\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 24 Nov 2021 09:19:29 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Tue, Nov 23, 2021 at 3:21 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> On Sat, Nov 20, 2021 at 1:11 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I'm concerned that these new names will introduce confusion; if we\n> > have last_error_relid, last_error_command, last_error_message,\n> > last_error_time, and last_error_xid, I think users might think that\n> > first_error_time is the timestamp at which an error occurred for the\n> > first time in the subscription worker.\n>\n> You mean you think users might think \"first_error_time\" is the\n> timestamp at which the last_error first occurred (rather than the\n> timestamp of the first of any type of error that occurred) on that\n> worker?\n\nI felt that \"first_error_time\" is the timestamp of the first of any\ntype of error that occurred on the worker.\n\n>\n> > ... Also, I'm not sure\n> > last_error_count is not clear to me (it looks like showing something\n> > count but the only \"last\" one?).\n>\n> It's the number of times that the last_error has occurred.\n> Unless it's some kind of transient error, that might get resolved\n> without intervention, logical replication will get stuck in a loop\n> retrying and the last error will occur again and again, hence the\n> count of how many times that has happened.\n> Maybe there's not much benefit in counting different errors prior to\n> the last error?\n\nThe name \"last_error_count\" is clearer to me now. I had felt it was\nodd that there is a count of it, since the last error refers to *one*\nerror that occurred last.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 24 Nov 2021 09:26:59 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Friday, November 19, 2021 11:11 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> Besides that, I’m not sure how useful commit_bytes, abort_bytes, and\r\n> error_bytes are. I originally thought these statistics track the size of received\r\n> data, i.g., how much data is transferred from the publisher and processed on\r\n> the subscriber. But what the view currently has is how much memory is used in\r\n> the subscription worker. The subscription worker emulates\r\n> ReorderBufferChangeSize() on the subscriber side but, as the comment of\r\n> update_apply_change_size() mentions, the size in the view is not accurate:\r\n...\r\n> I guess that the purpose of these values is to compare them to total_bytes,\r\n> stream_byte, and spill_bytes but if the calculation is not accurate, does it mean\r\n> that the more stats are updated, the more the stats will be getting inaccurate?\r\nThanks for your comment!\r\n\r\nI tried to address your concerns about the byte columns, but there are some really difficult issues to solve.\r\nFor example, to begin with, the messages of the apply worker are different from those of\r\nthe reorder buffer.\r\n\r\nTherefore, I decided to split the previous patch and make the counter columns go first.\r\nv14 was checked by pgperltidy and pgindent.\r\n\r\nThis patch can be applied on top of any PG commit after 8d74fc9 (the introduction of\r\npg_stat_subscription_workers).\r\n\r\nBest Regards,\r\n\tTakamichi Osumi",
"msg_date": "Wed, 1 Dec 2021 09:34:05 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Wed, Dec 1, 2021 at 3:04 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Friday, November 19, 2021 11:11 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > Besides that, I’m not sure how useful commit_bytes, abort_bytes, and\n> > error_bytes are. I originally thought these statistics track the size of received\n> > data, i.g., how much data is transferred from the publisher and processed on\n> > the subscriber. But what the view currently has is how much memory is used in\n> > the subscription worker. The subscription worker emulates\n> > ReorderBufferChangeSize() on the subscriber side but, as the comment of\n> > update_apply_change_size() mentions, the size in the view is not accurate:\n> ...\n> > I guess that the purpose of these values is to compare them to total_bytes,\n> > stream_byte, and spill_bytes but if the calculation is not accurate, does it mean\n> > that the more stats are updated, the more the stats will be getting inaccurate?\n> Thanks for your comment !\n>\n> I tried to solve your concerns about byte columns but there are really difficult issues to solve.\n> For example, to begin with the messages of apply worker are different from those of\n> reorder buffer.\n>\n> Therefore, I decided to split the previous patch and make counter columns go first.\n> v14 was checked by pgperltidy and pgindent.\n>\n> This patch can be applied to the PG whose commit id is after 8d74fc9 (introduction of\n> pg_stat_subscription_workers).\n\nThanks for the updated patch.\nCurrently we are storing the commit count, error_count and abort_count\nfor each table of the table sync operation. If we have thousands of\ntables, we will be storing the information for each of the tables.\nShouldn't we be storing the consolidated information in this case.\ndiff --git a/src/backend/replication/logical/tablesync.c\nb/src/backend/replication/logical/tablesync.c\nindex f07983a..02e9486 100644\n--- a/src/backend/replication/logical/tablesync.c\n+++ b/src/backend/replication/logical/tablesync.c\n@@ -1149,6 +1149,11 @@ copy_table_done:\n MyLogicalRepWorker->relstate_lsn = *origin_startpos;\n SpinLockRelease(&MyLogicalRepWorker->relmutex);\n\n+ /* Report the success of table sync. */\n+ pgstat_report_subworker_xact_end(MyLogicalRepWorker->subid,\n+\n MyLogicalRepWorker->relid,\n+\n 0 /* no logical message type */ );\n\npostgres=# select * from pg_stat_subscription_workers ;\n subid | subname | subrelid | commit_count | error_count | abort_count\n| last_error_relid | last_error_command | last_error_xid |\nlast_error_count | last_error_message | last_error_time\n-------+---------+----------+--------------+-------------+-------------+------------------+--------------------+----------------+------------------+--------------------+-----------------\n 16411 | sub1 | 16387 | 1 | 0 | 0\n| | | |\n 0 | |\n 16411 | sub1 | 16396 | 1 | 0 | 0\n| | | |\n 0 | |\n 16411 | sub1 | 16390 | 1 | 0 | 0\n| | | |\n 0 | |\n 16411 | sub1 | 16393 | 1 | 0 | 0\n| | | |\n 0 | |\n 16411 | sub1 | 16402 | 1 | 0 | 0\n| | | |\n 0 | |\n 16411 | sub1 | 16408 | 1 | 0 | 0\n| | | |\n 0 | |\n 16411 | sub1 | 16384 | 1 | 0 | 0\n| | | |\n 0 | |\n 16411 | sub1 | 16399 | 1 | 0 | 0\n| | | |\n 0 | |\n 16411 | sub1 | 16405 | 1 | 0 | 0\n| | | |\n 0 | |\n(9 rows)\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 3 Dec 2021 11:41:32 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Friday, December 3, 2021 3:12 PM vignesh C <vignesh21@gmail.com> wrote:\r\n> Thanks for the updated patch.\r\n> Currently we are storing the commit count, error_count and abort_count for\r\n> each table of the table sync operation. If we have thousands of tables, we will\r\n> be storing the information for each of the tables.\r\n> Shouldn't we be storing the consolidated information in this case.\r\n> diff --git a/src/backend/replication/logical/tablesync.c\r\n> b/src/backend/replication/logical/tablesync.c\r\n> index f07983a..02e9486 100644\r\n> --- a/src/backend/replication/logical/tablesync.c\r\n> +++ b/src/backend/replication/logical/tablesync.c\r\n> @@ -1149,6 +1149,11 @@ copy_table_done:\r\n> MyLogicalRepWorker->relstate_lsn = *origin_startpos;\r\n> SpinLockRelease(&MyLogicalRepWorker->relmutex);\r\n> \r\n> + /* Report the success of table sync. */\r\n> + pgstat_report_subworker_xact_end(MyLogicalRepWorker->subid,\r\n> +\r\n> MyLogicalRepWorker->relid,\r\n> +\r\n> 0 /* no logical message type */ );\r\nOkay.\r\n\r\nI united all stats into that of apply worker.\r\nIn line with this change, I fixed the TAP tests as well\r\nto cover the updates of stats done by table sync workers.\r\n\r\nAlso, during my self-review, I noticed that\r\nI should call pgstat_report_subworker_xact_end() before\r\nprocess_syncing_tables() because it can lead to process\r\nexit, which results in missing one increment of the stats columns.\r\nI noted this point in a comment as well.\r\n\r\nBest Regards,\r\n\tTakamichi Osumi",
"msg_date": "Sat, 4 Dec 2021 13:01:58 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Sat, Dec 4, 2021 at 6:32 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Friday, December 3, 2021 3:12 PM vignesh C <vignesh21@gmail.com> wrote:\n> > Thanks for the updated patch.\n> > Currently we are storing the commit count, error_count and abort_count for\n> > each table of the table sync operation. If we have thousands of tables, we will\n> > be storing the information for each of the tables.\n> > Shouldn't we be storing the consolidated information in this case.\n> > diff --git a/src/backend/replication/logical/tablesync.c\n> > b/src/backend/replication/logical/tablesync.c\n> > index f07983a..02e9486 100644\n> > --- a/src/backend/replication/logical/tablesync.c\n> > +++ b/src/backend/replication/logical/tablesync.c\n> > @@ -1149,6 +1149,11 @@ copy_table_done:\n> > MyLogicalRepWorker->relstate_lsn = *origin_startpos;\n> > SpinLockRelease(&MyLogicalRepWorker->relmutex);\n> >\n> > + /* Report the success of table sync. */\n> > + pgstat_report_subworker_xact_end(MyLogicalRepWorker->subid,\n> > +\n> > MyLogicalRepWorker->relid,\n> > +\n> > 0 /* no logical message type */ );\n> Okay.\n>\n> I united all stats into that of apply worker.\n> In line with this change, I fixed the TAP tests as well\n> to cover the updates of stats done by table sync workers.\n>\n> Also, during my self-review, I noticed that\n> I should call pgstat_report_subworker_xact_end() before\n> process_syncing_tables() because it can lead to process\n> exit, which results in missing one increment of the stats columns.\n> I noted this point in a comment as well.\n\nThanks for the updated patch, few comments:\n1) We can keep the documentation similar to mention the count includes\nboth table sync worker / main apply worker in case of\ncommit_count/error_count and abort_count to keep it consistent.\n+ <structfield>commit_count</structfield> <type>bigint</type>\n+ </para>\n+ <para>\n+ Number of transactions successfully applied in this subscription.\n+ COMMIT and COMMIT PREPARED increments this counter.\n+ </para></entry>\n+ </row>\n+\n+ <row>\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>error_count</structfield> <type>bigint</type>\n+ </para>\n+ <para>\n+ Number of transactions that failed to be applied by the table\n+ sync worker or main apply worker in this subscription.\n+ </para></entry>\n+ </row>\n+\n+ <row>\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>abort_count</structfield> <type>bigint</type>\n+ </para>\n+ <para>\n+ Number of transactions aborted in this subscription.\n+ ROLLBACK PREPARED increments this counter.\n+ </para></entry>\n+ </row>\n\n2) Can this be changed:\n+ /*\n+ * If this is a new error reported by table sync worker,\nconsolidate this\n+ * error count into the entry of apply worker.\n+ */\n+ if (OidIsValid(msg->m_subrelid))\n+ {\n+ /* Gain the apply worker stats */\n+ subwentry = pgstat_get_subworker_entry(dbentry, msg->m_subid,\n+\n InvalidOid, true);\n+ subwentry->error_count++;\n+ }\n+ else\n+ subwentry->error_count++; /* increment the apply\nworker's counter. */\nTo:\n+ /*\n+ * If this is a new error reported by table sync worker,\nconsolidate this\n+ * error count into the entry of apply worker.\n+ */\n+ if (OidIsValid(msg->m_subrelid))\n+ /* Gain the apply worker stats */\n+ subwentry = pgstat_get_subworker_entry(dbentry, msg->m_subid,\n+\n InvalidOid, true);\n+\n+ subwentry->error_count++; /* increment the apply\nworker's counter. */\n\n3) Since both 026_worker_stats and 027_worker_xact_stats.pl are\ntesting pg_stat_subscription_workers, can we move the tests to\n026_worker_stats.pl. If possible the error_count validation can be\ncombined with the existing tests.\ndiff --git a/src/test/subscription/t/027_worker_xact_stats.pl\nb/src/test/subscription/t/027_worker_xact_stats.pl\nnew file mode 100644\nindex 0000000..31dbea1\n--- /dev/null\n+++ b/src/test/subscription/t/027_worker_xact_stats.pl\n@@ -0,0 +1,162 @@\n+\n+# Copyright (c) 2021, PostgreSQL Global Development Group\n+\n+# Tests for subscription worker statistics during apply.\n+use strict;\n+use warnings;\n+use PostgreSQL::Test::Cluster;\n+use PostgreSQL::Test::Utils;\n+use Test::More tests => 1;\n+\n+# Create publisher node\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Mon, 6 Dec 2021 19:57:01 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Monday, December 6, 2021 11:27 PM vignesh C <vignesh21@gmail.com> wrote:\r\n> Thanks for the updated patch, few comments:\r\nThank you for your review !\r\n\r\n> 1) We can keep the documentation similar to mention the count includes both\r\n> table sync worker / main apply worker in case of commit_count/error_count\r\n> and abort_count to keep it consistent.\r\n> + <structfield>commit_count</structfield> <type>bigint</type>\r\n> + </para>\r\n> + <para>\r\n> + Number of transactions successfully applied in this subscription.\r\n> + COMMIT and COMMIT PREPARED increments this counter.\r\n> + </para></entry>\r\n> + </row>\r\n> +\r\n> + <row>\r\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\r\n> + <structfield>error_count</structfield> <type>bigint</type>\r\n> + </para>\r\n> + <para>\r\n> + Number of transactions that failed to be applied by the table\r\n> + sync worker or main apply worker in this subscription.\r\n> + </para></entry>\r\n> + </row>\r\n> +\r\n> + <row>\r\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\r\n> + <structfield>abort_count</structfield> <type>bigint</type>\r\n> + </para>\r\n> + <para>\r\n> + Number of transactions aborted in this subscription.\r\n> + ROLLBACK PREPARED increments this counter.\r\n> + </para></entry>\r\n> + </row>\r\nYeah, you are right. Fixed.\r\nNote that abort_count is not used by table sync worker.\r\n\r\n \r\n> 2) Can this be changed:\r\n> + /*\r\n> + * If this is a new error reported by table sync worker,\r\nconsolidate this\r\n> + * error count into the entry of apply worker.\r\n> + */\r\n> + if (OidIsValid(msg->m_subrelid))\r\n> + {\r\n> + /* Gain the apply worker stats */\r\n> + subwentry = pgstat_get_subworker_entry(dbentry,\r\n> + msg->m_subid,\r\n> +\r\n> InvalidOid, true);\r\n> + subwentry->error_count++;\r\n> + }\r\n> + else\r\n> + subwentry->error_count++; /* increment the apply\r\n> worker's counter. */\r\n> To:\r\n> + /*\r\n> + * If this is a new error reported by table sync worker,\r\nconsolidate this\r\n> + * error count into the entry of apply worker.\r\n> + */\r\n> + if (OidIsValid(msg->m_subrelid))\r\n> + /* Gain the apply worker stats */\r\n> + subwentry = pgstat_get_subworker_entry(dbentry,\r\n> + msg->m_subid,\r\n> +\r\n> InvalidOid, true);\r\n> +\r\n> + subwentry->error_count++; /* increment the apply\r\n> worker's counter. */\r\nYour suggestion looks better.\r\nAlso, I fixed some comments of this part\r\nso that we don't need to add a separate comment at the bottom\r\nfor the increment of the apply worker.\r\n\r\n \r\n> 3) Since both 026_worker_stats and 027_worker_xact_stats.pl are testing\r\n> pg_stat_subscription_workers, can we move the tests to 026_worker_stats.pl.\r\n> If possible the error_count validation can be combined with the existing tests.\r\n> diff --git a/src/test/subscription/t/027_worker_xact_stats.pl\r\n> b/src/test/subscription/t/027_worker_xact_stats.pl\r\n> new file mode 100644\r\n> index 0000000..31dbea1\r\n> --- /dev/null\r\n> +++ b/src/test/subscription/t/027_worker_xact_stats.pl\r\n> @@ -0,0 +1,162 @@\r\n> +\r\n> +# Copyright (c) 2021, PostgreSQL Global Development Group\r\n> +\r\n> +# Tests for subscription worker statistics during apply.\r\n> +use strict;\r\n> +use warnings;\r\n> +use PostgreSQL::Test::Cluster;\r\n> +use PostgreSQL::Test::Utils;\r\n> +use Test::More tests => 1;\r\n> +\r\n> +# Create publisher node\r\nRight. I've integrated my tests with 026_worker_stats.pl.\r\nI think error_count validations are combined as you suggested.\r\nAnother change I did is to introduce one function\r\nto contribute to better readability of the stats tests.\r\n\r\nHere, the 026_worker_stats.pl didn't look aligned by\r\npgperltidy. This is not a serious issue at all.\r\nYet, when I ran pgperltidy, the existing codes\r\nthat required adjustments came into my patch.\r\nTherefore, I made a separate part for this.\r\n\r\nBest Regards,\r\n\tTakamichi Osumi",
"msg_date": "Tue, 7 Dec 2021 09:42:35 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Tue, Dec 7, 2021 at 3:12 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n\nFew questions and comments:\n========================\n1.\nThe <structname>pg_stat_subscription_workers</structname> view will contain\n one row per subscription worker on which errors have occurred, for workers\n applying logical replication changes and workers handling the initial data\n- copy of the subscribed tables. The statistics entry is removed when the\n- corresponding subscription is dropped.\n+ copy of the subscribed tables. Also, the row corresponding to the apply\n+ worker shows all transaction statistics of both types of workers on the\n+ subscription. The statistics entry is removed when the corresponding\n+ subscription is dropped.\n\nWhy did you choose to show stats for both types of workers in one row?\n\n2.\n+ PGSTAT_MTYPE_SUBWORKERXACTEND,\n } StatMsgType;\n\nI don't think we comma with the last message type.\n\n3.\n+ Oid m_subrelid;\n+\n+ /* necessary to determine column to increment */\n+ LogicalRepMsgType m_command;\n+\n+} PgStat_MsgSubWorkerXactEnd;\n\nIs m_subrelid used in this patch? If not, why did you keep it? I think\nif you choose to show separate stats for table sync and apply worker\nthen probably it will be used.\n\n4.\n /*\n+ * Cumulative transaction statistics of subscription worker\n+ */\n+ PgStat_Counter commit_count;\n+ PgStat_Counter error_count;\n+ PgStat_Counter abort_count;\n+\n\nI think it is better to keep the order of columns as commit_count,\nabort_count, error_count in the entire patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 13 Dec 2021 14:49:08 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Monday, December 13, 2021 6:19 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Tue, Dec 7, 2021 at 3:12 PM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> \r\n> Few questions and comments:\r\nThank you for your comments !\r\n\r\n> ========================\r\n> 1.\r\n> The <structname>pg_stat_subscription_workers</structname> view will\r\n> contain\r\n> one row per subscription worker on which errors have occurred, for workers\r\n> applying logical replication changes and workers handling the initial data\r\n> - copy of the subscribed tables. The statistics entry is removed when the\r\n> - corresponding subscription is dropped.\r\n> + copy of the subscribed tables. Also, the row corresponding to the apply\r\n> + worker shows all transaction statistics of both types of workers on the\r\n> + subscription. The statistics entry is removed when the corresponding\r\n> + subscription is dropped.\r\n> \r\n> Why did you choose to show stats for both types of workers in one row?\r\nThis is because if we have hundreds or thousands of tables for table sync,\r\nwe need to create many entries to cover them and store the entries for all tables.\r\n\r\n\r\n> 2.\r\n> + PGSTAT_MTYPE_SUBWORKERXACTEND,\r\n> } StatMsgType;\r\n> \r\n> I don't think we comma with the last message type.\r\n> 4.\r\n> /*\r\n> + * Cumulative transaction statistics of subscription worker */\r\n> + PgStat_Counter commit_count; PgStat_Counter error_count;\r\n> + PgStat_Counter abort_count;\r\n> +\r\n> \r\n> I think it is better to keep the order of columns as commit_count, abort_count,\r\n> error_count in the entire patch.\r\nOkay, I'll fix both points in the next version.\r\n\r\n \r\n> 3.\r\n> + Oid m_subrelid;\r\n> +\r\n> + /* necessary to determine column to increment */ LogicalRepMsgType\r\n> + m_command;\r\n> +\r\n> +} PgStat_MsgSubWorkerXactEnd;\r\n> \r\n> Is m_subrelid used in this patch? If not, why did you keep it?\r\nAbsolutely, this was a mistake when I took the decision to merge both stats\r\nof table sync and apply worker.\r\n\r\n> I think if you choose\r\n> to show separate stats for table sync and apply worker then probably it will be\r\n> used.\r\nYeah, I'll fix this. Of course, after I could confirm that the idea for merging the\r\ntwo types of workers stats was acceptable for you and others.\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n\r\n",
"msg_date": "Mon, 13 Dec 2021 12:18:15 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Tue, Dec 7, 2021 at 3:12 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Monday, December 6, 2021 11:27 PM vignesh C <vignesh21@gmail.com> wrote:\n> > Thanks for the updated patch, few comments:\n> Thank you for your review !\n>\n> > 1) We can keep the documentation similar to mention the count includes both\n> > table sync worker / main apply worker in case of commit_count/error_count\n> > and abort_count to keep it consistent.\n> > + <structfield>commit_count</structfield> <type>bigint</type>\n> > + </para>\n> > + <para>\n> > + Number of transactions successfully applied in this subscription.\n> > + COMMIT and COMMIT PREPARED increments this counter.\n> > + </para></entry>\n> > + </row>\n> > +\n> > + <row>\n> > + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> > + <structfield>error_count</structfield> <type>bigint</type>\n> > + </para>\n> > + <para>\n> > + Number of transactions that failed to be applied by the table\n> > + sync worker or main apply worker in this subscription.\n> > + </para></entry>\n> > + </row>\n> > +\n> > + <row>\n> > + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> > + <structfield>abort_count</structfield> <type>bigint</type>\n> > + </para>\n> > + <para>\n> > + Number of transactions aborted in this subscription.\n> > + ROLLBACK PREPARED increments this counter.\n> > + </para></entry>\n> > + </row>\n> Yeah, you are right. Fixed.\n> Note that abort_count is not used by table sync worker.\n>\n>\n> > 2) Can this be changed:\n> > + /*\n> > + * If this is a new error reported by table sync worker,\n> > consolidate this\n> > + * error count into the entry of apply worker.\n> > + */\n> > + if (OidIsValid(msg->m_subrelid))\n> > + {\n> > + /* Gain the apply worker stats */\n> > + subwentry = pgstat_get_subworker_entry(dbentry,\n> > + msg->m_subid,\n> > +\n> > InvalidOid, true);\n> > + subwentry->error_count++;\n> > + }\n> > + else\n> > + subwentry->error_count++; /* increment the apply\n> > worker's counter. */\n> > To:\n> > + /*\n> > + * If this is a new error reported by table sync worker,\n> > consolidate this\n> > + * error count into the entry of apply worker.\n> > + */\n> > + if (OidIsValid(msg->m_subrelid))\n> > + /* Gain the apply worker stats */\n> > + subwentry = pgstat_get_subworker_entry(dbentry,\n> > + msg->m_subid,\n> > +\n> > InvalidOid, true);\n> > +\n> > + subwentry->error_count++; /* increment the apply\n> > worker's counter. */\n> Your suggestion looks better.\n> Also, I fixed some comments of this part\n> so that we don't need to add a separate comment at the bottom\n> for the increment of the apply worker.\n>\n>\n> > 3) Since both 026_worker_stats and 027_worker_xact_stats.pl are testing\n> > pg_stat_subscription_workers, can we move the tests to 026_worker_stats.pl.\n> > If possible the error_count validation can be combined with the existing tests.\n> > diff --git a/src/test/subscription/t/027_worker_xact_stats.pl\n> > b/src/test/subscription/t/027_worker_xact_stats.pl\n> > new file mode 100644\n> > index 0000000..31dbea1\n> > --- /dev/null\n> > +++ b/src/test/subscription/t/027_worker_xact_stats.pl\n> > @@ -0,0 +1,162 @@\n> > +\n> > +# Copyright (c) 2021, PostgreSQL Global Development Group\n> > +\n> > +# Tests for subscription worker statistics during apply.\n> > +use strict;\n> > +use warnings;\n> > +use PostgreSQL::Test::Cluster;\n> > +use PostgreSQL::Test::Utils;\n> > +use Test::More tests => 1;\n> > +\n> > +# Create publisher node\n> Right. I've integrated my tests with 026_worker_stats.pl.\n> I think error_count validations are combined as you suggested.\n> Another change I did is to introduce one function\n> to contribute to better readability of the stats tests.\n>\n> Here, the 026_worker_stats.pl didn't look aligned by\n> pgperltidy. This is not a serious issue at all.\n> Yet, when I ran pgperltidy, the existing codes\n> that required adjustments came into my patch.\n> Therefore, I made a separate part for this.\n\nThanks for the updated patch, few comments:\n1) Can we change this:\n /*\n+ * Report the success of table sync as one commit to consolidate all\n+ * transaction stats into one record.\n+ */\n+ pgstat_report_subworker_xact_end(MyLogicalRepWorker->subid,\n+\n LOGICAL_REP_MSG_COMMIT);\n+\nTo:\n /* Report the success of table sync */\n+ pgstat_report_subworker_xact_end(MyLogicalRepWorker->subid,\n+\n LOGICAL_REP_MSG_COMMIT);\n+\n\n2) Typo: ealier should be earlier\n+ /*\n+ * Report ealier than the call of process_syncing_tables() not\nto miss an\n+ * increment of commit_count in case it leads to the process exit. See\n+ * process_syncing_tables_for_apply().\n+ */\n+ pgstat_report_subworker_xact_end(MyLogicalRepWorker->subid,\n+\n LOGICAL_REP_MSG_COMMIT);\n+\n\n3) Should we add an Assert for subwentry:\n+ /*\n+ * If this is a new error reported by table sync worker,\nconsolidate this\n+ * error count into the entry of apply worker, by swapping the stats\n+ * entries.\n+ */\n+ if (OidIsValid(msg->m_subrelid))\n+ subwentry = pgstat_get_subworker_entry(dbentry, msg->m_subid,\n+\n InvalidOid, true);\n+ subwentry->error_count++;\n\n4) Can we slightly change it to :We can change it:\n+# Check the update of stats counters.\n+confirm_transaction_stats_update(\n+ $node_subscriber,\n+ 'commit_count = 1',\n+ 'the commit_count increment by table sync');\n+\n+confirm_transaction_stats_update(\n+ $node_subscriber,\n+ 'error_count = 1',\n+ 'the error_count increment by table sync');\nto:\n+# Check updation of subscription worker transaction count statistics.\n+confirm_transaction_stats_update(\n+ $node_subscriber,\n+ 'commit_count = 1',\n+ 'check table sync worker commit count is updated');\n+\n+confirm_transaction_stats_update(\n+ $node_subscriber,\n+ 'error_count = 1',\n+ 'check table sync worker error count is updated');\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Mon, 13 Dec 2021 21:15:16 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Mon, Dec 13, 2021 at 5:48 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Monday, December 13, 2021 6:19 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Tue, Dec 7, 2021 at 3:12 PM osumi.takamichi@fujitsu.com\n> > <osumi.takamichi@fujitsu.com> wrote:\n> >\n> > Few questions and comments:\n> Thank you for your comments !\n>\n> > ========================\n> > 1.\n> > The <structname>pg_stat_subscription_workers</structname> view will\n> > contain\n> > one row per subscription worker on which errors have occurred, for workers\n> > applying logical replication changes and workers handling the initial data\n> > - copy of the subscribed tables. The statistics entry is removed when the\n> > - corresponding subscription is dropped.\n> > + copy of the subscribed tables. Also, the row corresponding to the apply\n> > + worker shows all transaction statistics of both types of workers on the\n> > + subscription. The statistics entry is removed when the corresponding\n> > + subscription is dropped.\n> >\n> > Why did you choose to show stats for both types of workers in one row?\n> This is because if we have hundreds or thousands of tables for table sync,\n> we need to create many entries to cover them and store the entries for all tables.\n>\n\nIf we fear a large number of entries for such workers then won't it be\nbetter to show the value of these stats only for apply workers. I\nthink normally the table sync workers perform only copy operation or\nmaybe a fixed number of xacts, so, one might not be interested in the\ntransaction stats of these workers. I find merging only specific stats\nof two different types of workers confusing.\n\nWhat do others think about this?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 14 Dec 2021 07:58:18 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Tues, Dec 14, 2021 10:28 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Mon, Dec 13, 2021 at 5:48 PM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> >\r\n> > On Monday, December 13, 2021 6:19 PM Amit Kapila\r\n> <amit.kapila16@gmail.com> wrote:\r\n> > > On Tue, Dec 7, 2021 at 3:12 PM osumi.takamichi@fujitsu.com\r\n> > > <osumi.takamichi@fujitsu.com> wrote:\r\n> > >\r\n> > > Few questions and comments:\r\n> > Thank you for your comments !\r\n> >\r\n> > > ========================\r\n> > > 1.\r\n> > > one row per subscription worker on which errors have occurred, for workers\r\n> > > applying logical replication changes and workers handling the initial data\r\n> > > - copy of the subscribed tables. The statistics entry is removed when the\r\n> > > - corresponding subscription is dropped.\r\n> > > + copy of the subscribed tables. Also, the row corresponding to the apply\r\n> > > + worker shows all transaction statistics of both types of workers on the\r\n> > > + subscription. The statistics entry is removed when the corresponding\r\n> > > + subscription is dropped.\r\n> > >\r\n> > > Why did you choose to show stats for both types of workers in one row?\r\n> > This is because if we have hundreds or thousands of tables for table sync,\r\n> > we need to create many entries to cover them and store the entries for all\r\n> > tables.\r\n> >\r\n> \r\n> If we fear a large number of entries for such workers then won't it be\r\n> better to show the value of these stats for apply workers. I\r\n> think normally the table sync workers perform only copy operation or\r\n> maybe a fixed number of xacts, so, one might not be interested in the\r\n> transaction stats of these workers. I find merging only specific stats\r\n> of two different types of workers confusing.\r\n> \r\n> What do others think about this?\r\n\r\nPersonally, I agreed that merging two types of stats into one row might not be\r\na good idea. And the xact stats of table sync workers are usually less\r\ninteresting than the apply worker's, So, it's seems acceptable to me if we\r\nshow stats only for apply workers and document about this.\r\n\r\nBest regards,\r\nHou zj\r\n",
"msg_date": "Tue, 14 Dec 2021 13:28:01 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Tue, Dec 14, 2021 at 11:28 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Dec 13, 2021 at 5:48 PM osumi.takamichi@fujitsu.com\n> <osumi.takamichi@fujitsu.com> wrote:\n> >\n> > On Monday, December 13, 2021 6:19 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > On Tue, Dec 7, 2021 at 3:12 PM osumi.takamichi@fujitsu.com\n> > > <osumi.takamichi@fujitsu.com> wrote:\n> > >\n> > > Few questions and comments:\n> > Thank you for your comments !\n> >\n> > > ========================\n> > > 1.\n> > > The <structname>pg_stat_subscription_workers</structname> view will\n> > > contain\n> > > one row per subscription worker on which errors have occurred, for workers\n> > > applying logical replication changes and workers handling the initial data\n> > > - copy of the subscribed tables. The statistics entry is removed when the\n> > > - corresponding subscription is dropped.\n> > > + copy of the subscribed tables. Also, the row corresponding to the apply\n> > > + worker shows all transaction statistics of both types of workers on the\n> > > + subscription. The statistics entry is removed when the corresponding\n> > > + subscription is dropped.\n> > >\n> > > Why did you choose to show stats for both types of workers in one row?\n> > This is because if we have hundreds or thousands of tables for table sync,\n> > we need to create many entries to cover them and store the entries for all tables.\n> >\n>\n> If we fear a large number of entries for such workers then won't it be\n> better to show the value of these stats only for apply workers. I\n> think normally the table sync workers perform only copy operation or\n> maybe a fixed number of xacts, so, one might not be interested in the\n> transaction stats of these workers. I find merging only specific stats\n> of two different types of workers confusing.\n>\n> What do others think about this?\n\nI understand the concern to have a large number of entries but I agree\nthat merging only specific stats would confuse users. As Amit\nsuggested, it'd be better to show only apply workers' transaction\nstats.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 15 Dec 2021 14:09:16 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Tue, Dec 14, 2021 at 7:58 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Dec 13, 2021 at 5:48 PM osumi.takamichi@fujitsu.com\n> <osumi.takamichi@fujitsu.com> wrote:\n> >\n> > On Monday, December 13, 2021 6:19 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > On Tue, Dec 7, 2021 at 3:12 PM osumi.takamichi@fujitsu.com\n> > > <osumi.takamichi@fujitsu.com> wrote:\n> > >\n> > > Few questions and comments:\n> > Thank you for your comments !\n> >\n> > > ========================\n> > > 1.\n> > > The <structname>pg_stat_subscription_workers</structname> view will\n> > > contain\n> > > one row per subscription worker on which errors have occurred, for workers\n> > > applying logical replication changes and workers handling the initial data\n> > > - copy of the subscribed tables. The statistics entry is removed when the\n> > > - corresponding subscription is dropped.\n> > > + copy of the subscribed tables. Also, the row corresponding to the apply\n> > > + worker shows all transaction statistics of both types of workers on the\n> > > + subscription. The statistics entry is removed when the corresponding\n> > > + subscription is dropped.\n> > >\n> > > Why did you choose to show stats for both types of workers in one row?\n> > This is because if we have hundreds or thousands of tables for table sync,\n> > we need to create many entries to cover them and store the entries for all tables.\n> >\n>\n> If we fear a large number of entries for such workers then won't it be\n> better to show the value of these stats only for apply workers. I\n> think normally the table sync workers perform only copy operation or\n> maybe a fixed number of xacts, so, one might not be interested in the\n> transaction stats of these workers. 
I find merging only specific stats\n> of two different types of workers confusing.\n>\n> What do others think about this?\n\nWe can remove the table sync workers' transaction stats count to avoid\nconfusion, and take care of the documentation changes accordingly.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 15 Dec 2021 18:21:44 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Wednesday, December 15, 2021 9:52 PM vignesh C <vignesh21@gmail.com> wrote:\r\n> On Tue, Dec 14, 2021 at 7:58 AM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> >\r\n> > On Mon, Dec 13, 2021 at 5:48 PM osumi.takamichi@fujitsu.com\r\n> > <osumi.takamichi@fujitsu.com> wrote:\r\n> > >\r\n> > > On Monday, December 13, 2021 6:19 PM Amit Kapila\r\n> <amit.kapila16@gmail.com> wrote:\r\n> > > > On Tue, Dec 7, 2021 at 3:12 PM osumi.takamichi@fujitsu.com\r\n> > > > <osumi.takamichi@fujitsu.com> wrote:\r\n> > > >\r\n> > > > Few questions and comments:\r\n> > > Thank you for your comments !\r\n> > >\r\n> > > > ========================\r\n> > > > 1.\r\n> > > > The <structname>pg_stat_subscription_workers</structname> view\r\n> > > > will contain\r\n> > > > one row per subscription worker on which errors have occurred, for\r\n> workers\r\n> > > > applying logical replication changes and workers handling the initial\r\n> data\r\n> > > > - copy of the subscribed tables. The statistics entry is removed\r\n> when the\r\n> > > > - corresponding subscription is dropped.\r\n> > > > + copy of the subscribed tables. Also, the row corresponding to the\r\n> apply\r\n> > > > + worker shows all transaction statistics of both types of workers on\r\n> the\r\n> > > > + subscription. The statistics entry is removed when the\r\n> corresponding\r\n> > > > + subscription is dropped.\r\n> > > >\r\n> > > > Why did you choose to show stats for both types of workers in one row?\r\n> > > This is because if we have hundreds or thousands of tables for table\r\n> > > sync, we need to create many entries to cover them and store the entries for\r\n> all tables.\r\n> > >\r\n> >\r\n> > If we fear a large number of entries for such workers then won't it be\r\n> > better to show the value of these stats only for apply workers. 
I\r\n> > think normally the table sync workers perform only copy operation or\r\n> > maybe a fixed number of xacts, so, one might not be interested in the\r\n> > transaction stats of these workers. I find merging only specific stats\r\n> > of two different types of workers confusing.\r\n> >\r\n> > What do others think about this?\r\n> \r\n> We can remove the table sync workers transaction stats count to avoid\r\n> confusion, take care of the documentation changes too accordingly.\r\nHi, apologies for my late reply.\r\n\r\nThank you, everyone for confirming the direction.\r\nI'll follow the consensus of the community\r\nand fix the patch, including other comments.\r\nI'll treat only the stats for apply workers.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n\r\n",
"msg_date": "Thu, 16 Dec 2021 06:59:57 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Tue, Dec 14, 2021 at 1:28 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> If we fear a large number of entries for such workers then won't it be\n> better to show the value of these stats only for apply workers. I\n> think normally the table sync workers perform only copy operation or\n> maybe a fixed number of xacts, so, one might not be interested in the\n> transaction stats of these workers. I find merging only specific stats\n> of two different types of workers confusing.\n>\n> What do others think about this?\n>\n\nI think it might be OK to NOT include the transaction stats of the\ntablesync workers, but my understanding (and slight concern) is that\ncurrently there is potentially some overlap in the work done by the\ntablesync and apply workers - perhaps the small patch (see [1]) proposed by\nPeter Smith could also be considered, in order to make that distinction of\nwork clearer, and the stats more meaningful?\n\n----\n[1]\nhttps://www.postgresql.org/message-id/flat/CAHut+Pt39PbQs0SxT9RMM89aYiZoQ0Kw46YZSkKZwK8z5HOr3g@mail.gmail.com\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\nOn Tue, Dec 14, 2021 at 1:28 PM Amit Kapila <amit.kapila16@gmail.com> wrote:>> If we fear a large number of entries for such workers then won't it be> better to show the value of these stats only for apply workers. I> think normally the table sync workers perform only copy operation or> maybe a fixed number of xacts, so, one might not be interested in the> transaction stats of these workers. 
I find merging only specific stats> of two different types of workers confusing.>> What do others think about this?>I think it might be OK to NOT include the transaction stats of the tablesync workers, but my understanding (and slight concern) is that currently there is potentially some overlap in the work done by the tablesync and apply workers - perhaps the small patch (see [1]) proposed by Peter Smith could also be considered, in order to make that distinction of work clearer, and the stats more meaningful?----[1] https://www.postgresql.org/message-id/flat/CAHut+Pt39PbQs0SxT9RMM89aYiZoQ0Kw46YZSkKZwK8z5HOr3g@mail.gmail.comRegards,Greg NancarrowFujitsu Australia",
"msg_date": "Thu, 16 Dec 2021 18:08:01 +1100",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Thursday, December 16, 2021 4:00 PM I wrote:\r\n> Thank you, everyone for confirming the direction.\r\n> I'll follow the consensus of the community and fix the patch, including other\r\n> comments.\r\n> I'll treat only the stats for apply workers.\r\nHi, I've created a new version v17 according to the recent discussion,\r\nwith changes to address other review comments.\r\n\r\nKindly have a look at it.\r\n\r\nBest Regards,\r\n\tTakamichi Osumi",
"msg_date": "Thu, 16 Dec 2021 11:36:46 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Monday, December 13, 2021 6:19 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Tue, Dec 7, 2021 at 3:12 PM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> >\r\n> \r\n> Few questions and comments:\r\n> ========================\r\n> 1.\r\n> The <structname>pg_stat_subscription_workers</structname> view will\r\n> contain\r\n> one row per subscription worker on which errors have occurred, for workers\r\n> applying logical replication changes and workers handling the initial data\r\n> - copy of the subscribed tables. The statistics entry is removed when the\r\n> - corresponding subscription is dropped.\r\n> + copy of the subscribed tables. Also, the row corresponding to the apply\r\n> + worker shows all transaction statistics of both types of workers on the\r\n> + subscription. The statistics entry is removed when the corresponding\r\n> + subscription is dropped.\r\n> \r\n> Why did you choose to show stats for both types of workers in one row?\r\nNow, the added stats show only the statistics of apply worker\r\nas we agreed.\r\n\r\n\r\n> 2.\r\n> + PGSTAT_MTYPE_SUBWORKERXACTEND,\r\n> } StatMsgType;\r\n> \r\n> I don't think we comma with the last message type.\r\nFixed.\r\n\r\n \r\n> 3.\r\n> + Oid m_subrelid;\r\n> +\r\n> + /* necessary to determine column to increment */ LogicalRepMsgType\r\n> + m_command;\r\n> +\r\n> +} PgStat_MsgSubWorkerXactEnd;\r\n> \r\n> Is m_subrelid used in this patch? If not, why did you keep it? 
I think if you\r\n> choose to show separate stats for table sync and apply worker then probably it\r\n> will be used.\r\nRemoved.\r\n\r\n\r\n> 4.\r\n> /*\r\n> + * Cumulative transaction statistics of subscription worker */\r\n> + PgStat_Counter commit_count; PgStat_Counter error_count;\r\n> + PgStat_Counter abort_count;\r\n> +\r\n> \r\n> I think it is better to keep the order of columns as commit_count, abort_count,\r\n> error_count in the entire patch.\r\nFixed.\r\n\r\n\r\nThe new patch is shared in [1].\r\n\r\n[1] - https://www.postgresql.org/message-id/TYCPR01MB83734A7A0596AC7ADB0DCB51ED779%40TYCPR01MB8373.jpnprd01.prod.outlook.com\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Thu, 16 Dec 2021 11:39:06 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Tuesday, December 14, 2021 12:45 AM vignesh C <vignesh21@gmail.com> wrote:\r\n> Thanks for the updated patch, few comments:\r\n> 1) Can we change this:\r\n> /*\r\n> + * Report the success of table sync as one commit to consolidate all\r\n> + * transaction stats into one record.\r\n> + */\r\n> + pgstat_report_subworker_xact_end(MyLogicalRepWorker->subid,\r\n> +\r\n> LOGICAL_REP_MSG_COMMIT);\r\n> +\r\n> To:\r\n> /* Report the success of table sync */\r\n> + pgstat_report_subworker_xact_end(MyLogicalRepWorker->subid,\r\n> +\r\n> LOGICAL_REP_MSG_COMMIT);\r\n> +\r\nThis function call that the table sync worker reports\r\nan update of stats has been removed according to the recent discussion.\r\n\r\n\r\n> 2) Typo: ealier should be earlier\r\n> + /*\r\n> + * Report ealier than the call of process_syncing_tables() not\r\n> to miss an\r\n> + * increment of commit_count in case it leads to the process exit. See\r\n> + * process_syncing_tables_for_apply().\r\n> + */\r\n> + pgstat_report_subworker_xact_end(MyLogicalRepWorker->subid,\r\n> +\r\n> LOGICAL_REP_MSG_COMMIT);\r\n> +\r\nThanks ! 
Fixed.\r\n\r\n \r\n> 3) Should we add an Assert for subwentry:\r\n> + /*\r\n> + * If this is a new error reported by table sync worker,\r\n> consolidate this\r\n> + * error count into the entry of apply worker, by swapping the stats\r\n> + * entries.\r\n> + */\r\n> + if (OidIsValid(msg->m_subrelid))\r\n> + subwentry = pgstat_get_subworker_entry(dbentry,\r\n> + msg->m_subid,\r\n> +\r\n> InvalidOid, true);\r\n> + subwentry->error_count++;\r\nThe latest implementation doesn't require\r\nthe call of pgstat_get_subworker_entry().\r\nSo, I skipped.\r\n\r\n> 4) Can we slightly change it to :We can change it:\r\n> +# Check the update of stats counters.\r\n> +confirm_transaction_stats_update(\r\n> + $node_subscriber,\r\n> + 'commit_count = 1',\r\n> + 'the commit_count increment by table sync');\r\n> +\r\n> +confirm_transaction_stats_update(\r\n> + $node_subscriber,\r\n> + 'error_count = 1',\r\n> + 'the error_count increment by table sync');\r\n> to:\r\n> +# Check updation of subscription worker transaction count statistics.\r\n> +confirm_transaction_stats_update(\r\n> + $node_subscriber,\r\n> + 'commit_count = 1',\r\n> + 'check table sync worker commit count is updated');\r\n> +\r\n> +confirm_transaction_stats_update(\r\n> + $node_subscriber,\r\n> + 'error_count = 1',\r\n> + 'check table sync worker error count is updated');\r\nI've removed the corresponding tests for table sync workers in the patch. \r\n\r\nBut, I adopted the comment suggestion partly for the tests of the apply worker.\r\nOn the other hand, I didn't fix the 3rd arguments of confirm_transaction_stats_update().\r\nIt needs to be a noun, because it's connected to another string\r\n\"Timed out while waiting for \". in the function.\r\nSee the definition of the function.\r\n\r\nThe new patch v17 is shared in [1].\r\n\r\n[1] - https://www.postgresql.org/message-id/TYCPR01MB83734A7A0596AC7ADB0DCB51ED779%40TYCPR01MB8373.jpnprd01.prod.outlook.com\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Thu, 16 Dec 2021 11:41:27 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Thursday, December 16, 2021 8:37 PM I wrote:\r\n> Hi, created a new version v17 according to the recent discussion with changes\r\n> to address other review comments.\r\nFYI, in v17 I've removed one part of the commit message\r\nabout spool file statistics on the subscriber.\r\nMy intention is just to get the patches into a more committable shape.\r\n\r\nAlthough I deleted it, I'd say there is still some room\r\nfor discussion of its necessity. This is because, to begin with,\r\nwe are interested in the disk writes (for logical replication,\r\npg_stat_replication_slots is an example), and secondly there can\r\nbe a scenario where, if the user of logical replication dislikes and wants\r\nto suppress unnecessary file writes on the subscriber\r\n(STREAM ABORT causes truncation of the file with changes, IIUC),\r\nthey can increase logical_decoding_work_mem on the publisher.\r\nI'll postpone this discussion until it becomes necessary,\r\nor abandon this idea if it's rejected. Anyway,\r\nI detached the discussion by removing it from the commit message.\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Thu, 16 Dec 2021 13:21:22 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "At Thu, 16 Dec 2021 11:36:46 +0000, \"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com> wrote in \n> On Thursday, December 16, 2021 4:00 PM I wrote:\n> > Thank you, everyone for confirming the direction.\n> > I'll follow the consensus of the community and fix the patch, including other\n> > comments.\n> > I'll treat only the stats for apply workers.\n> Hi, created a new version v17 according to the recent discussion\n> with changes to address other review comments.\n> \n> Kindly have a look at it.\n\nIt sends stats packets at every commit-like operation on apply\nworkers. The current pgstat is so smart that it refrains from sending\nstats packets at too high a frequency. We already suffer from frequent stats\npackets, so apply workers need to behave the same way.\n\nThat is, the new stats numbers are first accumulated locally, then the\naccumulated numbers are sent to the stats collector by pgstat_report_stat.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 17 Dec 2021 14:03:09 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical\n replication progress"
},
{
"msg_contents": "Friday, December 17, 2021 2:03 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> It sends stats packets at every commit-like operation on apply workers. The\n> current pgstat is so smart that it refrain from sending stats packets at too high\n> frequency. We already suffer frequent stats packets so apply workers need to\n> bahave the same way.\n> \n> That is, the new stats numbers are once accumulated locally then the\n> accumulated numbers are sent to stats collector by pgstat_report_stat.\nHi, Horiguchi-san.\n\nI felt your point is absolutely right !\nUpdated the patch to address your concern.\n\nBest Regards,\n\tTakamichi Osumi",
"msg_date": "Mon, 20 Dec 2021 09:40:19 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Mon, Dec 20, 2021 at 8:40 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> Updated the patch to address your concern.\n>\n\nSome review comments on the v18 patches:\n\nv18-0002\n\ndoc/src/sgml/monitoring.sgml\n(1) tablesync worker stats?\n\nShouldn't the comment below only mention the apply worker? (since\nwe're no longer recording stats of the tablesync worker)\n\n+ Number of transactions that failed to be applied by the table\n+ sync worker or main apply worker in this subscription. This\n+ counter is updated after confirming the error is not same as\n+ the previous one.\n+ </para></entry>\n\nAlso, it should say \"... the error is not the same as the previous one.\"\n\n\nsrc/backend/catalog/system_views.sql\n(2) pgstat_report_subworker_xact_end()\n\nFix typo and some wording:\n\nBEFORE:\n+ * This should be called before the call of process_syning_tables() not to\nAFTER:\n+ * This should be called before the call of\nprocess_syncing_tables(), so to not\n\n\nsrc/backend/postmaster/pgstat.c\n(3) pgstat_send_subworker_xact_stats()\n\nBEFORE:\n+ * Send a subworker transaction stats to the collector.\nAFTER:\n+ * Send a subworker's transaction stats to the collector.\n\n(4)\nWouldn't it be best for:\n\n+ if (!TimestampDifferenceExceeds(last_report, now, PGSTAT_STAT_INTERVAL))\n\nto be:\n\n+ if (last_report != 0 && !TimestampDifferenceExceeds(last_report,\nnow, PGSTAT_STAT_INTERVAL))\n\n?\n\n(5) pgstat_send_subworker_xact_stats()\n\nI think that the comment:\n\n+ * Clear out the statistics buffer, so it can be re-used.\n\nshould instead say:\n\n+ * Clear out the supplied statistics.\n\nbecause the current comment implies that repWorker is pointed at the\nMyLogicalRepWorker buffer (which it might be but it shouldn't be\nrelying on that).\nAlso, I think that the function header should mention something like:\n\"The supplied repWorker statistics are cleared upon return, to assist\nre-use by the caller.\"\n\n\nRegards,\nGreg 
Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 21 Dec 2021 19:59:34 +1100",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Tuesday, December 21, 2021 6:00 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\r\n> Some review comments on the v18 patches:\r\nThank you for your review !\r\n\r\n> v18-0002\r\n> \r\n> doc/src/sgml/monitoring.sgml\r\n> (1) tablesync worker stats?\r\n> \r\n> Shouldn't the comment below only mention the apply worker? (since we're no\r\n> longer recording stats of the tablesync worker)\r\n> \r\n> + Number of transactions that failed to be applied by the table\r\n> + sync worker or main apply worker in this subscription. This\r\n> + counter is updated after confirming the error is not same as\r\n> + the previous one.\r\n> + </para></entry>\r\n> \r\n> Also, it should say \"... the error is not the same as the previous one.\"\r\nFixed.\r\n \r\n> src/backend/catalog/system_views.sql\r\n> (2) pgstat_report_subworker_xact_end()\r\n> \r\n> Fix typo and some wording:\r\n> \r\n> BEFORE:\r\n> + * This should be called before the call of process_syning_tables()\r\n> + not to\r\n> AFTER:\r\n> + * This should be called before the call of\r\n> process_syncing_tables(), so to not\r\nFixed.\r\n\r\n> src/backend/postmaster/pgstat.c\r\n> (3) pgstat_send_subworker_xact_stats()\r\n> \r\n> BEFORE:\r\n> + * Send a subworker transaction stats to the collector.\r\n> AFTER:\r\n> + * Send a subworker's transaction stats to the collector.\r\nFixed.\r\n\r\n> (4)\r\n> Wouldn't it be best for:\r\n> \r\n> + if (!TimestampDifferenceExceeds(last_report, now,\r\n> + PGSTAT_STAT_INTERVAL))\r\n> \r\n> to be:\r\n> \r\n> + if (last_report != 0 && !TimestampDifferenceExceeds(last_report,\r\n> now, PGSTAT_STAT_INTERVAL))\r\n> \r\n> ?\r\nI'm not sure which is better and\r\nI never have strong objection to your idea but currently\r\nI prefer the previous code because we don't need to\r\nadd one extra condition (last_report != 0) in the function called really frequently\r\nand the optimization to avoid calling TimestampDifferenceExceeds works just once\r\nwith your change, I'd 
say.\r\n\r\nWe call pgstat_send_subworker_xact_stats() in the LogicalRepApplyLoop's loop.\r\nFor the apply worker, this should be the first call for normal operation,\r\nbefore any call of apply_dispatch() (and the subsequent commit-like functions which\r\ncall pgstat_send_subworker_xact_stats() in the end).\r\n\r\nIn the first call, the existing v18 code (without your suggested change) just initializes\r\nthe 'last_report' variable because of the diff between 0 and 'now', and returns\r\nbecause there are no stats values in commit_count and abort_count in the function.\r\nAfter this, 'last_report' will not be 0 for the apply worker.\r\n\r\nOn the other hand, in the case I add your change, in the first call of\r\npgstat_send_subworker_xact_stats(), similarly 'last_report' is initialized but\r\nwithout one call of TimestampDifferenceExceeds(), which might be the optimization\r\neffect, and the function returns with no stats again. Here 'last_report' will\r\nnot be 0 after this. But then we'll have to check the condition in the apply worker\r\nin every loop. Besides, after the first setting of 'last_report',\r\nevery call of pgstat_send_subworker_xact_stats() calculates the time subtraction.\r\nThis means the one skipped call of the function looks less worthwhile in the case of frequent\r\ncalls of the function. 
So, I'm not sure it's a good idea to incorporate this change.\r\n\r\nKindly let me know if I've missed something.\r\nAt present, I'll keep the code as it is.\r\n\r\n> (5) pgstat_send_subworker_xact_stats()\r\n> \r\n> I think that the comment:\r\n> \r\n> + * Clear out the statistics buffer, so it can be re-used.\r\n> \r\n> should instead say:\r\n> \r\n> + * Clear out the supplied statistics.\r\n> \r\n> because the current comment infers that repWorker is pointed at the\r\n> MyLogicalRepWorker buffer (which it might be but it shouldn't be relying on\r\n> that) Also, I think that the function header should mention something like:\r\n> \"The supplied repWorker statistics are cleared upon return, to assist re-use by\r\n> the caller.\"\r\nFixed.\r\n\r\nAttached the new patch v19.\r\n\r\nBest Regards,\r\n\tTakamichi Osumi",
"msg_date": "Wed, 22 Dec 2021 10:14:13 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Mon, Dec 22, 2021 at 6:14 PM osumi.takamichi@fujitsu.com <osumi.takamichi@fujitsu.com> wrote:\r\n>\r\n> Attached the new patch v19.\r\n>\r\n\r\nI have a question on the v19-0002 patch:\r\n\r\nWhen I tested for this patch, I found pg_stat_subscription_workers has some unexpected data.\r\nFor example:\r\n[Publisher]\r\ncreate table replica_test1(a int, b text); create publication pub1 for table replica_test1;\r\ncreate table replica_test2(a int, b text); create publication pub2 for table replica_test2;\r\n\r\n[Subscriber]\r\ncreate table replica_test1(a int, b text); create subscription sub1 CONNECTION 'dbname=postgres' publication pub1;\r\ncreate table replica_test2(a int, b text); create subscription sub2 CONNECTION 'dbname=postgres' publication pub2;\r\n\r\n[Publisher]\r\ninsert into replica_test1 values(1,'1');\r\n\r\n[Subscriber]\r\nselect * from pg_stat_subscription_workers;\r\n\r\n-[ RECORD 1 ]------+------\r\nSubid\t\t\t| 16389\r\nsubname \t\t| sub1\r\nsubrelid\t\t|\r\ncommit_count\t\t| 1\r\n...\r\n-[ RECORD 2 ]------+------\r\nsubid \t\t\t| 16395\r\nsubname\t\t| sub2\r\nsubrelid \t\t|\r\ncommit_count\t\t| 1\r\n...\r\n\r\nI originally expected only one record for \"sub1\". \r\n\r\nI think the reason is apply_handle_commit() always invoke pgstat_report_subworker_xact_end().\r\nBut when we insert data to replica_test1 in publish side:\r\n[In the publish]\r\npub1's walsender1 will send three messages((LOGICAL_REP_MSG_BEGIN, LOGICAL_REP_MSG_INSERT and LOGICAL_REP_MSG_COMMIT))\r\nto sub1's apply worker1.\t\r\npub2's walsender2 will also send two messages(LOGICAL_REP_MSG_BEGIN and LOGICAL_REP_MSG_COMMIT)\r\nto sub2's apply worker2. 
Because the inserted table is not published by pub2.\r\n\r\n[In the subscription]\r\nsub1's apply worker1 receives LOGICAL_REP_MSG_COMMIT,\r\n\tso it invokes pgstat_report_subworker_xact_end to increase the commit_count of sub1's stats.\r\nsub2's apply worker2 receives LOGICAL_REP_MSG_COMMIT,\r\n\tand it does the same to increase the commit_count of sub2's stats.\r\n\r\nDo we expect these commit counts, which come from empty transactions?\r\n\r\nRegards,\r\nWang wei\r\n",
"msg_date": "Wed, 22 Dec 2021 12:37:35 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Wednesday, December 22, 2021 9:38 PM Wang, Wei/王 威 <wangw.fnst@fujitsu.com> wrote:\r\n> I have a question on the v19-0002 patch:\r\n> \r\n> When I tested for this patch, I found pg_stat_subscription_workers has some\r\n> unexpected data.\r\n> For example:\r\n> [Publisher]\r\n> create table replica_test1(a int, b text); create publication pub1 for table\r\n> replica_test1; create table replica_test2(a int, b text); create publication pub2\r\n> for table replica_test2;\r\n> \r\n> [Subscriber]\r\n> create table replica_test1(a int, b text); create subscription sub1 CONNECTION\r\n> 'dbname=postgres' publication pub1; create table replica_test2(a int, b text);\r\n> create subscription sub2 CONNECTION 'dbname=postgres' publication pub2;\r\n> \r\n> [Publisher]\r\n> insert into replica_test1 values(1,'1');\r\n> \r\n> [Subscriber]\r\n> select * from pg_stat_subscription_workers;\r\n> \r\n> -[ RECORD 1 ]------+------\r\n> Subid\t\t\t| 16389\r\n> subname \t\t| sub1\r\n> subrelid\t\t|\r\n> commit_count\t\t| 1\r\n> ...\r\n> -[ RECORD 2 ]------+------\r\n> subid \t\t\t| 16395\r\n> subname\t\t| sub2\r\n> subrelid \t\t|\r\n> commit_count\t\t| 1\r\n> ...\r\n> \r\n> I originally expected only one record for \"sub1\".\r\n> \r\n> I think the reason is apply_handle_commit() always invoke\r\n> pgstat_report_subworker_xact_end().\r\n> But when we insert data to replica_test1 in publish side:\r\n> [In the publish]\r\n> pub1's walsender1 will send three messages((LOGICAL_REP_MSG_BEGIN,\r\n> LOGICAL_REP_MSG_INSERT and LOGICAL_REP_MSG_COMMIT))\r\n> to sub1's apply worker1.\r\n> pub2's walsender2 will also send two messages(LOGICAL_REP_MSG_BEGIN\r\n> and LOGICAL_REP_MSG_COMMIT) to sub2's apply worker2. 
Because\r\n> inserted table is not published by pub2.\r\n> \r\n> [In the subscription]\r\n> sub1's apply worker1 receive LOGICAL_REP_MSG_COMMIT,\r\n> \tso invoke pgstat_report_subworker_xact_end to increase\r\n> commit_count of sub1's stats.\r\n> sub2's apply worker2 receive LOGICAL_REP_MSG_COMMIT,\r\n> \tit will do the same action to increase commit_count of sub2's stats.\r\n> \r\n> Do we expect these commit counts which come from empty transactions ?\r\nHi, thank you so much for your test !\r\n\r\n\r\nThis is another issue discussed in [1]\r\nwhere the patch in the thread is a work in progress, I think.\r\n\r\nThe point you reported will bring a lot of confusion for the users,\r\nto interpret the results of the subscription stats values,\r\nif those patches including the subscription stats patch will not get committed\r\ntogether (like in the same version).\r\n\r\nI've confirmed that HEAD applied with v19-* and\r\nv15-0001-Skip-empty-transactions-for-logical-replication.patch\r\non top of v19-* showed only one record, as you expected like below,\r\nalthough all patches are not finished yet.\r\n\r\npostgres=# select * from pg_stat_subscription_workers;\r\n-[ RECORD 1 ]------+------\r\nsubid | 16389\r\nsubname | sub1\r\nsubrelid | \r\ncommit_count | 1\r\nabort_count | 0\r\nerror_count | 0\r\n....\r\n\r\nIMHO, the conclusion is we are currently in the middle of fixing the behavior.\r\n\r\n[1] - https://www.postgresql.org/message-id/CAFPTHDbVLWxpfnwMxJcXq703gLXciXHE83hwKQ_0OTCZ6oLCjg%40mail.gmail.com\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Wed, 22 Dec 2021 14:30:06 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Wednesday, December 22, 2021 10:30 PM\r\nosumi.takamichi@fujitsu.com <osumi.takamichi@fujitsu.com> wrote:\r\n> On Wednesday, December 22, 2021 8:38 PM I wrote:\r\n> > Do we expect these commit counts which come from empty transactions ?\r\n> This is another issue discussed in [1]\r\n> where the patch in the thread is a work in progress, I think.\r\n> ......\r\n> IMHO, the conclusion is we are currently in the middle of fixing the behavior.\r\n\r\nThank you for telling me this.\r\nAfter applying v19-* and v15-0001-Skip-empty-transactions-for-logical-replication.patch,\r\nI retested the v19-* patches. The result of the previous case looks good to me.\r\n\r\nBut the results of the following cases are similar to the previous unexpected result,\r\nwhich increases commit_count or abort_count unexpectedly.\r\n[1]\r\n(Based on the environment in the previous example, set TWO_PHASE=true)\r\n[Publisher]\r\nbegin;\r\ninsert into replica_test1 values(1,'1');\r\nprepare transaction 'id';\r\ncommit prepared 'id';\r\n\r\nOn the subscriber side, the commit_count of two records (sub1 and sub2) is increased.\r\n\r\n[2]\r\n(Based on the environment in the previous example, set STREAMING=on)\r\n[Publisher]\r\nbegin;\r\nINSERT INTO replica_test1 SELECT i, md5(i::text) FROM generate_series(1, 5000) s(i);\r\ncommit;\r\n\r\nOn the subscriber side, the commit_count of two records (sub1 and sub2) is increased.\r\n\r\n[3]\r\n(Based on the environment in the previous example, set TWO_PHASE=true)\r\n[Publisher]\r\nbegin;\r\ninsert into replica_test1 values(1,'1');\r\nprepare transaction 'id';\r\nrollback prepared 'id';\r\n\r\nOn the subscriber side, the abort_count of two records (sub1 and sub2) is increased.\r\n\r\nI think the problem may be that the patch you mentioned\r\n(Skip-empty-transactions-for-logical-replication.patch) is not finished yet.\r\nSharing this information here.\r\n\r\n\r\nRegards,\r\nWang wei\r\n",
"msg_date": "Thu, 23 Dec 2021 09:37:02 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Wednesday, December 22, 2021 6:14 PM osumi.takamichi@fujitsu.com <osumi.takamichi@fujitsu.com> wrote:\r\n> \r\n> Attached the new patch v19.\r\n> \r\n\r\nThanks for your patch. I think it's better if you could add this patch to the commitfest.\r\nHere are some comments:\r\n\r\n1)\r\n+ <structfield>commit_count</structfield> <type>bigint</type>\r\n+ </para>\r\n+ <para>\r\n+ Number of transactions successfully applied in this subscription.\r\n+ Both COMMIT and COMMIT PREPARED increment this counter.\r\n+ </para></entry>\r\n+ </row>\r\n...\r\n\r\nI think the commands (like COMMIT, COMMIT PREPARED ...) can be surrounded with\r\n\"<command> </command>\", thoughts?\r\n\r\n2)\r\n+extern void pgstat_report_subworker_xact_end(LogicalRepWorker *repWorker,\r\n+\t\t\t\t\t\t\t\t\t\t\t LogicalRepMsgType command,\r\n+\t\t\t\t\t\t\t\t\t\t\t bool bforce);\r\n\r\nShould \"bforce\" be \"force\"?\r\n\r\n3)\r\n+ * This should be called before the call of process_syning_tables() so to not\r\n\r\n\"process_syning_tables()\" should be \"process_syncing_tables()\".\r\n\r\n4)\r\n+void\r\n+pgstat_send_subworker_xact_stats(LogicalRepWorker *repWorker, bool force)\r\n+{\r\n+\tstatic TimestampTz last_report = 0;\r\n+\tPgStat_MsgSubWorkerXactEnd msg;\r\n+\r\n+\tif (!force)\r\n+\t{\r\n...\r\n+\t\tif (!TimestampDifferenceExceeds(last_report, now, PGSTAT_STAT_INTERVAL))\r\n+\t\t\treturn;\r\n+\t\tlast_report = now;\r\n+\t}\r\n+\r\n...\r\n+\tif (repWorker->commit_count == 0 && repWorker->abort_count == 0)\r\n+\t\treturn;\r\n...\r\n\r\nI think it's better to check commit_count and abort_count first, and then check whether\r\nPGSTAT_STAT_INTERVAL has been reached.\r\nOtherwise, if commit_count and abort_count are 0, it is possible that the value\r\nof last_report has been updated even though no stats were actually sent. In this case,\r\nlast_report is not the actual time of the last sent message.\r\n\r\nRegards,\r\nTang\r\n",
"msg_date": "Fri, 31 Dec 2021 01:12:25 +0000",
"msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Wednesday, December 22, 2021 6:14 PM Osumi, Takamichi <osumi.takamichi@fujitsu.com> wrote:\r\n> Attached the new patch v19.\r\nHi,\r\n\r\nThanks for updating the patch.\r\n\r\n--- a/src/include/pgstat.h\r\n+++ b/src/include/pgstat.h\r\n@@ -15,6 +15,7 @@\r\n #include \"portability/instr_time.h\"\r\n #include \"postmaster/pgarch.h\"\t/* for MAX_XFN_CHARS */\r\n #include \"replication/logicalproto.h\"\r\n+#include \"replication/worker_internal.h\"\r\n\r\nI noticed that the patch includes \"worker_internal.h \" in pgstat.h.\r\nI think it might be better to only include this file in pgstat.c.\r\n\r\nAnd it seems we can access MyLogicalRepWorker directly in the\r\nfollowing functions instead of passing a parameter.\r\n\r\n+extern void pgstat_report_subworker_xact_end(LogicalRepWorker *repWorker,\r\n+\t\t\t\t\t\t\t\t\t\t\t LogicalRepMsgType command,\r\n+\t\t\t\t\t\t\t\t\t\t\t bool bforce);\r\n+extern void pgstat_send_subworker_xact_stats(LogicalRepWorker *repWorker,\r\n+\t\t\t\t\t\t\t\t\t\t\t bool force);\r\n\r\nBest regards,\r\nHou zj\r\n",
"msg_date": "Mon, 3 Jan 2022 05:46:24 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Monday, January 3, 2022 2:46 PM Hou, Zhijie/侯 志杰 <houzj.fnst@fujitsu.com> wrote:\r\n> On Wednesday, December 22, 2021 6:14 PM Osumi, Takamichi\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> > Attached the new patch v19.\r\n> Hi,\r\n> \r\n> Thanks for updating the patch.\r\n> \r\n> --- a/src/include/pgstat.h\r\n> +++ b/src/include/pgstat.h\r\n> @@ -15,6 +15,7 @@\r\n> #include \"portability/instr_time.h\"\r\n> #include \"postmaster/pgarch.h\"\t/* for MAX_XFN_CHARS */\r\n> #include \"replication/logicalproto.h\"\r\n> +#include \"replication/worker_internal.h\"\r\n> \r\n> I noticed that the patch includes \"worker_internal.h \" in pgstat.h.\r\n> I think it might be better to only include this file in pgstat.c.\r\n> And it seems we can access MyLogicalRepWorker directly in the following\r\n> functions instead of passing a parameter.\r\n> \r\n> +extern void pgstat_report_subworker_xact_end(LogicalRepWorker\r\n> *repWorker,\r\n> +\r\n> \t\t LogicalRepMsgType command,\r\n> +\r\n> \t\t bool bforce);\r\n> +extern void pgstat_send_subworker_xact_stats(LogicalRepWorker\r\n> *repWorker,\r\n> +\r\n> \t\t bool force);\r\nHi, thank you for your review !\r\n\r\nBoth are fixed. Additionally, I modified\r\nrelated comments, the header comment of pgstat_send_subworker_xact_stats,\r\nby the change.\r\n\r\nOne other new improvements in v20 is to have removed the 2nd argument of\r\npgstat_send_subworker_xact_stats(). 
When we called it from\r\ncommit-like operations, I passed 'false' without exception in v19\r\nand noticed that the argument could be removed.\r\n\r\nAlso, there's a really minor adjustment of function placement in the source code.\r\nIn pgstat.c, I placed pgstat_report_subworker_xact_end() after pgstat_report_subworker_error(),\r\nand pgstat_send_subworker_xact_stats() after pgstat_send_subscription_purge() and so on,\r\nfollowing the order of PgstatCollectorMain() and the PgStat_Msg definition, because\r\nmy patch's new functions are added after the existing\r\nerror-handling functions for subscription workers.\r\n\r\nLastly, I changed the error report in pgstat_report_subworker_xact_end()\r\nso that it is easier to understand and a bit more modern.\r\n\r\nKindly have a look at the attached version.\r\n\r\nBest Regards,\r\n\tTakamichi Osumi",
"msg_date": "Tue, 4 Jan 2022 11:47:26 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Friday, December 31, 2021 10:12 AM Tang, Haiying/唐 海英 <tanghy.fnst@fujitsu.com> wrote:\r\n> On Wednesday, December 22, 2021 6:14 PM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> >\r\n> > Attached the new patch v19.\r\n> >\r\n> \r\n> Thanks for your patch. I think it's better if you could add this patch to the\r\n> commitfest.\r\n> Here are some comments:\r\nThank you for your review !\r\nI've created one entry in the next commitfest for this patch [1]\r\n\r\n> \r\n> 1)\r\n> + <structfield>commit_count</structfield> <type>bigint</type>\r\n> + </para>\r\n> + <para>\r\n> + Number of transactions successfully applied in this subscription.\r\n> + Both COMMIT and COMMIT PREPARED increment this counter.\r\n> + </para></entry>\r\n> + </row>\r\n> ...\r\n> \r\n> I think the commands (like COMMIT, COMMIT PREPARED ...) can be\r\n> surrounded with \"<command> </command>\", thoughts?\r\nMakes sense to me. Fixed.\r\n\r\nNote that to the user perspective,\r\nwe should write only COMMIT and COMMIT PREPARED in the documentation.\r\nThus, I don't list up other commands.\r\n\r\nI wrapped ROLLBACK PREPARED for abort_count as well.\r\n \r\n> 2)\r\n> +extern void pgstat_report_subworker_xact_end(LogicalRepWorker\r\n> *repWorker,\r\n> +\r\n> \t\t LogicalRepMsgType command,\r\n> +\r\n> \t\t bool bforce);\r\n> \r\n> Should \"bforce\" be \"force\"?\r\nFixed the typo.\r\n\r\n> 3)\r\n> + * This should be called before the call of process_syning_tables() so\r\n> + to not\r\n> \r\n> \"process_syning_tables()\" should be \"process_syncing_tables()\".\r\nFixed.\r\n \r\n> 4)\r\n> +void\r\n> +pgstat_send_subworker_xact_stats(LogicalRepWorker *repWorker, bool\r\n> +force) {\r\n> +\tstatic TimestampTz last_report = 0;\r\n> +\tPgStat_MsgSubWorkerXactEnd msg;\r\n> +\r\n> +\tif (!force)\r\n> +\t{\r\n> ...\r\n> +\t\tif (!TimestampDifferenceExceeds(last_report, now,\r\n> PGSTAT_STAT_INTERVAL))\r\n> +\t\t\treturn;\r\n> +\t\tlast_report = now;\r\n> +\t}\r\n> 
+\r\n> ...\r\n> +\tif (repWorker->commit_count == 0 && repWorker->abort_count ==\r\n> 0)\r\n> +\t\treturn;\r\n> ...\r\n> \r\n> I think it's better to check commit_count and abort_count first, then check if\r\n> reach PGSTAT_STAT_INTERVAL.\r\n> Otherwise if commit_count and abort_count are 0, it is possible that the value\r\n> of last_report has been updated but it didn't send stats in fact. In this case,\r\n> last_report is not the real time that send last message.\r\nYeah, agreed. This fix is right in terms of the variable name aspect.\r\n\r\nThe only scenario that we can take advantage of the previous implementation of\r\nv19's pgstat_send_subworker_xact_stats() should be a case where\r\nwe execute a bunch of commit-like logical replication apply messages\r\nwithin PGSTAT_STAT_INTERVAL intensively and continuously for long period,\r\nbecause we check \"repWorker->commit_count == 0 && repWorker->abort_count == 0\"\r\njust once before calling pgstat_send() in this case.\r\n*But*, this scenario didn't look reasonable. In other words,\r\nthe way to call TimestampDifferenceExceeds() only if there's any need of\r\nupdate for commit_count/abort_count looks less heavy.\r\nAccordingly, I've fixed it as you suggested.\r\nAlso, I modified some comments in pgstat_send_subworker_xact_stats() for this change.\r\n\r\nKindly have a look at the v20 shared in [2].\r\n\r\n[1] - https://commitfest.postgresql.org/37/3504/\r\n[2] - https://www.postgresql.org/message-id/TYCPR01MB8373AB2AE1A6EC7B9E012519ED4A9%40TYCPR01MB8373.jpnprd01.prod.outlook.com\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Tue, 4 Jan 2022 11:52:21 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
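The ordering agreed on in the message above — check the pending counters before the rate-limit interval, so that `last_report` only advances when something is actually sent — can be sketched roughly as follows. This is a minimal standalone sketch: the millisecond timestamps, the plain subtraction standing in for `TimestampDifferenceExceeds()`, and the function signature are illustrative assumptions, not the real pgstat API.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative stand-ins, not the real PostgreSQL definitions. */
typedef int64_t TimestampTz;     /* milliseconds here, for simplicity */
#define PGSTAT_STAT_INTERVAL 500 /* ms between reports, as in pgstat.h */

static TimestampTz last_report = 0;

/* Returns true when a stats message would be sent to the collector. */
static bool
maybe_send_xact_stats(int64_t commit_count, int64_t abort_count,
                      TimestampTz now, bool force)
{
    /*
     * Nothing accumulated: bail out without touching last_report, so it
     * keeps recording the time of the last message actually sent.
     */
    if (commit_count == 0 && abort_count == 0)
        return false;

    /* Rate-limit unless the caller forces a send (e.g. at worker exit). */
    if (!force && now - last_report < PGSTAT_STAT_INTERVAL)
        return false;

    last_report = now;
    /* ... fill a PgStat_MsgSubWorkerXactEnd and pgstat_send() here ... */
    return true;
}
```

With this ordering, an interval's worth of calls with empty counters never updates `last_report`, which was the point of the review comment.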
{
"msg_contents": "On Thursday, December 23, 2021 6:37 PM Wang, Wei/王 威 <wangw.fnst@fujitsu.com> wrote:\r\n> On Wednesday, December 22, 2021 10:30 PM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> > On Wednesday, December 22, 2021 8:38 PM I wrote:\r\n> > > Do we expect these commit counts which come from empty transactions ?\r\n> > This is another issue discussed in [1] where the patch in the thread\r\n> > is a work in progress, I think.\r\n> > ......\r\n> > IMHO, the conclusion is we are currently in the middle of fixing the behavior.\r\n> \r\n> Thank you for telling me this.\r\n> After applying v19-* and\r\n> v15-0001-Skip-empty-transactions-for-logical-replication.patch,\r\n> I retested v19-* patches. The result of previous case looks good to me.\r\n> \r\n> But the results of following cases are also similar to previous unexpected result\r\n> which increases commit_count or abort_count unexpectedly.\r\n> [1]\r\n> (Based on environment in the previous example, set TWO_PHASE=true)\r\n> [Publisher] begin; insert into replica_test1 values(1,'1'); prepare transaction\r\n> 'id'; commit prepared 'id';\r\n> \r\n> In subscriber side, the commit_count of two records(sub1 and sub2) is\r\n> increased.\r\n> \r\n> [2]\r\n> (Based on environment in the previous example, set STREAMING=on)\r\n> [Publisher] begin; INSERT INTO replica_test1 SELECT i, md5(i::text) FROM\r\n> generate_series(1, 5000) s(i); commit;\r\n> \r\n> In subscriber side, the commit_count of two records(sub1 and sub2) is\r\n> increased.\r\n> \r\n> [3]\r\n> (Based on environment in the previous example, set TWO_PHASE=true)\r\n> [Publisher] begin; insert into replica_test1 values(1,'1'); prepare transaction\r\n> 'id'; rollback prepared 'id';\r\n> \r\n> In subscriber side, the abort_count of two records(sub1 and sub2) is\r\n> increased.\r\n> \r\n> I think the problem maybe is the patch you mentioned\r\n> (Skip-empty-transactions-for-logical-replication.patch) is not finished yet.\r\n> 
Share this information here.\r\nHi, thank you for your report.\r\n\r\nYes. As the patch's commit message mentions, the patch doesn't\r\ncover streaming and two-phase transactions.\r\n\r\nIn the above reply, I said that this was an independent issue\r\nand we were in the middle of modifying the behavior,\r\nbut empty transactions turned out to be harmful enough for this feature.\r\nAs far as I searched, the current logical replication sends\r\nevery transaction, even those unrelated to the publication specification.\r\nIt means that in a common scenario where some tables are\r\nreplicated but others are not, the subscription statistics will\r\nbe buried by the updates from the empty transactions for the latter,\r\nwhich damages this patch's value greatly.\r\n\r\nTherefore, I included a description of this in the documentation,\r\nas you reported.\r\n\r\nThe attached v21 has a couple of other minor updates\r\nlike a modification of the error message text.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi",
"msg_date": "Wed, 12 Jan 2022 12:34:49 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Wed, Jan 12, 2022 at 6:04 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Thursday, December 23, 2021 6:37 PM Wang, Wei/王 威 <wangw.fnst@fujitsu.com> wrote:\n> > On Wednesday, December 22, 2021 10:30 PM osumi.takamichi@fujitsu.com\n> > <osumi.takamichi@fujitsu.com> wrote:\n> > > On Wednesday, December 22, 2021 8:38 PM I wrote:\n> > > > Do we expect these commit counts which come from empty transactions ?\n> > > This is another issue discussed in [1] where the patch in the thread\n> > > is a work in progress, I think.\n> > > ......\n> > > IMHO, the conclusion is we are currently in the middle of fixing the behavior.\n> >\n> > Thank you for telling me this.\n> > After applying v19-* and\n> > v15-0001-Skip-empty-transactions-for-logical-replication.patch,\n> > I retested v19-* patches. The result of previous case looks good to me.\n> >\n> > But the results of following cases are also similar to previous unexpected result\n> > which increases commit_count or abort_count unexpectedly.\n> > [1]\n> > (Based on environment in the previous example, set TWO_PHASE=true)\n> > [Publisher] begin; insert into replica_test1 values(1,'1'); prepare transaction\n> > 'id'; commit prepared 'id';\n> >\n> > In subscriber side, the commit_count of two records(sub1 and sub2) is\n> > increased.\n> >\n> > [2]\n> > (Based on environment in the previous example, set STREAMING=on)\n> > [Publisher] begin; INSERT INTO replica_test1 SELECT i, md5(i::text) FROM\n> > generate_series(1, 5000) s(i); commit;\n> >\n> > In subscriber side, the commit_count of two records(sub1 and sub2) is\n> > increased.\n> >\n> > [3]\n> > (Based on environment in the previous example, set TWO_PHASE=true)\n> > [Publisher] begin; insert into replica_test1 values(1,'1'); prepare transaction\n> > 'id'; rollback prepared 'id';\n> >\n> > In subscriber side, the abort_count of two records(sub1 and sub2) is\n> > increased.\n> >\n> > I think the problem maybe is the patch you 
mentioned\n> > (Skip-empty-transactions-for-logical-replication.patch) is not finished yet.\n> > Share this information here.\n> Hi, thank you for your report.\n>\n> Yes. As the patch's commit message mentions, the patch doesn't\n> cover streaming and two phase transactions.\n>\n> In the above reply, I said that this was an independent issue\n> and we were in the middle of the modification of the behavior,\n> but empty transaction turned out to be harmful enough for this feature.\n>\n\nIsn't that because of this patch? I mean the patch is reporting count\neven when during apply we haven't started any transaction. In\nparticular, if you would have reported stats from\napply_handle_commit_internal() when IsTransactionState() returns true,\nthen it shouldn't have updated the stats for an empty transaction.\nSimilarly, I see for rollback_prepared, you don't need to report stats\nif there is no prepared transaction to rollback. I think for\ncommit_prepare case, we can't do much for empty xacts but why would\nthat be a problem of this patch? I think as far as this patch goes, it\nreports the number of completed xacts (committed/aborted) in the\nsubscription worker, so, it doesn't need to handle any special cases\nlike empty transactions.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 17 Feb 2022 14:38:19 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
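The suggestion in the reply above — bump the commit counter from apply_handle_commit_internal() only when a transaction was actually started, so empty transactions are never counted — can be sketched like this. The `in_transaction` flag is a hypothetical stand-in for `IsTransactionState()`, and the rest is likewise illustrative rather than the patch's actual code.

```c
#include <stdbool.h>

/* Hypothetical stand-in for the apply worker's commit counter. */
static long commit_count = 0;

/*
 * Sketch of the idea for apply_handle_commit_internal(): for an empty
 * transaction the apply worker never started a transaction, so
 * IsTransactionState() (the in_transaction flag here) is false and the
 * counter is left alone.
 */
static void
handle_commit_internal(bool in_transaction)
{
    if (in_transaction)
    {
        /* CommitTransactionCommand() would run here ... */
        commit_count++;
    }
}
```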
{
"msg_contents": "On Tue, Jan 4, 2022 at 5:22 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Friday, December 31, 2021 10:12 AM Tang, Haiying/唐 海英 <tanghy.fnst@fujitsu.com> wrote:\n> > 4)\n> > +void\n> > +pgstat_send_subworker_xact_stats(LogicalRepWorker *repWorker, bool\n> > +force) {\n> > + static TimestampTz last_report = 0;\n> > + PgStat_MsgSubWorkerXactEnd msg;\n> > +\n> > + if (!force)\n> > + {\n> > ...\n> > + if (!TimestampDifferenceExceeds(last_report, now,\n> > PGSTAT_STAT_INTERVAL))\n> > + return;\n> > + last_report = now;\n> > + }\n> > +\n> > ...\n> > + if (repWorker->commit_count == 0 && repWorker->abort_count ==\n> > 0)\n> > + return;\n> > ...\n> >\n> > I think it's better to check commit_count and abort_count first, then check if\n> > reach PGSTAT_STAT_INTERVAL.\n> > Otherwise if commit_count and abort_count are 0, it is possible that the value\n> > of last_report has been updated but it didn't send stats in fact. In this case,\n> > last_report is not the real time that send last message.\n> Yeah, agreed. This fix is right in terms of the variable name aspect.\n>\n\nCan't we use pgstat_report_stat() here? Basically, you can update xact\ncompletetion counters during apply, and then from\npgstat_report_stat(), you can invoke a logical replication worker\nstats-related function.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 17 Feb 2022 15:13:24 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Thu, Feb 17, 2022 at 3:13 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jan 4, 2022 at 5:22 PM osumi.takamichi@fujitsu.com\n> <osumi.takamichi@fujitsu.com> wrote:\n> >\n> > On Friday, December 31, 2021 10:12 AM Tang, Haiying/唐 海英 <tanghy.fnst@fujitsu.com> wrote:\n> > > 4)\n> > > +void\n> > > +pgstat_send_subworker_xact_stats(LogicalRepWorker *repWorker, bool\n> > > +force) {\n> > > + static TimestampTz last_report = 0;\n> > > + PgStat_MsgSubWorkerXactEnd msg;\n> > > +\n> > > + if (!force)\n> > > + {\n> > > ...\n> > > + if (!TimestampDifferenceExceeds(last_report, now,\n> > > PGSTAT_STAT_INTERVAL))\n> > > + return;\n> > > + last_report = now;\n> > > + }\n> > > +\n> > > ...\n> > > + if (repWorker->commit_count == 0 && repWorker->abort_count ==\n> > > 0)\n> > > + return;\n> > > ...\n> > >\n> > > I think it's better to check commit_count and abort_count first, then check if\n> > > reach PGSTAT_STAT_INTERVAL.\n> > > Otherwise if commit_count and abort_count are 0, it is possible that the value\n> > > of last_report has been updated but it didn't send stats in fact. In this case,\n> > > last_report is not the real time that send last message.\n> > Yeah, agreed. This fix is right in terms of the variable name aspect.\n> >\n>\n> Can't we use pgstat_report_stat() here? Basically, you can update xact\n> completetion counters during apply, and then from\n> pgstat_report_stat(), you can invoke a logical replication worker\n> stats-related function.\n>\n\nIf we can do this then we can save the logic this patch is trying to\nintroduce for PGSTAT_STAT_INTERVAL.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 17 Feb 2022 15:14:52 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Wed, Jan 12, 2022 8:35 PM osumi.takamichi@fujitsu.com <osumi.takamichi@fujitsu.com> wrote:\r\n> \r\n> The attached v21 has a couple of other minor updates\r\n> like a modification of error message text.\r\n> \r\n> \r\n\r\nThanks for updating the patch. Here are some comments.\r\n\r\n1) I saw the following description about pg_stat_subscription_workers view in\r\nexisting doc:\r\n\r\n The <structname>pg_stat_subscription_workers</structname> view will contain\r\n one row per subscription worker on which errors have occurred, ...\r\n\r\nIt only says \"which errors have occurred\", maybe we should also mention\r\ntransactions here, right?\r\n\r\n2)\r\n/* ----------\r\n+ * pgstat_send_subworker_xact_stats() -\r\n+ *\r\n+ *\tSend a subworker's transaction stats to the collector.\r\n+ *\tThe statistics are cleared upon return.\r\n\r\nShould \"The statistics are cleared upon return\" changed to \"The statistics are\r\ncleared upon sending\"? Because if it doesn't reach PGSTAT_STAT_INTERVAL and the\r\ntransaction stats are not sent, the function will return without clearing out\r\nstatistics.\r\n\r\n3)\r\n+\tAssert(command == LOGICAL_REP_MSG_COMMIT ||\r\n+\t\t command == LOGICAL_REP_MSG_STREAM_COMMIT ||\r\n+\t\t command == LOGICAL_REP_MSG_COMMIT_PREPARED ||\r\n+\t\t command == LOGICAL_REP_MSG_ROLLBACK_PREPARED);\r\n+\r\n+\tswitch (command)\r\n+\t{\r\n+\t\tcase LOGICAL_REP_MSG_COMMIT:\r\n+\t\tcase LOGICAL_REP_MSG_STREAM_COMMIT:\r\n+\t\tcase LOGICAL_REP_MSG_COMMIT_PREPARED:\r\n+\t\t\tMyLogicalRepWorker->commit_count++;\r\n+\t\t\tbreak;\r\n+\t\tcase LOGICAL_REP_MSG_ROLLBACK_PREPARED:\r\n+\t\t\tMyLogicalRepWorker->abort_count++;\r\n+\t\t\tbreak;\r\n+\t\tdefault:\r\n+\t\t\tereport(ERROR,\r\n+\t\t\t\t\terrmsg(\"invalid logical message type for transaction statistics of subscription\"));\r\n+\t\t\tbreak;\r\n+\t}\r\n\r\nI'm not sure that do we need the assert, because it will report an error later\r\nif command is an invalid value.\r\n\r\n4) I noticed that the 
abort_count doesn't include aborted streaming transactions.\r\nShould we take this case into consideration?\r\n\r\nRegards,\r\nTang\r\n\r\n",
"msg_date": "Fri, 18 Feb 2022 06:34:15 +0000",
"msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
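The per-message-type counting quoted in the review above can be sketched standalone like this; the enum members are stand-ins for the relevant `LogicalRepMsgType` values, not the real definitions from logicalproto.h, and the boolean return replaces the patch's `ereport(ERROR, ...)` branch.

```c
#include <stdbool.h>

/* Illustrative stand-ins for the LogicalRepMsgType members involved. */
typedef enum
{
    MSG_COMMIT,
    MSG_STREAM_COMMIT,
    MSG_COMMIT_PREPARED,
    MSG_ROLLBACK_PREPARED,
    MSG_BEGIN                   /* any non-transaction-end message */
} MsgType;

typedef struct
{
    long commit_count;
    long abort_count;
} WorkerStats;

/*
 * Mirror of the quoted switch: commit-like messages increment
 * commit_count, ROLLBACK PREPARED increments abort_count, and any other
 * message type is rejected (false here, where the patch raises an ERROR).
 */
static bool
count_xact_end(WorkerStats *w, MsgType command)
{
    switch (command)
    {
        case MSG_COMMIT:
        case MSG_STREAM_COMMIT:
        case MSG_COMMIT_PREPARED:
            w->commit_count++;
            return true;
        case MSG_ROLLBACK_PREPARED:
            w->abort_count++;
            return true;
        default:
            return false;
    }
}
```

This also makes the reviewer's point concrete: with the default branch rejecting everything else, the leading Assert on `command` adds nothing the switch doesn't already enforce.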
{
"msg_contents": "On Thursday, February 17, 2022 6:45 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Thu, Feb 17, 2022 at 3:13 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> >\r\n> > On Tue, Jan 4, 2022 at 5:22 PM osumi.takamichi@fujitsu.com\r\n> > <osumi.takamichi@fujitsu.com> wrote:\r\n> > >\r\n> > > On Friday, December 31, 2021 10:12 AM Tang, Haiying/唐 海英\r\n> <tanghy.fnst@fujitsu.com> wrote:\r\n> > > > 4)\r\n> > > > +void\r\n> > > > +pgstat_send_subworker_xact_stats(LogicalRepWorker *repWorker,\r\n> > > > +bool\r\n> > > > +force) {\r\n> > > > + static TimestampTz last_report = 0;\r\n> > > > + PgStat_MsgSubWorkerXactEnd msg;\r\n> > > > +\r\n> > > > + if (!force)\r\n> > > > + {\r\n> > > > ...\r\n> > > > + if (!TimestampDifferenceExceeds(last_report, now,\r\n> > > > PGSTAT_STAT_INTERVAL))\r\n> > > > + return;\r\n> > > > + last_report = now;\r\n> > > > + }\r\n> > > > +\r\n> > > > ...\r\n> > > > + if (repWorker->commit_count == 0 && repWorker->abort_count\r\n> > > > + ==\r\n> > > > 0)\r\n> > > > + return;\r\n> > > > ...\r\n> > > >\r\n> > > > I think it's better to check commit_count and abort_count first,\r\n> > > > then check if reach PGSTAT_STAT_INTERVAL.\r\n> > > > Otherwise if commit_count and abort_count are 0, it is possible\r\n> > > > that the value of last_report has been updated but it didn't send\r\n> > > > stats in fact. In this case, last_report is not the real time that send last\r\n> message.\r\n> > > Yeah, agreed. This fix is right in terms of the variable name aspect.\r\n> > >\r\n> >\r\n> > Can't we use pgstat_report_stat() here? 
Basically, you can update xact\r\n> > completetion counters during apply, and then from\r\n> > pgstat_report_stat(), you can invoke a logical replication worker\r\n> > stats-related function.\r\n> >\r\n> \r\n> If we can do this then we can save the logic this patch is trying to introduce for\r\n> PGSTAT_STAT_INTERVAL.\r\nHi, I've encounter a couple of questions during my modification, following your advice.\r\n\r\nIn the pgstat_report_stat, we refer to the return value of\r\nGetCurrentTransactionStopTimestamp to compare the time different from the last time.\r\n(In my previous patch, I used GetCurrentTimestamp)\r\n\r\nThis time is updated in apply_handle_commit_internal's CommitTransactionCommand for the apply worker.\r\nThen, if I update the subscription worker stats(commit_count/abort_count) immediately after\r\nthis CommitTransactionCommand and immediately call pgstat_report_stat in the apply_handle_commit_internal,\r\nthe time difference becomes too small (falls short of PGSTAT_STAT_INTERVAL).\r\nAlso, the time of GetCurrentTransactionStopTimestamp is not updated\r\neven when I keep calling pgstat_report_stat repeatedly.\r\nThen, IIUC the next possible timing that message of commit_count or abort_count\r\nis sent to the stats collector would become the time when we execute another transaction\r\nby the apply worker and update the time for GetCurrentTransactionStopTimestamp\r\nand rerun pgstat_report_stat again.\r\n\r\nSo, if we keep GetCurrentTransactionStopTimestamp without change,\r\nan update of stats depends on another new subsequent transaction if we simply merge those.\r\n(this leads to users cannot see the latest stats information ?)\r\nAt least, I got a test failure because of this function for streaming commit case\r\nbecause it uses poll_query_until to wait for stats update.\r\n\r\nOn the other hand, replacing GetCurrentTransactionStopTimestamp with\r\nGetCurrentTimestamp in case of apply worker looks have another negative impact.\r\nIf we do so, it 
becomes possible that we go into the code to scan TabStatusArray with\r\nPgStat_TableStatus's trans having non-null values, because of the timing change.\r\n\r\nI might be able to avoid this kind of assert failure if I call the message send\r\nfunction of this patch before the other existing functions that send various types of messages,\r\nand return if the process is an apply worker in pgstat_report_stat.\r\nBut I'm not convinced that this way of modification is OK.\r\n\r\nWhat do you think about these issues?\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Fri, 18 Feb 2022 08:34:01 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Fri, Feb 18, 2022 at 2:04 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Thursday, February 17, 2022 6:45 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Thu, Feb 17, 2022 at 3:13 PM Amit Kapila <amit.kapila16@gmail.com>\n> > wrote:\n> > > Can't we use pgstat_report_stat() here? Basically, you can update xact\n> > > completetion counters during apply, and then from\n> > > pgstat_report_stat(), you can invoke a logical replication worker\n> > > stats-related function.\n> > >\n> >\n> > If we can do this then we can save the logic this patch is trying to introduce for\n> > PGSTAT_STAT_INTERVAL.\n> Hi, I've encounter a couple of questions during my modification, following your advice.\n>\n> In the pgstat_report_stat, we refer to the return value of\n> GetCurrentTransactionStopTimestamp to compare the time different from the last time.\n> (In my previous patch, I used GetCurrentTimestamp)\n>\n> This time is updated in apply_handle_commit_internal's CommitTransactionCommand for the apply worker.\n> Then, if I update the subscription worker stats(commit_count/abort_count) immediately after\n> this CommitTransactionCommand and immediately call pgstat_report_stat in the apply_handle_commit_internal,\n> the time difference becomes too small (falls short of PGSTAT_STAT_INTERVAL).\n> Also, the time of GetCurrentTransactionStopTimestamp is not updated\n> even when I keep calling pgstat_report_stat repeatedly.\n> Then, IIUC the next possible timing that message of commit_count or abort_count\n> is sent to the stats collector would become the time when we execute another transaction\n> by the apply worker and update the time for GetCurrentTransactionStopTimestamp\n> and rerun pgstat_report_stat again.\n>\n\nI think but same is true in the case of the transaction in the backend\nwhere we increment commit counter via AtEOXact_PgStat after updating\nthe transaction time. 
After that, we call pgstat_report_stat() at a\nlater point. How is this case different?\n\n> So, if we keep GetCurrentTransactionStopTimestamp without change,\n> an update of stats depends on another new subsequent transaction if we simply merge those.\n> (this leads to users cannot see the latest stats information ?)\n>\n\nI think this should be okay as these don't need to be accurate.\n\n> At least, I got a test failure because of this function for streaming commit case\n> because it uses poll_query_until to wait for stats update.\n>\n\nI feel it is not a good idea to wait for the accurate update of these counters.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 18 Feb 2022 16:40:38 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Friday, February 18, 2022 8:11 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Fri, Feb 18, 2022 at 2:04 PM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> >\r\n> > On Thursday, February 17, 2022 6:45 PM Amit Kapila\r\n> <amit.kapila16@gmail.com> wrote:\r\n> > > On Thu, Feb 17, 2022 at 3:13 PM Amit Kapila\r\n> > > <amit.kapila16@gmail.com>\r\n> > > wrote:\r\n> > > > Can't we use pgstat_report_stat() here? Basically, you can update\r\n> > > > xact completetion counters during apply, and then from\r\n> > > > pgstat_report_stat(), you can invoke a logical replication worker\r\n> > > > stats-related function.\r\n> > > >\r\n> > >\r\n> > > If we can do this then we can save the logic this patch is trying to\r\n> > > introduce for PGSTAT_STAT_INTERVAL.\r\n> > Hi, I've encounter a couple of questions during my modification, following\r\n> your advice.\r\n> >\r\n> > In the pgstat_report_stat, we refer to the return value of\r\n> > GetCurrentTransactionStopTimestamp to compare the time different from\r\n> the last time.\r\n> > (In my previous patch, I used GetCurrentTimestamp)\r\n> >\r\n> > This time is updated in apply_handle_commit_internal's\r\n> CommitTransactionCommand for the apply worker.\r\n> > Then, if I update the subscription worker\r\n> > stats(commit_count/abort_count) immediately after this\r\n> > CommitTransactionCommand and immediately call pgstat_report_stat in the\r\n> apply_handle_commit_internal, the time difference becomes too small (falls\r\n> short of PGSTAT_STAT_INTERVAL).\r\n> > Also, the time of GetCurrentTransactionStopTimestamp is not updated\r\n> > even when I keep calling pgstat_report_stat repeatedly.\r\n> > Then, IIUC the next possible timing that message of commit_count or\r\n> > abort_count is sent to the stats collector would become the time when\r\n> > we execute another transaction by the apply worker and update the time\r\n> > for GetCurrentTransactionStopTimestamp\r\n> > and rerun 
pgstat_report_stat again.\r\n> >\r\n> \r\n> I think but same is true in the case of the transaction in the backend where we\r\n> increment commit counter via AtEOXact_PgStat after updating the transaction\r\n> time. After that, we call pgstat_report_stat() at later point. How is this case\r\n> different?\r\n> \r\n> > So, if we keep GetCurrentTransactionStopTimestamp without change, an\r\n> > update of stats depends on another new subsequent transaction if we\r\n> simply merge those.\r\n> > (this leads to users cannot see the latest stats information ?)\r\n> >\r\n> \r\n> I think this should be okay as these don't need to be accurate.\r\n> \r\n> > At least, I got a test failure because of this function for streaming\r\n> > commit case because it uses poll_query_until to wait for stats update.\r\n> >\r\n> \r\n> I feel it is not a good idea to wait for the accurate update of these counters.\r\nAh, then I had written tests in a totally wrong direction and made noise for it.\r\nSorry for that. I don't see tests for the existing xact_commit/rollback counts,\r\nso I'll follow the same approach.\r\n\r\nAttached is a new patch that addresses the three major improvements I've got so far as comments:\r\n1. skip incrementing the stats counters for empty transactions on the subscriber side\r\n (except for commit prepared)\r\n2. utilize the existing pgstat_report_stat, instead of adding similar logic anew.\r\n3. remove the wrong tests.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi",
"msg_date": "Fri, 18 Feb 2022 14:51:27 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Friday, February 18, 2022 3:34 PM Tang, Haiying/唐 海英 <tanghy.fnst@fujitsu.com> wrote:\r\n> On Wed, Jan 12, 2022 8:35 PM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> >\r\n> > The attached v21 has a couple of other minor updates like a\r\n> > modification of error message text.\r\n> >\r\n> >\r\n> \r\n> Thanks for updating the patch. Here are some comments.\r\nThank you for your reivew !\r\n\r\n\r\n\r\n> 1) I saw the following description about pg_stat_subscription_workers view in\r\n> existing doc:\r\n> \r\n> The <structname>pg_stat_subscription_workers</structname> view will\r\n> contain\r\n> one row per subscription worker on which errors have occurred, ...\r\n> \r\n> It only says \"which errors have occurred\", maybe we should also mention\r\n> transactions here, right?\r\nI wrote about this statistics in the next line but as you pointed out,\r\nseparating the description into two sentences wasn't good idea.\r\nFixed.\r\n\r\n\r\n\r\n> 2)\r\n> /* ----------\r\n> + * pgstat_send_subworker_xact_stats() -\r\n> + *\r\n> + *\tSend a subworker's transaction stats to the collector.\r\n> + *\tThe statistics are cleared upon return.\r\n> \r\n> Should \"The statistics are cleared upon return\" changed to \"The statistics are\r\n> cleared upon sending\"? Because if it doesn't reach PGSTAT_STAT_INTERVAL\r\n> and the transaction stats are not sent, the function will return without clearing\r\n> out statistics.\r\nNow, the purpose of this function has become purely\r\nto send a message and whenever it's called, the function\r\nclears the saved stats. 
So, I have skipped this comment for now.\r\n\r\n\r\n> 3)\r\n> +\tAssert(command == LOGICAL_REP_MSG_COMMIT ||\r\n> +\t\t command == LOGICAL_REP_MSG_STREAM_COMMIT ||\r\n> +\t\t command == LOGICAL_REP_MSG_COMMIT_PREPARED\r\n> ||\r\n> +\t\t command ==\r\n> LOGICAL_REP_MSG_ROLLBACK_PREPARED);\r\n> +\r\n> +\tswitch (command)\r\n> +\t{\r\n> +\t\tcase LOGICAL_REP_MSG_COMMIT:\r\n> +\t\tcase LOGICAL_REP_MSG_STREAM_COMMIT:\r\n> +\t\tcase LOGICAL_REP_MSG_COMMIT_PREPARED:\r\n> +\t\t\tMyLogicalRepWorker->commit_count++;\r\n> +\t\t\tbreak;\r\n> +\t\tcase LOGICAL_REP_MSG_ROLLBACK_PREPARED:\r\n> +\t\t\tMyLogicalRepWorker->abort_count++;\r\n> +\t\t\tbreak;\r\n> +\t\tdefault:\r\n> +\t\t\tereport(ERROR,\r\n> +\t\t\t\t\terrmsg(\"invalid logical message type\r\n> for transaction statistics of subscription\"));\r\n> +\t\t\tbreak;\r\n> +\t}\r\n> \r\n> I'm not sure that do we need the assert, because it will report an error later if\r\n> command is an invalid value.\r\nThe Assert has been removed, along with the switch branches now.\r\nSince there was advice that we should call this from apply_handle_commit_internal\r\nand from that function, if we don't want to change this function's argument,\r\nall we need to do is to pass a boolean value that indicates whether the stats counter is\r\ncommit_count or abort_count. Kindly have a look at the updated version.\r\n\r\n\r\n> 4) I noticed that the abort_count doesn't include aborted streaming\r\n> transactions.\r\n> Should we take this case into consideration?\r\nHmm, we can add this into this column, when there's no objection.\r\nI'm not sure but someone might say those should be separate columns.\r\n\r\nThe new patch v22 is shared in [2].\r\n\r\n[2] - https://www.postgresql.org/message-id/TYCPR01MB83737C689F8F310C87C19C1EED379%40TYCPR01MB8373.jpnprd01.prod.outlook.com\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Fri, 18 Feb 2022 14:59:43 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Saturday, February 19, 2022 12:00 AM osumi.takamichi@fujitsu.com <osumi.takamichi@fujitsu.com> wrote:\r\n> On Friday, February 18, 2022 3:34 PM Tang, Haiying/唐 海英\r\n> <tanghy.fnst@fujitsu.com> wrote:\r\n> > On Wed, Jan 12, 2022 8:35 PM osumi.takamichi@fujitsu.com\r\n> > <osumi.takamichi@fujitsu.com> wrote:\r\n> > 4) I noticed that the abort_count doesn't include aborted streaming\r\n> > transactions.\r\n> > Should we take this case into consideration?\r\n> Hmm, we can add this into this column, when there's no objection.\r\n> I'm not sure but someone might say those should be separate columns.\r\nI've addressed this point in a new v23 patch,\r\nsince there was no opinion on this so far.\r\n\r\nKindly have a look at the attached one.\r\n\r\nBest Regards,\r\n\tTakamichi Osumi",
"msg_date": "Mon, 21 Feb 2022 03:45:37 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Mon, Feb 21, 2022 11:46 AM Osumi, Takamichi/大墨 昂道 <osumi.takamichi@fujitsu.com> wrote:\r\n>I've addressed this point in a new v23 patch, since there was no opinion on this so far.\r\n>Kindly have a look at the attached one.\r\nThanks for updating the patch. Here is a comment:\r\n\r\nIn function apply_handle_stream_abort:\r\n@@ -1217,6 +1219,7 @@ apply_handle_stream_abort(StringInfo s)\r\n \t{\r\n \t\tset_apply_error_context_xact(xid, 0);\r\n \t\tstream_cleanup_files(MyLogicalRepWorker->subid, xid);\r\n+\t\tpgstat_report_subworker_xact_end(false);\r\n \t}\r\n \telse\r\n \t{\r\n\r\nI think there is a problem here, pgstat_report_stat is not invoked here.\r\nWhile the other three places where function pgstat_report_subworker_xact_end is\r\ninvoked, the function pgstat_report_stat is invoked.\r\nDo we need to invoke pgstat_report_stat in apply_handle_stream_abort?\r\n\r\nRegards,\r\nWang wei\r\n",
"msg_date": "Mon, 21 Feb 2022 09:06:11 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Monday, February 21, 2022 6:06 PM wangw.fnst@fujitsu.com <wangw.fnst@fujitsu.com> wrote:\r\n> On Mon, Feb 21, 2022 11:46 AM Osumi, Takamichi/大墨 昂道\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> >I've addressed this point in a new v23 patch, since there was no opinion on\r\n> this so far.\r\n> >Kindly have a look at the attached one.\r\n> Thanks for updating the patch. Here is a comment:\r\n> \r\n> In function apply_handle_stream_abort:\r\n> @@ -1217,6 +1219,7 @@ apply_handle_stream_abort(StringInfo s)\r\n> \t{\r\n> \t\tset_apply_error_context_xact(xid, 0);\r\n> \t\tstream_cleanup_files(MyLogicalRepWorker->subid, xid);\r\n> +\t\tpgstat_report_subworker_xact_end(false);\r\n> \t}\r\n> \telse\r\n> \t{\r\n> \r\n> I think there is a problem here, pgstat_report_stat is not invoked here.\r\n> While the other three places where function\r\n> pgstat_report_subworker_xact_end is invoked, the function pgstat_report_stat\r\n> is invoked.\r\n> Do we need to invoke pgstat_report_stat in apply_handle_stream_abort?\r\nHi,\r\n\r\nI had tested this case before I posted the latest patch v23.\r\nIt works when I call pgstat_report_stat from another transaction.\r\n\r\nBut, if we want to add pgstat_report_stat here,\r\nI need to investigate the impact of the addition.\r\nI'll check it and let you know.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Mon, 21 Feb 2022 10:06:17 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Mon, Feb 21, 2022 11:46 AM osumi.takamichi@fujitsu.com <osumi.takamichi@fujitsu.com> wrote:\r\n> \r\n> On Saturday, February 19, 2022 12:00 AM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> > On Friday, February 18, 2022 3:34 PM Tang, Haiying/唐 海英\r\n> > <tanghy.fnst@fujitsu.com> wrote:\r\n> > > On Wed, Jan 12, 2022 8:35 PM osumi.takamichi@fujitsu.com\r\n> > > <osumi.takamichi@fujitsu.com> wrote:\r\n> > > 4) I noticed that the abort_count doesn't include aborted streaming\r\n> > > transactions.\r\n> > > Should we take this case into consideration?\r\n> > Hmm, we can add this into this column, when there's no objection.\r\n> > I'm not sure but someone might say those should be separate columns.\r\n> I've addressed this point in a new v23 patch,\r\n> since there was no opinion on this so far.\r\n> \r\n> Kindly have a look at the attached one.\r\n> \r\n\r\nThanks for updating the patch.\r\n\r\nI found a problem when using it. When a replication workers exits, the\r\ntransaction stats should be sent to stats collector if they were not sent before\r\nbecause it didn't reach PGSTAT_STAT_INTERVAL. But I saw that the stats weren't\r\nupdated as expected.\r\n\r\nI looked into it and found that the replication worker would send the \r\ntransaction stats (if any) before it exits. But it got invalid subid in\r\npgstat_send_subworker_xact_stats(), which led to the following result:\r\n\r\npostgres=# select pg_stat_get_subscription_worker(0, null);\r\n pg_stat_get_subscription_worker\r\n---------------------------------\r\n (0,,2,0,0,,,,0,\"\",)\r\n(1 row)\r\n\r\nI think that's because subid has already been cleaned when trying to send the\r\nstats. I printed the value of before_shmem_exit_list, the functions in this list\r\nwould be called in shmem_exit() when the worker exits.\r\nlogicalrep_worker_onexit() would clean up the worker info (including subid), and\r\npgstat_shutdown_hook() would send stats if any. 
logicalrep_worker_onexit() was\r\ncalled before calling pgstat_shutdown_hook().\r\n\r\n(gdb) p before_shmem_exit_list\r\n$1 = {{function = 0xa88f1e <pgstat_shutdown_hook>, arg = 0}, {function = 0xb619e7 <BeforeShmemExit_Files>, arg = 0}, {function = 0xb07b5c <ReplicationSlotShmemExit>, arg = 0}, {\r\n function = 0xabdd93 <logicalrep_worker_onexit>, arg = 0}, {function = 0xe30c89 <ShutdownPostgres>, arg = 0}, {function = 0x0, arg = 0} <repeats 15 times>}\r\n\r\nMaybe we should make some modification to fix it.\r\n\r\nRegards,\r\nTang\r\n",
"msg_date": "Tue, 22 Feb 2022 01:15:24 +0000",
"msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Tuesday, February 22, 2022 10:15 AM Tang, Haiying/唐 海英 <tanghy.fnst@fujitsu.com> wrote:\r\n> On Mon, Feb 21, 2022 11:46 AM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> >\r\n> > On Saturday, February 19, 2022 12:00 AM osumi.takamichi@fujitsu.com\r\n> > <osumi.takamichi@fujitsu.com> wrote:\r\n> > > On Friday, February 18, 2022 3:34 PM Tang, Haiying/唐 海英\r\n> > > <tanghy.fnst@fujitsu.com> wrote:\r\n> > > > On Wed, Jan 12, 2022 8:35 PM osumi.takamichi@fujitsu.com\r\n> > > > <osumi.takamichi@fujitsu.com> wrote:\r\n> > > > 4) I noticed that the abort_count doesn't include aborted\r\n> > > > streaming transactions.\r\n> > > > Should we take this case into consideration?\r\n> > > Hmm, we can add this into this column, when there's no objection.\r\n> > > I'm not sure but someone might say those should be separate columns.\r\n> > I've addressed this point in a new v23 patch, since there was no\r\n> > opinion on this so far.\r\n> >\r\n> > Kindly have a look at the attached one.\r\n> >\r\n> \r\n> Thanks for updating the patch.\r\n> \r\n> I found a problem when using it. When a replication workers exits, the\r\n> transaction stats should be sent to stats collector if they were not sent before\r\n> because it didn't reach PGSTAT_STAT_INTERVAL. But I saw that the stats\r\n> weren't updated as expected.\r\n> \r\n> I looked into it and found that the replication worker would send the transaction\r\n> stats (if any) before it exits. But it got invalid subid in\r\n> pgstat_send_subworker_xact_stats(), which led to the following result:\r\n> \r\n> postgres=# select pg_stat_get_subscription_worker(0, null);\r\n> pg_stat_get_subscription_worker\r\n> ---------------------------------\r\n> (0,,2,0,0,,,,0,\"\",)\r\n> (1 row)\r\n> \r\n> I think that's because subid has already been cleaned when trying to send the\r\n> stats. 
I printed the value of before_shmem_exit_list, the functions in this list\r\n> would be called in shmem_exit() when the worker exits.\r\n> logicalrep_worker_onexit() would clean up the worker info (including subid),\r\n> and\r\n> pgstat_shutdown_hook() would send stats if any. logicalrep_worker_onexit()\r\n> was called before calling pgstat_shutdown_hook().\r\n> \r\n> (gdb) p before_shmem_exit_list\r\n> $1 = {{function = 0xa88f1e <pgstat_shutdown_hook>, arg = 0}, {function =\r\n> 0xb619e7 <BeforeShmemExit_Files>, arg = 0}, {function = 0xb07b5c\r\n> <ReplicationSlotShmemExit>, arg = 0}, {\r\n> function = 0xabdd93 <logicalrep_worker_onexit>, arg = 0}, {function =\r\n> 0xe30c89 <ShutdownPostgres>, arg = 0}, {function = 0x0, arg = 0} <repeats 15\r\n> times>}\r\n> \r\n> Maybe we should make some modification to fix it.\r\nThank you for letting me know this issue.\r\nI'll investigate this and will report the result.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Tue, 22 Feb 2022 01:34:25 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Tue, Feb 22, 2022 at 6:45 AM tanghy.fnst@fujitsu.com\n<tanghy.fnst@fujitsu.com> wrote:\n>\n> I found a problem when using it. When a replication workers exits, the\n> transaction stats should be sent to stats collector if they were not sent before\n> because it didn't reach PGSTAT_STAT_INTERVAL. But I saw that the stats weren't\n> updated as expected.\n>\n> I looked into it and found that the replication worker would send the\n> transaction stats (if any) before it exits. But it got invalid subid in\n> pgstat_send_subworker_xact_stats(), which led to the following result:\n>\n> postgres=# select pg_stat_get_subscription_worker(0, null);\n> pg_stat_get_subscription_worker\n> ---------------------------------\n> (0,,2,0,0,,,,0,\"\",)\n> (1 row)\n>\n> I think that's because subid has already been cleaned when trying to send the\n> stats. I printed the value of before_shmem_exit_list, the functions in this list\n> would be called in shmem_exit() when the worker exits.\n> logicalrep_worker_onexit() would clean up the worker info (including subid), and\n> pgstat_shutdown_hook() would send stats if any. logicalrep_worker_onexit() was\n> called before calling pgstat_shutdown_hook().\n>\n\nYeah, I think that is a problem and maybe we can think of solving it\nby sending the stats via logicalrep_worker_onexit before subid is\ncleared but not sure that is a good idea. I feel we need to go back to\nthe idea of v21 for sending stats instead of using pgstat_report_stat.\nI think the one thing which we could improve is to avoid trying to\nsend it each time before receiving each message by walrcv_receive and\nrather try to send it before we try to wait (WaitLatchOrSocket).\nTrying after each message doesn't seem to be required and could lead\nto some overhead as well. What do you think?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 23 Feb 2022 12:00:26 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Mon, Feb 21, 2022 at 12:45 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Saturday, February 19, 2022 12:00 AM osumi.takamichi@fujitsu.com <osumi.takamichi@fujitsu.com> wrote:\n> > On Friday, February 18, 2022 3:34 PM Tang, Haiying/唐 海英\n> > <tanghy.fnst@fujitsu.com> wrote:\n> > > On Wed, Jan 12, 2022 8:35 PM osumi.takamichi@fujitsu.com\n> > > <osumi.takamichi@fujitsu.com> wrote:\n> > > 4) I noticed that the abort_count doesn't include aborted streaming\n> > > transactions.\n> > > Should we take this case into consideration?\n> > Hmm, we can add this into this column, when there's no objection.\n> > I'm not sure but someone might say those should be separate columns.\n> I've addressed this point in a new v23 patch,\n> since there was no opinion on this so far.\n>\n> Kindly have a look at the attached one.\n>\n\nI have some comments on v23 patch:\n\n@@ -66,6 +66,12 @@ typedef struct LogicalRepWorker\n TimestampTz last_recv_time;\n XLogRecPtr reply_lsn;\n TimestampTz reply_time;\n+\n+ /*\n+ * Transaction statistics of subscription worker\n+ */\n+ int64 commit_count;\n+ int64 abort_count;\n } LogicalRepWorker;\n\nI think that adding these statistics to the struct whose data is\nallocated on the shared memory is not a good idea since they don't\nneed to be shared. We might want to add more statistics for\nsubscriptions such as insert_count and update_count in the future. 
I\nthink it's better to track these statistics in local memory either in\nworker.c or pgstat.c.\n\n+/* ----------\n+ * pgstat_report_subworker_xact_end() -\n+ *\n+ * Update the statistics of subscription worker and have\n+ * pgstat_report_stat send a message to stats collector\n+ * after count increment.\n+ * ----------\n+ */\n+void\n+pgstat_report_subworker_xact_end(bool is_commit)\n+{\n+ if (is_commit)\n+ MyLogicalRepWorker->commit_count++;\n+ else\n+ MyLogicalRepWorker->abort_count++;\n+}\n\nIt's slightly odd and it seems unnecessary to me that we modify fields\nof MyLogicalRepWorker in pgstat.c. Although this function has “report”\nin its name, it just increments the counter. I think we can do that\nin worker.c.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 24 Feb 2022 11:07:14 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Thursday, February 24, 2022 11:07 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> I have some comments on v23 patch:\r\n> \r\n> @@ -66,6 +66,12 @@ typedef struct LogicalRepWorker\r\n> TimestampTz last_recv_time;\r\n> XLogRecPtr reply_lsn;\r\n> TimestampTz reply_time;\r\n> +\r\n> + /*\r\n> + * Transaction statistics of subscription worker\r\n> + */\r\n> + int64 commit_count;\r\n> + int64 abort_count;\r\n> } LogicalRepWorker;\r\n> \r\n> I think that adding these statistics to the struct whose data is allocated on the\r\n> shared memory is not a good idea since they don't need to be shared. We might\r\n> want to add more statistics for subscriptions such as insert_count and\r\n> update_count in the future. I think it's better to track these statistics in local\r\n> memory either in worker.c or pgstat.c.\r\nFixed.\r\n\r\n> +/* ----------\r\n> + * pgstat_report_subworker_xact_end() -\r\n> + *\r\n> + * Update the statistics of subscription worker and have\r\n> + * pgstat_report_stat send a message to stats collector\r\n> + * after count increment.\r\n> + * ----------\r\n> + */\r\n> +void\r\n> +pgstat_report_subworker_xact_end(bool is_commit) {\r\n> + if (is_commit)\r\n> + MyLogicalRepWorker->commit_count++;\r\n> + else\r\n> + MyLogicalRepWorker->abort_count++;\r\n> +}\r\n> \r\n> It's slightly odd and it seems unnecessary to me that we modify fields of\r\n> MyLogicalRepWorker in pgstat.c. Although this function has “report”\r\n> in its name but it just increments the counter. I think we can do that in worker.c.\r\nFixed.\r\n\r\n\r\nAlso, I made the timing adjustment logic\r\nback and now have the independent one as Amit-san suggested in [1].\r\n\r\nKindly have a look at v24.\r\n\r\n\r\n[1] - https://www.postgresql.org/message-id/CAA4eK1LWYc15%3DASj1tMTEFsXtxu%3D02aGoMwq9YanUVr9-QMhdQ%40mail.gmail.com\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi",
"msg_date": "Thu, 24 Feb 2022 22:57:39 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Wednesday, February 23, 2022 3:30 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> On Tue, Feb 22, 2022 at 6:45 AM tanghy.fnst@fujitsu.com\r\n> <tanghy.fnst@fujitsu.com> wrote:\r\n> >\r\n> > I found a problem when using it. When a replication workers exits, the\r\n> > transaction stats should be sent to stats collector if they were not\r\n> > sent before because it didn't reach PGSTAT_STAT_INTERVAL. But I saw\r\n> > that the stats weren't updated as expected.\r\n> >\r\n> > I looked into it and found that the replication worker would send the\r\n> > transaction stats (if any) before it exits. But it got invalid subid\r\n> > in pgstat_send_subworker_xact_stats(), which led to the following result:\r\n> >\r\n> > postgres=# select pg_stat_get_subscription_worker(0, null);\r\n> > pg_stat_get_subscription_worker\r\n> > ---------------------------------\r\n> > (0,,2,0,0,,,,0,\"\",)\r\n> > (1 row)\r\n> >\r\n> > I think that's because subid has already been cleaned when trying to\r\n> > send the stats. I printed the value of before_shmem_exit_list, the\r\n> > functions in this list would be called in shmem_exit() when the worker exits.\r\n> > logicalrep_worker_onexit() would clean up the worker info (including\r\n> > subid), and\r\n> > pgstat_shutdown_hook() would send stats if any.\r\n> > logicalrep_worker_onexit() was called before calling\r\n> pgstat_shutdown_hook().\r\n> >\r\n> \r\n> Yeah, I think that is a problem and maybe we can think of solving it by sending\r\n> the stats via logicalrep_worker_onexit before subid is cleared but not sure that\r\n> is a good idea. 
I feel we need to go back to the idea of v21 for sending stats\r\n> instead of using pgstat_report_stat.\r\n> I think the one thing which we could improve is to avoid trying to send it each\r\n> time before receiving each message by walrcv_receive and rather try to send it\r\n> before we try to wait (WaitLatchOrSocket).\r\n> Trying after each message doesn't seem to be required and could lead to some\r\n> overhead as well. What do you think?\r\nI agree. Fixed.\r\n\r\nKindly have a look at v24 shared in [1].\r\n\r\n[1] - https://www.postgresql.org/message-id/TYCPR01MB8373A3E1BE237BAF38185BF2ED3D9%40TYCPR01MB8373.jpnprd01.prod.outlook.com\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Thu, 24 Feb 2022 23:01:57 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Friday, February 25, 2022 7:58 AM osumi.takamichi@fujitsu.com <osumi.takamichi@fujitsu.com> wrote:\r\n> Kindly have a look at v24.\r\nHi.\r\n\r\nThe recent commit(7a85073) has redesigned the view pg_stat_subscription_workers\r\nand now we have pg_stat_subscription_stats. Therefore, I rebased my patch\r\nso that my statistics patch can be applied on top of the HEAD.\r\n\r\nIn the process of this rebase, I had to drop one column\r\nthat stored error count for unique errors(which was\r\nincremented after confirming the error is not the same as previous\r\none), because the commit tentatively removes the same error check\r\nmechanism.\r\n\r\nTherefore, this patch has apply_commit_count, apply_rollback_count only.\r\nI slightly changed minor changes as well so that those can\r\nbecome more aligned.\r\n\r\nKindly please have a look at the patch.\r\n\r\nBest Regards,\r\n\tTakamichi Osumi",
"msg_date": "Tue, 1 Mar 2022 02:04:10 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "Please see below my review comments for v25.\n\n======\n\n1. Commit message\n\nIntroduce cumulative columns of transactions of\nlogical replication subscriber to the pg_stat_subscription_stats view.\n\n\"cumulative columns of transactions\" sounds a bit strange to me.\n\nSUGGESTED\nIntroduce 2 new subscription statistics columns (apply_commit_count,\nand apply_rollback_count) to the pg_stat_subscription_stats view for\ncounting cumulative transaction commits/rollbacks.\n\n~~~\n\n2. doc/src/sgml/monitoring.sgml - bug\n\nThe new SGML <row>s have been added in the wrong place!\n\nI don't think this renders like you expect it does. Please regenerate\nthe help to see for yourself.\n\n~~~\n\n3. doc/src/sgml/monitoring.sgml - wording\n\n+ <para>\n+ Number of transactions rollbacked in this subscription. Both\n+ <command>ROLLBACK</command> of transaction streamed as in-progress\n+ transaction and <command>ROLLBACK PREPARED</command> increment this\n+ counter.\n+ </para></entry>\n\nBEFORE\nNumber of transactions rollbacked in this subscription.\n\nSUGGESTED\nNumber of transaction rollbacks in this subscription.\n\n~~~\n\n4. doc/src/sgml/monitoring.sgml - wording\n\n+ <para>\n+ Number of transactions rollbacked in this subscription. Both\n+ <command>ROLLBACK</command> of transaction streamed as in-progress\n+ transaction and <command>ROLLBACK PREPARED</command> increment this\n+ counter.\n+ </para></entry>\n\nTrying to distinguish between the ROLLBACK of a transaction and of a\nstreamed in-progress transaction seems to have made this description\ntoo complicated. I don’t think the user even cares/knows about this\n(in-progress) distinction. 
So, I think this should just be written\nmore simply (like the COMMIT part was)\n\nBEFORE\nBoth <command>ROLLBACK</command> of transaction streamed as\nin-progress transaction and <command>ROLLBACK PREPARED</command>\nincrement this counter.\n\nSUGGESTED\nBoth <command>ROLLBACK</command> and <command>ROLLBACK\nPREPARED</command> increment this counter.\n\n~~~\n\n5. Question - column names.\n\nJust curious why the columns are called \"apply_commit_count\" and\n\"apply_rollback_count\"? Specifically, what extra meaning do those\nnames have versus just calling them \"commit_count\" and\n\"rollback_count\"?\n\n~~~\n\n6. src/backend/postmaster/pgstat.c - pgstat_report_subscription_xact\n\n@@ -3421,6 +3425,60 @@ pgstat_send_slru(void)\n }\n\n /* ----------\n+ * pgstat_report_subscription_xact() -\n+ *\n+ * Send a subscription transaction stats to the collector.\n+ * The statistics are cleared upon sending.\n+ *\n+ * 'force' is true only when the subscription worker process exits.\n+ * ----------\n+ */\n+void\n+pgstat_report_subscription_xact(bool force)\n\n6a.\nI think this comment should be worded more like the other\npgstat_report_subscption_XXX comments\n\nBEFORE\nSend a subscription transaction stats to the collector.\n\nSUGGESTED\nTell the collector about subscriptions transaction stats.\n\n6b.\n+ * 'force' is true only when the subscription worker process exits.\n\nI thought this comment should just describe what the 'force' param\nactually does in this function; not the scenario about who calls it...\n\n~~~\n\n7. src/backend/postmaster/pgstat.c - pgstat_report_subscription_xact\n\nI think the entire function maybe should be relocated to be nearby the\nother pgstat_report_subscription_XXX functions in the source.\n\n~~~\n\n8. src/backend/postmaster/pgstat.c - pgstat_report_subscription_xact\n\n+ /*\n+ * This function can be called even if nothing at all has happened. In\n+ * this case, there's no need to go forward.\n+ */\n\nToo much information. 
Clearly, it is possible for this function to be\ncalled for this case, otherwise this code would not exist in the first\nplace :)\nIMO the comment can be much simpler but still say all it needs to.\n\nBEFORE\nThis function can be called even if nothing at all has happened. In\nthis case, there's no need to go forward.\nSUGGESTED\nBailout early if nothing to do.\n\n~~~\n\n9. src/backend/postmaster/pgstat.c - pgstat_report_subscription_xact\n\n+ if (subStats.subid == InvalidOid ||\n+ (subStats.apply_commit_count == 0 && subStats.apply_rollback_count == 0))\n+ return;\n\nMaybe using !OidIsValid(subStats.subid) is better?\n\n~~~\n\n10. src/backend/postmaster/pgstat.c - pgstat_report_subscription_xact\n\n+ /*\n+ * Don't send a message unless it's been at least PGSTAT_STAT_INTERVAL\n+ * msec since we last sent one to avoid overloading the stats\n+ * collector.\n+ */\n\nSUGGESTED (2 sentences instead of 1)\nDon't send a message unless it's been at least PGSTAT_STAT_INTERVAL\nmsec since we last sent one. This is to avoid overloading the stats\ncollector.\n\n~~~\n\n11. src/backend/postmaster/pgstat.c - pgstat_report_subscription_xact\n\n+ if (!force)\n+ {\n+ TimestampTz now = GetCurrentTimestamp();\n+\n+ /*\n+ * Don't send a message unless it's been at least PGSTAT_STAT_INTERVAL\n+ * msec since we last sent one to avoid overloading the stats\n+ * collector.\n+ */\n+ if (!TimestampDifferenceExceeds(last_report, now, PGSTAT_STAT_INTERVAL))\n+ return;\n+ last_report = now;\n+ }\n\n(Yeah, I know there is similar code in this module but 2 wrongs do not\nmake a right)\n\nI think logically it is better to put the 'now' and the 'last_report'\noutside this if (!force) block. Otherwise, the forced report is not\nsetting the 'last_report' time and that just seems strange.\n\nRearranging this code is better IMO. 
e.g.\n- the conditions are expressed positive instead of negative (!)\n- only one return point instead of multiple\n- the 'last_report' is always set so that strangeness is eliminated\n\nSUGGESTED (it's the same code but rearranged)\n\nTimestampTz now = GetCurrentTimestamp();\n\nif (force || TimestampDifferenceExceeds(last_report, now, PGSTAT_STAT_INTERVAL))\n{\n/*\n* Prepare and send the message.\n*/\npgstat_setheader(&msg.m_hdr, PGSTAT_MTYPE_SUBSCRIPTIONXACT);\nmsg.m_databaseid = MyDatabaseId;\nmsg.m_subid = subStats.subid;\nmsg.apply_commit_count = subStats.apply_commit_count;\nmsg.apply_rollback_count = subStats.apply_rollback_count;\npgstat_send(&msg, sizeof(PgStat_MsgSubscriptionXact));\nlast_report = now;\n\n/*\n* Clear out the statistics.\n*/\nsubStats.apply_commit_count = 0;\nsubStats.apply_rollback_count = 0;\n}\n\n~~~\n\n12. src/backend/replication/logical/worker.c - LogicalRepSubscriptionStats\n\n@@ -238,6 +238,8 @@ static ApplyErrorCallbackArg apply_error_callback_arg =\n .ts = 0,\n };\n\n+LogicalRepSubscriptionStats subStats = {InvalidOid, 0, 0};\n\nMaybe better to show explicit the member assignments (like the struct\nabove this one) to make it more clear.\n\n~~~\n\n13. src/backend/replication/logical/worker.c - subscription_stats_update\n\n@@ -3372,6 +3386,22 @@ TwoPhaseTransactionGid(Oid subid, TransactionId\nxid, char *gid, int szgid)\n snprintf(gid, szgid, \"pg_gid_%u_%u\", subid, xid);\n }\n\n+/*\n+ * Update the statistics of subscription.\n+ */\n+static void\n+subscription_stats_update(bool is_commit)\n+{\n+ Assert(OidIsValid(subStats.subid));\n+\n+ if (is_commit)\n+ subStats.apply_commit_count++;\n+ else\n+ subStats.apply_rollback_count++;\n+\n+ pgstat_report_subscription_xact(false);\n+}\n+\n\nI felt maybe this would be look better split into 2 functions: e.g.\nsubscription_stats_incr_commit() and\nsubscription_stats_incr_rollback(). 
Then it would be more readable\nfrom all the callers instead of the vague-looking\nsubscription_stats_update(true/false).\n\n~~~\n\n14. src/backend/replication/logical/worker.c - subscription_stats_update\n\n+/*\n+ * Update the statistics of subscription.\n+ */\n+static void\n+subscription_stats_update(bool is_commit)\n+{\n+ Assert(OidIsValid(subStats.subid));\n+\n+ if (is_commit)\n+ subStats.apply_commit_count++;\n+ else\n+ subStats.apply_rollback_count++;\n+\n+ pgstat_report_subscription_xact(false);\n+}\n\nIs it really necessary to be calling\npgstat_report_subscription_xact from here? That is already being called\nin the LogicalRepApplyLoop. Isn't that enough?\n\n~~~\n\n15. src/backend/replication/logical/worker.c - LogicalRepApplyLoop\n\n@@ -2717,6 +2729,8 @@ LogicalRepApplyLoop(XLogRecPtr last_received)\n if (endofstream)\n break;\n\n+ pgstat_report_subscription_xact(false);\n+\n\nWondering if this call is better to be done a couple of lines up\n(above the 'if/break'). Especially if you remove the call from the\nsubscription_stats_update as suggested in my review comment #14.\n\n~~~\n\n16. src/backend/utils/adt/pgstatfuncs.c - pg_stat_get_subscription_stats\n\n@@ -2424,7 +2424,11 @@ pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)\n INT8OID, -1, 0);\n TupleDescInitEntry(tupdesc, (AttrNumber) 3, \"sync_error_count\",\n INT8OID, -1, 0);\n- TupleDescInitEntry(tupdesc, (AttrNumber) 4, \"stats_reset\",\n+ TupleDescInitEntry(tupdesc, (AttrNumber) 4, \"apply_commit_count\",\n+ INT8OID, -1, 0);\n+ TupleDescInitEntry(tupdesc, (AttrNumber) 5, \"sync_rollback_count\",\n+ INT8OID, -1, 0);\n+ TupleDescInitEntry(tupdesc, (AttrNumber) 6, \"stats_reset\",\n TIMESTAMPTZOID, -1, 0);\n\nBug? What is \"sync_rollback_count\"? Looks like a cut/paste error.\n\n~~~\n\n17. 
src/include/pgstat.h\n\n+typedef struct PgStat_MsgSubscriptionXact\n+{\n+ PgStat_MsgHdr m_hdr;\n+\n+ /* determine the worker entry */\n+ Oid m_databaseid;\n+ Oid m_subid;\n+\n+ PgStat_Counter apply_commit_count;\n+ PgStat_Counter apply_rollback_count;\n+} PgStat_MsgSubscriptionXact;\n\nIs that m_databaseid even needed? I did not notice it getting used\n(e.g. pgstat_recv_subscription_xact does not use it). Also, wasn't\nsimilar removed from the other subscription error stats?\n\n~~~\n\n18. src/include/pgstat.h\n\n+ PgStat_Counter apply_rollback_count;\n+} PgStat_MsgSubscriptionXact;\n+\n+\n\nThe extra blank line can be removed.\n\n~~~\n\n19. src/include/pgstat.h\n\n@@ -1177,6 +1201,8 @@ extern void pgstat_send_archiver(const char\n*xlog, bool failed);\n extern void pgstat_send_bgwriter(void);\n extern void pgstat_send_checkpointer(void);\n extern void pgstat_send_wal(bool force);\n+extern void pgstat_report_subscription_xact(bool force);\n+\n\n /* ----------\n\nThe extra blank line can be removed.\n\n~~~\n\n20. Test for the column names.\n\nThe patch added a couple of new columns to statistics so I was\nsurprised there were no regression test updates needed for these? How\ncan that be? Shouldn’t there be just one regression test that\nvalidates the view column names are what they are expected to be?\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 1 Mar 2022 18:12:25 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Tuesday, March 1, 2022 4:12 PM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> Please see below my review comments for v25.\r\n> \r\n> ======\r\n> \r\n> 1. Commit message\r\n> \r\n> Introduce cumulative columns of transactions of logical replication subscriber\r\n> to the pg_stat_subscription_stats view.\r\n> \r\n> \"cumulative columns of transactions\" sounds a bit strange to me.\r\n> \r\n> SUGGESTED\r\n> Introduce 2 new subscription statistics columns (apply_commit_count, and\r\n> apply_rollback_count) to the pg_stat_subscription_stats view for counting\r\n> cumulative transaction commits/rollbacks.\r\nFixed.\r\n\r\n\r\n\r\n> ~~~\r\n> \r\n> 2. doc/src/sgml/monitoring.sgml - bug\r\n> \r\n> The new SGML <row>s have been added in the wrong place!\r\n> \r\n> I don't think this renders like you expect it does. Please regenerate the help to\r\n> see for yourself.\r\nFixed.\r\n\r\n\r\n> ~~~\r\n> \r\n> 3. doc/src/sgml/monitoring.sgml - wording\r\n> \r\n> + <para>\r\n> + Number of transactions rollbacked in this subscription. Both\r\n> + <command>ROLLBACK</command> of transaction streamed as\r\n> in-progress\r\n> + transaction and <command>ROLLBACK PREPARED</command>\r\n> increment this\r\n> + counter.\r\n> + </para></entry>\r\n> \r\n> BEFORE\r\n> Number of transactions rollbacked in this subscription.\r\n> \r\n> SUGGESTED\r\n> Number of transaction rollbacks in this subscription.\r\nFixed.\r\n\r\n\r\n> ~~~\r\n> \r\n> 4. doc/src/sgml/monitoring.sgml - wording\r\n> \r\n> + <para>\r\n> + Number of transactions rollbacked in this subscription. Both\r\n> + <command>ROLLBACK</command> of transaction streamed as\r\n> in-progress\r\n> + transaction and <command>ROLLBACK PREPARED</command>\r\n> increment this\r\n> + counter.\r\n> + </para></entry>\r\n> \r\n> Trying to distinguish between the ROLLBACK of a transaction and of a\r\n> streamed in-progress transaction seems to have made this description too\r\n> complicated. 
I don't think the user even cares/knows about this\r\n> (in-progress) distinction. So, I think this should just be written more simply\r\n> (like the COMMIT part was)\r\n> \r\n> BEFORE\r\n> Both <command>ROLLBACK</command> of transaction streamed as\r\n> in-progress transaction and <command>ROLLBACK\r\n> PREPARED</command> increment this counter.\r\n> \r\n> SUGGESTED\r\n> Both <command>ROLLBACK</command> and <command>ROLLBACK\r\n> PREPARED</command> increment this counter.\r\nFixed.\r\n\r\n\r\n> ~~~\r\n> \r\n> 5. Question - column names.\r\n> \r\n> Just curious why the columns are called \"apply_commit_count\" and\r\n> \"apply_rollback_count\"? Specifically, what extra meaning do those names have\r\n> versus just calling them \"commit_count\" and \"rollback_count\"?\r\nI think there's possibility that we'll have counters\r\nfor tablesync commit for example. So, the name prefix avoids\r\nthe overlap between the possible names.\r\n\r\n\r\n\r\n> ~~~\r\n> \r\n> 6. src/backend/postmaster/pgstat.c - pgstat_report_subscription_xact\r\n> \r\n> @@ -3421,6 +3425,60 @@ pgstat_send_slru(void) }\r\n> \r\n> /* ----------\r\n> + * pgstat_report_subscription_xact() -\r\n> + *\r\n> + * Send a subscription transaction stats to the collector.\r\n> + * The statistics are cleared upon sending.\r\n> + *\r\n> + * 'force' is true only when the subscription worker process exits.\r\n> + * ----------\r\n> + */\r\n> +void\r\n> +pgstat_report_subscription_xact(bool force)\r\n> \r\n> 6a.\r\n> I think this comment should be worded more like the other\r\n> pgstat_report_subscption_XXX comments\r\n> \r\n> BEFORE\r\n> Send a subscription transaction stats to the collector.\r\n> \r\n> SUGGESTED\r\n> Tell the collector about subscriptions transaction stats.\r\nFixed.\r\n\r\n\r\n\r\n> 6b.\r\n> + * 'force' is true only when the subscription worker process exits.\r\n> \r\n> I thought this comment should just describe what the 'force' param actually\r\n> does in this function; not the scenario about who 
calls it...\r\nFixed.\r\n\r\n\r\n\r\n> ~~~\r\n> \r\n> 7. src/backend/postmaster/pgstat.c - pgstat_report_subscription_xact\r\n> \r\n> I think the entire function maybe should be relocated to be nearby the other\r\n> pgstat_report_subscription_XXX functions in the source.\r\nI placed the pgstat_report_subscription_xact below pgstat_report_subscription_drop.\r\nMeanwhile, pgstat_recv_subscription_xact, another new function in pgstat.c,\r\nis already placed below pgstat_recv_subscription_error, so I kept it as it is.\r\n\r\n\r\n\r\n> ~~~\r\n> \r\n> 8. src/backend/postmaster/pgstat.c - pgstat_report_subscription_xact\r\n> \r\n> + /*\r\n> + * This function can be called even if nothing at all has happened. In\r\n> + * this case, there's no need to go forward.\r\n> + */\r\n> \r\n> Too much information. Clearly, it is possible for this function to be called for this\r\n> case otherwise this code would not exist in the first place :) IMO the comment\r\n> can be much simpler but still say all it needs to.\r\n> \r\n> BEFORE\r\n> This function can be called even if nothing at all has happened. In this case,\r\n> there's no need to go forward.\r\n> SUGGESTED\r\n> Bailout early if nothing to do.\r\nFixed.\r\n\r\n\r\n> ~~~\r\n> \r\n> 9. src/backend/postmaster/pgstat.c - pgstat_report_subscription_xact\r\n> \r\n> + if (subStats.subid == InvalidOid ||\r\n> + (subStats.apply_commit_count == 0 && subStats.apply_rollback_count ==\r\n> + 0)) return;\r\n> \r\n> Maybe using !OisIsValid(subStats.subid) is better?\r\nFixed.\r\n\r\n\r\n> ~~~\r\n> \r\n> 10. src/backend/postmaster/pgstat.c - pgstat_report_subscription_xact\r\n> \r\n> + /*\r\n> + * Don't send a message unless it's been at least PGSTAT_STAT_INTERVAL\r\n> + * msec since we last sent one to avoid overloading the stats\r\n> + * collector.\r\n> + */\r\n> \r\n> SUGGESTED (2 sentences instead of 1)\r\n> Don't send a message unless it's been at least PGSTAT_STAT_INTERVAL\r\n> msec since we last sent one. 
This is to avoid overloading the stats collector.\r\nFixed.\r\n\r\n\r\n\r\n> ~~~\r\n> \r\n> 11. src/backend/postmaster/pgstat.c - pgstat_report_subscription_xact\r\n> \r\n> + if (!force)\r\n> + {\r\n> + TimestampTz now = GetCurrentTimestamp();\r\n> +\r\n> + /*\r\n> + * Don't send a message unless it's been at least PGSTAT_STAT_INTERVAL\r\n> + * msec since we last sent one to avoid overloading the stats\r\n> + * collector.\r\n> + */\r\n> + if (!TimestampDifferenceExceeds(last_report, now,\r\n> + PGSTAT_STAT_INTERVAL)) return; last_report = now; }\r\n> \r\n> (Yeah, I know there is similar code in this module but 2 wrongs do not make a\r\n> right)\r\n> \r\n> I think logically it is better to put the 'now' and the 'last_report'\r\n> outside this if (!force) block. Otherwise, the forced report is not setting the\r\n> 'last_report' time and that just seems strange.\r\n> \r\n> Rearranging this code is better IMO. e.g.\r\n> - the conditions are expressed positive instead of negative (!)\r\n> - only one return point instead of multiple\r\n> - the 'last_report' is always set so that strangeness is eliminated\r\n> \r\n> SUGGESTED (it's the same code but rearranged)\r\n> \r\n> TimestampTz now = GetCurrentTimestamp();\r\n> \r\n> if (force || TimestampDifferenceExceeds(last_report, now,\r\n> PGSTAT_STAT_INTERVAL)) {\r\n> /*\r\n> * Prepare and send the message.\r\n> */\r\n> pgstat_setheader(&msg.m_hdr, PGSTAT_MTYPE_SUBSCRIPTIONXACT);\r\n> msg.m_databaseid = MyDatabaseId; msg.m_subid = subStats.subid;\r\n> msg.apply_commit_count = subStats.apply_commit_count;\r\n> msg.apply_rollback_count = subStats.apply_rollback_count;\r\n> pgstat_send(&msg, sizeof(PgStat_MsgSubscriptionXact));\r\n> last_report = now;\r\n> \r\n> /*\r\n> * Clear out the statistics.\r\n> */\r\n> subStats.apply_commit_count = 0;\r\n> subStats.apply_rollback_count = 0;\r\n> }\r\nYeah, your suggestion looks tidy.\r\nYet, I wasn't sure if I should set the 'last_report' for exit case,\r\nsince we don't use it after the 
worker exit.\r\nIn addition, we need to calculate GetCurrentTimestamp()\r\neven in the case 'force' is set to true.\r\nI'm not sure if that is correct.\r\n\r\nSo, I'd like to keep it as it as at this stage.\r\n\r\n\r\n> ~~~\r\n> \r\n> 12. src/backend/replication/logical/worker.c - LogicalRepSubscriptionStats\r\n> \r\n> @@ -238,6 +238,8 @@ static ApplyErrorCallbackArg apply_error_callback_arg\r\n> =\r\n> .ts = 0,\r\n> };\r\n> \r\n> +LogicalRepSubscriptionStats subStats = {InvalidOid, 0, 0};\r\n> \r\n> Maybe better to show explicit the member assignments (like the struct above\r\n> this one) to make it more clear.\r\nTrue. Fixed.\r\n\r\n\r\n> ~~~\r\n> \r\n> 13. src/backend/replication/logical/worker.c - subscription_stats_update\r\n> \r\n> @@ -3372,6 +3386,22 @@ TwoPhaseTransactionGid(Oid subid, TransactionId\r\n> xid, char *gid, int szgid)\r\n> snprintf(gid, szgid, \"pg_gid_%u_%u\", subid, xid); }\r\n> \r\n> +/*\r\n> + * Update the statistics of subscription.\r\n> + */\r\n> +static void\r\n> +subscription_stats_update(bool is_commit) {\r\n> +Assert(OidIsValid(subStats.subid));\r\n> +\r\n> + if (is_commit)\r\n> + subStats.apply_commit_count++;\r\n> + else\r\n> + subStats.apply_rollback_count++;\r\n> +\r\n> + pgstat_report_subscription_xact(false);\r\n> +}\r\n> +\r\n> \r\n> I felt maybe this would be look better split into 2 functions: e.g.\r\n> subscription_stats_incr_commit() and\r\n> subscription_stats_incr_rollback(). Then it would be more readable from all\r\n> the callers instead of the vague looking subscription_stats_update(true/false).\r\nOkay. Fixed.\r\n\r\nProbably, I suppose the ideal solution here would be probably to come up\r\nwith a good name for one unified function that explains the internal processing by itself.\r\nI spent some time to try to create a new good name (e.g. \"subscription_committed_stats_update\")\r\nbut all ideas weren't good. Then, I decided to make them separate.\r\n\r\n\r\n> ~~~\r\n> \r\n> 14. 
src/backend/replication/logical/worker.c - subscription_stats_update\r\n> \r\n> +/*\r\n> + * Update the statistics of subscription.\r\n> + */\r\n> +static void\r\n> +subscription_stats_update(bool is_commit) {\r\n> +Assert(OidIsValid(subStats.subid));\r\n> +\r\n> + if (is_commit)\r\n> + subStats.apply_commit_count++;\r\n> + else\r\n> + subStats.apply_rollback_count++;\r\n> +\r\n> + pgstat_report_subscription_xact(false);\r\n> +}\r\n> \r\n> Is it really necessary to be calling\r\n> pgstat_report_subscription_xactfrom here? That is already being called in the\r\n> LogicalRepApplyLoop. Isn't that enough?\r\nDeleted.\r\n\r\n\r\n> ~~~\r\n> \r\n> 15. src/backend/replication/logical/worker.c - LogicalRepApplyLoop\r\n> \r\n> @@ -2717,6 +2729,8 @@ LogicalRepApplyLoop(XLogRecPtr last_received)\r\n> if (endofstream)\r\n> break;\r\n> \r\n> + pgstat_report_subscription_xact(false);\r\n> +\r\n> \r\n> Wondering if this call is better to be done a couple of lines up (above the\r\n> 'if/break'). Especially if you remove the call from the subscription_stats_update\r\n> as suggested in my review comment #14.\r\nI feel this should be after the condition.\r\n\r\nI checked with my debugger that after the break of LogicalRepApplyLoop\r\n(when endofstream equals true), we'd call proc_exit. This leads to \r\nthe call pgstat_report_subscription_xact in the logicalrep_worker_exit.\r\n\r\nIn this function, we use it with 'true' argument.\r\nSo, if we call the pgstat_report_subscription_xact before the endofstream check,\r\nwe have to call the function twice in the end.\r\n\r\nTherefore, the current position makes sense to me.\r\n\r\n\r\n> ~~~\r\n> \r\n> 16. 
src/backend/utils/adt/pgstatfuncs.c - pg_stat_get_subscription_stats\r\n> \r\n> @@ -2424,7 +2424,11 @@\r\n> pg_stat_get_subscription_stats(PG_FUNCTION_ARGS)\r\n> INT8OID, -1, 0);\r\n> TupleDescInitEntry(tupdesc, (AttrNumber) 3, \"sync_error_count\",\r\n> INT8OID, -1, 0);\r\n> - TupleDescInitEntry(tupdesc, (AttrNumber) 4, \"stats_reset\",\r\n> + TupleDescInitEntry(tupdesc, (AttrNumber) 4, \"apply_commit_count\",\r\n> + INT8OID, -1, 0);\r\n> + TupleDescInitEntry(tupdesc, (AttrNumber) 5, \"sync_rollback_count\",\r\n> + INT8OID, -1, 0);\r\n> + TupleDescInitEntry(tupdesc, (AttrNumber) 6, \"stats_reset\",\r\n> TIMESTAMPTZOID, -1, 0);\r\n> \r\n> Bug? What is \"sync_rollback_count\"? Looks like a cut/paste error.\r\nFixed.\r\n\r\n\r\n> ~~~\r\n> \r\n> 17. src/include/pgstat.h\r\n> \r\n> +typedef struct PgStat_MsgSubscriptionXact { PgStat_MsgHdr m_hdr;\r\n> +\r\n> + /* determine the worker entry */\r\n> + Oid m_databaseid;\r\n> + Oid m_subid;\r\n> +\r\n> + PgStat_Counter apply_commit_count;\r\n> + PgStat_Counter apply_rollback_count;\r\n> +} PgStat_MsgSubscriptionXact;\r\n> \r\n> Is that m_databaseid even needed? I did not notice it getting used (e.g.\r\n> pgstat_recv_subscription_xact does not use it). Also, wasn't similar removed\r\n> from the other subscription error stats?\r\nFixed.\r\n\r\n\r\n\r\n> ~~~\r\n> \r\n> 18. src/include/pgstat.h\r\n> \r\n> + PgStat_Counter apply_rollback_count;\r\n> +} PgStat_MsgSubscriptionXact;\r\n> +\r\n> +\r\n> \r\n> The extra blank line can be removed.\r\nFixed.\r\n\r\n\r\n> ~~~\r\n> \r\n> 19. src/include/pgstat.h\r\n> \r\n> @@ -1177,6 +1201,8 @@ extern void pgstat_send_archiver(const char *xlog,\r\n> bool failed); extern void pgstat_send_bgwriter(void); extern void\r\n> pgstat_send_checkpointer(void); extern void pgstat_send_wal(bool force);\r\n> +extern void pgstat_report_subscription_xact(bool force);\r\n> +\r\n> \r\n> /* ----------\r\n> \r\n> The extra blank line can be removed.\r\nFixed.\r\n\r\n\r\n> ~~~\r\n> \r\n> 20. 
Test for the column names.\r\n> \r\n> The patch added a couple of new columns to statistics so I was surprised there\r\n> were no regression test updates needed for these? How can that be?\r\n> Shouldn't there be just one regression test that validates the view column\r\n> names are what they are expected to be?\r\nIn my earlier versions, I had some tests\r\nthat covered major types of transactions (e.g. stream commit,\r\nstream abort, rollback prepared...) and waited for accurate update of new counters.\r\nBut, I dropped those after an advice the result of such kind of tests\r\nwouldn't become stable fundamentally.\r\n\r\nAlso, I quickly checked other similar views(pg_stat_slru, pg_stat_wal_receiver)\r\ncommit logs, especially when they introduce columns.\r\nBut, I couldn't find column name validations.\r\nSo, I feel this is aligned.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi",
"msg_date": "Wed, 2 Mar 2022 01:21:03 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Wed, Mar 2, 2022 at 10:21 AM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n>\n> Also, I quickly checked other similar views(pg_stat_slru, pg_stat_wal_receiver)\n> commit logs, especially when they introduce columns.\n> But, I couldn't find column name validations.\n> So, I feel this is aligned.\n>\n\nI've looked at v26 patch and here are some random comments:\n\n+ /* determine the subscription entry */\n+ Oid m_subid;\n+\n+ PgStat_Counter apply_commit_count;\n+ PgStat_Counter apply_rollback_count;\n\nI think it's better to add the prefix \"m_\" to\napply_commit/rollback_count for consistency.\n\n---\n+/*\n+ * Increment the counter of commit for subscription statistics.\n+ */\n+static void\n+subscription_stats_incr_commit(void)\n+{\n+ Assert(OidIsValid(subStats.subid));\n+\n+ subStats.apply_commit_count++;\n+}\n+\n\nI think we don't need the Assert() here since it should not be a\nproblem even if subStats.subid is InvalidOid at least in this\nfunction.\n\nIf we remove it, we can remove both subscription_stats_incr_commit()\nand +subscription_stats_incr_rollback() as well.\n\n---\n+void\n+pgstat_report_subscription_xact(bool force)\n+{\n+ static TimestampTz last_report = 0;\n+ PgStat_MsgSubscriptionXact msg;\n+\n+ /* Bailout early if nothing to do */\n+ if (!OidIsValid(subStats.subid) ||\n+ (subStats.apply_commit_count == 0 &&\nsubStats.apply_rollback_count == 0))\n+ return;\n+\n\n+LogicalRepSubscriptionStats subStats =\n+{\n+ .subid = InvalidOid,\n+ .apply_commit_count = 0,\n+ .apply_rollback_count = 0,\n+};\n\nDo we need subStats.subid? I think we can pass MySubscription->oid (or\nMyLogicalRepWorker->subid) to pgstat_report_subscription_xact() along\nwith the pointer of the statistics (subStats). That way, we don't need\nto expose subStats.\n\nAlso, I think it's better to add \"Xact\" or something to the struct\nname. 
For example, SubscriptionXactStats.\n\n---\n+\n+typedef struct LogicalRepSubscriptionStats\n+{\n+ Oid subid;\n+\n+ int64 apply_commit_count;\n+ int64 apply_rollback_count;\n+} LogicalRepSubscriptionStats;\n\nWe need a description for this struct.\n\nProbably it is better to declare it in logicalworker.h instead so that\npgstat.c includes it instead of worker_internal.h? worker_internal.h\nis the header file shared by logical replication workers such as apply\nworker, tablesync worker, and launcher. So it might not be advisable\nto include it in pgstat.c.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 2 Mar 2022 14:18:19 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "Hi,\r\n\r\nA comments on the v26 patch.\r\n\r\nThe following document about pg_stat_subscription_stats view only says that\r\n\"showing statistics about errors\", should we add something about transactions\r\nhere? \r\n\r\n <row>\r\n <entry><structname>pg_stat_subscription_stats</structname><indexterm><primary>pg_stat_subscription_stats</primary></indexterm></entry>\r\n <entry>One row per subscription, showing statistics about errors.\r\n See <link linkend=\"monitoring-pg-stat-subscription-stats\">\r\n <structname>pg_stat_subscription_stats</structname></link> for details.\r\n </entry>\r\n </row>\r\n\r\n\r\nI noticed that the v24 patch has some changes about the description of this\r\nview. Maybe we can modify to \"showing statistics about errors and transactions\".\r\n\r\nRegards,\r\nShi yu\r\n",
"msg_date": "Wed, 2 Mar 2022 08:28:34 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Wednesday, March 2, 2022 2:18 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> On Wed, Mar 2, 2022 at 10:21 AM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> > Also, I quickly checked other similar views(pg_stat_slru,\r\n> > pg_stat_wal_receiver) commit logs, especially when they introduce columns.\r\n> > But, I couldn't find column name validations.\r\n> > So, I feel this is aligned.\r\n> >\r\n> \r\n> I've looked at v26 patch and here are some random comments:\r\nHi, thank you for reviewing !\r\n\r\n\r\n> + /* determine the subscription entry */\r\n> + Oid m_subid;\r\n> +\r\n> + PgStat_Counter apply_commit_count;\r\n> + PgStat_Counter apply_rollback_count;\r\n> \r\n> I think it's better to add the prefix \"m_\" to apply_commit/rollback_count for\r\n> consistency.\r\nFixed.\r\n\r\n \r\n> ---\r\n> +/*\r\n> + * Increment the counter of commit for subscription statistics.\r\n> + */\r\n> +static void\r\n> +subscription_stats_incr_commit(void)\r\n> +{\r\n> + Assert(OidIsValid(subStats.subid));\r\n> +\r\n> + subStats.apply_commit_count++;\r\n> +}\r\n> +\r\n> \r\n> I think we don't need the Assert() here since it should not be a problem even if\r\n> subStats.subid is InvalidOid at least in this function.\r\n> \r\n> If we remove it, we can remove both subscription_stats_incr_commit() and\r\n> +subscription_stats_incr_rollback() as well.\r\nRemoved the Assert() from both functions.\r\n\r\n\r\n> ---\r\n> +void\r\n> +pgstat_report_subscription_xact(bool force) {\r\n> + static TimestampTz last_report = 0;\r\n> + PgStat_MsgSubscriptionXact msg;\r\n> +\r\n> + /* Bailout early if nothing to do */\r\n> + if (!OidIsValid(subStats.subid) ||\r\n> + (subStats.apply_commit_count == 0 &&\r\n> subStats.apply_rollback_count == 0))\r\n> + return;\r\n> +\r\n> \r\n> +LogicalRepSubscriptionStats subStats =\r\n> +{\r\n> + .subid = InvalidOid,\r\n> + .apply_commit_count = 0,\r\n> + .apply_rollback_count = 0,\r\n> +};\r\n> \r\n> Do we need 
subStats.subid? I think we can pass MySubscription->oid (or\r\n> MyLogicalRepWorker->subid) to pgstat_report_subscription_xact() along\r\n> with the pointer of the statistics (subStats). That way, we don't need to expose\r\n> subStats.\r\nRemoved the subStats.subid. Also, now I pass the oid to the\r\npgstat_report_subscription_xact with the pointer of the statistics.\r\n\r\n> Also, I think it's better to add \"Xact\" or something to the struct name. For\r\n> example, SubscriptionXactStats.\r\nRenamed.\r\n\r\n> ---\r\n> +\r\n> +typedef struct LogicalRepSubscriptionStats {\r\n> + Oid subid;\r\n> +\r\n> + int64 apply_commit_count;\r\n> + int64 apply_rollback_count;\r\n> +} LogicalRepSubscriptionStats;\r\n> \r\n> We need a description for this struct.\r\n> \r\n> Probably it is better to declare it in logicalworker.h instead so that pgstat.c\r\n> includes it instead of worker_internal.h? worker_internal.h is the header file\r\n> shared by logical replication workers such as apply worker, tablesync worker,\r\n> and launcher. So it might not be advisable to include it in pgstat.c.\r\nChanged the definition place to logicalworker.h\r\nand added some explanations for it.\r\n\r\nAttached the updated v27.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi",
"msg_date": "Thu, 3 Mar 2022 03:28:19 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Wednesday, March 2, 2022 5:29 PM Shi, Yu/侍 雨 <shiy.fnst@fujitsu.com> wrote:\r\n> A comments on the v26 patch.\r\nThank you for checking the patch !\r\n\r\n> \r\n> The following document about pg_stat_subscription_stats view only says that\r\n> \"showing statistics about errors\", should we add something about transactions\r\n> here?\r\n> \r\n> <row>\r\n> \r\n> <entry><structname>pg_stat_subscription_stats</structname><indexterm\r\n> ><primary>pg_stat_subscription_stats</primary></indexterm></entry>\r\n> <entry>One row per subscription, showing statistics about errors.\r\n> See <link linkend=\"monitoring-pg-stat-subscription-stats\">\r\n> <structname>pg_stat_subscription_stats</structname></link> for\r\n> details.\r\n> </entry>\r\n> </row>\r\n> \r\n> \r\n> I noticed that the v24 patch has some changes about the description of this\r\n> view. Maybe we can modify to \"showing statistics about errors and\r\n> transactions\".\r\nYou are right. Fixed.\r\n\r\nNew patch v27 that incorporated your comments is shared in [1].\r\nKindly have a look at it.\r\n\r\n\r\n[1] - https://www.postgresql.org/message-id/TYCPR01MB837345FB72E3CE9C827AF42CED049%40TYCPR01MB8373.jpnprd01.prod.outlook.com\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Thu, 3 Mar 2022 03:33:41 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Thu, Mar 3, 2022 at 8:58 AM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n\nThis patch introduces two new subscription statistics columns\n(apply_commit_count and apply_rollback_count) to the\npg_stat_subscription_stats view for counting cumulative transactions\ncommits/rollbacks for a particular subscription. Now, users can\nalready see the total number of xacts committed/rolled back in a\nparticular database via pg_stat_database, so this can be considered\nduplicate information. OTOH, some users might be interested in the\nstats for a subscription to know how many transactions are\nsuccessfully applied during replication because the information in\npg_stat_database also includes the operations that happened on the\nnode.\n\nI am not sure if it is worth adding this additional information or how\nuseful it will be for users. Does anyone else want to weigh in on\nthis?\n\nIf nobody else sees value in this then I feel it is better to mark\nthis patch as Returned with feedback or Rejected. We can come back to\nit later if we see more demand for this.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 24 Mar 2022 09:00:12 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "On Thu, Mar 24, 2022 at 12:30 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Mar 3, 2022 at 8:58 AM osumi.takamichi@fujitsu.com\n> <osumi.takamichi@fujitsu.com> wrote:\n> >\n>\n> This patch introduces two new subscription statistics columns\n> (apply_commit_count and apply_rollback_count) to the\n> pg_stat_subscription_stats view for counting cumulative transactions\n> commits/rollbacks for a particular subscription. Now, users can\n> already see the total number of xacts committed/rolled back in a\n> particular database via pg_stat_database, so this can be considered\n> duplicate information.\n\nRight.\n\n> OTOH, some users might be interested in the\n> stats for a subscription to know how many transactions are\n> successfully applied during replication because the information in\n> pg_stat_database also includes the operations that happened on the\n> node.\n\nI'm not sure how useful this information is in practice. What can we\nuse this information for?\n\nIIRC the original purpose of this proposed feature is to provide a way\nfor the users to understand the size and count of the succeeded and\nfailed transactions. At some point, the patch includes the statistics\nof only the counts of commits, rollbacks, and errors. If new\nstatistics also include the size, it might be useful to achieve the\noriginal goal. But I’m concerned that adding only apply_commit_count\nand apply_rollback_count ends up adding the duplicate statistics with\nno concrete use cases.\n\n> I am not sure if it is worth adding this additional information or how\n> useful it will be for users. Does anyone else want to weigh in on\n> this?\n>\n> If nobody else sees value in this then I feel it is better to mark\n> this patch as Returned with feedback or Rejected. We can come back to\n> it later if we see more demand for this.\n\nMarking as Returned with feedback makes sense to me.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 25 Mar 2022 14:36:28 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed transaction statistics to measure the logical replication\n progress"
},
{
"msg_contents": "Hi\r\n\r\nOn Friday, March 25, 2022 2:36 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> On Thu, Mar 24, 2022 at 12:30 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> >\r\n> > On Thu, Mar 3, 2022 at 8:58 AM osumi.takamichi@fujitsu.com\r\n> > <osumi.takamichi@fujitsu.com> wrote:\r\n> > >\r\n> >\r\n> > This patch introduces two new subscription statistics columns\r\n> > (apply_commit_count and apply_rollback_count) to the\r\n> > pg_stat_subscription_stats view for counting cumulative transactions\r\n> > commits/rollbacks for a particular subscription. Now, users can\r\n> > already see the total number of xacts committed/rolled back in a\r\n> > particular database via pg_stat_database, so this can be considered\r\n> > duplicate information.\r\n> \r\n> Right.\r\n...\r\n> > I am not sure if it is worth adding this additional information or how\r\n> > useful it will be for users. Does anyone else want to weigh in on\r\n> > this?\r\n> >\r\n> > If nobody else sees value in this then I feel it is better to mark\r\n> > this patch as Returned with feedback or Rejected. We can come back to\r\n> > it later if we see more demand for this.\r\n> \r\n> Marking as Returned with feedback makes sense to me.\r\nOK. Thank you so much for sharing your opinions, Sawada-san and Amit-san.\r\n\r\nI changed the status of this entry to \"Returned with feedback\" accordingly.\r\n\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Mon, 28 Mar 2022 06:10:31 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Failed transaction statistics to measure the logical replication\n progress"
}
] |
[
{
"msg_contents": "Hi all,\nI am a new maintainer of PostgreSQL in Fedora and RHEL. Currently, I am\nsolving usage SHA-1 for key-derivation in pgcrypto (the s2k-digest-algo).\nIn the documentation <https://www.postgresql.org/docs/8.3/pgcrypto.html>, I\nhave found that there are options SHA-1 or MD5. Unfortunately, none of\nthese algorithms are FIPS compliant. So I would like to ask if exists a\npossibility to add or enable support for some type of stronger hash\nalgorithm?\n\nThanks\n -Filip-",
"msg_date": "Thu, 8 Jul 2021 14:33:33 +0200",
"msg_from": "Filip Janus <fjanus@redhat.com>",
"msg_from_op": true,
"msg_subject": "SHA-1 FIPS - compliance"
},
{
"msg_contents": "On Thu, Jul 8, 2021 at 02:33:33PM +0200, Filip Janus wrote:\n> Hi all,\n> I am a new maintainer of PostgreSQL in Fedora and RHEL. Currently, I am solving\n> usage SHA-1 for key-derivation in pgcrypto (the s2k-digest-algo). In the\n> documentation, I have found that there are options SHA-1 or MD5. Unfortunately,\n> none of these algorithms are FIPS compliant. So I would like to ask if exists a\n> possibility to add or enable support for some type of stronger hash algorithm?\n\nI don't know of any official way to disable them, but I do know that PG\n14 will use a different set of algorithms that are more FIPS-compliant\nbecause we rely more on the OpenSSL for its implementation (or\nblockage).\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Thu, 8 Jul 2021 09:58:35 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: SHA-1 FIPS - compliance"
},
{
"msg_contents": "On Thu, Jul 08, 2021 at 09:58:35AM -0400, Bruce Momjian wrote:\n> On Thu, Jul 8, 2021 at 02:33:33PM +0200, Filip Janus wrote:\n>> I am a new maintainer of PostgreSQL in Fedora and RHEL. Currently, I am solving\n>> usage SHA-1 for key-derivation in pgcrypto (the s2k-digest-algo). In the\n>> documentation, I have found that there are options SHA-1 or MD5. Unfortunately,\n>> none of these algorithms are FIPS compliant. So I would like to ask if exists a\n>> possibility to add or enable support for some type of stronger hash algorithm?\n\nPatches and improvements are always welcome.\n\n> I don't know of any official way to disable them, but I do know that PG\n> 14 will use a different set of algorithms that are more FIPS-compliant\n> because we rely more on the OpenSSL for its implementation (or\n> blockage).\n\nThe set of algorithms supported for pgcrypto does not change. The\nonly thing that does change is that, by going through the EVP layer\ninstead of the low-level cryptohash APIs, OpenSSL will not do a blind\nexit() when using algos that are not FIPS compliant (MD5 and SHA-1)\nwhen linking to OpenSSL 1.0.2 if FIPS is enabled at system or process\nlevel.\n--\nMichael",
"msg_date": "Fri, 20 Aug 2021 10:02:14 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: SHA-1 FIPS - compliance"
}
] |
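An aside on the digest discussion in the thread above: the FIPS gating that affects MD5 and SHA-1 in pgcrypto can be observed from any OpenSSL-backed client. The following minimal Python sketch is an illustration only (not from the thread); it assumes nothing beyond the standard `hashlib` module, which delegates digest construction to the linked OpenSSL build.

```python
# Probe which digest algorithms the local OpenSSL build exposes.
# On a FIPS-enforcing build, constructing md5/sha1 may raise
# ValueError; sha256 and stronger digests should always work.
import hashlib

def probe(name: str) -> bool:
    """Return True if the named digest can be constructed."""
    try:
        hashlib.new(name)
        return True
    except ValueError:
        return False

available = {name: probe(name) for name in ("md5", "sha1", "sha256", "sha512")}
print(available)

# sha256 is in hashlib's guaranteed set, so this holds on any build:
assert available["sha256"] is True
```

On a FIPS-enforcing OpenSSL build the `md5`/`sha1` entries typically come back `False`; getting a catchable error instead of a hard exit mirrors the EVP-versus-low-level API difference described in the thread for OpenSSL 1.0.2.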
[
{
"msg_contents": "Hi all,\n\nWhen reading the doc of ALTER SUBSCRIPTION I realized that 'refresh\noptions' in the following paragraph is not tagged:\n\n---\nAdditionally, refresh options as described under REFRESH PUBLICATION\nmay be specified, except in the case of DROP PUBLICATION.\n---\n\nWhen I read it for the first time, I got confused because we actually\nhave the 'refresh' option and this description in the paragraph of the\n'refresh' option. I think we can improve it by changing to\n'<replaceable>refresh_option</replaceable>'. Thoughts?\n\nThe patch is attached.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Thu, 8 Jul 2021 22:00:18 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Small documentation improvement for ALTER SUBSCRIPTION"
},
{
"msg_contents": "> On 8 Jul 2021, at 15:00, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n\n> I think we can improve it by changing to\n> '<replaceable>refresh_option</replaceable>'. Thoughts?\n\nMy first thought was that the existing wording is clearer, referring to\n“options to refresh”. But thinking on it more, it’s easy to see someone\nconfusing the options part as referring to the (bool) “option” to refresh\nrather than refresh_option. I think your version is an improvement.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Thu, 8 Jul 2021 15:14:48 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Small documentation improvement for ALTER SUBSCRIPTION"
},
{
"msg_contents": "Hi,\n\nOn Thu, Jul 8, 2021 at 10:14 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 8 Jul 2021, at 15:00, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> > I think we can improve it by changing to\n> > '<replaceable>refresh_option</replaceable>'. Thoughts?\n>\n> My first thought was that the existing wording is clearer, referring to\n> “options to refresh”. But thinking on it more, it’s easy to see someone\n> confusing the options part as referring to the (bool) “option” to refresh\n> rather than refresh_option. I think your version is an improvement.\n\nThanks for your comments!\n\nI've added this patch to the next commitfest so as not to forget.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 13 Jul 2021 10:05:08 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Small documentation improvement for ALTER SUBSCRIPTION"
},
{
"msg_contents": "On Thu, Jul 8, 2021 at 6:31 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> Hi all,\n>\n> When reading the doc of ALTER SUBSCRIPTION I realized that 'refresh\n> options' in the following paragraph is not tagged:\n>\n> ---\n> Additionally, refresh options as described under REFRESH PUBLICATION\n> may be specified, except in the case of DROP PUBLICATION.\n> ---\n>\n> When I read it for the first time, I got confused because we actually\n> have the 'refresh' option and this description in the paragraph of the\n> 'refresh' option. I think we can improve it by changing to\n> '<replaceable>refresh_option</replaceable>'. Thoughts?\n>\n\nI see that one can get confused but how about changing it to\n\"Additionally, refresh options as described under <literal>REFRESH\nPUBLICATION</literal> (<replaceable>refresh_option</replaceable>) may\nbe specified,..\"? I think keeping \"refresh options\" in the text would\nbe good because there could be multiple such options.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 7 Aug 2021 12:03:37 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Small documentation improvement for ALTER SUBSCRIPTION"
},
{
"msg_contents": "On Sat, Aug 7, 2021 at 4:33 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Jul 8, 2021 at 6:31 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > Hi all,\n> >\n> > When reading the doc of ALTER SUBSCRIPTION I realized that 'refresh\n> > options' in the following paragraph is not tagged:\n> >\n> > ---\n> > Additionally, refresh options as described under REFRESH PUBLICATION\n> > may be specified, except in the case of DROP PUBLICATION.\n> > ---\n> >\n> > When I read it for the first time, I got confused because we actually\n> > have the 'refresh' option and this description in the paragraph of the\n> > 'refresh' option. I think we can improve it by changing to\n> > '<replaceable>refresh_option</replaceable>'. Thoughts?\n> >\n>\n> I see that one can get confused but how about changing it to\n> \"Additionally, refresh options as described under <literal>REFRESH\n> PUBLICATION</literal> (<replaceable>refresh_option</replaceable>) may\n> be specified,..\"? I think keeping \"refresh options\" in the text would\n> be good because there could be multiple such options.\n>\n\nI feel like it would be better to reword it in some way that avoids\nusing parentheses because they look like part of the syntax instead of\njust part of the sentence.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n",
"msg_date": "Sun, 8 Aug 2021 14:50:53 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Small documentation improvement for ALTER SUBSCRIPTION"
},
{
"msg_contents": "On Sun, Aug 8, 2021 at 10:21 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Sat, Aug 7, 2021 at 4:33 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Jul 8, 2021 at 6:31 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > Hi all,\n> > >\n> > > When reading the doc of ALTER SUBSCRIPTION I realized that 'refresh\n> > > options' in the following paragraph is not tagged:\n> > >\n> > > ---\n> > > Additionally, refresh options as described under REFRESH PUBLICATION\n> > > may be specified, except in the case of DROP PUBLICATION.\n> > > ---\n> > >\n> > > When I read it for the first time, I got confused because we actually\n> > > have the 'refresh' option and this description in the paragraph of the\n> > > 'refresh' option. I think we can improve it by changing to\n> > > '<replaceable>refresh_option</replaceable>'. Thoughts?\n> > >\n> >\n> > I see that one can get confused but how about changing it to\n> > \"Additionally, refresh options as described under <literal>REFRESH\n> > PUBLICATION</literal> (<replaceable>refresh_option</replaceable>) may\n> > be specified,..\"? I think keeping \"refresh options\" in the text would\n> > be good because there could be multiple such options.\n> >\n>\n> I feel like it would be better to reword it in some way that avoids\n> using parentheses because they look like part of the syntax instead of\n> just part of the sentence.\n>\n\nFair enough, feel free to propose if you find something better or if\nyou think the current text in the docs is good.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 9 Aug 2021 08:16:43 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Small documentation improvement for ALTER SUBSCRIPTION"
},
{
"msg_contents": "On Mon, Aug 9, 2021 at 12:46 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sun, Aug 8, 2021 at 10:21 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > On Sat, Aug 7, 2021 at 4:33 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Thu, Jul 8, 2021 at 6:31 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > Hi all,\n> > > >\n> > > > When reading the doc of ALTER SUBSCRIPTION I realized that 'refresh\n> > > > options' in the following paragraph is not tagged:\n> > > >\n> > > > ---\n> > > > Additionally, refresh options as described under REFRESH PUBLICATION\n> > > > may be specified, except in the case of DROP PUBLICATION.\n> > > > ---\n> > > >\n> > > > When I read it for the first time, I got confused because we actually\n> > > > have the 'refresh' option and this description in the paragraph of the\n> > > > 'refresh' option. I think we can improve it by changing to\n> > > > '<replaceable>refresh_option</replaceable>'. Thoughts?\n> > > >\n> > >\n> > > I see that one can get confused but how about changing it to\n> > > \"Additionally, refresh options as described under <literal>REFRESH\n> > > PUBLICATION</literal> (<replaceable>refresh_option</replaceable>) may\n> > > be specified,..\"? I think keeping \"refresh options\" in the text would\n> > > be good because there could be multiple such options.\n> > >\n> >\n> > I feel like it would be better to reword it in some way that avoids\n> > using parentheses because they look like part of the syntax instead of\n> > just part of the sentence.\n> >\n>\n> Fair enough, feel free to propose if you find something better or if\n> you think the current text in the docs is good.\n>\n\nIMO just the same as your suggestion but without the parens would be good. e.g.\n\n\"Additionally, refresh options as described under <literal>REFRESH\nPUBLICATION</literal> <replaceable>refresh_option</replaceable> may be\nspecified,..\"\n\n\n",
"msg_date": "Mon, 9 Aug 2021 14:01:40 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Small documentation improvement for ALTER SUBSCRIPTION"
},
{
"msg_contents": "On Mon, Aug 9, 2021 at 1:01 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Mon, Aug 9, 2021 at 12:46 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Sun, Aug 8, 2021 at 10:21 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> > > On Sat, Aug 7, 2021 at 4:33 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Thu, Jul 8, 2021 at 6:31 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > >\n> > > > > Hi all,\n> > > > >\n> > > > > When reading the doc of ALTER SUBSCRIPTION I realized that 'refresh\n> > > > > options' in the following paragraph is not tagged:\n> > > > >\n> > > > > ---\n> > > > > Additionally, refresh options as described under REFRESH PUBLICATION\n> > > > > may be specified, except in the case of DROP PUBLICATION.\n> > > > > ---\n> > > > >\n> > > > > When I read it for the first time, I got confused because we actually\n> > > > > have the 'refresh' option and this description in the paragraph of the\n> > > > > 'refresh' option. I think we can improve it by changing to\n> > > > > '<replaceable>refresh_option</replaceable>'. Thoughts?\n> > > > >\n> > > >\n> > > > I see that one can get confused but how about changing it to\n> > > > \"Additionally, refresh options as described under <literal>REFRESH\n> > > > PUBLICATION</literal> (<replaceable>refresh_option</replaceable>) may\n> > > > be specified,..\"? I think keeping \"refresh options\" in the text would\n> > > > be good because there could be multiple such options.\n> > > >\n> > >\n> > > I feel like it would be better to reword it in some way that avoids\n> > > using parentheses because they look like part of the syntax instead of\n> > > just part of the sentence.\n> > >\n> >\n> > Fair enough, feel free to propose if you find something better or if\n> > you think the current text in the docs is good.\n> >\n>\n\nThank you for the comments!\n\n> IMO just the same as your suggestion but without the parens would be good. e.g.\n>\n> \"Additionally, refresh options as described under <literal>REFRESH\n> PUBLICATION</literal> <replaceable>refresh_option</replaceable> may be\n> specified,..\"\n\nBut \"REFRESH PUBLICATION refresh_option\" seems wrong in terms of SQL\nsyntax, not?\n\nGiven there could be multiple options how about using\n\"<replaceable>refresh_options</replaceable>\"? That is, the sentence\nwill be:\n\nAdditionally, <replaceable>refresh_options</replaceable> as described\nunder <literal>REFRESH PUBLICATION</literal> may be specified,\nexcept in the case of <literal>DROP PUBLICATION</literal>.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 10 Aug 2021 10:01:02 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Small documentation improvement for ALTER SUBSCRIPTION"
},
{
"msg_contents": "On Tue, Aug 10, 2021 at 11:01 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Aug 9, 2021 at 1:01 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > On Mon, Aug 9, 2021 at 12:46 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Sun, Aug 8, 2021 at 10:21 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > > >\n> > > > On Sat, Aug 7, 2021 at 4:33 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > > On Thu, Jul 8, 2021 at 6:31 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > > >\n> > > > > > Hi all,\n> > > > > >\n> > > > > > When reading the doc of ALTER SUBSCRIPTION I realized that 'refresh\n> > > > > > options' in the following paragraph is not tagged:\n> > > > > >\n> > > > > > ---\n> > > > > > Additionally, refresh options as described under REFRESH PUBLICATION\n> > > > > > may be specified, except in the case of DROP PUBLICATION.\n> > > > > > ---\n> > > > > >\n> > > > > > When I read it for the first time, I got confused because we actually\n> > > > > > have the 'refresh' option and this description in the paragraph of the\n> > > > > > 'refresh' option. I think we can improve it by changing to\n> > > > > > '<replaceable>refresh_option</replaceable>'. Thoughts?\n> > > > > >\n> > > > >\n> > > > > I see that one can get confused but how about changing it to\n> > > > > \"Additionally, refresh options as described under <literal>REFRESH\n> > > > > PUBLICATION</literal> (<replaceable>refresh_option</replaceable>) may\n> > > > > be specified,..\"? I think keeping \"refresh options\" in the text would\n> > > > > be good because there could be multiple such options.\n> > > > >\n> > > >\n> > > > I feel like it would be better to reword it in some way that avoids\n> > > > using parentheses because they look like part of the syntax instead of\n> > > > just part of the sentence.\n> > > >\n> > >\n> > > Fair enough, feel free to propose if you find something better or if\n> > > you think the current text in the docs is good.\n> > >\n> >\n>\n> Thank you for the comments!\n>\n> > IMO just the same as your suggestion but without the parens would be good. e.g.\n> >\n> > \"Additionally, refresh options as described under <literal>REFRESH\n> > PUBLICATION</literal> <replaceable>refresh_option</replaceable> may be\n> > specified,..\"\n>\n> But \"REFRESH PUBLICATION refresh_option\" seems wrong in terms of SQL\n> syntax, not?\n>\n\nBecause the sentence says \"... as described under ...\" I thought it\nwas clear enough it was referring to the documentation below and not\nthe SQL syntax.\n\n> Given there could be multiple options how about using\n> \"<replaceable>refresh_options</replaceable>\"? That is, the sentence\n> will be:\n>\n> Additionally, <replaceable>refresh_options</replaceable> as described\n> under <literal>REFRESH PUBLICATION</literal> may be specified,\n> except in the case of <literal>DROP PUBLICATION</literal>.\n>\n\n+1 LGTM\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n",
"msg_date": "Tue, 10 Aug 2021 12:17:07 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Small documentation improvement for ALTER SUBSCRIPTION"
},
{
"msg_contents": "On Tue, Aug 10, 2021 at 6:31 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Aug 9, 2021 at 1:01 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > On Mon, Aug 9, 2021 at 12:46 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> But \"REFRESH PUBLICATION refresh_option\" seems wrong in terms of SQL\n> syntax, not?\n>\n> Given there could be multiple options how about using\n> \"<replaceable>refresh_options</replaceable>\"? That is, the sentence\n> will be:\n>\n> Additionally, <replaceable>refresh_options</replaceable> as described\n> under <literal>REFRESH PUBLICATION</literal> may be specified,\n> except in the case of <literal>DROP PUBLICATION</literal>.\n>\n\nNormally (at least on this doc page), we use this tag for some defined\noption, syntax and as refresh_options is none of them, it would look a\nbit awkward.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 10 Aug 2021 08:57:59 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Small documentation improvement for ALTER SUBSCRIPTION"
},
{
"msg_contents": "On Tue, Aug 10, 2021 at 12:28 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Aug 10, 2021 at 6:31 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Mon, Aug 9, 2021 at 1:01 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> > > On Mon, Aug 9, 2021 at 12:46 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > But \"REFRESH PUBLICATION refresh_option\" seems wrong in terms of SQL\n> > syntax, not?\n> >\n> > Given there could be multiple options how about using\n> > \"<replaceable>refresh_options</replaceable>\"? That is, the sentence\n> > will be:\n> >\n> > Additionally, <replaceable>refresh_options</replaceable> as described\n> > under <literal>REFRESH PUBLICATION</literal> may be specified,\n> > except in the case of <literal>DROP PUBLICATION</literal>.\n> >\n>\n> Normally (at least on this doc page), we use this tag for some defined\n> option, syntax and as refresh_options is none of them, it would look a\n> bit awkward.\n\nIndeed.\n\nThinking more the idea proposed by Peter Smith, it looks unnatural to\nme, especially the part of \"REFRESH PUBLICATION refresh_option\":\n\nAdditionally, refresh options as described\nunder <literal>REFRESH PUBLICATION</literal>\n<replaceable>refresh_option</replaceable> may be specified,\nexcept in the case of <literal>DROP PUBLICATION</literal>.\n\nAs an alternative idea, how about using the \"refresh_option of REFRESH\nPUBLICATION\" instead ? That is,\n\nAdditionally, refresh options as described in\n<replaceable>refresh_option</replaceable> of\n<literal>REFRESH PUBLICATION</literal> may be specified,\nexcept in the case of <literal>DROP PUBLICATION</literal>.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 11 Aug 2021 16:57:51 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Small documentation improvement for ALTER SUBSCRIPTION"
},
{
"msg_contents": "> On 11 Aug 2021, at 09:57, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n\n> Additionally, refresh options as described in\n> <replaceable>refresh_option</replaceable> of\n> <literal>REFRESH PUBLICATION</literal> may be specified,\n> except in the case of <literal>DROP PUBLICATION</literal>.\n\nSince this paragraph is under the literal option “refresh”, which takes a\nvalue, I still find your original patch to be the clearest.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Wed, 11 Aug 2021 10:42:09 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Small documentation improvement for ALTER SUBSCRIPTION"
},
{
"msg_contents": "On Wed, Aug 11, 2021 at 5:42 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 11 Aug 2021, at 09:57, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> > Additionally, refresh options as described in\n> > <replaceable>refresh_option</replaceable> of\n> > <literal>REFRESH PUBLICATION</literal> may be specified,\n> > except in the case of <literal>DROP PUBLICATION</literal>.\n>\n> Since this paragraph is under the literal option “refresh”, which takes a\n> value, I still find your original patch to be the clearest.\n\nYeah, I prefer my original patch over this idea. On the other hand, I\ncan see the point of review comment on it that Amit pointed out[1].\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CAA4eK1KaWwUSkDEKPseVY-z00kQJfpfVFdJCXPv9_CrwVZPMhg%40mail.gmail.com\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 12 Aug 2021 11:52:32 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Small documentation improvement for ALTER SUBSCRIPTION"
},
{
"msg_contents": "On Thu, Aug 12, 2021 at 12:53 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> Yeah, I prefer my original patch over this idea. On the other hand, I\n> can see the point of review comment on it that Amit pointed out[1].\n>\n> Regards,\n>\n> [1] https://www.postgresql.org/message-id/CAA4eK1KaWwUSkDEKPseVY-z00kQJfpfVFdJCXPv9_CrwVZPMhg%40mail.gmail.com\n>\n\nPersonally, I don't really think the wording that results from the\noriginal patch is great, because it doesn't give the impression of\nmultiple options.\nI prefer something like:\n\nAdditionally, refresh options may be specified, as described under\n<literal>REFRESH PUBLICATION</literal> supported\n<replaceable>refresh_option</replaceable> values, except in the case\nof <literal>DROP PUBLICATION</literal>.\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 12 Aug 2021 14:19:36 +1000",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Small documentation improvement for ALTER SUBSCRIPTION"
},
{
"msg_contents": "On 12.08.21 04:52, Masahiko Sawada wrote:\n> On Wed, Aug 11, 2021 at 5:42 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>>\n>>> On 11 Aug 2021, at 09:57, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>>\n>>> Additionally, refresh options as described in\n>>> <replaceable>refresh_option</replaceable> of\n>>> <literal>REFRESH PUBLICATION</literal> may be specified,\n>>> except in the case of <literal>DROP PUBLICATION</literal>.\n>>\n>> Since this paragraph is under the literal option “refresh”, which takes a\n>> value, I still find your original patch to be the clearest.\n> \n> Yeah, I prefer my original patch over this idea. On the other hand, I\n> can see the point of review comment on it that Amit pointed out[1].\n\nHow about this:\n\n- Additionally, refresh options as described\n- under <literal>REFRESH PUBLICATION</literal> may be specified.\n+ Additionally, the options described under <literal>REFRESH\n+ PUBLICATION</literal> may be specified, to control the implicit \nrefresh\n+ operation.\n\n\n",
"msg_date": "Tue, 7 Sep 2021 13:36:43 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Small documentation improvement for ALTER SUBSCRIPTION"
},
{
"msg_contents": "> On 7 Sep 2021, at 13:36, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n> On 12.08.21 04:52, Masahiko Sawada wrote:\n>> On Wed, Aug 11, 2021 at 5:42 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>>> \n>>>> On 11 Aug 2021, at 09:57, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>>> \n>>>> Additionally, refresh options as described in\n>>>> <replaceable>refresh_option</replaceable> of\n>>>> <literal>REFRESH PUBLICATION</literal> may be specified,\n>>>> except in the case of <literal>DROP PUBLICATION</literal>.\n>>> \n>>> Since this paragraph is under the literal option “refresh”, which takes a\n>>> value, I still find your original patch to be the clearest.\n>> Yeah, I prefer my original patch over this idea. On the other hand, I\n>> can see the point of review comment on it that Amit pointed out[1].\n> \n> How about this:\n> \n> - Additionally, refresh options as described\n> - under <literal>REFRESH PUBLICATION</literal> may be specified.\n> + Additionally, the options described under <literal>REFRESH\n> + PUBLICATION</literal> may be specified, to control the implicit refresh\n> + operation.\n\nLGTM.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Tue, 7 Sep 2021 14:01:14 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Small documentation improvement for ALTER SUBSCRIPTION"
},
{
"msg_contents": "On Tue, Sep 7, 2021 at 9:01 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 7 Sep 2021, at 13:36, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> >\n> > On 12.08.21 04:52, Masahiko Sawada wrote:\n> >> On Wed, Aug 11, 2021 at 5:42 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> >>>\n> >>>> On 11 Aug 2021, at 09:57, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >>>\n> >>>> Additionally, refresh options as described in\n> >>>> <replaceable>refresh_option</replaceable> of\n> >>>> <literal>REFRESH PUBLICATION</literal> may be specified,\n> >>>> except in the case of <literal>DROP PUBLICATION</literal>.\n> >>>\n> >>> Since this paragraph is under the literal option “refresh”, which takes a\n> >>> value, I still find your original patch to be the clearest.\n> >> Yeah, I prefer my original patch over this idea. On the other hand, I\n> >> can see the point of review comment on it that Amit pointed out[1].\n> >\n> > How about this:\n> >\n> > - Additionally, refresh options as described\n> > - under <literal>REFRESH PUBLICATION</literal> may be specified.\n> > + Additionally, the options described under <literal>REFRESH\n> > + PUBLICATION</literal> may be specified, to control the implicit refresh\n> > + operation.\n>\n> LGTM.\n\n+1\n\nAttached the patch.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Wed, 8 Sep 2021 20:41:13 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Small documentation improvement for ALTER SUBSCRIPTION"
},
{
"msg_contents": "On Wed, Sep 8, 2021 at 5:11 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Sep 7, 2021 at 9:01 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> >\n> > > On 7 Sep 2021, at 13:36, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> > >\n> > > On 12.08.21 04:52, Masahiko Sawada wrote:\n> > >> On Wed, Aug 11, 2021 at 5:42 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> > >>>\n> > >>>> On 11 Aug 2021, at 09:57, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >>>\n> > >>>> Additionally, refresh options as described in\n> > >>>> <replaceable>refresh_option</replaceable> of\n> > >>>> <literal>REFRESH PUBLICATION</literal> may be specified,\n> > >>>> except in the case of <literal>DROP PUBLICATION</literal>.\n> > >>>\n> > >>> Since this paragraph is under the literal option “refresh”, which takes a\n> > >>> value, I still find your original patch to be the clearest.\n> > >> Yeah, I prefer my original patch over this idea. On the other hand, I\n> > >> can see the point of review comment on it that Amit pointed out[1].\n> > >\n> > > How about this:\n> > >\n> > > - Additionally, refresh options as described\n> > > - under <literal>REFRESH PUBLICATION</literal> may be specified.\n> > > + Additionally, the options described under <literal>REFRESH\n> > > + PUBLICATION</literal> may be specified, to control the implicit refresh\n> > > + operation.\n> >\n> > LGTM.\n>\n> +1\n>\n> Attached the patch.\n>\n\nLGTM as well. Peter E., Daniel, does any one of you is intending to\npush this? If not, I can take care of this.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 14 Sep 2021 15:27:30 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Small documentation improvement for ALTER SUBSCRIPTION"
},
{
"msg_contents": "> On 14 Sep 2021, at 11:57, Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> LGTM as well. Peter E., Daniel, does any one of you is intending to\n> push this? If not, I can take care of this.\n\nNo worries, I can pick it up.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Tue, 14 Sep 2021 14:35:17 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Small documentation improvement for ALTER SUBSCRIPTION"
},
{
"msg_contents": "> On 14 Sep 2021, at 14:35, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n>> On 14 Sep 2021, at 11:57, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> \n>> LGTM as well. Peter E., Daniel, does any one of you is intending to\n>> push this? If not, I can take care of this.\n> \n> No worries, I can pick it up.\n\nAnd done, thanks!\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Wed, 15 Sep 2021 09:58:17 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Small documentation improvement for ALTER SUBSCRIPTION"
},
{
"msg_contents": "On Wed, Sep 15, 2021 at 4:58 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 14 Sep 2021, at 14:35, Daniel Gustafsson <daniel@yesql.se> wrote:\n> >\n> >> On 14 Sep 2021, at 11:57, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >> LGTM as well. Peter E., Daniel, does any one of you is intending to\n> >> push this? If not, I can take care of this.\n> >\n> > No worries, I can pick it up.\n\nSorry for the late reply. I was on vacation.\n\n> And done, thanks!\n\nThanks!\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 21 Sep 2021 09:04:20 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Small documentation improvement for ALTER SUBSCRIPTION"
}
] |
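For reference, the wording the thread above converged on renders as the following DocBook fragment, reconstructed from the diff quoted in the thread; the enclosing `<para>` element is assumed context, not part of the quoted patch.

```xml
<!-- Final wording from the patch quoted in the thread; the surrounding
     <para> is assumed context for illustration. -->
<para>
  Additionally, the options described under <literal>REFRESH
  PUBLICATION</literal> may be specified, to control the implicit refresh
  operation.
</para>
```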
[
{
"msg_contents": "Hi.\n\nThis is a proposal for a new feature in the statistics collector.\nI think we need to add statistics about refreshing matviews to the \npg_stat_all_tables view.\n\nWhen \"REFRESH MATERIALIZED VIEW\" is executed, the number of times \nit was executed\nand the date it was last run are not recorded anywhere.\n\n\"pg_stat_statements\" can be used to get the number of executions and the \ndate and time of execution,\nbut this information is statement-based, not view-based.\nAlso, that method incurs the overhead of enabling \"pg_stat_statements\".\n\nThis patch adds statistics (count, last time) about \"REFRESH \nMATERIALIZED VIEW\"\nto pg_stat_all_tables (pg_stat_user_tables, [pg_stat_sys_tables]).\n\nWhat do you think?\n\nRegards,\nSeino Yuki",
"msg_date": "Fri, 09 Jul 2021 01:39:54 +0900",
"msg_from": "Seino Yuki <seinoyu@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Add statistics refresh materialized view"
},
{
"msg_contents": "\n\nOn 2021/07/09 1:39, Seino Yuki wrote:\n> Hi.\n> \n> This is a proposal for a new feature in statistics collector.\n> I think we need to add statistics about refresh matview to pg_stat_all_tables view.\n\nWhy do you want to treat only REFRESH MATERIALIZED VIEW command special?\nWhat about other utility commands like TRUNCATE, CLUSTER, etc?\n\nIt's not good design to add new columns per utility command into\npg_stat_all_tables. Otherwise pg_stat_all_tables will have to have lots of\ncolumns to expose the stats of many utility commands at last. Which is\nugly and very user-unfriendly.\n\nMost entries in pg_stat_all_tables are basically for tables. So the columns\nabout REFRESH MATERIALIZED VIEW are useless for those most entries.\nThis is another reason why I think the design is not good.\n\n\n> \n> When the \"REFRESH MATERIALIZED VIEW\" was executed, the number of times it was executed\n> and date it took were not recorded anywhere.\n\npg_stat_statements and log_statement would help?\n\n\n> \n> \"pg_stat_statements\" can be used to get the number of executions and the date and time of execution,\n> but this information is statement-based, not view-based.\n\npg_stat_statements reports different records for REFRESH MATERIALIZED VIEW\ncommands on different views. So ISTM that we can aggregate the information\nper view, from pg_stat_statements. No?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 1 Sep 2021 23:15:52 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Add statistics refresh materialized view"
},
{
"msg_contents": "On 2021-09-01 23:15, Fujii Masao wrote:\n> Why do you want to treat only REFRESH MATERIALIZED VIEW command \n> special?\n> What about other utility commands like TRUNCATE, CLUSTER, etc?\n\nFirst of all, knowing the update date and time of the MATVIEW is \nessential for actual operation.\nWithout that information, users will not be able to trust the MATVIEW.\n\nIn terms of the reliability of the information in the table,\nI think the priority of the REFRESHED MATVIEW is higher than that of \nTRUNCATE and CLUSTER.\n\n\n> It's not good design to add new columns per utility command into\n> pg_stat_all_tables. Otherwise pg_stat_all_tables will have to have lots \n> of\n> columns to expose the stats of many utility commands at last. Which is\n> ugly and very user-unfriendly.\n\n> Most entries in pg_stat_all_tables are basically for tables. So the \n> columns\n> about REFRESH MATERIALIZED VIEW are useless for those most entries.\n> This is another reason why I think the design is not good.\n\nI agree with this opinion.\nInitially, I thought about storing this information in pg_matviews,\nbut decided against it because of the overhead of adding it to the \nsystem catalog.\n\n\n> pg_stat_statements reports different records for REFRESH MATERIALIZED \n> VIEW\n> commands on different views. So ISTM that we can aggregate the \n> information\n> per view, from pg_stat_statements. No?\n\nI made this suggestion based on the premise that the last update date \nand time of the Mateview should always be retained.\nI think the same concept applies to Oracle Database.\nhttps://docs.oracle.com/cd/F19136_01/refrn/ALL_MVIEWS.html#GUID-8B9432B5-6B66-411A-936E-590D9D7671E9\nI thought it would be useless to enable pg_stat_statements and \nlog_statement to see this information.\n\n\nHowever, as you said, for most use cases, pg_stat_statements and \nlog_statement may be sufficient.\nI would like to withdraw this proposal.\n\nRegards,\n\n\n",
"msg_date": "Tue, 07 Sep 2021 18:11:14 +0900",
"msg_from": "Seino Yuki <seinoyu@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Add statistics refresh materialized view"
},
{
"msg_contents": "On Tue, Sep 07, 2021 at 06:11:14PM +0900, Seino Yuki wrote:\n> I would like to withdraw this proposal.\n\nThis was registered in the CF, so marked as RwF.\n--\nMichael",
"msg_date": "Fri, 1 Oct 2021 15:49:23 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add statistics refresh materialized view"
},
{
"msg_contents": "Hi,\n\n\n> However, as you said, for most use cases, pg_stat_statements and \n> log_statement may be sufficient.\n> I would like to withdraw this proposal.\n>\nWell, they either require extensions or parameters to be set properly. \nOne advantage I see to store those kind of information is that it can be \nqueried by application developers (users are reporting old data for \nexample).\n\nWe currently have to rely on other ways to figure out if materialized \nviews were properly refreshed.\n\n\n\n\n\n\n\n",
"msg_date": "Thu, 4 Jul 2024 11:43:30 -0400",
"msg_from": "Said Assemlal <sassemlal@neurorx.com>",
"msg_from_op": false,
"msg_subject": "Re: Add statistics refresh materialized view"
}
] |
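As a side note on the thread above: the per-view aggregation from pg_stat_statements that Fujii Masao suggests could be sketched roughly as below. This is a hypothetical illustration, not a query posted in the thread; it assumes the pg_stat_statements extension is installed, uses PostgreSQL 13+ column names (`total_exec_time`), and parses the normalized query text with a rough heuristic.

```sql
-- Hypothetical sketch: count REFRESH MATERIALIZED VIEW executions per view
-- by extracting the view name from the normalized query text.
SELECT substring(query FROM 'REFRESH MATERIALIZED VIEW\s+(\S+)') AS matview,
       sum(calls)           AS refresh_count,
       sum(total_exec_time) AS total_refresh_ms
  FROM pg_stat_statements
 WHERE query ILIKE 'REFRESH MATERIALIZED VIEW%'
 GROUP BY 1;
```

Note this gives counts and cumulative time but, as discussed above, not the last refresh timestamp; pg_stat_statements does not record per-call timestamps.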
[
{
"msg_contents": "Are we going to be forever explaining that enable_resultcache doesn't\ncache query results? Do we need a different name? \nenable_innerjoin_cache?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Thu, 8 Jul 2021 12:51:45 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "enable_resultcache confusion"
},
{
"msg_contents": "On Thu, Jul 8, 2021 at 12:51 PM Bruce Momjian <bruce@momjian.us> wrote:\n> Are we going to be forever explaining that enable_resultcache doesn't\n> cache query results?\n\nYes, I can see that causing ongoing confusion. Naming things is really hard...\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 8 Jul 2021 13:29:04 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: enable_resultcache confusion"
},
{
"msg_contents": "On Thu, Jul 8, 2021 at 10:29 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Thu, Jul 8, 2021 at 12:51 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > Are we going to be forever explaining that enable_resultcache doesn't\n> > cache query results?\n>\n> Yes, I can see that causing ongoing confusion. Naming things is really\n> hard...\n>\n>\nI agree that the chosen name is problematic. To borrow existing technical\nnomenclature, what we seem to be doing here is adding \"Node Memoization\"\n[1].\n\n\"enable_nodememoization\" would work for me - by avoiding Result and using\nNode the focus should remain without the bowels of the planner's plan and\nnot move to the output of the query as a whole. \"Node Cache\" would\nprobably work just as well if a wholesale change to Memoization doesn't\nseem appealing, but the semantics of that term seem closer to what is\nhappening here.\n\nThe description in the commit message suggests we can use this for a wide\nvariety of nodes so adding any node specific typing to the name seems\nunwise.\n\nDavid J.\n\n1. https://en.wikipedia.org/wiki/Memoization\n\nOn Thu, Jul 8, 2021 at 10:29 AM Robert Haas <robertmhaas@gmail.com> wrote:On Thu, Jul 8, 2021 at 12:51 PM Bruce Momjian <bruce@momjian.us> wrote:\n> Are we going to be forever explaining that enable_resultcache doesn't\n> cache query results?\n\nYes, I can see that causing ongoing confusion. Naming things is really hard...I agree that the chosen name is problematic. To borrow existing technical nomenclature, what we seem to be doing here is adding \"Node Memoization\" [1].\"enable_nodememoization\" would work for me - by avoiding Result and using Node the focus should remain without the bowels of the planner's plan and not move to the output of the query as a whole. 
\"Node Cache\" would probably work just as well if a wholesale change to Memoization doesn't seem appealing, but the semantics of that term seem closer to what is happening here.The description in the commit message suggests we can use this for a wide variety of nodes so adding any node specific typing to the name seems unwise.David J.1. https://en.wikipedia.org/wiki/Memoization",
"msg_date": "Thu, 8 Jul 2021 10:52:38 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: enable_resultcache confusion"
},
{
"msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> \"enable_nodememoization\" would work for me\n\nThat seems pretty unreadable. Maybe just \"enable_memoization\"?\n\nReally if we're going to do something here, we can't merely mess\nwith the GUC name. David had expressed a willingness to rename\neverything about ResultCache some time ago, but nobody stepped up\nwith a better name.\n\nMaybe name the plan node type Memoize, and the GUC \"enable_memoize\"?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 08 Jul 2021 14:00:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: enable_resultcache confusion"
},
{
"msg_contents": "> On 8 Jul 2021, at 19:52, David G. Johnston <david.g.johnston@gmail.com> wrote:\n\n> \"enable_nodememoization\" would work for me\n\nInclude \"node\" concatenated with other words risks users reading it as \"enable\nno demomoization\", with confusion following.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Thu, 8 Jul 2021 20:01:59 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: enable_resultcache confusion"
},
{
"msg_contents": "On Thu, Jul 08, 2021 at 12:51:45PM -0400, Bruce Momjian wrote:\n> Are we going to be forever explaining that enable_resultcache doesn't\n> cache query results? Do we need a different name? \n> enable_innerjoin_cache?\n\nSee also https://www.postgresql.org/message-id/CAApHDvos7z90hyiuX3kcFe0Q_4WYWfwFABOygo4aeBbemyF9sQ@mail.gmail.com\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 8 Jul 2021 13:03:23 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: enable_resultcache confusion"
},
{
"msg_contents": "On Thu, Jul 8, 2021 at 11:00 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Maybe name the plan node type Memoize, and the GUC \"enable_memoize\"?\n>\n>\n+1\n\nDavid J.\n\nOn Thu, Jul 8, 2021 at 11:00 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:Maybe name the plan node type Memoize, and the GUC \"enable_memoize\"?+1David J.",
"msg_date": "Thu, 8 Jul 2021 11:03:27 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: enable_resultcache confusion"
},
{
"msg_contents": "On Fri, Jul 9, 2021 at 6:03 AM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n> On Thu, Jul 8, 2021 at 11:00 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Maybe name the plan node type Memoize, and the GUC \"enable_memoize\"?\n>\n> +1\n\n+1\n\n\n",
"msg_date": "Fri, 9 Jul 2021 16:26:01 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: enable_resultcache confusion"
},
{
"msg_contents": "On Fri, 9 Jul 2021 at 06:00, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Maybe name the plan node type Memoize, and the GUC \"enable_memoize\"?\n\nI really like that name.\n\nI'll wait to see if anyone else wants to voice their opinion before I\ndo any renaming work.\n\nDavid\n\n\n",
"msg_date": "Sat, 10 Jul 2021 03:35:29 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: enable_resultcache confusion"
},
{
"msg_contents": "On Fri, Jul 9, 2021 at 11:35 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> I really like that name.\n>\n> I'll wait to see if anyone else wants to voice their opinion before I\n> do any renaming work.\n\nI like it, too.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 9 Jul 2021 15:29:51 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: enable_resultcache confusion"
},
{
"msg_contents": "On Sat, 10 Jul 2021 at 07:30, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Fri, Jul 9, 2021 at 11:35 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> > I really like that name.\n> >\n> > I'll wait to see if anyone else wants to voice their opinion before I\n> > do any renaming work.\n>\n> I like it, too.\n\nGreat. I've attached my first draft patch to do the renaming.\n\nIt would be good to move fairly quickly on this before REL_14_STABLE\nand master diverge too much. At the moment the patch applies to both\nversions without any issues.\n\nDoes anyone not like the proposed name?\n\nDavid",
"msg_date": "Mon, 12 Jul 2021 02:56:58 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: enable_resultcache confusion"
},
{
"msg_contents": "On Mon, Jul 12, 2021 at 02:56:58AM +1200, David Rowley wrote:\n> On Sat, 10 Jul 2021 at 07:30, Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Fri, Jul 9, 2021 at 11:35 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> > > I really like that name.\n> > >\n> > > I'll wait to see if anyone else wants to voice their opinion before I\n> > > do any renaming work.\n> >\n> > I like it, too.\n> \n> Great. I've attached my first draft patch to do the renaming.\n\nIn REL_14, maybe you'd also want to update the release notes reference\n\n| This is useful if only a small percentage of rows is checked on\n| the inner side and is controlled by <xref\n| linkend=\"guc-enable-resultcache\"/>.\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 11 Jul 2021 10:22:54 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: enable_resultcache confusion"
},
{
"msg_contents": "On Mon, 12 Jul 2021 at 03:22, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> | This is useful if only a small percentage of rows is checked on\n> | the inner side and is controlled by <xref\n> | linkend=\"guc-enable-resultcache\"/>.\n\nYou might be right there, but I'm not too sure if I changed that that\nit might cause a mention of the rename to be missed in the changes\nsince beta2 notes.\n\nAdditionally, I was unsure about touching typedefs.list. In the patch\nI changed it, but wasn't too sure if that was the correct thing to do.\nIn normal circumstances, i.e writing new code, I'd not touch it.\n\nDavid\n\n\n",
"msg_date": "Mon, 12 Jul 2021 11:47:27 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: enable_resultcache confusion"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Mon, 12 Jul 2021 at 03:22, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>> | This is useful if only a small percentage of rows is checked on\n>> | the inner side and is controlled by <xref\n>> | linkend=\"guc-enable-resultcache\"/>.\n\n> You might be right there, but I'm not too sure if I changed that that\n> it might cause a mention of the rename to be missed in the changes\n> since beta2 notes.\n\nYou need to change it, because IIUC that will be a dangling\ncross-reference, causing the v14 docs to fail to build at all.\n\n> Additionally, I was unsure about touching typedefs.list. In the patch\n> I changed it, but wasn't too sure if that was the correct thing to do.\n> In normal circumstances, i.e writing new code, I'd not touch it.\n\nI'd suggest replacing it in typedefs.list, since there is unlikely to\nbe any further update to v14's copy otherwise, and even in HEAD I'm not\nsure it'd get updated before we approach the v15 branch.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 12 Jul 2021 09:38:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: enable_resultcache confusion"
},
{
"msg_contents": "On Tue, 13 Jul 2021 at 01:38, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > On Mon, 12 Jul 2021 at 03:22, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >> | This is useful if only a small percentage of rows is checked on\n> >> | the inner side and is controlled by <xref\n> >> | linkend=\"guc-enable-resultcache\"/>.\n>\n> > You might be right there, but I'm not too sure if I changed that that\n> > it might cause a mention of the rename to be missed in the changes\n> > since beta2 notes.\n>\n> You need to change it, because IIUC that will be a dangling\n> cross-reference, causing the v14 docs to fail to build at all.\n\nGood point. I'll adjust that for PG14.\n\nI plan on pushing the patch to master and PG14 in 24 hours time. If\nanyone is still on the fence or wishes to object to the name, please\nlet it be known before then.\n\nDavid\n\n\n",
"msg_date": "Tue, 13 Jul 2021 12:01:38 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: enable_resultcache confusion"
},
{
"msg_contents": "On Tue, 13 Jul 2021 at 12:01, David Rowley <dgrowleyml@gmail.com> wrote:\n> I plan on pushing the patch to master and PG14 in 24 hours time. If\n> anyone is still on the fence or wishes to object to the name, please\n> let it be known before then.\n\nRenaming complete. Result Cache is gone. Welcome Memoize.\n\nDavid\n\n\n",
"msg_date": "Wed, 14 Jul 2021 12:46:44 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: enable_resultcache confusion"
},
{
"msg_contents": "On Mon, Jul 12, 2021 at 11:47:27AM +1200, David Rowley wrote:\n> On Mon, 12 Jul 2021 at 03:22, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > | This is useful if only a small percentage of rows is checked on\n> > | the inner side and is controlled by <xref\n> > | linkend=\"guc-enable-resultcache\"/>.\n> \n> You might be right there, but I'm not too sure if I changed that that\n> it might cause a mention of the rename to be missed in the changes\n> since beta2 notes.\n\nThe commit will appear in the PG 14 git log, and someone will then see\nthe rename is part of the changes since the previous beta. Also, when\nthe release notes \"as of\" date is updated, all commits since the\nprevious \"as of\" date will be reviewed. Yes, someone might try to\nupdate the release notes for this change and see it was already done,\nbut that is easily handled.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Thu, 15 Jul 2021 11:43:59 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: enable_resultcache confusion"
}
] |
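For readers following the rename above: on PostgreSQL 14 and later the node appears as "Memoize" in EXPLAIN output and is controlled by the enable_memoize GUC. A hypothetical session (the tables and plan shape here are invented for illustration, not taken from the thread) might look like:

```sql
-- Hypothetical illustration; t1/t2 are made-up tables with t1.id indexed.
SET enable_memoize = on;  -- the renamed GUC (formerly enable_resultcache); on by default
EXPLAIN (COSTS OFF)
SELECT * FROM t2 JOIN t1 ON t1.id = t2.t1_id;
-- A plan using the node could look like:
--   Nested Loop
--     ->  Seq Scan on t2
--     ->  Memoize
--           Cache Key: t2.t1_id
--           ->  Index Scan using t1_pkey on t1
--                 Index Cond: (id = t2.t1_id)
```

The Memoize node caches inner-side rows keyed by the join parameter, so repeated lookups of the same key avoid rescanning the inner relation.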
[
{
"msg_contents": "Hi hackers,\n\nDuring logical decoding, we have recently observed (and reported in [1]) \nerrors like:\n\nERROR: could not open relation with OID 0\n\nAfter investigating a recent issue on a PostgreSQL database that was \nencountering this error, we found that the logical decoding of relation \nrewrite with toast could produce this error without resetting the \ntoast_hash.\n\nWe were able to create this repro of the error:\n\npostgres=# \\! cat bdt_repro.sql\nselect pg_create_logical_replication_slot('bdt_slot','test_decoding');\nCREATE TABLE tbl1 (a INT, b TEXT);\nCREATE TABLE tbl2 (a INT);\nALTER TABLE tbl1 ALTER COLUMN b SET STORAGE EXTERNAL;\nBEGIN;\nINSERT INTO tbl1 VALUES(1, repeat('a', 4000)) ;\nALTER TABLE tbl1 ADD COLUMN id serial primary key;\nINSERT INTO tbl2 VALUES(1);\ncommit;\nselect * from pg_logical_slot_get_changes('bdt_slot', null, null);\n\nThat ends up on 12.5 with:\n\nERROR: could not open relation with OID 0\n\nAnd on current master with:\n\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. 
Attempting reset: Failed.\n\nThe issue was introduced by 325f2ec555 (and more precisely by its \nchange in reorderbuffer.c), so it affects v11 and later versions:\n\ngit branch -r --contains 325f2ec555\n origin/HEAD -> origin/master\n origin/REL_11_STABLE\n origin/REL_12_STABLE\n origin/REL_13_STABLE\n origin/REL_14_STABLE\n origin/master\n\nThe fact that current master produces a different behavior than 12.5 \n(for example) is due to 4daa140a2f, which generates a failed assertion in \nsuch a case (after going down the code path that should print out the ERROR):\n\n#2 0x0000000000b29fab in ExceptionalCondition (conditionName=0xce6850 \n\"(rb->size >= sz) && (txn->size >= sz)\", errorType=0xce5f84 \n\"FailedAssertion\", fileName=0xce5fd0 \"reorderbuffer.c\", lineNumber=3141) \nat assert.c:69\n#3 0x00000000008ff1fb in ReorderBufferChangeMemoryUpdate (rb=0x11a7a40, \nchange=0x11c94b8, addition=false) at reorderbuffer.c:3141\n#4 0x00000000008fab27 in ReorderBufferReturnChange (rb=0x11a7a40, \nchange=0x11c94b8, upd_mem=true) at reorderbuffer.c:477\n#5 0x0000000000902ec1 in ReorderBufferToastReset (rb=0x11a7a40, \ntxn=0x11b1998) at reorderbuffer.c:4799\n#6 0x00000000008faaa2 in ReorderBufferReturnTXN (rb=0x11a7a40, \ntxn=0x11b1998) at reorderbuffer.c:448\n#7 0x00000000008fc95b in ReorderBufferCleanupTXN (rb=0x11a7a40, \ntxn=0x11b1998) at reorderbuffer.c:1540\n\nI am adding Amit to this thread to make him aware that this is related \nto the recent issue Jeremy and I were talking about in [1] - which we \nnow believe is not linked to the logical decoding and speculative insert \nbug fixed in 4daa140a2f but rather to this new toast rewrite bug.\n\nPlease find enclosed a patch proposal to:\n\n* Avoid the failed assertion on current master and generate the error \nmessage instead (should the code reach that stage).\n* Reset the toast_hash in case of relation rewrite with toast (so that \nthe logical decoding in the above repro works).\n\nI am adding this patch to the 
next commitfest.\n\nThanks\nBertrand\n\n[1]: \nhttps://www.postgresql.org/message-id/CAA4eK1KcUPwwhDVhJmdQExc09AzEBZMGbOa-u3DYaJs1zzfEnA%40mail.gmail.com",
"msg_date": "Fri, 9 Jul 2021 08:51:34 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "[bug] Logical Decoding of relation rewrite with toast does not reset\n toast_hash"
},
{
"msg_contents": "Hi Drouvot,\r\nI can reproduce the issue you mentioned on REL_12_STABLE as well as Master branch, but the patch doesn't apply to REL_12_STABLE. After applied it to Master branch, it returns some wired result when run the query in the first time. \r\nAs you can see in the log below, after the first time execute the query `select * from pg_logical_slot_get_changes('bdt_slot', null, null);` it returns some extra data.\r\ndavid:postgres$ psql -d postgres\r\npsql (15devel)\r\nType \"help\" for help.\r\n\r\npostgres=# \\q\r\ndavid:postgres$ psql -d postgres\r\npsql (15devel)\r\nType \"help\" for help.\r\n\r\npostgres=# select pg_create_logical_replication_slot('bdt_slot','test_decoding');\r\n pg_create_logical_replication_slot \r\n------------------------------------\r\n (bdt_slot,0/1484FA8)\r\n(1 row)\r\n\r\npostgres=# CREATE TABLE tbl1 (a INT, b TEXT);\r\nCREATE TABLE\r\npostgres=# CREATE TABLE tbl2 (a INT);\r\nCREATE TABLE\r\npostgres=# ALTER TABLE tbl1 ALTER COLUMN b SET STORAGE EXTERNAL;\r\nALTER TABLE\r\npostgres=# \r\npostgres=# BEGIN;\r\nBEGIN\r\npostgres=*# INSERT INTO tbl1 VALUES(1, repeat('a', 4000)) ;\r\nINSERT 0 1\r\npostgres=*# ALTER TABLE tbl1 ADD COLUMN id serial primary key;\r\nALTER TABLE\r\npostgres=*# INSERT INTO tbl2 VALUES(1);\r\nINSERT 0 1\r\npostgres=*# commit;\r\nCOMMIT\r\npostgres=# \r\npostgres=# select * from pg_logical_slot_get_changes('bdt_slot', null, null);\r\n lsn | xid | \r\n \r\n \r\n \r\n \r\n \r\n \r\n \r\n \r\n \r\n \r\n \r\n data \r\n \r\n \r\n \r\n \r\n \r\n \r\n \r\n \r\n \r\n \r\n \r\n \r\n 
\r\n-----------+-----+--------------------------------------------------------------------------------------------------------------------------------------------\r\n--------------------------------------------------------------------------------------------------------------------------------------------------------------\r\n--------------------------------------------------------------------------------------------------------------------------------------------------------------\r\n--------------------------------------------------------------------------------------------------------------------------------------------------------------\r\n--------------------------------------------------------------------------------------------------------------------------------------------------------------\r\n--------------------------------------------------------------------------------------------------------------------------------------------------------------\r\n--------------------------------------------------------------------------------------------------------------------------------------------------------------\r\n--------------------------------------------------------------------------------------------------------------------------------------------------------------\r\n--------------------------------------------------------------------------------------------------------------------------------------------------------------\r\npostgres=# select * from pg_logical_slot_get_changes('bdt_slot', null, null);\r\n lsn | xid | data \r\n-----+-----+------\r\n(0 rows)\r\n\r\npostgres=# select * from pg_logical_slot_get_changes('bdt_slot', null, null);\r\n lsn | xid | data \r\n-----+-----+------\r\n(0 rows)\r\n\r\npostgres=# select * from pg_logical_slot_get_changes('bdt_slot', null, null);\r\n lsn | xid | data \r\n-----+-----+------\r\n(0 rows)\r\n\r\npostgres=# \r\n\r\nThank you,\r\nDavid",
"msg_date": "Fri, 06 Aug 2021 22:18:43 +0000",
"msg_from": "David Zhang <david.zhang@highgo.ca>",
"msg_from_op": false,
"msg_subject": "Re: [bug] Logical Decoding of relation rewrite with toast does not\n reset\n toast_hash"
},
{
"msg_contents": "Hi David,\n\nOn 8/7/21 12:18 AM, David Zhang wrote:\n> Hi Drouvot,\n> I can reproduce the issue you mentioned on REL_12_STABLE as well as Master branch,\n\nThanks for looking at it!\n\n> but the patch doesn't apply to REL_12_STABLE.\n\nIndeed this patch version provided is done for the current Master branch.\n\n> After applied it to Master branch, it returns some wired result when run the query in the first time.\n> As you can see in the log below, after the first time execute the query `select * from pg_logical_slot_get_changes('bdt_slot', null, null);` it returns some extra data.\n> david:postgres$ psql -d postgres\n> psql (15devel)\n> Type \"help\" for help.\n>\n> postgres=# \\q\n> david:postgres$ psql -d postgres\n> psql (15devel)\n> Type \"help\" for help.\n>\n> postgres=# select pg_create_logical_replication_slot('bdt_slot','test_decoding');\n> pg_create_logical_replication_slot\n> ------------------------------------\n> (bdt_slot,0/1484FA8)\n> (1 row)\n>\n> postgres=# CREATE TABLE tbl1 (a INT, b TEXT);\n> CREATE TABLE\n> postgres=# CREATE TABLE tbl2 (a INT);\n> CREATE TABLE\n> postgres=# ALTER TABLE tbl1 ALTER COLUMN b SET STORAGE EXTERNAL;\n> ALTER TABLE\n> postgres=#\n> postgres=# BEGIN;\n> BEGIN\n> postgres=*# INSERT INTO tbl1 VALUES(1, repeat('a', 4000)) ;\n> INSERT 0 1\n> postgres=*# ALTER TABLE tbl1 ADD COLUMN id serial primary key;\n> ALTER TABLE\n> postgres=*# INSERT INTO tbl2 VALUES(1);\n> INSERT 0 1\n> postgres=*# commit;\n> COMMIT\n> postgres=#\n> postgres=# select * from pg_logical_slot_get_changes('bdt_slot', null, null);\n> lsn | xid |\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n> data\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n> -----------+-----+--------------------------------------------------------------------------------------------------------------------------------------------\n> 
--------------------------------------------------------------------------------------------------------------------------------------------------------------\n> --------------------------------------------------------------------------------------------------------------------------------------------------------------\n> --------------------------------------------------------------------------------------------------------------------------------------------------------------\n> --------------------------------------------------------------------------------------------------------------------------------------------------------------\n> --------------------------------------------------------------------------------------------------------------------------------------------------------------\n> --------------------------------------------------------------------------------------------------------------------------------------------------------------\n> --------------------------------------------------------------------------------------------------------------------------------------------------------------\n> --------------------------------------------------------------------------------------------------------------------------------------------------------------\n> postgres=# select * from pg_logical_slot_get_changes('bdt_slot', null, null);\n> lsn | xid | data\n> -----+-----+------\n> (0 rows)\n>\n> postgres=# select * from pg_logical_slot_get_changes('bdt_slot', null, null);\n> lsn | xid | data\n> -----+-----+------\n> (0 rows)\n>\n> postgres=# select * from pg_logical_slot_get_changes('bdt_slot', null, null);\n> lsn | xid | data\n> -----+-----+------\n> (0 rows)\n>\n> postgres=#\n\nI don't see extra data in your output and it looks like your copy/paste \nis missing some content, no?\n\nOn my side, that looks good and here is what i get with the patch applied:\n\npsql (15devel)\nType \"help\" for help.\n\npostgres=# \\i repro.sql\n 
pg_create_logical_replication_slot\n------------------------------------\n (bdt_slot,0/172DAF0)\n(1 row)\n\nCREATE TABLE\nCREATE TABLE\nALTER TABLE\nBEGIN\nINSERT 0 1\nALTER TABLE\nINSERT 0 1\nCOMMIT\n lsn | xid |\n\n\n\n\n\n\n data\n\n\n\n\n\n\n\n-----------+-----+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n------------
-----------+-----+---[long run of '-' column-separator characters trimmed]---
 0/172DB40 | 708 | BEGIN 708\n 0/1753298 | 708 | COMMIT 708\n 0/17532C8 | 709 | BEGIN 709\n 0/1754828 | 709 | COMMIT 709\n 0/1754828 | 710 | BEGIN 710\n 0/1754B10 | 710 | COMMIT 710\n 0/1754B10 | 711 | BEGIN 711
 0/1755CA0 | 711 | table public.tbl1: INSERT: a[integer]:1 b[text]:'aaa[... run of 4000 'a' characters trimmed ...]'
 0/176F970 | 711 | table public.tbl2: INSERT: a[integer]:1\n 0/1770688 | 711 | COMMIT 711\n(10 rows)\n\nThanks\n\nBertrand
\n\nHi David,\n\nOn 8/7/21 12:18 AM, David Zhang wrote:\n\nHi Drouvot,\nI can reproduce the issue you mentioned on REL_12_STABLE as well as the Master branch,\n\nThanks for looking at it!\n\nbut the patch doesn't apply to REL_12_STABLE.
\n\nIndeed, this patch version was made for the current Master branch.\n\nAfter applying it to the Master branch, it returns some weird results the first time the query is run.\nAs you can see in the log below, the first execution of the query `select * from pg_logical_slot_get_changes('bdt_slot', null, null);` returns some extra data.
david:postgres$ psql -d postgres\npsql (15devel)\nType \"help\" for help.\n\npostgres=# \\q\ndavid:postgres$ psql -d postgres\npsql (15devel)\nType \"help\" for help.\n\npostgres=# select pg_create_logical_replication_slot('bdt_slot','test_decoding');\n pg_create_logical_replication_slot\n------------------------------------\n (bdt_slot,0/1484FA8)\n(1 row)\n\npostgres=# CREATE TABLE tbl1 (a INT, b TEXT);\nCREATE TABLE\npostgres=# CREATE TABLE tbl2 (a INT);\nCREATE TABLE\npostgres=# ALTER TABLE tbl1 ALTER COLUMN b SET STORAGE EXTERNAL;\nALTER TABLE\npostgres=#\npostgres=# BEGIN;\nBEGIN\npostgres=*# INSERT INTO tbl1 VALUES(1, repeat('a', 4000)) ;\nINSERT 0 1\npostgres=*# ALTER TABLE tbl1 ADD COLUMN id serial primary key;\nALTER TABLE\npostgres=*# INSERT INTO tbl2 VALUES(1);\nINSERT 0 1\npostgres=*# commit;\nCOMMIT\npostgres=#
postgres=# select * from pg_logical_slot_get_changes('bdt_slot', null, null);\n lsn | xid | data\n-----------+-----+---[long run of '-' column-separator characters trimmed; no row data appears in this paste]---
postgres=# select * from pg_logical_slot_get_changes('bdt_slot', null, null);\n lsn | xid | data\n-----+-----+------\n(0 rows)\n\npostgres=# select * from pg_logical_slot_get_changes('bdt_slot', null, null);\n lsn | xid | data\n-----+-----+------\n(0 rows)\n\npostgres=# select * from pg_logical_slot_get_changes('bdt_slot', null, null);\n lsn | xid | data\n-----+-----+------\n(0 rows)\n\npostgres=#
\nI don't see extra data in your output, and it looks like your copy/paste is missing some content, no?\n\nOn my side, that looks good and here is what I get with the patch applied:
psql (15devel)\nType \"help\" for help.\n\npostgres=# \\i repro.sql\n pg_create_logical_replication_slot\n------------------------------------\n (bdt_slot,0/172DAF0)\n(1 row)\n\nCREATE TABLE\nCREATE TABLE\nALTER TABLE\nBEGIN\nINSERT 0 1\nALTER TABLE\nINSERT 0 1\nCOMMIT
 lsn | xid | data\n-----------+-----+---[long run of '-' column-separator characters trimmed]---\n 0/172DB40 | 708 | BEGIN 708\n 0/1753298 | 708 | COMMIT 708\n 0/17532C8 | 709 | BEGIN 709\n 0/1754828 | 709 | COMMIT 709\n 0/1754828 | 710 | BEGIN 710\n 0/1754B10 | 710 | COMMIT 710\n 0/1754B10 | 711 | BEGIN 711
 0/1755CA0 | 711 | table public.tbl1: INSERT: a[integer]:1 b[text]:'aaa[... run of 4000 'a' characters trimmed ...]'
 0/176F970 | 711 | table public.tbl2: INSERT: a[integer]:1\n 0/1770688 | 711 | COMMIT 711\n(10 rows)\n\nThanks\nBertrand",
"msg_date": "Sat, 7 Aug 2021 09:20:09 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: [UNVERIFIED SENDER] Re: [bug] Logical Decoding of relation\n rewrite with\n toast does not reset toast_hash"
},
{
"msg_contents": "On Fri, Jul 9, 2021 at 12:22 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>\n> Please find enclosed a patch proposal to:\n>\n> * Avoid the failed assertion on current master and generate the error message instead (should the code reach that stage).\n> * Reset the toast_hash in case of relation rewrite with toast (so that the logical decoding in the above repro is working).\n>\n\nI think instead of resetting toast_hash for this case why don't we set\n'relrewrite' for toast tables as well during rewrite? If we do that\nthen we will simply skip assembling toast chunks for the toast table.\nIn make_new_heap(), we are calling NewHeapCreateToastTable() to create\ntoast table where we can pass additional information (probably\n'toastid'), if required to set 'relrewrite'. Additionally, let's add a\ntest case if possible for this.\n\nBTW, I see this as an Open Item for PG-14 [1] which seems wrong to me\nas this is a bug from previous versions. I am not sure who added it\nbut do you see any reason for this to consider as an open item for\nPG-14?\n\n[1] - https://wiki.postgresql.org/wiki/PostgreSQL_14_Open_Items\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 9 Aug 2021 14:07:34 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [bug] Logical Decoding of relation rewrite with toast does not\n reset toast_hash"
},
{
"msg_contents": "On Mon, Aug 9, 2021 at 2:07 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jul 9, 2021 at 12:22 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n> >\n> > Please find enclosed a patch proposal to:\n> >\n> > * Avoid the failed assertion on current master and generate the error message instead (should the code reach that stage).\n> > * Reset the toast_hash in case of relation rewrite with toast (so that the logical decoding in the above repro is working).\n> >\n>\n> I think instead of resetting toast_hash for this case why don't we set\n> 'relrewrite' for toast tables as well during rewrite? If we do that\n> then we will simply skip assembling toast chunks for the toast table.\n> In make_new_heap(), we are calling NewHeapCreateToastTable() to create\n> toast table where we can pass additional information (probably\n> 'toastid'), if required to set 'relrewrite'. Additionally, let's add a\n> test case if possible for this.\n\nI agree with Amit, that setting relrewrite for the toast relation as\nwell is better as we can simply avoid processing the toast tuple as\nwell in such cases.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 9 Aug 2021 15:08:54 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [bug] Logical Decoding of relation rewrite with toast does not\n reset toast_hash"
},
{
"msg_contents": "Hi Amit,\n\nOn 8/9/21 10:37 AM, Amit Kapila wrote:\n> On Fri, Jul 9, 2021 at 12:22 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>> Please find enclosed a patch proposal to:\n>>\n>> * Avoid the failed assertion on current master and generate the error message instead (should the code reach that stage).\n>> * Reset the toast_hash in case of relation rewrite with toast (so that the logical decoding in the above repro is working).\n>>\n> I think instead of resetting toast_hash for this case why don't we set\n> 'relrewrite' for toast tables as well during rewrite? If we do that\n> then we will simply skip assembling toast chunks for the toast table.\n\nThanks for looking at it!\n\nI do agree, that would be even better than the current patch approach: \nI'll work on it.\n\n> In make_new_heap(), we are calling NewHeapCreateToastTable() to create\n> toast table where we can pass additional information (probably\n> 'toastid'), if required to set 'relrewrite'. Additionally, let's add a\n> test case if possible for this.\n+ 1 for the test case, it will be added in the next version of the patch.\n>\n> BTW, I see this as an Open Item for PG-14 [1] which seems wrong to me\n> as this is a bug from previous versions. I am not sure who added it\n\nMe neither.\n\n> but do you see any reason for this to consider as an open item for\n> PG-14?\n\nNo, I don't see any reasons as this is a bug from previous versions.\n\nThanks\n\nBertrand\n\n\n\n",
"msg_date": "Mon, 9 Aug 2021 12:07:24 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: [bug] Logical Decoding of relation rewrite with toast does not\n reset\n toast_hash"
},
{
"msg_contents": "On Mon, Aug 9, 2021 at 3:37 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>\n> Hi Amit,\n>\n> On 8/9/21 10:37 AM, Amit Kapila wrote:\n> > On Fri, Jul 9, 2021 at 12:22 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n> >> Please find enclosed a patch proposal to:\n> >>\n> >> * Avoid the failed assertion on current master and generate the error message instead (should the code reach that stage).\n> >> * Reset the toast_hash in case of relation rewrite with toast (so that the logical decoding in the above repro is working).\n> >>\n> > I think instead of resetting toast_hash for this case why don't we set\n> > 'relrewrite' for toast tables as well during rewrite? If we do that\n> > then we will simply skip assembling toast chunks for the toast table.\n>\n> Thanks for looking at it!\n>\n> I do agree, that would be even better than the current patch approach:\n> I'll work on it.\n>\n> > In make_new_heap(), we are calling NewHeapCreateToastTable() to create\n> > toast table where we can pass additional information (probably\n> > 'toastid'), if required to set 'relrewrite'. Additionally, let's add a\n> > test case if possible for this.\n> + 1 for the test case, it will be added in the next version of the patch.\n>\n\nThanks, please see, if you can prepare patches for the back-branches as well.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 9 Aug 2021 16:42:11 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [bug] Logical Decoding of relation rewrite with toast does not\n reset toast_hash"
},
{
"msg_contents": "On Mon, Aug 9, 2021 at 3:37 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>\n> > BTW, I see this as an Open Item for PG-14 [1] which seems wrong to me\n> > as this is a bug from previous versions. I am not sure who added it\n>\n> Me neither.\n>\n> > but do you see any reason for this to consider as an open item for\n> > PG-14?\n>\n> No, I don't see any reasons as this is a bug from previous versions.\n>\n\nThanks for the confirmation. I have moved this to the section: \"Older\nbugs affecting stable branches\".\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 10 Aug 2021 09:07:15 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [bug] Logical Decoding of relation rewrite with toast does not\n reset toast_hash"
},
{
"msg_contents": "Hi Amit,\n\nOn 8/9/21 1:12 PM, Amit Kapila wrote:\n> On Mon, Aug 9, 2021 at 3:37 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>> Hi Amit,\n>>\n>> On 8/9/21 10:37 AM, Amit Kapila wrote:\n>>> On Fri, Jul 9, 2021 at 12:22 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>>>> Please find enclosed a patch proposal to:\n>>>>\n>>>> * Avoid the failed assertion on current master and generate the error message instead (should the code reach that stage).\n>>>> * Reset the toast_hash in case of relation rewrite with toast (so that the logical decoding in the above repro is working).\n>>>>\n>>> I think instead of resetting toast_hash for this case why don't we set\n>>> 'relrewrite' for toast tables as well during rewrite? If we do that\n>>> then we will simply skip assembling toast chunks for the toast table.\n>> Thanks for looking at it!\n>>\n>> I do agree, that would be even better than the current patch approach:\n>> I'll work on it.\n>>\n>>> In make_new_heap(), we are calling NewHeapCreateToastTable() to create\n>>> toast table where we can pass additional information (probably\n>>> 'toastid'), if required to set 'relrewrite'. Additionally, let's add a\n>>> test case if possible for this.\n>> + 1 for the test case, it will be added in the next version of the patch.\n>>\n> Thanks, please see, if you can prepare patches for the back-branches as well.\n\nPlease find attached the new version that:\n\n- sets \"relrewrite\" for the toast.\n\n- contains a new test case.\n\nAs far as preparing the patches for the back-branches: I will do it for \nsure, but I would prefer that we agree on a polished version on current \nmaster first.\n\nThanks\n\nBertrand",
"msg_date": "Tue, 10 Aug 2021 13:59:57 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: [bug] Logical Decoding of relation rewrite with toast does not\n reset\n toast_hash"
},
{
"msg_contents": "Hi Amit,\n\nOn 8/10/21 1:59 PM, Drouvot, Bertrand wrote:\n> Hi Amit,\n>\n> On 8/9/21 1:12 PM, Amit Kapila wrote:\n>> On Mon, Aug 9, 2021 at 3:37 PM Drouvot, Bertrand \n>> <bdrouvot@amazon.com> wrote:\n>>> Hi Amit,\n>>>\n>>> On 8/9/21 10:37 AM, Amit Kapila wrote:\n>>>> On Fri, Jul 9, 2021 at 12:22 PM Drouvot, Bertrand \n>>>> <bdrouvot@amazon.com> wrote:\n>>>>> Please find enclosed a patch proposal to:\n>>>>>\n>>>>> * Avoid the failed assertion on current master and generate the \n>>>>> error message instead (should the code reach that stage).\n>>>>> * Reset the toast_hash in case of relation rewrite with toast (so \n>>>>> that the logical decoding in the above repro is working).\n>>>>>\n>>>> I think instead of resetting toast_hash for this case why don't we set\n>>>> 'relrewrite' for toast tables as well during rewrite? If we do that\n>>>> then we will simply skip assembling toast chunks for the toast table.\n>>> Thanks for looking at it!\n>>>\n>>> I do agree, that would be even better than the current patch approach:\n>>> I'll work on it.\n>>>\n>>>> In make_new_heap(), we are calling NewHeapCreateToastTable() to create\n>>>> toast table where we can pass additional information (probably\n>>>> 'toastid'), if required to set 'relrewrite'. 
Additionally, let's add a\n>>>> test case if possible for this.\n>>> + 1 for the test case, it will be added in the next version of the \n>>> patch.\n>>>\n>> Thanks, please see, if you can prepare patches for the back-branches \n>> as well.\n>\n> Please find attached the new version that:\n>\n> - sets \"relrewrite\" for the toast.\n>\n> - contains a new test case.\n\nThe first version of the patch contained a change in \nReorderBufferToastReplace() (to put the call to \nRelationIsValid(toast_rel) and display the error message when it is not \nvalid before the call to ReorderBufferChangeMemoryUpdate()).\n\nThat way we also avoid the failed assertion described in the first \nmessage of this thread (but would report the error message instead).\n\nForgot to mention that I did not add this change in the new patch \nversion as I’m thinking it would be better to create another patch for \nthat purpose (as not really related to toast rewrite), what do you think?\n\nThanks\nBertrand\n\n\n\n",
"msg_date": "Thu, 12 Aug 2021 08:45:29 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: [bug] Logical Decoding of relation rewrite with toast does not\n reset\n toast_hash"
},
{
"msg_contents": "On Thu, Aug 12, 2021 at 12:15 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>\n> On 8/10/21 1:59 PM, Drouvot, Bertrand wrote:\n> > Hi Amit,\n> >\n>\n> The first version of the patch contained a change in\n> ReorderBufferToastReplace() (to put the call to\n> RelationIsValid(toast_rel) and display the error message when it is not\n> valid before the call to ReorderBufferChangeMemoryUpdate()).\n>\n> That way we also avoid the failed assertion described in the first\n> message of this thread (but would report the error message instead).\n>\n> Forgot to mention that I did not add this change in the new patch\n> version\n>\n\nI think that is the right decision.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 12 Aug 2021 15:08:19 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [bug] Logical Decoding of relation rewrite with toast does not\n reset toast_hash"
},
{
"msg_contents": "On Tue, Aug 10, 2021 at 5:30 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>\n> Hi Amit,\n>\n> On 8/9/21 1:12 PM, Amit Kapila wrote:\n> > On Mon, Aug 9, 2021 at 3:37 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n> >> Hi Amit,\n> >>\n> >> On 8/9/21 10:37 AM, Amit Kapila wrote:\n> >>> On Fri, Jul 9, 2021 at 12:22 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n> >>>> Please find enclosed a patch proposal to:\n> >>>>\n> >>>> * Avoid the failed assertion on current master and generate the error message instead (should the code reach that stage).\n> >>>> * Reset the toast_hash in case of relation rewrite with toast (so that the logical decoding in the above repro is working).\n> >>>>\n> >>> I think instead of resetting toast_hash for this case why don't we set\n> >>> 'relrewrite' for toast tables as well during rewrite? If we do that\n> >>> then we will simply skip assembling toast chunks for the toast table.\n> >> Thanks for looking at it!\n> >>\n> >> I do agree, that would be even better than the current patch approach:\n> >> I'll work on it.\n> >>\n> >>> In make_new_heap(), we are calling NewHeapCreateToastTable() to create\n> >>> toast table where we can pass additional information (probably\n> >>> 'toastid'), if required to set 'relrewrite'. 
Additionally, let's add a\n> >>> test case if possible for this.\n> >> + 1 for the test case, it will be added in the next version of the patch.\n> >>\n> > Thanks, please see, if you can prepare patches for the back-branches as well.\n>\n> Please find attached the new version that:\n>\n> - sets \"relrewrite\" for the toast.\n>\n\n--- a/src/backend/commands/tablecmds.c\n+++ b/src/backend/commands/tablecmds.c\n@@ -3861,6 +3861,10 @@ RenameRelationInternal(Oid myrelid, const char\n*newrelname, bool is_internal, bo\n */\n namestrcpy(&(relform->relname), newrelname);\n\n+ /* reset relrewrite for toast */\n+ if (relform->relkind == RELKIND_TOASTVALUE)\n+ relform->relrewrite = InvalidOid;\n+\n\nI find this change quite ad-hoc. I think this API is quite generic to\nmake such a change. I see two ways for this (a) pass a new bool flag\n(reset_toast_rewrite) in this API and then make this change, (b) in\nthe specific place where we need this, change relrewrite separately\nvia a new API.\n\nI would prefer (b) in the ideal case, but I understand it would be an\nadditional cost, so maybe (a) is also okay. What do you people think?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 12 Aug 2021 16:30:12 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [bug] Logical Decoding of relation rewrite with toast does not\n reset toast_hash"
},
{
"msg_contents": "On Thu, Aug 12, 2021 at 4:30 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Aug 10, 2021 at 5:30 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n> >\n> >\n> > Please find attached the new version that:\n> >\n> > - sets \"relwrewrite\" for the toast.\n> >\n>\n> --- a/src/backend/commands/tablecmds.c\n> +++ b/src/backend/commands/tablecmds.c\n> @@ -3861,6 +3861,10 @@ RenameRelationInternal(Oid myrelid, const char\n> *newrelname, bool is_internal, bo\n> */\n> namestrcpy(&(relform->relname), newrelname);\n>\n> + /* reset relrewrite for toast */\n> + if (relform->relkind == RELKIND_TOASTVALUE)\n> + relform->relrewrite = InvalidOid;\n> +\n>\n> I find this change quite ad-hoc. I think this API is quite generic to\n> make such a change. I see two ways for this (a) pass a new bool flag\n> (reset_toast_rewrite) in this API and then make this change, (b) in\n> the specific place where we need this, change relrewrite separately\n> via a new API.\n>\n> I would prefer (b) in the ideal case, but I understand it would be an\n> additional cost, so maybe (a) is also okay. What do you people think?\n>\n\nOne minor comment:\n+/*\n+ * Test decoding relation rewrite with toast.\n+ * The insert into tbl2 within the same transaction\n+ * is there to check there is no remaining toast_hash\n+ * not being reset.\n+ */\n\nYou can extend each line of comment up to 80 chars. The current one\nlooks a bit odd.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 12 Aug 2021 16:58:23 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [bug] Logical Decoding of relation rewrite with toast does not\n reset toast_hash"
},
{
"msg_contents": "Hi,\n\nOn 8/12/21 1:00 PM, Amit Kapila wrote:\n> On Tue, Aug 10, 2021 at 5:30 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>> Hi Amit,\n>>\n>> On 8/9/21 1:12 PM, Amit Kapila wrote:\n>>> On Mon, Aug 9, 2021 at 3:37 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>>>> Hi Amit,\n>>>>\n>>>> On 8/9/21 10:37 AM, Amit Kapila wrote:\n>>>>> On Fri, Jul 9, 2021 at 12:22 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>>>>>> Please find enclosed a patch proposal to:\n>>>>>>\n>>>>>> * Avoid the failed assertion on current master and generate the error message instead (should the code reach that stage).\n>>>>>> * Reset the toast_hash in case of relation rewrite with toast (so that the logical decoding in the above repro is working).\n>>>>>>\n>>>>> I think instead of resetting toast_hash for this case why don't we set\n>>>>> 'relrewrite' for toast tables as well during rewrite? If we do that\n>>>>> then we will simply skip assembling toast chunks for the toast table.\n>>>> Thanks for looking at it!\n>>>>\n>>>> I do agree, that would be even better than the current patch approach:\n>>>> I'll work on it.\n>>>>\n>>>>> In make_new_heap(), we are calling NewHeapCreateToastTable() to create\n>>>>> toast table where we can pass additional information (probably\n>>>>> 'toastid'), if required to set 'relrewrite'. Additionally, let's add a\n>>>>> test case if possible for this.\n>>>> + 1 for the test case, it will be added in the next version of the patch.\n>>>>\n>>> Thanks, please see, if you can prepare patches for the back-branches as well.\n>> Please find attached the new version that:\n>>\n>> - sets \"relwrewrite\" for the toast.\n>>\n> --- a/src/backend/commands/tablecmds.c\n> +++ b/src/backend/commands/tablecmds.c\n> @@ -3861,6 +3861,10 @@ RenameRelationInternal(Oid myrelid, const char\n> *newrelname, bool is_internal, bo\n> */\n> namestrcpy(&(relform->relname), newrelname);\n>\n> + /* reset relrewrite for toast */\n> + if (relform->relkind == RELKIND_TOASTVALUE)\n> + relform->relrewrite = InvalidOid;\n> +\n>\n> I find this change quite ad-hoc. I think this API is quite generic to\n> make such a change. I see two ways for this (a) pass a new bool flag\n> (reset_toast_rewrite) in this API and then make this change, (b) in\n> the specific place where we need this, change relrewrite separately\n> via a new API.\n>\n> I would prefer (b) in the ideal case, but I understand it would be an\n> additional cost, so maybe (a) is also okay. What do you people think?\n\nI would prefer a) because:\n\n- b) would need to update the exact same tuple one more time (means \ndoing more or less the same work: open relation, search for the tuple, \nupdate the tuple...)\n\n- a) would still give the ability for someone reading the code to \nunderstand where the relrewrite reset is needed (as opposed to the way \nthe patch is currently written)\n\n- finish_heap_swap() with swap_toast_by_content set to false, is the \nonly place where we currently need to reset explicitly relrewrite (so \nthat we would have the new API produced by b) being called only at that \nplace).\n\n- That means that b) would be only for code readability but at the price \nof extra cost.\n\n\nThat said, I think we can go with a) and rethink about b) later on if \nthere is a need of this new API in other places for other reasons.\n\nThanks\n\nBertrand\n\n\n\n",
"msg_date": "Fri, 13 Aug 2021 08:17:20 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: [bug] Logical Decoding of relation rewrite with toast does not\n reset\n toast_hash"
},
{
"msg_contents": "\nOn 8/12/21 1:28 PM, Amit Kapila wrote:\n> On Thu, Aug 12, 2021 at 4:30 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> On Tue, Aug 10, 2021 at 5:30 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>>>\n>>> Please find attached the new version that:\n>>>\n>>> - sets \"relwrewrite\" for the toast.\n>>>\n>> --- a/src/backend/commands/tablecmds.c\n>> +++ b/src/backend/commands/tablecmds.c\n>> @@ -3861,6 +3861,10 @@ RenameRelationInternal(Oid myrelid, const char\n>> *newrelname, bool is_internal, bo\n>> */\n>> namestrcpy(&(relform->relname), newrelname);\n>>\n>> + /* reset relrewrite for toast */\n>> + if (relform->relkind == RELKIND_TOASTVALUE)\n>> + relform->relrewrite = InvalidOid;\n>> +\n>>\n>> I find this change quite ad-hoc. I think this API is quite generic to\n>> make such a change. I see two ways for this (a) pass a new bool flag\n>> (reset_toast_rewrite) in this API and then make this change, (b) in\n>> the specific place where we need this, change relrewrite separately\n>> via a new API.\n>>\n>> I would prefer (b) in the ideal case, but I understand it would be an\n>> additional cost, so maybe (a) is also okay. What do you people think?\n>>\n> One minor comment:\n> +/*\n> + * Test decoding relation rewrite with toast.\n> + * The insert into tbl2 within the same transaction\n> + * is there to check there is no remaining toast_hash\n> + * not being reset.\n> + */\n>\n> You can extend each line of comment up to 80 chars. The current one\n> looks a bit odd.\n\nThanks. I'll update the patch and comment that way once we have decided \nif we are going for a) or b).\n\nThanks\n\nBertrand\n\n\n\n",
"msg_date": "Fri, 13 Aug 2021 08:19:21 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: [bug] Logical Decoding of relation rewrite with toast does not\n reset\n toast_hash"
},
{
"msg_contents": "On Fri, Aug 13, 2021 at 11:47 AM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>\n> On 8/12/21 1:00 PM, Amit Kapila wrote:\n> >>\n> >> - sets \"relwrewrite\" for the toast.\n> >>\n> > --- a/src/backend/commands/tablecmds.c\n> > +++ b/src/backend/commands/tablecmds.c\n> > @@ -3861,6 +3861,10 @@ RenameRelationInternal(Oid myrelid, const char\n> > *newrelname, bool is_internal, bo\n> > */\n> > namestrcpy(&(relform->relname), newrelname);\n> >\n> > + /* reset relrewrite for toast */\n> > + if (relform->relkind == RELKIND_TOASTVALUE)\n> > + relform->relrewrite = InvalidOid;\n> > +\n> >\n> > I find this change quite ad-hoc. I think this API is quite generic to\n> > make such a change. I see two ways for this (a) pass a new bool flag\n> > (reset_toast_rewrite) in this API and then make this change, (b) in\n> > the specific place where we need this, change relrewrite separately\n> > via a new API.\n> >\n> > I would prefer (b) in the ideal case, but I understand it would be an\n> > additional cost, so maybe (a) is also okay. What do you people think?\n>\n> I would prefer a) because:\n>\n> - b) would need to update the exact same tuple one more time (means\n> doing more or less the same work: open relation, search for the tuple,\n> update the tuple...)\n>\n> - a) would still give the ability for someone reading the code to\n> understand where the relrewrite reset is needed (as opposed to the way\n> the patch is currently written)\n>\n> - finish_heap_swap() with swap_toast_by_content set to false, is the\n> only place where we currently need to reset explicitly relrewrite (so\n> that we would have the new API produced by b) being called only at that\n> place).\n>\n> - That means that b) would be only for code readability but at the price\n> of extra cost.\n>\n\nAnybody else would like to weigh in? I think it is good if few others\nalso share their opinion as we need to backpatch this bug-fix.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 13 Aug 2021 14:47:51 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [bug] Logical Decoding of relation rewrite with toast does not\n reset toast_hash"
},
{
"msg_contents": "Hi Drouvot,\n\n> I don't see extra data in your output and it looks like your \n> copy/paste is missing some content, no?\n>\n> On my side, that looks good and here is what i get with the patch applied:\n>\nI ran the test again, now I got the same output as yours, and it looks \ngood for me. (The issue I mentioned in previous email was caused by my \nconsole output.)\n\nThank you,\n\n-- \nDavid\n\nSoftware Engineer\nHighgo Software Inc. (Canada)\nwww.highgo.ca\n\n\n",
"msg_date": "Fri, 13 Aug 2021 16:33:22 -0700",
"msg_from": "David Zhang <david.zhang@highgo.ca>",
"msg_from_op": false,
"msg_subject": "Re: [UNVERIFIED SENDER] Re: [bug] Logical Decoding of relation\n rewrite with toast does not reset toast_hash"
},
{
"msg_contents": "Hi,\n\nOn 8/13/21 11:17 AM, Amit Kapila wrote:\n> On Fri, Aug 13, 2021 at 11:47 AM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>> On 8/12/21 1:00 PM, Amit Kapila wrote:\n>>>> - sets \"relwrewrite\" for the toast.\n>>>>\n>>> --- a/src/backend/commands/tablecmds.c\n>>> +++ b/src/backend/commands/tablecmds.c\n>>> @@ -3861,6 +3861,10 @@ RenameRelationInternal(Oid myrelid, const char\n>>> *newrelname, bool is_internal, bo\n>>> */\n>>> namestrcpy(&(relform->relname), newrelname);\n>>>\n>>> + /* reset relrewrite for toast */\n>>> + if (relform->relkind == RELKIND_TOASTVALUE)\n>>> + relform->relrewrite = InvalidOid;\n>>> +\n>>>\n>>> I find this change quite ad-hoc. I think this API is quite generic to\n>>> make such a change. I see two ways for this (a) pass a new bool flag\n>>> (reset_toast_rewrite) in this API and then make this change, (b) in\n>>> the specific place where we need this, change relrewrite separately\n>>> via a new API.\n>>>\n>>> I would prefer (b) in the ideal case, but I understand it would be an\n>>> additional cost, so maybe (a) is also okay. What do you people think?\n>> I would prefer a) because:\n>>\n>> - b) would need to update the exact same tuple one more time (means\n>> doing more or less the same work: open relation, search for the tuple,\n>> update the tuple...)\n>>\n>> - a) would still give the ability for someone reading the code to\n>> understand where the relrewrite reset is needed (as opposed to the way\n>> the patch is currently written)\n>>\n>> - finish_heap_swap() with swap_toast_by_content set to false, is the\n>> only place where we currently need to reset explicitly relrewrite (so\n>> that we would have the new API produced by b) being called only at that\n>> place).\n>>\n>> - That means that b) would be only for code readability but at the price\n>> of extra cost.\n>>\n> Anybody else would like to weigh in? I think it is good if few others\n> also share their opinion as we need to backpatch this bug-fix.\n\nI had a second thoughts on it and now think option b) is better.\n\nIt would make the code clearer by using a new API and the extra cost of \nit (mainly search again for the pg_class tuple and update it) should be ok.\n\nPlease find patch version V3 making use of a new API.\n\nThanks\n\nBertrand",
"msg_date": "Wed, 18 Aug 2021 09:56:42 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: [bug] Logical Decoding of relation rewrite with toast does not\n reset\n toast_hash"
},
{
"msg_contents": "On Wed, Aug 18, 2021 at 1:27 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>\n> Hi,\n>\n> On 8/13/21 11:17 AM, Amit Kapila wrote:\n> > On Fri, Aug 13, 2021 at 11:47 AM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n> >> On 8/12/21 1:00 PM, Amit Kapila wrote:\n> >>>> - sets \"relwrewrite\" for the toast.\n> >>>>\n> >>> --- a/src/backend/commands/tablecmds.c\n> >>> +++ b/src/backend/commands/tablecmds.c\n> >>> @@ -3861,6 +3861,10 @@ RenameRelationInternal(Oid myrelid, const char\n> >>> *newrelname, bool is_internal, bo\n> >>> */\n> >>> namestrcpy(&(relform->relname), newrelname);\n> >>>\n> >>> + /* reset relrewrite for toast */\n> >>> + if (relform->relkind == RELKIND_TOASTVALUE)\n> >>> + relform->relrewrite = InvalidOid;\n> >>> +\n> >>>\n> >>> I find this change quite ad-hoc. I think this API is quite generic to\n> >>> make such a change. I see two ways for this (a) pass a new bool flag\n> >>> (reset_toast_rewrite) in this API and then make this change, (b) in\n> >>> the specific place where we need this, change relrewrite separately\n> >>> via a new API.\n> >>>\n> >>> I would prefer (b) in the ideal case, but I understand it would be an\n> >>> additional cost, so maybe (a) is also okay. What do you people think?\n> >> I would prefer a) because:\n> >>\n> >> - b) would need to update the exact same tuple one more time (means\n> >> doing more or less the same work: open relation, search for the tuple,\n> >> update the tuple...)\n> >>\n> >> - a) would still give the ability for someone reading the code to\n> >> understand where the relrewrite reset is needed (as opposed to the way\n> >> the patch is currently written)\n> >>\n> >> - finish_heap_swap() with swap_toast_by_content set to false, is the\n> >> only place where we currently need to reset explicitly relrewrite (so\n> >> that we would have the new API produced by b) being called only at that\n> >> place).\n> >>\n> >> - That means that b) would be only for code readability but at the price\n> >> of extra cost.\n> >>\n> > Anybody else would like to weigh in? I think it is good if few others\n> > also share their opinion as we need to backpatch this bug-fix.\n>\n> I had a second thoughts on it and now think option b) is better.\n>\n> It would make the code clearer by using a new API and the extra cost of\n> it (mainly search again for the pg_class tuple and update it) should be ok.\n>\n\nI agree especially because I am not very comfortable changing the\nRenameRelationInternal() API in back branches. One minor comment:\n\n+\n+ /*\n+ * Reset the relrewrite for the toast. We need to call\n+ * CommandCounterIncrement() first to avoid the\n+ * \"tuple already updated by self\" error. Indeed the exact same\n+ * pg_class tuple has already been updated while\n+ * calling RenameRelationInternal().\n+ */\n+ CommandCounterIncrement();\n\nIt would be better if we can write the above comment as \"The\ncommand-counter increment is required here as we are about to update\nthe tuple that is updated as part of RenameRelationInternal.\" or\nsomething like that.\n\nI would like to push and back-patch the proposed patch (after some\nminor edits in the comments) unless someone thinks otherwise.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 18 Aug 2021 15:31:43 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [bug] Logical Decoding of relation rewrite with toast does not\n reset toast_hash"
},
{
"msg_contents": "Hi,\n\nOn 8/18/21 12:01 PM, Amit Kapila wrote:\n> On Wed, Aug 18, 2021 at 1:27 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>> Hi,\n>>\n>> On 8/13/21 11:17 AM, Amit Kapila wrote:\n>>> On Fri, Aug 13, 2021 at 11:47 AM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>>>> On 8/12/21 1:00 PM, Amit Kapila wrote:\n>>>>>> - sets \"relwrewrite\" for the toast.\n>>>>>>\n>>>>> --- a/src/backend/commands/tablecmds.c\n>>>>> +++ b/src/backend/commands/tablecmds.c\n>>>>> @@ -3861,6 +3861,10 @@ RenameRelationInternal(Oid myrelid, const char\n>>>>> *newrelname, bool is_internal, bo\n>>>>> */\n>>>>> namestrcpy(&(relform->relname), newrelname);\n>>>>>\n>>>>> + /* reset relrewrite for toast */\n>>>>> + if (relform->relkind == RELKIND_TOASTVALUE)\n>>>>> + relform->relrewrite = InvalidOid;\n>>>>> +\n>>>>>\n>>>>> I find this change quite ad-hoc. I think this API is quite generic to\n>>>>> make such a change. I see two ways for this (a) pass a new bool flag\n>>>>> (reset_toast_rewrite) in this API and then make this change, (b) in\n>>>>> the specific place where we need this, change relrewrite separately\n>>>>> via a new API.\n>>>>>\n>>>>> I would prefer (b) in the ideal case, but I understand it would be an\n>>>>> additional cost, so maybe (a) is also okay. What do you people think?\n>>>> I would prefer a) because:\n>>>>\n>>>> - b) would need to update the exact same tuple one more time (means\n>>>> doing more or less the same work: open relation, search for the tuple,\n>>>> update the tuple...)\n>>>>\n>>>> - a) would still give the ability for someone reading the code to\n>>>> understand where the relrewrite reset is needed (as opposed to the way\n>>>> the patch is currently written)\n>>>>\n>>>> - finish_heap_swap() with swap_toast_by_content set to false, is the\n>>>> only place where we currently need to reset explicitly relrewrite (so\n>>>> that we would have the new API produced by b) being called only at that\n>>>> place).\n>>>>\n>>>> - That means that b) would be only for code readability but at the price\n>>>> of extra cost.\n>>>>\n>>> Anybody else would like to weigh in? I think it is good if few others\n>>> also share their opinion as we need to backpatch this bug-fix.\n>> I had a second thoughts on it and now think option b) is better.\n>>\n>> It would make the code clearer by using a new API and the extra cost of\n>> it (mainly search again for the pg_class tuple and update it) should be ok.\n>>\n> I agree especially because I am not very comfortable changing the\n> RenameRelationInternal() API in back branches. One minor comment:\n>\n> +\n> + /*\n> + * Reset the relrewrite for the toast. We need to call\n> + * CommandCounterIncrement() first to avoid the\n> + * \"tuple already updated by self\" error. Indeed the exact same\n> + * pg_class tuple has already been updated while\n> + * calling RenameRelationInternal().\n> + */\n> + CommandCounterIncrement();\n>\n> It would be better if we can write the above comment as \"The\n> command-counter increment is required here as we are about to update\n> the tuple that is updated as part of RenameRelationInternal.\" or\n> something like that.\n>\n> I would like to push and back-patch the proposed patch (after some\n> minor edits in the comments) unless someone thinks otherwise.\n\nThanks!\n\nI've updated the comment and prepared the back patch versions:\n\nPlease find attached:\n\nv4-0001-toast-rewrite-master-branch.patch: to be applied on the master \nand REL_14_STABLE branches\n\nv4-0001-toast-rewrite-13-stable-branch.patch: to be applied on the \nREL_13_STABLE and REL_12_STABLE branches\n\nv4-0001-toast-rewrite-11-stable-branch.patch: to be applied on the \nREL_11_STABLE branch\n\nI stopped the back patching here as the issue has been introduced in \n325f2ec555.\n\nThanks\n\nBertrand",
"msg_date": "Wed, 18 Aug 2021 16:39:42 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: [bug] Logical Decoding of relation rewrite with toast does not\n reset\n toast_hash"
},
{
"msg_contents": "On Wed, Aug 18, 2021 at 8:09 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>\n> Hi,\n>\n> On 8/18/21 12:01 PM, Amit Kapila wrote:\n> > On Wed, Aug 18, 2021 at 1:27 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n> >> Hi,\n>\n> I've updated the comment and prepared the back patch versions:\n>\n\nI have verified and all your patches look good to me. I'll push and\nbackpatch this by Wednesday unless there are more comments.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 23 Aug 2021 15:32:24 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [bug] Logical Decoding of relation rewrite with toast does not\n reset toast_hash"
},
{
"msg_contents": "Hi,\n\nOn 8/23/21 12:02 PM, Amit Kapila wrote:\n> On Wed, Aug 18, 2021 at 8:09 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>> Hi,\n>>\n>> On 8/18/21 12:01 PM, Amit Kapila wrote:\n>>> On Wed, Aug 18, 2021 at 1:27 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>>>> Hi,\n>> I've updated the comment and prepared the back patch versions:\n>>\n> I have verified and all your patches look good to me. I'll push and\n> backpatch this by Wednesday unless there are more comments.\n\nI just saw that the patch has been committed.\n\nThanks for your help and time on this.\n\nI'll mark the corresponding commitfest entry as \"Committed\".\n\nThanks\n\nBertrand\n\n\n\n",
"msg_date": "Wed, 25 Aug 2021 07:55:47 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: [bug] Logical Decoding of relation rewrite with toast does not\n reset\n toast_hash"
},
{
"msg_contents": "On Wed, Aug 25, 2021 at 11:26 AM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>\n> I just saw that the patch has been committed.\n>\n> Thanks for your help and time on this.\n>\n> I'll mark the corresponding commitfest entry as \"Committed\".\n>\n\nThanks for your work on this. I have also marked it closed in\nPostgreSQL_14_Open_Items doc [1]\n\n[1] - https://wiki.postgresql.org/wiki/PostgreSQL_14_Open_Items\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 26 Aug 2021 13:47:03 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [bug] Logical Decoding of relation rewrite with toast does not\n reset toast_hash"
}
]
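The rewrite-with-toast failure discussed in the thread above can be sketched as a minimal SQL repro for the test_decoding plugin. This is only a sketch following the shape of the test case the thread talks about: object and slot names are illustrative, not taken from the thread, and whether the inserted value is actually toasted depends on the table's storage settings.

```sql
-- tbl1 gets a toasted value and is then rewritten; an insert into tbl2 in
-- the same transaction is decoded afterwards, which is where a toast_hash
-- left over from the rewrite would have tripped the assertion.
CREATE TABLE tbl1 (a INT, b TEXT);
CREATE TABLE tbl2 (a INT);
SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_decoding');

BEGIN;
INSERT INTO tbl1 VALUES (1, repeat('a', 4001));     -- large enough to be toasted
ALTER TABLE tbl1 ADD COLUMN id serial PRIMARY KEY;  -- forces a table rewrite
INSERT INTO tbl2 VALUES (1);                        -- same transaction: exposes a stale toast_hash
COMMIT;

SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL);
SELECT pg_drop_replication_slot('regression_slot');
```

With the fix in place, decoding the transaction above completes normally instead of failing while assembling toast chunks for the rewritten table.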
[
{
"msg_contents": "I found that the psql tab auto-complete was not working for some cases\nof CREATE PUBLICATION [1].\n\nCREATE PUBLICATION name\n [ FOR TABLE [ ONLY ] table_name [ * ] [, ...]\n | FOR ALL TABLES ]\n [ WITH ( publication_parameter [= value] [, ... ] ) ]\n\n~~~\n\nFor example, the following scenarios did not work as I was expecting:\n\n\"create publication mypub for all tables \" TAB --> expected complete\nwith \"WITH (\"\n\n\"create publication mypub for all ta\" TAB --> expected complete with \"TABLES\"\n\n\"create publication mypub for all tables w\" TAB --> expected complete\nwith \"WITH (\"\n\n\"create publication mypub for table mytable \" TAB --> expected\ncomplete with \"WITH (\"\n\n~~~\n\nPSA a small patch which seems to improve at least for those\naforementioned cases.\n\nNow results are:\n\n\"create publication mypub for all tables \" TAB --> \"create publication\nmypub for all tables WITH ( \"\n\n\"create publication mypub for all ta\" TAB --> \"create publication\nmypub for all tables \"\n\n\"create publication mypub for all tables w\" TAB --> \"create\npublication mypub for all tables with ( \"\n\n\"create publication mypub for table mytable \" TAB --> \"create\npublication mypub for table mytable WITH ( \"\n\n------\n[1] https://www.postgresql.org/docs/devel/sql-createpublication.html\n\nKind Regards,\nPeter Smith\nFujitsu Australia",
"msg_date": "Fri, 9 Jul 2021 17:36:04 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "psql tab auto-complete for CREATE PUBLICATION"
},
{
"msg_contents": "On Fri, Jul 9, 2021 at 1:06 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> I found that the psql tab auto-complete was not working for some cases\n> of CREATE PUBLICATION [1].\n>\n> CREATE PUBLICATION name\n> [ FOR TABLE [ ONLY ] table_name [ * ] [, ...]\n> | FOR ALL TABLES ]\n> [ WITH ( publication_parameter [= value] [, ... ] ) ]\n>\n> ~~~\n>\n> For example, the following scenarios did not work as I was expecting:\n>\n> \"create publication mypub for all tables \" TAB --> expected complete\n> with \"WITH (\"\n>\n> \"create publication mypub for all ta\" TAB --> expected complete with \"TABLES\"\n>\n> \"create publication mypub for all tables w\" TAB --> expected complete\n> with \"WITH (\"\n>\n> \"create publication mypub for table mytable \" TAB --> expected\n> complete with \"WITH (\"\n>\n> ~~~\n>\n> PSA a small patch which seems to improve at least for those\n> aforementioned cases.\n>\n> Now results are:\n>\n> \"create publication mypub for all tables \" TAB --> \"create publication\n> mypub for all tables WITH ( \"\n>\n> \"create publication mypub for all ta\" TAB --> \"create publication\n> mypub for all tables \"\n>\n> \"create publication mypub for all tables w\" TAB --> \"create\n> publication mypub for all tables with ( \"\n>\n> \"create publication mypub for table mytable \" TAB --> \"create\n> publication mypub for table mytable WITH ( \"\n>\n\nThanks for the patch, the changes look good to me.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 16 Jul 2021 22:49:14 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: psql tab auto-complete for CREATE PUBLICATION"
},
{
"msg_contents": "\n\nOn 2021/07/17 2:19, vignesh C wrote:\n> Thanks for the patch, the changes look good to me.\n\nThe patch looks good to me, too. Pushed. Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 1 Sep 2021 22:04:28 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: psql tab auto-complete for CREATE PUBLICATION"
},
{
"msg_contents": "On Wed, Sep 1, 2021 at 11:04 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2021/07/17 2:19, vignesh C wrote:\n> > Thanks for the patch, the changes look good to me.\n>\n> The patch looks good to me, too. Pushed. Thanks!\n>\n\nThanks for pushing!\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 2 Sep 2021 08:54:43 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: psql tab auto-complete for CREATE PUBLICATION"
}
]
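For reference, these are the statement shapes the completions in the thread above help build, following the CREATE PUBLICATION synopsis quoted there (publication and table names are illustrative):

```sql
-- "FOR ALL TABLES" followed by the WITH ( ... ) options clause:
CREATE PUBLICATION mypub FOR ALL TABLES
    WITH (publish = 'insert, update');

-- "FOR TABLE name" followed by the same WITH ( ... ) clause:
CREATE PUBLICATION mypub2 FOR TABLE mytable
    WITH (publish_via_partition_root = true);
```

Each of the reported tab-completion scenarios corresponds to one of the transition points in these statements: after TABLES, after the table name, and before WITH (.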
[
{
"msg_contents": "Hi hackers,\n\nI'd like to add kerberos authentication support for postgres_fdw by adding two\noptions to user mapping: krb_client_keyfile and gssencmode.\n\nIn the backend we have krb_server_keyfile option to specify a keytab file to\nbe used by postgres server, krb_client_keyfile is doing mostly the same thing.\nThis allows postgres_fdw(backend process) to authenticate on behalf of a\nlogged in user who is querying the foreign table. The credential is kept in\nthe backend process memory instead of local file to prevent abuse by users\non the same host.\n\nBecause backend process is accessing the filesystem of the server host, this\noption should only be manipulated by super user. Otherwise, normal user may\nsteal the identity or probe the server filesystem. This principle is the same as the\nsslcert and sslkey options in user mapping.\n\nThoughts?\n\nBest regards,\nPeifeng",
"msg_date": "Fri, 9 Jul 2021 09:46:37 +0000",
"msg_from": "Peifeng Qiu <peifengq@vmware.com>",
"msg_from_op": true,
"msg_subject": "Support kerberos authentication for postgres_fdw"
},
{
"msg_contents": "Peifeng Qiu <peifengq@vmware.com> writes:\n> I'd like to add kerberos authentication support for postgres_fdw by adding two\n> options to user mapping: krb_client_keyfile and gssencmode.\n\nAs you note, this'd have to be restricted to superusers, which makes it\nseem like a pretty bad idea. We really don't want to be in a situation\nof pushing people to run day-to-day stuff as superuser. Yeah, having\naccess to kerberos auth sounds good on the surface, but it seems like\nit would be a net loss in security because of that.\n\nIs there some other way?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 09 Jul 2021 09:49:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Support kerberos authentication for postgres_fdw"
},
{
"msg_contents": "On Fri, Jul 9, 2021 at 3:49 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Peifeng Qiu <peifengq@vmware.com> writes:\n> > I'd like to add kerberos authentication support for postgres_fdw by adding two\n> > options to user mapping: krb_client_keyfile and gssencmode.\n>\n> As you note, this'd have to be restricted to superusers, which makes it\n> seem like a pretty bad idea. We really don't want to be in a situation\n> of pushing people to run day-to-day stuff as superuser. Yeah, having\n> access to kerberos auth sounds good on the surface, but it seems like\n> it would be a net loss in security because of that.\n>\n> Is there some other way?\n\nISTM the right way to do this would be using Kerberos delegation. That\nis, the system would be set up so that the postgres service principal\nis trusted for kerberos delegation and it would then pass through the\nactual Kerberos authentication from the client.\n\nAt least at a quick glance this does not look like what this patch is\ndoing, sadly.\n\nWhat does kerberos auth with a fixed key on the client (being the\npostgres server in this auth step) actually help with?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Sat, 10 Jul 2021 22:45:17 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Support kerberos authentication for postgres_fdw"
},
{
"msg_contents": ">As you note, this'd have to be restricted to superusers, which makes it\n>seem like a pretty bad idea. We really don't want to be in a situation\n>of pushing people to run day-to-day stuff as superuser. Yeah, having\n>access to kerberos auth sounds good on the surface, but it seems like\n>it would be a net loss in security because of that.\n\nI can imagine the use case would be a superuser creates the user\nmapping and foreign table, then grants access of foreign table to\na normal user. This way the normal user can execute queries on the\nforeign table but can't access sensitive information in user mapping.\n\nThe main purpose of this patch is to provide a simple way to do\nkerberos authentication with the least modification possible.\n\n>ISTM the right way to do this would be using Kerberos delegation. That\n>is, the system would be set up so that the postgres service principal\n>is trusted for kerberos delegation and it would then pass through the\n>actual Kerberos authentication from the client.\n\nI agree this sounds like the ideal solution. If I understand it correctly,\nthis approach requires both postgres servers to use same kerberos\nsettings(kdc, realm, etc), and the FDW server can just \"forward\"\nnecessary information to authenticate on behalf of the same user.\nI will spend some time to investigate it and reach out later.\n\nBest regards,\nPeifeng",
"msg_date": "Mon, 12 Jul 2021 03:43:21 +0000",
"msg_from": "Peifeng Qiu <peifengq@vmware.com>",
"msg_from_op": true,
"msg_subject": "Re: Support kerberos authentication for postgres_fdw"
},
{
"msg_contents": "On Mon, Jul 12, 2021 at 5:43 AM Peifeng Qiu <peifengq@vmware.com> wrote:\n>\n> >As you note, this'd have to be restricted to superusers, which makes it\n> >seem like a pretty bad idea. We really don't want to be in a situation\n> >of pushing people to run day-to-day stuff as superuser. Yeah, having\n> >access to kerberos auth sounds good on the surface, but it seems like\n> >it would be a net loss in security because of that.\n>\n> I can imagine the use case would be a superuser creates the user\n> mapping and foreign table, then grants access of foreign table to\n> a normal user. This way the normal user can execute queries on the\n> foreign table but can't access sensitive information in user mapping.\n>\n> The main purpose of this patch is to provide a simple way to do\n> kerberos authentication with the least modification possible.\n\nBut in this case, what dose Kerberos give over just using a password\nbased solution? It adds complexity, but what's teh actual gain?\n\n\n> >ISTM the right way to do this would be using Kerberos delegation. That\n> >is, the system would be set up so that the postgres service principal\n> >is trusted for kerberos delegation and it would then pass through the\n> >actual Kerberos authentication from the client.\n>\n> I agree this sounds like the ideal solution. If I understand it correctly,\n> this approach requires both postgres servers to use same kerberos\n> settings(kdc, realm, etc), and the FDW server can just \"forward\"\n> necessary information to authenticate on behalf of the same user.\n> I will spend some time to investigate it and reach out later.\n\nI don't actually know if they have to be in the same realm, I *think*\nkerberos delegations work across trusted realms, but I'm not sure\nabout that.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Mon, 12 Jul 2021 11:34:32 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Support kerberos authentication for postgres_fdw"
},
{
"msg_contents": ">But in this case, what dose Kerberos give over just using a password\n>based solution? It adds complexity, but what's teh actual gain?\n\nThat's due to policy of some customers. They require all login to be kerberos\nbased and password-less. I suppose this way they don't need to maintain\npasswords in each database and the same keytab file may be used in\nconnections to multiple databases.\nIf we can do the delegation approach right, it's clearly a superior solution\nsince keytab file management is also quite heavy burden.\n\n\n\n\n\n\n\n\n>But in this case, what dose Kerberos give over just using a password\n\n>based solution? It adds complexity, but what's teh actual gain?\n\n\nThat's due to policy of some customers. They require all login to be kerberos\nbased and password-less. I suppose this way they don't need to maintain\npasswords in each database and the same keytab file may be used in\nconnections to multiple databases.\nIf we can do the delegation approach right, it's clearly a superior solution\nsince keytab file management is also quite heavy burden.",
"msg_date": "Mon, 12 Jul 2021 11:23:33 +0000",
"msg_from": "Peifeng Qiu <peifengq@vmware.com>",
"msg_from_op": true,
"msg_subject": "Re: Support kerberos authentication for postgres_fdw"
},
{
"msg_contents": "Greetings,\n\n* Magnus Hagander (magnus@hagander.net) wrote:\n> On Mon, Jul 12, 2021 at 5:43 AM Peifeng Qiu <peifengq@vmware.com> wrote:\n> > >As you note, this'd have to be restricted to superusers, which makes it\n> > >seem like a pretty bad idea. We really don't want to be in a situation\n> > >of pushing people to run day-to-day stuff as superuser. Yeah, having\n> > >access to kerberos auth sounds good on the surface, but it seems like\n> > >it would be a net loss in security because of that.\n> >\n> > I can imagine the use case would be a superuser creates the user\n> > mapping and foreign table, then grants access of foreign table to\n> > a normal user. This way the normal user can execute queries on the\n> > foreign table but can't access sensitive information in user mapping.\n> >\n> > The main purpose of this patch is to provide a simple way to do\n> > kerberos authentication with the least modification possible.\n> \n> But in this case, what dose Kerberos give over just using a password\n> based solution? It adds complexity, but what's teh actual gain?\n\nThis is a bad idea.\n\n> > >ISTM the right way to do this would be using Kerberos delegation. That\n> > >is, the system would be set up so that the postgres service principal\n> > >is trusted for kerberos delegation and it would then pass through the\n> > >actual Kerberos authentication from the client.\n> >\n> > I agree this sounds like the ideal solution. 
If I understand it correctly,\n> > this approach requires both postgres servers to use same kerberos\n> > settings(kdc, realm, etc), and the FDW server can just \"forward\"\n> > necessary information to authenticate on behalf of the same user.\n> > I will spend some time to investigate it and reach out later.\n> \n> I don't actually know if they have to be in the same realm, I *think*\n> kerberos delegations work across trusted realms, but I'm not sure\n> about that.\n\nThis is a good idea, and yes, delegation works just fine across realms\nif the environment is properly set up for cross-realm trust.\n\nKerberos delegation is absolutely the way to go here. I don't think we\nshould even be thinking of accepting something that requires users to\nput a bunch of keytab files on the PG server to allow that server to\nreach out to other servers...\n\nI'd be happy to work with someone on an effort to support Kerberos\ndelegated credentials; it's been something that I've wanted to work on\nfor a long time.\n\nThanks,\n\nStephen",
"msg_date": "Mon, 12 Jul 2021 17:26:54 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Support kerberos authentication for postgres_fdw"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nI'd like to add kerberos authentication support for postgres_fdw by adding two\noptions to user mapping: krb_client_keyfile and gssencmode.\n\nIn the backend we have krb_server_keyfile option to specify a keytab file to\nbe used by postgres server, krb_client_keyfile is doing mostly the same thing.\nThis allows postgres_fdw(backend process) to authenticate on behalf of a\nlogged in user who is querying the foreign table. The credential is kept in\nthe backend process memory instead of local file to prevent abuse by users\non the same host.\n\nBecause backend process is accessing the filesystem of the server host, this\noption should only be manipulated by super user. Otherwise, normal user may\nsteal the identity or probe the server filesystem. This principal is the same to\nsslcert and sslkey options in user mapping.\n\nThoughts?\n\nBest regards,\nPeifeng",
"msg_date": "Fri, 9 Jul 2021 10:13:20 +0000",
"msg_from": "Peifeng Qiu <peifengq@vmware.com>",
"msg_from_op": true,
"msg_subject": "Support kerberos authentication for postgres_fdw"
},
{
"msg_contents": "On Fri, Jul 09, 2021 at 10:13:20AM +0000, Peifeng Qiu wrote:\n> I'd like to add kerberos authentication support for postgres_fdw by adding two\n> options to user mapping: krb_client_keyfile and gssencmode.\n\nYou may want to register this patch into the next commit fest, to get\nit reviewed for a potential integration in 15:\nhttps://commitfest.postgresql.org/34/\n--\nMichael",
"msg_date": "Sat, 10 Jul 2021 20:32:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Support kerberos authentication for postgres_fdw"
}
] |
[
{
"msg_contents": "Hi,\n\nAs suggested on a different thread [1], pg_receivewal can increase it's test\ncoverage. There exists a non trivial amount of code that handles gzip\ncompression. The current patch introduces tests that cover creation of gzip\ncompressed WAL files and the handling of gzip partial segments. Finally the\nintegrity of the compressed files is verified.\n\nI hope you find this useful.\n\nCheers,\n//Georgios\n\n[1] https://www.postgresql.org/message-id/flat/ZCm1J5vfyQ2E6dYvXz8si39HQ2gwxSZ3IpYaVgYa3lUwY88SLapx9EEnOf5uEwrddhx2twG7zYKjVeuP5MwZXCNPybtsGouDsAD1o2L_I5E%3D%40pm.me",
"msg_date": "Fri, 09 Jul 2021 11:26:58 +0000",
"msg_from": "Georgios <gkokolatos@protonmail.com>",
"msg_from_op": true,
"msg_subject": "Introduce pg_receivewal gzip compression tests"
},
{
"msg_contents": "On Fri, Jul 09, 2021 at 11:26:58AM +0000, Georgios wrote:\n> As suggested on a different thread [1], pg_receivewal can increase it's test\n> coverage. There exists a non trivial amount of code that handles gzip\n> compression. The current patch introduces tests that cover creation of gzip\n> compressed WAL files and the handling of gzip partial segments. Finally the\n> integrity of the compressed files is verified.\n\n+ # Verify compressed file's integrity\n+ my $gzip_is_valid = system_log('gzip', '--test', $gzip_wals[0]);\n+ is($gzip_is_valid, 0, \"program gzip verified file's integrity\");\nlibz and gzip are usually split across different packages, hence there\nis no guarantee that this command is always available (same comment as\nfor LZ4 from a couple of days ago).\n\n+ [\n+ 'pg_receivewal', '-D', $stream_dir, '--verbose',\n+ '--endpos', $nextlsn, '-Z', '5'\n+ ],\nI would keep the compression level to a minimum here, to limit CPU\nusage but still compress something faster.\n\n+ # Verify compressed file's integrity\n+ my $gzip_is_valid = system_log('gzip', '--test', $gzip_wals[0]);\n+ is($gzip_is_valid, 0, \"program gzip verified file's integrity\");\nShouldn't this be coded as a loop going through @gzip_wals?\n--\nMichael",
"msg_date": "Mon, 12 Jul 2021 15:42:15 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Introduce pg_receivewal gzip compression tests"
},
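(Editor's note: the integrity check discussed in the review above can be sketched outside the TAP framework. The shell fragment below is an illustrative sketch only — the file name `segment` is invented, not a real WAL segment — showing compression at a low level, as the reviewer suggests, followed by the `gzip --test` validation whose exit status the test asserts.)

```shell
# Sketch of the integrity check under discussion; "segment" is an
# invented stand-in for a streamed WAL file.
set -e
workdir=$(mktemp -d)
cd "$workdir"

echo "dummy WAL payload" > segment

# Compress at the lowest level to limit CPU usage, per the review comment.
gzip -1 segment                # produces segment.gz

# gzip --test exits 0 only when the archive's internal checksums are
# consistent, which is what system_log('gzip', '--test', ...) relies on.
gzip --test segment.gz
echo "integrity check passed"
```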
{
"msg_contents": "\n\n‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\n\nOn Monday, July 12th, 2021 at 08:42, Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Fri, Jul 09, 2021 at 11:26:58AM +0000, Georgios wrote:\n>\n> > As suggested on a different thread [1], pg_receivewal can increase it's test\n> >\n> > coverage. There exists a non trivial amount of code that handles gzip\n> >\n> > compression. The current patch introduces tests that cover creation of gzip\n> >\n> > compressed WAL files and the handling of gzip partial segments. Finally the\n> >\n> > integrity of the compressed files is verified.\n>\n> - # Verify compressed file's integrity\n>\n>\n> - my $gzip_is_valid = system_log('gzip', '--test', $gzip_wals[0]);\n>\n>\n> - is($gzip_is_valid, 0, \"program gzip verified file's integrity\");\n>\n>\n>\n> libz and gzip are usually split across different packages, hence there\n>\n> is no guarantee that this command is always available (same comment as\n>\n> for LZ4 from a couple of days ago).\n\n\nOf course. 
Though while going for it, I did find in Makefile.global.in:\n\n TAR = @TAR@\n XGETTEXT = @XGETTEXT@\n\n GZIP = gzip\n BZIP2 = bzip2\n\n DOWNLOAD = wget -O $@ --no-use-server-timestamps\n\nWhich is also used by GNUmakefile.in\n\n distcheck: dist\n rm -rf $(dummy)\n mkdir $(dummy)\n $(GZIP) -d -c $(distdir).tar.gz | $(TAR) xf -\n install_prefix=`cd $(dummy) && pwd`; \\\n\n\nThis to my understanding means that gzip is expected to exist.\nIf this is correct, then simply checking for the headers should\nsuffice, since that is the only dependency for the files to be\ncreated.\n\nIf this is wrong, then I will add the discovery code as in the\nother patch.\n\n>\n> - [\n>\n>\n> - 'pg_receivewal', '-D', $stream_dir, '--verbose',\n>\n>\n> - '--endpos', $nextlsn, '-Z', '5'\n>\n>\n> - ],\n>\n>\n>\n> I would keep the compression level to a minimum here, to limit CPU\n>\n> usage but still compress something faster.\n>\n> - # Verify compressed file's integrity\n>\n>\n> - my $gzip_is_valid = system_log('gzip', '--test', $gzip_wals[0]);\n>\n>\n> - is($gzip_is_valid, 0, \"program gzip verified file's integrity\");\n>\n>\n>\n> Shouldn't this be coded as a loop going through @gzip_wals?\n\nI would hope that there is only one gz file created. There is a line\nfurther up that tests exactly that.\n\n+ is (scalar(@gzip_wals), 1, \"one gzip compressed WAL was created\");\n\n\nThen there should also be a partial gz file which is tested further ahead.\n\nCheers,\n//Georgios\n\n> -----------------------------------------------------------\n>\n> Michael\n\n\n",
"msg_date": "Mon, 12 Jul 2021 09:42:32 +0000",
"msg_from": "gkokolatos@pm.me",
"msg_from_op": false,
"msg_subject": "Re: Introduce pg_receivewal gzip compression tests"
},
{
"msg_contents": "\n\n‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\n\nOn Monday, July 12th, 2021 at 11:42, <gkokolatos@pm.me> wrote:\n\n> ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\n>\n> On Monday, July 12th, 2021 at 08:42, Michael Paquier michael@paquier.xyz wrote:\n>\n> > On Fri, Jul 09, 2021 at 11:26:58AM +0000, Georgios wrote:\n> >\n> > > As suggested on a different thread [1], pg_receivewal can increase it's test\n> > >\n> > > coverage. There exists a non trivial amount of code that handles gzip\n> > >\n> > > compression. The current patch introduces tests that cover creation of gzip\n> > >\n> > > compressed WAL files and the handling of gzip partial segments. Finally the\n> > >\n> > > integrity of the compressed files is verified.\n> >\n> > - # Verify compressed file's integrity\n> >\n> >\n> > - my $gzip_is_valid = system_log('gzip', '--test', $gzip_wals[0]);\n> >\n> >\n> > - is($gzip_is_valid, 0, \"program gzip verified file's integrity\");\n> >\n> >\n> >\n> > libz and gzip are usually split across different packages, hence there\n> >\n> > is no guarantee that this command is always available (same comment as\n> >\n> > for LZ4 from a couple of days ago).\n>\n> Of course. 
Though while going for it, I did find in Makefile.global.in:\n>\n> TAR = @TAR@\n>\n> XGETTEXT = @XGETTEXT@\n>\n> GZIP = gzip\n>\n> BZIP2 = bzip2\n>\n> DOWNLOAD = wget -O $@ --no-use-server-timestamps\n>\n> Which is also used by GNUmakefile.in\n>\n> distcheck: dist\n>\n> rm -rf $(dummy)\n>\n> mkdir $(dummy)\n>\n> $(GZIP) -d -c $(distdir).tar.gz | $(TAR) xf -\n>\n> install_prefix=`cd $(dummy) && pwd`; \\\n>\n> This to my understanding means that gzip is expected to exist.\n>\n> If this is correct, then simply checking for the headers should\n>\n> suffice, since that is the only dependency for the files to be\n>\n> created.\n>\n> If this is wrong, then I will add the discovery code as in the\n>\n> other patch.\n>\n> > - [\n> >\n> >\n> > - 'pg_receivewal', '-D', $stream_dir, '--verbose',\n> >\n> >\n> > - '--endpos', $nextlsn, '-Z', '5'\n> >\n> >\n> > - ],\n> >\n> >\n> >\n> > I would keep the compression level to a minimum here, to limit CPU\n> >\n> > usage but still compress something faster.\n> >\n> > - # Verify compressed file's integrity\n> >\n> >\n> > - my $gzip_is_valid = system_log('gzip', '--test', $gzip_wals[0]);\n> >\n> >\n> > - is($gzip_is_valid, 0, \"program gzip verified file's integrity\");\n> >\n> >\n> >\n> > Shouldn't this be coded as a loop going through @gzip_wals?\n>\n> I would hope that there is only one gz file created. There is a line\n>\n> further up that tests exactly that.\n>\n> - is (scalar(@gzip_wals), 1, \"one gzip compressed WAL was created\");\n\n\nLet me amend that. The line should be instead:\n\n is (scalar(keys @gzip_wals), 1, \"one gzip compressed WAL was created\");\n\nTo properly test that there is one entry.\n\nLet me provide with v2 to fix this.\n\nCheers,\n//Georgios\n\n>\n> Then there should also be a partial gz file which is tested further ahead.\n>\n> Cheers,\n>\n> //Georgios\n>\n> > Michael\n\n\n",
"msg_date": "Mon, 12 Jul 2021 09:56:30 +0000",
"msg_from": "gkokolatos@pm.me",
"msg_from_op": false,
"msg_subject": "Re: Introduce pg_receivewal gzip compression tests"
},
{
"msg_contents": "‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\n\nOn Monday, July 12th, 2021 at 11:56, <gkokolatos@pm.me> wrote:\n\n> ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\n>\n> On Monday, July 12th, 2021 at 11:42, gkokolatos@pm.me wrote:\n>\n> > ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\n> >\n> > On Monday, July 12th, 2021 at 08:42, Michael Paquier michael@paquier.xyz wrote:\n> >\n> > > On Fri, Jul 09, 2021 at 11:26:58AM +0000, Georgios wrote:\n> > >\n> > > > As suggested on a different thread [1], pg_receivewal can increase it's test\n> > > >\n> > > > coverage. There exists a non trivial amount of code that handles gzip\n> > > >\n> > > > compression. The current patch introduces tests that cover creation of gzip\n> > > >\n> > > > compressed WAL files and the handling of gzip partial segments. Finally the\n> > > >\n> > > > integrity of the compressed files is verified.\n> > >\n> > > - # Verify compressed file's integrity\n> > >\n> > >\n> > > - my $gzip_is_valid = system_log('gzip', '--test', $gzip_wals[0]);\n> > >\n> > >\n> > > - is($gzip_is_valid, 0, \"program gzip verified file's integrity\");\n> > >\n> > >\n> > >\n> > > libz and gzip are usually split across different packages, hence there\n> > >\n> > > is no guarantee that this command is always available (same comment as\n> > >\n> > > for LZ4 from a couple of days ago).\n> >\n> > Of course. 
Though while going for it, I did find in Makefile.global.in:\n> >\n> > TAR = @TAR@\n> >\n> > XGETTEXT = @XGETTEXT@\n> >\n> > GZIP = gzip\n> >\n> > BZIP2 = bzip2\n> >\n> > DOWNLOAD = wget -O $@ --no-use-server-timestamps\n> >\n> > Which is also used by GNUmakefile.in\n> >\n> > distcheck: dist\n> >\n> > rm -rf $(dummy)\n> >\n> > mkdir $(dummy)\n> >\n> > $(GZIP) -d -c $(distdir).tar.gz | $(TAR) xf -\n> >\n> > install_prefix=`cd $(dummy) && pwd`; \\\n> >\n> > This to my understanding means that gzip is expected to exist.\n> >\n> > If this is correct, then simply checking for the headers should\n> >\n> > suffice, since that is the only dependency for the files to be\n> >\n> > created.\n> >\n> > If this is wrong, then I will add the discovery code as in the\n> >\n> > other patch.\n> >\n> > > - [\n> > >\n> > >\n> > > - 'pg_receivewal', '-D', $stream_dir, '--verbose',\n> > >\n> > >\n> > > - '--endpos', $nextlsn, '-Z', '5'\n> > >\n> > >\n> > > - ],\n> > >\n> > >\n> > >\n> > > I would keep the compression level to a minimum here, to limit CPU\n> > >\n> > > usage but still compress something faster.\n> > >\n> > > - # Verify compressed file's integrity\n> > >\n> > >\n> > > - my $gzip_is_valid = system_log('gzip', '--test', $gzip_wals[0]);\n> > >\n> > >\n> > > - is($gzip_is_valid, 0, \"program gzip verified file's integrity\");\n> > >\n> > >\n> > >\n> > > Shouldn't this be coded as a loop going through @gzip_wals?\n> >\n> > I would hope that there is only one gz file created. There is a line\n> >\n> > further up that tests exactly that.\n> >\n> > - is (scalar(@gzip_wals), 1, \"one gzip compressed WAL was created\");\n>\n> Let me amend that. 
The line should be instead:\n>\n> is (scalar(keys @gzip_wals), 1, \"one gzip compressed WAL was created\");\n>\n> To properly test that there is one entry.\n>\n> Let me provide with v2 to fix this.\n\n\nPlease find v2 attached with the above.\n\nCheers,\n//Georgios\n\n>\n> Cheers,\n>\n> //Georgios\n>\n> > Then there should also be a partial gz file which is tested further ahead.\n> >\n> > Cheers,\n> >\n> > //Georgios\n> >\n> >\n> > > Michael",
"msg_date": "Mon, 12 Jul 2021 10:27:35 +0000",
"msg_from": "gkokolatos@pm.me",
"msg_from_op": false,
"msg_subject": "Re: Introduce pg_receivewal gzip compression tests"
},
{
"msg_contents": "Le 12/07/2021 à 12:27, gkokolatos@pm.me a écrit :\n>>>>\n>>>> Shouldn't this be coded as a loop going through @gzip_wals?\n>>> I would hope that there is only one gz file created. There is a line\n>>>\n>>> further up that tests exactly that.\n>>>\n>>> - is (scalar(@gzip_wals), 1, \"one gzip compressed WAL was created\");\n>> Let me amend that. The line should be instead:\n>>\n>> is (scalar(keys @gzip_wals), 1, \"one gzip compressed WAL was created\");\n>>\n>> To properly test that there is one entry.\n>>\n>> Let me provide with v2 to fix this.\n\n\nThe following tests are not correct in Perl even if Perl returns the\nright value.\n\n+ is (scalar(keys @gzip_wals), 1, \"one gzip compressed WAL was created\");\n\n\n+ is (scalar(keys @gzip_partial_wals), 1,\n+ \"one partial gzip compressed WAL was created\");\n\n\nFunction keys or values are used only with hashes but here you are using\narrays. To obtain the length of the array you can just use the scalar\nfunction as Perl returns the length of the array when it is called in a\nscalar context. Please use the following instead:\n\n\n+ is (scalar(@gzip_wals), 1, \"one gzip compressed WAL was created\");\n\n\n+ is (scalar(@gzip_partial_wals), 1,\n+ \"one partial gzip compressed WAL was created\");\n\n\n-- \nGilles Darold\nhttp://www.darold.net/\n\n\n\n\n",
"msg_date": "Mon, 12 Jul 2021 13:00:53 +0200",
"msg_from": "Gilles Darold <gilles@darold.net>",
"msg_from_op": false,
"msg_subject": "Re: Introduce pg_receivewal gzip compression tests"
},
{
"msg_contents": "On Mon, Jul 12, 2021 at 09:42:32AM +0000, gkokolatos@pm.me wrote:\n> This to my understanding means that gzip is expected to exist.\n> If this is correct, then simply checking for the headers should\n> suffice, since that is the only dependency for the files to be\n> created.\n\nYou cannot expect this to work on Windows when it comes to MSVC for\nexample, as gzip may not be in the environment PATH so the test would\nfail hard. Let's just rely on $ENV{GZIP} instead, and skip the test\nif it is not defined.\n--\nMichael",
"msg_date": "Mon, 12 Jul 2021 20:04:34 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Introduce pg_receivewal gzip compression tests"
},
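(Editor's note: the skip-when-unavailable behaviour suggested above can be sketched in shell terms as follows; `GZIP_PROGRAM` is the variable name the patch ultimately settles on, and the availability probe mirrors what the TAP test does before invoking gzip.)

```shell
# Sketch of the "skip when gzip cannot run" guard discussed above.
GZIP_PROGRAM=${GZIP_PROGRAM:-gzip}
if command -v "$GZIP_PROGRAM" >/dev/null 2>&1 &&
   "$GZIP_PROGRAM" --version >/dev/null 2>&1; then
    status="run"
else
    status="skipped"
fi
echo "gzip tests: $status"
```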
{
"msg_contents": "\n\n‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\n\nOn Monday, July 12th, 2021 at 13:00, Gilles Darold <gilles@darold.net> wrote:\n\n> Le 12/07/2021 à 12:27, gkokolatos@pm.me a écrit :\n>\n> > > > > Shouldn't this be coded as a loop going through @gzip_wals?\n> > > > >\n> > > > > I would hope that there is only one gz file created. There is a line\n> > > >\n> > > > further up that tests exactly that.\n> > > >\n> > > > - is (scalar(@gzip_wals), 1, \"one gzip compressed WAL was created\");\n> > > >\n> > > > Let me amend that. The line should be instead:\n> > >\n> > > is (scalar(keys @gzip_wals), 1, \"one gzip compressed WAL was created\");\n> > >\n> > > To properly test that there is one entry.\n> > >\n> > > Let me provide with v2 to fix this.\n>\n> The following tests are not correct in Perl even if Perl returns the\n>\n> right value.\n>\n> + is (scalar(keys @gzip_wals), 1, \"one gzip compressed WAL was created\");\n>\n> + is (scalar(keys @gzip_partial_wals), 1,\n>\n> + \"one partial gzip compressed WAL was created\");\n>\n> Function keys or values are used only with hashes but here you are using\n>\n> arrays. To obtain the length of the array you can just use the scalar\n>\n> function as Perl returns the length of the array when it is called in a\n>\n> scalar context. Please use the following instead:\n>\n> + is (scalar(@gzip_wals), 1, \"one gzip compressed WAL was created\");\n>\n> + is (scalar(@gzip_partial_wals), 1,\n>\n> + \"one partial gzip compressed WAL was created\");\n\nYou are absolutely correct. I had used that in v1, yet since it got called out\nI doubted myself, assumed I was wrong and the rest is history. 
I shall amend the\namendment for v3 of the patch.\n\nCheers,\n//Georgios\n\n>\n>\n> --\n>\n> Gilles Darold\n>\n> http://www.darold.net/\n\n\n",
"msg_date": "Mon, 12 Jul 2021 15:01:16 +0000",
"msg_from": "gkokolatos@pm.me",
"msg_from_op": false,
"msg_subject": "Re: Introduce pg_receivewal gzip compression tests"
},
{
"msg_contents": "‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\n\nOn Monday, July 12th, 2021 at 13:04, Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Mon, Jul 12, 2021 at 09:42:32AM +0000, gkokolatos@pm.me wrote:\n>\n> > This to my understanding means that gzip is expected to exist.\n> >\n> > If this is correct, then simply checking for the headers should\n> >\n> > suffice, since that is the only dependency for the files to be\n> >\n> > created.\n>\n> You cannot expect this to work on Windows when it comes to MSVC for\n>\n> example, as gzip may not be in the environment PATH so the test would\n>\n> fail hard. Let's just rely on $ENV{GZIP} instead, and skip the test\n>\n> if it is not defined.\n\nI am admittedly not so well versed on Windows systems. Thank you for\ninforming me.\n\nPlease find attached v3 of the patch where $ENV{GZIP_PROGRAM} is used\ninstead. To the best of my knowledge one should avoid using $ENV{GZIP}\nbecause that would translate to the obsolete, yet used environment\nvariable GZIP which holds a set of default options for gzip. In essence\nit would be equivalent to executing:\n GZIP=gzip gzip --test <files>\nwhich can result to errors similar to:\n gzip: gzip: non-option in GZIP environment variable\n\nCheers,\n//Georgios\n\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>\n> Michael",
"msg_date": "Mon, 12 Jul 2021 15:07:50 +0000",
"msg_from": "gkokolatos@pm.me",
"msg_from_op": false,
"msg_subject": "Re: Introduce pg_receivewal gzip compression tests"
},
{
"msg_contents": "‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\n\nOn Monday, July 12th, 2021 at 17:07, <gkokolatos@pm.me> wrote:\n\n> ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\n>\n> On Monday, July 12th, 2021 at 13:04, Michael Paquier michael@paquier.xyz wrote:\n>\n> > On Mon, Jul 12, 2021 at 09:42:32AM +0000, gkokolatos@pm.me wrote:\n> >\n> > > This to my understanding means that gzip is expected to exist.\n> > >\n> > > If this is correct, then simply checking for the headers should\n> > >\n> > > suffice, since that is the only dependency for the files to be\n> > >\n> > > created.\n> >\n> > You cannot expect this to work on Windows when it comes to MSVC for\n> >\n> > example, as gzip may not be in the environment PATH so the test would\n> >\n> > fail hard. Let's just rely on $ENV{GZIP} instead, and skip the test\n> >\n> > if it is not defined.\n>\n> I am admittedly not so well versed on Windows systems. Thank you for\n>\n> informing me.\n>\n> Please find attached v3 of the patch where $ENV{GZIP_PROGRAM} is used\n>\n> instead. To the best of my knowledge one should avoid using $ENV{GZIP}\n>\n> because that would translate to the obsolete, yet used environment\n>\n> variable GZIP which holds a set of default options for gzip. In essence\n>\n> it would be equivalent to executing:\n>\n> GZIP=gzip gzip --test <files>\n>\n> which can result to errors similar to:\n>\n> gzip: gzip: non-option in GZIP environment variable\n>\n\nAfter a bit more thinking, I went ahead and added on top of v3 a test\nverifying that the gzip program can actually be called.\n\nPlease find v4 attached.\n\nCheers,\n//Georgios\n\n>\n> > Michael",
"msg_date": "Mon, 12 Jul 2021 16:46:29 +0000",
"msg_from": "gkokolatos@pm.me",
"msg_from_op": false,
"msg_subject": "Re: Introduce pg_receivewal gzip compression tests"
},
{
"msg_contents": "On Mon, Jul 12, 2021 at 04:46:29PM +0000, gkokolatos@pm.me wrote:\n> On Monday, July 12th, 2021 at 17:07, <gkokolatos@pm.me> wrote:\n>> ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\n\nAre you using outlook? The format of your messages gets blurry on the\nPG website, so does it for me.\n\n>> I am admittedly not so well versed on Windows systems. Thank you for\n>> informing me.\n>> Please find attached v3 of the patch where $ENV{GZIP_PROGRAM} is used\n>> instead. To the best of my knowledge one should avoid using $ENV{GZIP}\n>> because that would translate to the obsolete, yet used environment\n>> variable GZIP which holds a set of default options for gzip. In essence\n>> it would be equivalent to executing:\n>> GZIP=gzip gzip --test <files>\n>> which can result to errors similar to:\n>> gzip: gzip: non-option in GZIP environment variable\n\n-# make this available to TAP test scripts\n+# make these available to TAP test scripts\n export TAR\n+export GZIP_PROGRAM=$(GZIP)\n\nWow. So this comes from the fact that the command gzip can feed on\nthe environment variable from the same name. I was not aware of\nthat, and a comment would be in place here. That means complicating a\nbit the test flow for people on Windows, but I am fine to live with\nthat as long as this does not fail hard. One extra thing we could do\nis drop this part of the test, but I agree that this is useful to have\naround as a validity check.\n\n> After a bit more thinking, I went ahead and added on top of v3 a test\n> verifying that the gzip program can actually be called.\n\n+ system_log($gzip, '--version') != 0);\nChecking after that does not hurt, indeed. I am wondering if we\nshould do that for TAR.\n\nAnother thing I find unnecessary is the number of the tests. This\ndoes two rounds of pg_receivewal just to test the long and short\noptions of -Z/-compress, which brings only coverage to make sure that\nboth option names are handled. 
That's a high cost for a low amount of\nextra coverage, so let's cut the runtime in half and just use the\nround with --compress.\n\nThere was also a bit of confusion with ZLIB and gzip in the variable\nnames and the comments, the latter being only the command while the\ncompression happens with zlib. With a round of indentation on top of\nall that, I get the attached.\n\nWhat do you think?\n--\nMichael",
"msg_date": "Tue, 13 Jul 2021 10:53:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Introduce pg_receivewal gzip compression tests"
},
{
"msg_contents": "\n\n‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\n\nOn Tuesday, July 13th, 2021 at 03:53, Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Mon, Jul 12, 2021 at 04:46:29PM +0000, gkokolatos@pm.me wrote:\n>\n> > On Monday, July 12th, 2021 at 17:07, gkokolatos@pm.me wrote:\n> >\n> > > ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\n>\n> Are you using outlook? The format of your messages gets blurry on the\n>\n> PG website, so does it for me.\n\nI am using protonmail's web page. I was not aware of the issue. Thank you\nfor bringing it up to my attention. I shall try to address it.\n\n>\n> > > I am admittedly not so well versed on Windows systems. Thank you for\n> > >\n> > > informing me.\n> > >\n> > > Please find attached v3 of the patch where $ENV{GZIP_PROGRAM} is used\n> > >\n> > > instead. To the best of my knowledge one should avoid using $ENV{GZIP}\n> > >\n> > > because that would translate to the obsolete, yet used environment\n> > >\n> > > variable GZIP which holds a set of default options for gzip. In essence\n> > >\n> > > it would be equivalent to executing:\n> > >\n> > > GZIP=gzip gzip --test <files>\n> > >\n> > > which can result to errors similar to:\n> > >\n> > > gzip: gzip: non-option in GZIP environment variable\n>\n> -# make this available to TAP test scripts\n>\n> +# make these available to TAP test scripts\n>\n> export TAR\n>\n> +export GZIP_PROGRAM=$(GZIP)\n>\n> Wow. So this comes from the fact that the command gzip can feed on\n>\n> the environment variable from the same name. I was not aware of\n>\n> that, and a comment would be in place here. That means complicating a\n>\n> bit the test flow for people on Windows, but I am fine to live with\n>\n> that as long as this does not fail hard. One extra thing we could do\n>\n> is drop this part of the test, but I agree that this is useful to have\n>\n> around as a validity check.\n\nGreat.\n\n>\n> > After a bit more thinking, I went ahead and added on top of v3 a test\n> >\n> > verifying that the gzip program can actually be called.\n>\n> - system_log($gzip, '--version') != 0);\n>\n>\n>\n> Checking after that does not hurt, indeed. I am wondering if we\n>\n> should do that for TAR.\n\nI do not think that this will be a necessity for TAR. TAR after all\nis discovered by autoconf, which gzip is not.\n\n>\n> Another thing I find unnecessary is the number of the tests. This\n>\n> does two rounds of pg_receivewal just to test the long and short\n>\n> options of -Z/-compress, which brings only coverage to make sure that\n>\n> both option names are handled. That's a high cost for a low amount of\n>\n> extra coverage, so let's cut the runtime in half and just use the\n>\n> round with --compress.\n\nI am sorry this was not so clear. It is indeed running twice the binary\nwith different flags. However the goal is not to check the flags, but\nto make certain that the partial file has now been completed. That is\nwhy there was code asserting that the previous FILENAME.gz.partial file\nafter the second invocation is converted to FILENAME.gz.\n\nAdditionally the second invocation of pg_receivewal is extending the\ncoverage of FindStreamingStart().\n\nThe different flags was an added bonus.\n\n>\n> There was also a bit of confusion with ZLIB and gzip in the variable\n>\n> names and the comments, the latter being only the command while the\n>\n> compression happens with zlib. With a round of indentation on top of\n>\n> all that, I get the attached.\n>\n> What do you think?\n\nThank you very much for the patch. I would prefer to keep the parts that\ntest that .gz.partial are completed on a subsequent run if you agree.\n\nCheers,\n//Georgios\n\n> --\n>\n> Michael\n\n\n",
"msg_date": "Tue, 13 Jul 2021 06:36:59 +0000",
"msg_from": "gkokolatos@pm.me",
"msg_from_op": false,
"msg_subject": "Re: Introduce pg_receivewal gzip compression tests"
},
{
"msg_contents": "On Tue, Jul 13, 2021 at 06:36:59AM +0000, gkokolatos@pm.me wrote:\n> I am sorry this was not so clear. It is indeed running twice the binary\n> with different flags. However the goal is not to check the flags, but\n> to make certain that the partial file has now been completed. That is\n> why there was code asserting that the previous FILENAME.gz.partial file\n> after the second invocation is converted to FILENAME.gz.\n\nThe first run you are adding checks the same thing thanks to\npg_switch_wal(), where pg_receivewal completes the generation of\n000000010000000000000002.gz and finishes with\n000000010000000000000003.gz.partial.\n\n> Additionally the second invocation of pg_receivewal is extending the\n> coverage of FindStreamingStart().\n\nHmm. It looks like a waste in runtime once we mix LZ4 in that as that\nwould mean 5 runs of pg_receivewal, but we really need only three of\nthem with --endpos:\n- One with ZLIB compression.\n- One with LZ4 compression.\n- One without compression.\n\nDo you think that we could take advantage of what is now the only run\nof pg_receivewal --endpos for that? We could make the ZLIB checks run\nfirst, conditionally, and then let the last command with --endpos\nperform a full scan of the contents in $stream_dir with the .gz files\nalready in place. The addition of LZ4 would be an extra conditional\nblock similar to what's introduced in ZLIB, running before the last\ncommand without compression.\n--\nMichael",
"msg_date": "Tue, 13 Jul 2021 16:37:53 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Introduce pg_receivewal gzip compression tests"
},
{
"msg_contents": "On Tue, Jul 13, 2021 at 04:37:53PM +0900, Michael Paquier wrote:\n> Hmm. It looks like a waste in runtime once we mix LZ4 in that as that\n> would mean 5 runs of pg_receivewal, but we really need only three of\n> them with --endpos:\n> - One with ZLIB compression.\n> - One with LZ4 compression.\n> - One without compression.\n> \n> Do you think that we could take advantage of what is now the only run\n> of pg_receivewal --endpos for that? We could make the ZLIB checks run\n> first, conditionally, and then let the last command with --endpos\n> perform a full scan of the contents in $stream_dir with the .gz files\n> already in place. The addition of LZ4 would be an extra conditional\n> block similar to what's introduced in ZLIB, running before the last\n> command without compression.\n\nPoking at this problem, I partially take this statement back as this\nrequires an initial run of pg_receivewal --endpos to ensure the\ncreation of one .gz and one .gz.partial. So I guess that this should\nbe structured as:\n1) Keep the existing pg_receivewal --endpos.\n2) Add the ZLIB block, with one pg_receivewal --endpos.\n3) Add at the end one extra pg_receivewal --endpos, outside of the ZLIB\nblock, which should check the creation of a .partial, non-compressed\nsegment. This should mention that we place this command at the end of\nthe test for the start streaming point computation.\n\nLZ4 tests would be then between 2) and 3), or 1) and 2).\n--\nMichael",
"msg_date": "Tue, 13 Jul 2021 17:14:05 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Introduce pg_receivewal gzip compression tests"
},
{
"msg_contents": "\n\n‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\n\nOn Tuesday, July 13th, 2021 at 09:37, Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, Jul 13, 2021 at 06:36:59AM +0000, gkokolatos@pm.me wrote:\n>\n> > I am sorry this was not so clear. It is indeed running twice the binary\n> > with different flags. However the goal is not to check the flags, but\n> > to make certain that the partial file has now been completed. That is\n> > why there was code asserting that the previous FILENAME.gz.partial file\n> > after the second invocation is converted to FILENAME.gz.\n>\n> The first run you are adding checks the same thing thanks to\n> pg_switch_wal(), where pg_receivewal completes the generation of\n> 000000010000000000000002.gz and finishes with\n> 000000010000000000000003.gz.partial.\n\nThis is correct. It is the 000000010000000000000003 WAL that the rest\nof the tests are targeting.\n\n>\n> > Additionally the second invocation of pg_receivewal is extending the\n> > coverage of FindStreamingStart().\n>\n> Hmm. It looks like a waste in runtime once we mix LZ4 in that as that\n> would mean 5 runs of pg_receivewal, but we really need only three of\n> them with --endpos:\n> - One with ZLIB compression.\n> - One with LZ4 compression.\n> - One without compression.\n>\n> Do you think that we could take advantage of what is now the only run\n> of pg_receivewal --endpos for that? We could make the ZLIB checks run\n> first, conditionally, and then let the last command with --endpos\n> perform a full scan of the contents in $stream_dir with the .gz files\n> already in place. The addition of LZ4 would be an extra conditional\n> block similar to what's introduced in ZLIB, running before the last\n> command without compression.\n\nI will admit that for the current patch I am not taking lz4 into account as\nat the moment I have little idea as to how the lz4 patch will advance with the\nreview rounds. I simply accepted that it will be rebased on top of the patch\nin the current thread and probably need to modify the current then.\n\nBut I digress. I would like to have some combination of .gz and .gz.partial but\nI will not take too strong of a stance. I am happy to go with your suggestion.\n\nCheers,\n//Georgios\n\n>\n> --\n>\n> Michael\n\n\n",
"msg_date": "Tue, 13 Jul 2021 08:22:51 +0000",
"msg_from": "gkokolatos@pm.me",
"msg_from_op": false,
"msg_subject": "Re: Introduce pg_receivewal gzip compression tests"
},
{
"msg_contents": "\n\n‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\n\nOn Tuesday, July 13th, 2021 at 10:14, Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, Jul 13, 2021 at 04:37:53PM +0900, Michael Paquier wrote:\n>\n> Poking at this problem, I partially take this statement back as this\n> requires an initial run of pg_receivewal --endpos to ensure the\n> creation of one .gz and one .gz.partial. So I guess that this should\n> be structured as:\n>\n> 1. Keep the existing pg_receivewal --endpos.\n> 2. Add the ZLIB block, with one pg_receivewal --endpos.\n> 3. Add at the end one extra pg_receivewal --endpos, outside of the ZLIB\n> block, which should check the creation of a .partial, non-compressed\n> segment. This should mention that we place this command at the end of\n> the test for the start streaming point computation.\n> LZ4 tests would be then between 2) and 3), or 1) and 2).\n\nSounds great. Let me cook up v6 for this.\n\n>\n> --\n>\n> Michael\n\n\n",
"msg_date": "Tue, 13 Jul 2021 08:28:44 +0000",
"msg_from": "gkokolatos@pm.me",
"msg_from_op": false,
"msg_subject": "Re: Introduce pg_receivewal gzip compression tests"
},
{
"msg_contents": "On Tue, Jul 13, 2021 at 08:28:44AM +0000, gkokolatos@pm.me wrote:\n> Sounds great. Let me cook up v6 for this.\n\nThanks. Could you use v5 I posted upthread as a base? There were\nsome improvements in the variable names, the comments and the test\ndescriptions.\n--\nMichael",
"msg_date": "Tue, 13 Jul 2021 19:26:42 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Introduce pg_receivewal gzip compression tests"
},
{
"msg_contents": "\n\n‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\n\nOn Tuesday, July 13th, 2021 at 12:26, Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, Jul 13, 2021 at 08:28:44AM +0000, gkokolatos@pm.me wrote:\n> > Sounds great. Let me cook up v6 for this.\n>\n> Thanks. Could you use v5 I posted upthread as a base? There were\n> some improvements in the variable names, the comments and the test\n> descriptions.\n\nAgreed. For the record that is why I said v6 :)\n\nCheers,\n//Georgios\n\n> -----------------------------------------------------------------------------------------------------------------------------------------------------\n> Michael\n\n\n",
"msg_date": "Tue, 13 Jul 2021 11:16:06 +0000",
"msg_from": "gkokolatos@pm.me",
"msg_from_op": false,
"msg_subject": "Re: Introduce pg_receivewal gzip compression tests"
},
{
"msg_contents": "On Tue, Jul 13, 2021 at 11:16:06AM +0000, gkokolatos@pm.me wrote:\n> Agreed. For the record that is why I said v6 :)\n\nOkay, thanks.\n--\nMichael",
"msg_date": "Wed, 14 Jul 2021 11:17:55 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Introduce pg_receivewal gzip compression tests"
},
{
"msg_contents": "‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\n\nOn Wednesday, July 14th, 2021 at 04:17, Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, Jul 13, 2021 at 11:16:06AM +0000, gkokolatos@pm.me wrote:\n> > Agreed. For the record that is why I said v6 :)\n> Okay, thanks.\n\nPlease find v6 attached.\n\nCheers,\n//Georgios\n\n> ---------------\n>\n> Michael",
"msg_date": "Wed, 14 Jul 2021 14:11:09 +0000",
"msg_from": "gkokolatos@pm.me",
"msg_from_op": false,
"msg_subject": "Re: Introduce pg_receivewal gzip compression tests"
},
{
"msg_contents": "On Wed, Jul 14, 2021 at 02:11:09PM +0000, gkokolatos@pm.me wrote:\n> Please find v6 attached.\n\nThanks. I have spent some time checking this stuff in details, and\nI did some tests on Windows while on it. A run of pgperltidy was\nmissing. A second thing is that you added one useless WAL segment\nswitch in the ZLIB block, and two at the end, causing the first two in\nthe set of three (one in the ZLIB block and one in the final command)\nto be no-ops as they followed a previous WAL switch. The final one\nwas not needed as no WAL is generated after that.\n\nAnd applied. Let's see if the buildfarm has anything to say. Perhaps\nthis will even catch some bugs that pre-existed.\n--\nMichael",
"msg_date": "Thu, 15 Jul 2021 16:00:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Introduce pg_receivewal gzip compression tests"
},
{
"msg_contents": "\n\n‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\n\nOn Thursday, July 15th, 2021 at 09:00, Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Wed, Jul 14, 2021 at 02:11:09PM +0000, gkokolatos@pm.me wrote:\n>\n> > Please find v6 attached.\n>\n> Thanks. I have spent some time checking this stuff in details, and\n> I did some tests on Windows while on it. A run of pgperltidy was\n> missing. A second thing is that you added one useless WAL segment\n> switch in the ZLIB block, and two at the end, causing the first two in\n> the set of three (one in the ZLIB block and one in the final command)\n> to be no-ops as they followed a previous WAL switch. The final one\n> was not needed as no WAL is generated after that.\n>\n\nThank you for the work and comments.\n\n> And applied. Let's see if the buildfarm has anything to say. Perhaps\n> this will even catch some bugs that pre-existed.\n\nLet us hope that it will prevent some bugs from happening.\n\nCheers,\n//Georgios\n\n> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>\n> Michael\n\n\n",
"msg_date": "Thu, 15 Jul 2021 07:48:08 +0000",
"msg_from": "gkokolatos@pm.me",
"msg_from_op": false,
"msg_subject": "Re: Introduce pg_receivewal gzip compression tests"
},
{
"msg_contents": "On Thu, Jul 15, 2021 at 07:48:08AM +0000, gkokolatos@pm.me wrote:\n> Let us hope that it will prevent some bugs from happening.\n\nThe buildfarm has two reports.\n\n1) bowerbird on Windows/MSVC:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=bowerbird&dt=2021-07-15%2010%3A30%3A36\npg_receivewal: fatal: could not fsync existing write-ahead log file\n\"000000010000000000000002.partial\": Permission denied\nnot ok 20 - streaming some WAL using ZLIB compression\nI don't think the existing code can be blamed for that as this means a\nfailure with gzflush(). Likely a concurrency issue as that's an\nEACCES. If that's repeatable, that could point to an actual issue\nwith pg_receivewal --compress.\n\n2) curculio:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=curculio&dt=2021-07-15%2010%3A30%3A15\n# Running: gzip --test\n /home/pgbf/buildroot/HEAD/pgsql.build/src/bin/pg_basebackup/tmp_check/t_020_pg_receivewal_primary_data/archive_wal/000000010000000000000002.gz\n /home/pgbf/buildroot/HEAD/pgsql.build/src/bin/pg_basebackup/tmp_check/t_020_pg_receivewal_primary_data/archive_wal/000000010000000000000003.gz.partial\ngzip:\n /home/pgbf/buildroot/HEAD/pgsql.build/src/bin/pg_basebackup/tmp_check/t_020_pg_receivewal_primary_data/archive_wal/000000010000000000000003.gz.partial:\n unknown suffix: ignored\n not ok 24 - gzip verified the integrity of compressed WAL segments\n\nLooking at the OpenBSD code (usr.bin/compress/main.c), long options\nare supported, where --version does exit(0) without printing\nanything, and --test is supported even if that's not on the man pages.\nset_outfile() is doing a discard of the file suffixes it does not\nrecognize, and I think that their implementation bumps on .gz.partial\nand generates an exit code of 512 to map with WARNING. I still wish\nto keep this test, and I'd like to think that the contents of\n@zlib_wals are enough in terms of coverage. What do you think?\n--\nMichael",
"msg_date": "Thu, 15 Jul 2021 20:35:27 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Introduce pg_receivewal gzip compression tests"
},
{
"msg_contents": "On Thu, Jul 15, 2021 at 08:35:27PM +0900, Michael Paquier wrote:\n> 1) bowerbird on Windows/MSVC:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=bowerbird&dt=2021-07-15%2010%3A30%3A36\n> pg_receivewal: fatal: could not fsync existing write-ahead log file\n> \"000000010000000000000002.partial\": Permission denied\n> not ok 20 - streaming some WAL using ZLIB compression\n> I don't think the existing code can be blamed for that as this means a\n> failure with gzflush(). Likely a concurrency issue as that's an\n> EACCES. If that's repeatable, that could point to an actual issue\n> with pg_receivewal --compress.\n\nFor this one, I'll try to test harder on my own host. I am curious to\nsee if the other Windows members running the TAP tests have anything\nto say. Looking at the code of zlib, this would come from gz_zero()\nin gzflush(), which could blow up on a write() in gz_comp().\n\n> 2) curculio:\n> \n> Looking at the OpenBSD code (usr.bin/compress/main.c), long options\n> are supported, where --version does exit(0) without printing\n> anything, and --test is supported even if that's not on the man pages.\n> set_outfile() is doing a discard of the file suffixes it does not\n> recognize, and I think that their implementation bumps on .gz.partial\n> and generates an exit code of 512 to map with WARNING. I still wish\n> to keep this test, and I'd like to think that the contents of\n> @zlib_wals are enough in terms of coverage. What do you think?\n\nAfter thinking more about this one, I have taken the course to just\nremove the .gz.partial segment from the check, a full segment should\nbe enough in terms of coverage. I prefer this simplification over a\nrename of the .partial segment or a tweak of the error code to map\nwith WARNING.\n--\nMichael",
"msg_date": "Thu, 15 Jul 2021 21:35:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Introduce pg_receivewal gzip compression tests"
},
{
"msg_contents": "\n\n‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\n\nOn Thursday, July 15th, 2021 at 14:35, Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Thu, Jul 15, 2021 at 08:35:27PM +0900, Michael Paquier wrote:\n\n>\n> > 2. curculio:\n> > Looking at the OpenBSD code (usr.bin/compress/main.c), long options\n> > are supported, where --version does exit(0) without printing\n> > set_outfile() is doing a discard of the file suffixes it does not\n> > recognize, and I think that their implementation bumps on .gz.partial\n> > and generates an exit code of 512 to map with WARNING. I still wish\n> > to keep this test, and I'd like to think that the contents of\n> > @zlib_wals are enough in terms of coverage. What do you think?\n>\n> After thinking more about this one, I have taken the course to just\n> remove the .gz.partial segment from the check, a full segment should\n> be enough in terms of coverage. I prefer this simplification over a\n> rename of the .partial segment or a tweak of the error code to map\n> with WARNING.\n\nFair enough.\n\nCheers,\n//Georgios\n\n> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>\n> Michael\n\n\n",
"msg_date": "Thu, 15 Jul 2021 13:49:07 +0000",
"msg_from": "gkokolatos@pm.me",
"msg_from_op": false,
"msg_subject": "Re: Introduce pg_receivewal gzip compression tests"
},
{
"msg_contents": "On Thu, Jul 15, 2021 at 09:35:52PM +0900, Michael Paquier wrote:\n> For this one, I'll try to test harder on my own host. I am curious to\n> see if the other Windows members running the TAP tests have anything\n> to say. Looking at the code of zlib, this would come from gz_zero()\n> in gzflush(), which could blow up on a write() in gz_comp().\n\nbowerbird has just failed for the second time in a row on EACCES, so\nthere is more here than meets the eye. Looking at the code, I think I\nhave spotted what it is and the buildfarm logs give a very good hint:\n# Running: pg_receivewal -D\n:/prog/bf/root/HEAD/pgsql.build/src/bin/pg_basebackup/tmp_check/t_020_pg_receivewal_primary_data/archive_wal\n--verbose --endpos 0/3000028 --compress 1\npg_receivewal: starting log streaming at 0/2000000 (timeline 1)\npg_receivewal: fatal: could not fsync existing write-ahead log file\n\"000000010000000000000002.partial\": Permission denied\nnot ok 20 - streaming some WAL using ZLIB compression\n\n--compress is used and the sync fails for a non-compressed segment.\nLooking at the code it is pretty obvious that open_walfile() is\ngetting confused with the handling of an existing .partial segment\nwhile walmethods.c uses dir_data->compression in all the places that\nmatter. So that's a legit bug, which happens only when\nsuccessive pg_receivewal runs mix the compression and\nnon-compression modes.\n\nI am amazed that the other buildfarm members are not complaining, to\nbe honest. jacana runs this TAP test with MinGW and ZLIB, and does\nnot complain.\n--\nMichael",
"msg_date": "Fri, 16 Jul 2021 08:59:11 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Introduce pg_receivewal gzip compression tests"
},
{
"msg_contents": "On Fri, Jul 16, 2021 at 08:59:11AM +0900, Michael Paquier wrote:\n> --compress is used and the sync fails for a non-compressed segment.\n> Looking at the code it is pretty obvious that open_walfile() is\n> getting confused with the handling of an existing .partial segment\n> while walmethods.c uses dir_data->compression in all the places that\n> matter. So that's a legit bug, which happens only when\n> successive pg_receivewal runs mix the compression and\n> non-compression modes.\n\nDitto. After reading the code more carefully, the code is actually\nable to work even if it could be cleaner:\n1) dir_existsfile() would check for the existence of a\nnon-compressed, partial segment all the time.\n2) If this non-compressed file was padded, the code would use\nopen_for_write() that would open a compressed, partial segment.\n3) The compressed, partial segment would be the one flushed.\n\nThis behavior is rather debatable, and it would be more instinctive to\nme to just skip any business related to the pre-padding if compression\nis enabled, at the cost of one extra callback in WalWriteMethod to\ngrab the compression level (dir_open_for_write() skips that for\ncompression) to allow receivelog.c to handle that. But at the same\ntime few users are going to care about that as pg_receivewal has most\nlikely always the same set of options, so complicating this code is\nnot really appealing either.\n\n> I am amazed that the other buildfarm members are not complaining, to\n> be honest. jacana runs this TAP test with MinGW and ZLIB, and does\n> not complain.\n\nI have spent more time on that with my own environment, and while\ntesting I have bumped into a different issue with zlib, which was\nreally weird. In the same scenario as above, gzdopen() has been\nfailing for me at step 2), causing the test to loop forever. We\ndocument to use DLLs for ZLIB coming from zlib.net, but the ones\navailable there are really outdated as far as I can see (found some\ncalled zlib.lib/dll myself, breaking Solution.pm). For now I have\ndisabled those tests on Windows to bring back bowerbird to green, but\nthere is something else going on here. We don't do many tests with\nZLIB on Windows for pg_basebackup and pg_dump, so there may be some\nmore issues?\n\n@Andrew: which version of ZLIB are you using on bowerbird? That's the\none in c:\\prog\\3p64. That's a zdll.lib, right?\n--\nMichael",
"msg_date": "Fri, 16 Jul 2021 14:08:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Introduce pg_receivewal gzip compression tests"
},
{
"msg_contents": "On Fri, Jul 16, 2021 at 02:08:57PM +0900, Michael Paquier wrote:\n> This behavior is rather debatable, and it would be more instinctive to\n> me to just skip any business related to the pre-padding if compression\n> is enabled, at the cost of one extra callback in WalWriteMethod to\n> grab the compression level (dir_open_for_write() skips that for\n> compression) to allow receivelog.c to handle that. But at the same\n> time few users are going to care about that as pg_receivewal has most\n> likely always the same set of options, so complicating this code is\n> not really appealing either.\n\nI have chewed on that over the weekend, and skipping the padding logic\nif we are in compression mode in open_walfile() makes sense, so\nattached is a patch that I'd like to backpatch.\n\nAnother advantage of this patch is the handling of \".gz\" is reduced to\none code path instead of four. That makes a bit easier the\nintroduction of new compression methods.\n\nA second thing that was really confusing is that the name of the WAL\nsegment generated in this code path completely ignored the type of\ncompression. This led to one confusing error message if failing to\nopen a segment for write where we'd mention a .partial file rather\nthan a .gz.partial file. The versions of zlib I used on Windows\nlooked buggy so I cannot conclude there, but I am sure that this\nshould allow bowerbird to handle the test correctly.\n--\nMichael",
"msg_date": "Mon, 19 Jul 2021 16:03:33 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Introduce pg_receivewal gzip compression tests"
},
{
"msg_contents": "On Mon, Jul 19, 2021 at 04:03:33PM +0900, Michael Paquier wrote:\n> Another advantage of this patch is the handling of \".gz\" is reduced to\n> one code path instead of four. That makes a bit easier the\n> introduction of new compression methods.\n> \n> A second thing that was really confusing is that the name of the WAL\n> segment generated in this code path completely ignored the type of\n> compression. This led to one confusing error message if failing to\n> open a segment for write where we'd mention a .partial file rather\n> than a .gz.partial file. The versions of zlib I used on Windows\n> looked buggy so I cannot conclude there, but I am sure that this\n> should allow bowerbird to handle the test correctly.\n\nAfter more testing and more review, I have applied and backpatched\nthis stuff. Another thing I did on HEAD was to enable again the ZLIB\nportion of the pg_receivewal tests on Windows. bowerbird should stay\ngreen (I hope), and it is better to have as much coverage as\npossible for all that.\n--\nMichael",
"msg_date": "Tue, 20 Jul 2021 13:31:30 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Introduce pg_receivewal gzip compression tests"
}
] |
[
{
"msg_contents": "Hi,\nI was looking at find_hash_columns() in nodeAgg.c\n\nIt seems the first loop tries to determine the max column number needed,\nalong with whether all columns are needed.\n\nThe loop can be re-written as shown in the patch.\n\nIn normal cases, we don't need to perform scanDesc->natts iterations.\nIn best case scenario, the loop would terminate after two iterations.\n\nPlease provide your comment.\n\nThanks",
"msg_date": "Fri, 9 Jul 2021 08:20:04 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "short circuit suggestion in find_hash_columns()"
},
{
"msg_contents": "On Sat, 10 Jul 2021 at 03:15, Zhihong Yu <zyu@yugabyte.com> wrote:\n> I was looking at find_hash_columns() in nodeAgg.c\n>\n> It seems the first loop tries to determine the max column number needed, along with whether all columns are needed.\n>\n> The loop can be re-written as shown in the patch.\n\nThis runs during ExecInitAgg(). Do you have a test case where you're\nseeing any performance gains from this change?\n\nDavid\n\n\n",
"msg_date": "Sat, 10 Jul 2021 03:28:38 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: short circuit suggestion in find_hash_columns()"
},
{
"msg_contents": "On Fri, Jul 9, 2021 at 8:28 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Sat, 10 Jul 2021 at 03:15, Zhihong Yu <zyu@yugabyte.com> wrote:\n> > I was looking at find_hash_columns() in nodeAgg.c\n> >\n> > It seems the first loop tries to determine the max column number needed,\n> along with whether all columns are needed.\n> >\n> > The loop can be re-written as shown in the patch.\n>\n> This runs during ExecInitAgg(). Do you have a test case where you're\n> seeing any performance gains from this change?\n>\n> David\n>\n\nHi,\nI made some attempt in varying related test but haven't seen much\ndifference in performance.\n\nLet me spend more time (possibly in off hours) on this.\n\nCheers",
"msg_date": "Fri, 9 Jul 2021 09:49:53 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: short circuit suggestion in find_hash_columns()"
}
] |
[
{
"msg_contents": "Hi,\n\nI've always had a hard time distinguishing various types of\nprocesses/terms used in postgres. I look at the source code every time\nto understand them, yet I don't feel satisfied with my understanding.\nI request any hacker (having a better idea than me) to help me with\nwhat each different process does and how they are different from each\nother? Of course, I'm clear with normal backends (user sessions), bg\nworkers, but the others need a bit more understanding.\n\nI appreciate the help.\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Fri, 9 Jul 2021 21:24:19 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "What are exactly bootstrap processes, auxiliary processes, standalone\n backends, normal backends(user sessions)?"
},
{
"msg_contents": "Greetings,\n\n* Bharath Rupireddy (bharath.rupireddyforpostgres@gmail.com) wrote:\n> I've always had a hard time distinguishing various types of\n> processes/terms used in postgres. I look at the source code every time\n> to understand them, yet I don't feel satisfied with my understanding.\n> I request any hacker (having a better idea than me) to help me with\n> what each different process does and how they are different from each\n> other? Of course, I'm clear with normal backends (user sessions), bg\n> workers, but the others need a bit more understanding.\n\nThere was an effort to try to pull these things together because, yeah,\nit seems a bit messy.\n\nI'd suggest you take a look at:\n\nhttps://www.postgresql.org/message-id/flat/CAMN686FE0OdZKp9YPO=htC6LnA6aW4r-+jq=3Q5RAoFQgW8EtA@mail.gmail.com\n\nThanks,\n\nStephen",
"msg_date": "Mon, 12 Jul 2021 17:30:49 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: What are exactly bootstrap processes, auxiliary processes,\n standalone backends, normal backends(user sessions)?"
},
{
"msg_contents": "On Tue, Jul 13, 2021 at 3:00 AM Stephen Frost <sfrost@snowman.net> wrote:\n>\n> Greetings,\n>\n> * Bharath Rupireddy (bharath.rupireddyforpostgres@gmail.com) wrote:\n> > I've always had a hard time distinguishing various types of\n> > processes/terms used in postgres. I look at the source code every time\n> > to understand them, yet I don't feel satisfied with my understanding.\n> > I request any hacker (having a better idea than me) to help me with\n> > what each different process does and how they are different from each\n> > other? Of course, I'm clear with normal backends (user sessions), bg\n> > workers, but the others need a bit more understanding.\n>\n> There was an effort to try to pull these things together because, yeah,\n> it seems a bit messy.\n>\n> I'd suggest you take a look at:\n>\n> https://www.postgresql.org/message-id/flat/CAMN686FE0OdZKp9YPO=htC6LnA6aW4r-+jq=3Q5RAoFQgW8EtA@mail.gmail.com\n\nThanks. I will check it.\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Thu, 15 Jul 2021 19:57:54 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: What are exactly bootstrap processes, auxiliary processes,\n standalone backends, normal backends(user sessions)?"
},
{
"msg_contents": "On Fri, Jul 09, 2021 at 09:24:19PM +0530, Bharath Rupireddy wrote:\n> I've always had a hard time distinguishing various types of\n> processes/terms used in postgres. I look at the source code every time\n> to understand them, yet I don't feel satisfied with my understanding.\n> I request any hacker (having a better idea than me) to help me with\n> what each different process does and how they are different from each\n> other? Of course, I'm clear with normal backends (user sessions), bg\n> workers, but the others need a bit more understanding.\n\nIt sounds like something that should be in the glossary, which currently refers\nto but doesn't define \"auxiliary processes\".\n\n * Background writer, checkpointer, WAL writer and archiver run during normal\n * operation. Startup process and WAL receiver also consume 2 slots, but WAL\n * writer is launched only after startup has exited, so we only need 5 slots.\n */\n#define NUM_AUXILIARY_PROCS 5\n\nBootstrap is run by initdb:\nsrc/bin/initdb/initdb.c: \"\\\"%s\\\" --boot -x0 %s %s \"\n\nStandalone backend is run by --single, right ?\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 15 Jul 2021 09:47:37 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: What are exactly bootstrap processes, auxiliary processes,\n standalone backends, normal backends(user sessions)?"
},
{
"msg_contents": "On Thu, Jul 15, 2021 at 8:17 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Fri, Jul 09, 2021 at 09:24:19PM +0530, Bharath Rupireddy wrote:\n> > I've always had a hard time distinguishing various types of\n> > processes/terms used in postgres. I look at the source code every time\n> > to understand them, yet I don't feel satisfied with my understanding.\n> > I request any hacker (having a better idea than me) to help me with\n> > what each different process does and how they are different from each\n> > other? Of course, I'm clear with normal backends (user sessions), bg\n> > workers, but the others need a bit more understanding.\n>\n> It sounds like something that should be in the glossary, which currently refers\n> to but doesn't define \"auxiliary processes\".\n\nThanks. I strongly feel that it should be documented somewhere. I will\nbe happy if someone with a clear idea about these various processes\ndoes it.\n\n> * Background writer, checkpointer, WAL writer and archiver run during normal\n> * operation. Startup process and WAL receiver also consume 2 slots, but WAL\n> * writer is launched only after startup has exited, so we only need 5 slots.\n> */\n> #define NUM_AUXILIARY_PROCS 5\n>\n> Bootstrap is run by initdb:\n> src/bin/initdb/initdb.c: \"\\\"%s\\\" --boot -x0 %s %s \"\n>\n> Standalone backend is run by --single, right ?\n\nMaybe(?). I found another snippet below:\n\n if (argc > 1 && strcmp(argv[1], \"--boot\") == 0)\n AuxiliaryProcessMain(argc, argv); /* does not return */\n else if (argc > 1 && strcmp(argv[1], \"--describe-config\") == 0)\n GucInfoMain(); /* does not return */\n else if (argc > 1 && strcmp(argv[1], \"--single\") == 0)\n PostgresMain(argc, argv,\n NULL, /* no dbname */\n strdup(get_user_name_or_exit(progname))); /*\ndoes not return */\n else\n PostmasterMain(argc, argv); /* does not return */\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Sat, 17 Jul 2021 09:25:11 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: What are exactly bootstrap processes, auxiliary processes,\n standalone backends, normal backends(user sessions)?"
},
{
"msg_contents": "On 2021-Jul-17, Bharath Rupireddy wrote:\n\n> On Thu, Jul 15, 2021 at 8:17 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > On Fri, Jul 09, 2021 at 09:24:19PM +0530, Bharath Rupireddy wrote:\n> > > I've always had a hard time distinguishing various types of\n> > > processes/terms used in postgres. I look at the source code every time\n> > > to understand them, yet I don't feel satisfied with my understanding.\n> > > I request any hacker (having a better idea than me) to help me with\n> > > what each different process does and how they are different from each\n> > > other? Of course, I'm clear with normal backends (user sessions), bg\n> > > workers, but the others need a bit more understanding.\n> >\n> > It sounds like something that should be in the glossary, which currently refers\n> > to but doesn't define \"auxiliary processes\".\n> \n> Thanks. I strongly feel that it should be documented somewhere. I will\n> be happy if someone with a clear idea about these various processes\n> does it.\n\n\tAuxiliary process\n\n\tProcess of an <glossterm linkend=\"glossary-instance\">instance</glossterm> in\n\tcharge of some specific, hardcoded background task. Examples are\n\tthe startup process,\n\tthe WAL receiver (but not the WAL senders),\n\tthe WAL writer,\n\tthe archiver,\n\tthe <glossterm linkend=\"glossary-background-writer\">background writer</glossterm>,\n\tthe <glossterm linkend=\"glossary-checkpointer\">checkpointer</glossterm>,\n\tthe <glossterm linkend=\"glossary-stats-collector\">statistics collector</glossterm>,\n\tand the <glossterm linkend=\"glossary-logger\">logger</glossterm>.\n\nWe should probably include individual glossary entries for those that\ndon't already have one. Maybe revise the entries for ones that do so\nthat they start with \"An auxiliary process that ...\"\n\nI just realized that the autovac launcher is not nominally an auxiliary\nprocess (per SubPostmasterMain), which is odd since notionally it\nclearly is. 
Maybe we should include it in the list anyway, with\nsomething like\n\n\"and the autovacuum launcher (but not the autovacuum workers)\".\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"I am amazed at [the pgsql-sql] mailing list for the wonderful support, and\nlack of hesitasion in answering a lost soul's question, I just wished the rest\nof the mailing list could be like this.\" (Fotis)\n (http://archives.postgresql.org/pgsql-sql/2006-06/msg00265.php)\n\n\n",
"msg_date": "Sat, 17 Jul 2021 10:45:52 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: What are exactly bootstrap processes, auxiliary processes,\n standalone backends, normal backends(user sessions)?"
},
{
"msg_contents": "On Sat, Jul 17, 2021 at 10:45:52AM -0400, Alvaro Herrera wrote:\n> On 2021-Jul-17, Bharath Rupireddy wrote:\n> \n> > On Thu, Jul 15, 2021 at 8:17 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > >\n> > > On Fri, Jul 09, 2021 at 09:24:19PM +0530, Bharath Rupireddy wrote:\n> > > > I've always had a hard time distinguishing various types of\n> > > > processes/terms used in postgres. I look at the source code every time\n> > > > to understand them, yet I don't feel satisfied with my understanding.\n> > > > I request any hacker (having a better idea than me) to help me with\n> > > > what each different process does and how they are different from each\n> > > > other? Of course, I'm clear with normal backends (user sessions), bg\n> > > > workers, but the others need a bit more understanding.\n> > >\n> > > It sounds like something that should be in the glossary, which currently refers\n> > > to but doesn't define \"auxiliary processes\".\n> > \n> > Thanks. I strongly feel that it should be documented somewhere. I will\n> > be happy if someone with a clear idea about these various processes\n> > does it.\n> \n> \tAuxiliary process\n> \n> \tProcess of an <glossterm linkend=\"glossary-instance\">instance</glossterm> in\n\nI think \"of an instance\" is a distraction here, even if technically accurate.\n\n> \tcharge of some specific, hardcoded background task. Examples are\n\nAnd I think \"hardcoded\" doesn't mean anything beyond what \"specific\" means.\n\nMaybe you'd say: .. process which handles a specific, central task for the\ncluster instance.\n\nThanks,\n-- \nJustin\n\n\n",
"msg_date": "Sat, 17 Jul 2021 09:58:57 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: What are exactly bootstrap processes, auxiliary processes,\n standalone backends, normal backends(user sessions)?"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Sat, Jul 17, 2021 at 10:45:52AM -0400, Alvaro Herrera wrote:\n>> Process of an <glossterm linkend=\"glossary-instance\">instance</glossterm> in\n>> charge of some specific, hardcoded background task. Examples are\n\n> And I think \"hardcoded\" doesn't mean anything beyond what \"specific\" means.\n\n> Maybe you'd say: .. process which handles a specific, central task for the\n> cluster instance.\n\nMeh. \"Specific\" and \"background\" both seem to be useful terms here.\nI do not think \"central\" is a useful adjective.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 17 Jul 2021 11:57:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: What are exactly bootstrap processes, auxiliary processes,\n standalone backends, normal backends(user sessions)?"
},
{
"msg_contents": "On Sat, Jul 17, 2021 at 10:45:52AM -0400, Alvaro Herrera wrote:\n> On 2021-Jul-17, Bharath Rupireddy wrote:\n> > On Thu, Jul 15, 2021 at 8:17 PM Justin Pryzby wrote:\n> > > It sounds like something that should be in the glossary, which currently refers\n> > > to but doesn't define \"auxiliary processes\".\n> > \n> > Thanks. I strongly feel that it should be documented somewhere. I will\n> > be happy if someone with a clear idea about these various processes\n> > does it.\n> \n> \tAuxiliary process\n\nI elaborated on your definition and added here.\nhttps://commitfest.postgresql.org/34/3285/",
"msg_date": "Sat, 14 Aug 2021 19:39:08 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: What are exactly bootstrap processes, auxiliary processes,\n standalone backends, normal backends(user sessions)?"
},
{
"msg_contents": "On 2021-Aug-14, Justin Pryzby wrote:\n\n> I elaborated on your definition and added here.\n> https://commitfest.postgresql.org/34/3285/\n\nThanks! This works for me. After looking at it, it seemed to me that\nlisting the autovacuum launcher is perfectly adapted, so we might as\nwell do it; and add verbiage about it to the autovacuum entry. (I was\nfirst adding a whole new glossary entry for it, but it seemed overkill.)\n\nI also ended up adding an entry for WAL sender -- seems to round things\nnicely.\n\n... In doing so I noticed that the definition for startup process and\nWAL receiver is slightly wrong. WAL receiver only receives, it doesn't\nreplay; it is always the startup process the one that replays. So I\nchanged that too.\n\nWhat do you think?\n\nPS: I almost want to add a note to the startup process entry, something\nlike \"(The name is historical: it refers to its task before the\nintroduction of replication, when it was only related to the server\nstarting up after a crash.)\"\n\nPPS: Do we want the list to be in alphabetical order, or some other\norder?\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"Tiene valor aquel que admite que es un cobarde\" (Fernandel)",
"msg_date": "Mon, 6 Sep 2021 21:18:13 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: What are exactly bootstrap processes, auxiliary processes,\n standalone backends, normal backends(user sessions)?"
},
{
"msg_contents": "On Mon, Sep 06, 2021 at 09:18:13PM -0300, Alvaro Herrera wrote:\n> On 2021-Aug-14, Justin Pryzby wrote:\n> \n> > I elaborated on your definition and added here.\n> > https://commitfest.postgresql.org/34/3285/\n> \n> Thanks! This works for me. After looking at it, it seemed to me that\n> listing the autovacuum launcher is perfectly adapted, so we might as\n> well do it; and add verbiage about it to the autovacuum entry. (I was\n> first adding a whole new glossary entry for it, but it seemed overkill.)\n> \n> I also ended up adding an entry for WAL sender -- seems to round things\n> nicely.\n\nSure.\n\nMaybe change \"Stats collector\" to say that it *receives* stats (not collects\nthem). And maybe say \"(if enabled)\" or (unless disabled).\n\n> PS: I almost want to add a note to the startup process entry, something\n> like \"(The name is historical: it refers to its task before the\n> introduction of replication, when it was only related to the server\n> starting up after a crash.)\"\n\nI'd say \"(The name is historical: the startup process was named before\nreplication was implemented; the name refers to its task as it relates to the\nserver startup following a crash.)\"\n\n> PPS: Do we want the list to be in alphabetical order, or some other\n> order?\n\nI copied them out of the header file - but that may not matter nor make sense\nto someone reading the glossary.\n\nCheers\n-- \nJustin\n\n\n",
"msg_date": "Mon, 6 Sep 2021 19:49:27 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: What are exactly bootstrap processes, auxiliary processes,\n standalone backends, normal backends(user sessions)?"
},
{
"msg_contents": "On Tue, Sep 7, 2021 at 5:48 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Aug-14, Justin Pryzby wrote:\n>\n> > I elaborated on your definition and added here.\n> > https://commitfest.postgresql.org/34/3285/\n>\n> Thanks! This works for me. After looking at it, it seemed to me that\n> listing the autovacuum launcher is perfectly adapted, so we might as\n> well do it; and add verbiage about it to the autovacuum entry. (I was\n> first adding a whole new glossary entry for it, but it seemed overkill.)\n>\n> I also ended up adding an entry for WAL sender -- seems to round things\n> nicely.\n>\n> ... In doing so I noticed that the definition for startup process and\n> WAL receiver is slightly wrong. WAL receiver only receives, it doesn't\n> replay; it is always the startup process the one that replays. So I\n> changed that too.\n\nThanks for the v2 patch, here are some comments on it:\n\n1) How about\nA set of background processes (<firstterm>autovacuum\nlauncher</firstterm> and <firstterm>autovacuum workers</firstterm>)\nthat routinely perform\ninstead of\nA set of background processes that routinely perform\n?\n\n2) In what way we call autovacuum launcher an auxiliary process but\nnot autovacuum worker? And autovacuum isn't a background worker right?\nWhy can't we call it an auxiliary process?\n+ (but not the autovacuum workers),\n\n3) Isn't it \"WAL sender\" instead of \"WAL senders\"?\n+ (but not the <glossterm linkend=\"glossary-wal-sender\">WAL\nsenders</glossterm>),\n\n\n4) replays WAL during replication? Isn't it \"replays WAL during crash\nrecovery or in standby mode\"\n+ An auxiliary process that replays WAL during replication and\n+ crash recovery.\n\n5) Should we mention that WAL archiver too is optional similar to\nLogger (process)? 
Also, let us rearrange the text a bit to be in sync.\n+ An auxiliary process which (if enabled) saves copies of\n+ <glossterm linkend=\"glossary-wal-file\">WAL files</glossterm>\n\n+ An auxiliary process which (if enabled)\n writes information about database events into the current\n\n6) Shouldn't we mention \"<glossterm\nlinkend=\"glossary-auxiliary-proc\">auxiliary process</glossterm>\ninstead of just plain \"auxilary process\"?\n\n7) Shouldn't we mention \"<glossterm\nlinkend=\"glossary-primary-server\">primary</glossterm>\"? instead of\n\"primary server\"?\n+ to receive WAL from the primary server for replay by the\n\n8) I agree to not call walsender an auxiliary process because it is\ntype of a <glossterm linkend=\"glossary-backend\">backend</glossterm>\nprocess that understands replication commands only. Instead of saying\n\"A process that runs...\"\nwhy can't we mention that in the description?\n+ A process that runs on a server that streams WAL over a\n+ network. The receiving end can be a\n+ <glossterm linkend=\"glossary-wal-receiver\">WAL receiver</glossterm>\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Tue, 7 Sep 2021 16:52:27 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: What are exactly bootstrap processes, auxiliary processes,\n standalone backends, normal backends(user sessions)?"
},
{
"msg_contents": "Thanks Bharath and Justin -- I think I took all the suggestions and made\na few other changes of my own. Here's the result.\n\nI'm not 100% happy with the historical note in \"startup process\", mostly\nbecause it uses the word \"name\" three times too close to each other.\nDidn't quickly see an obvious way to reword it to avoid that.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Most hackers will be perfectly comfortable conceptualizing users as entropy\n sources, so let's move on.\" (Nathaniel Smith)",
"msg_date": "Mon, 13 Sep 2021 13:15:24 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: What are exactly bootstrap processes, auxiliary processes,\n standalone backends, normal backends(user sessions)?"
},
{
"msg_contents": "On 2021-Sep-13, Alvaro Herrera wrote:\n\n> Thanks Bharath and Justin -- I think I took all the suggestions and made\n> a few other changes of my own. Here's the result.\n\nPushed this with very minor additional changes, thanks.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 20 Sep 2021 12:28:33 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: What are exactly bootstrap processes, auxiliary processes,\n standalone backends, normal backends(user sessions)?"
}
] |
[
{
"msg_contents": "Hi Hackers,\n\nI just noticed there's no tab completion for CREATE SCHEMA\nAUTHORIZATION, nor for anything after CREATE SCHEMA <name>.\n\nPlease find attached a patch that adds this.\n\n- ilmari",
"msg_date": "Fri, 09 Jul 2021 17:20:04 +0100",
"msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)",
"msg_from_op": true,
"msg_subject": "Tab completion for CREATE SCHEMAAUTHORIZATION"
},
{
"msg_contents": "ilmari@ilmari.org (Dagfinn Ilmari Mannsåker) writes:\n\n> Hi Hackers,\n>\n> I just noticed there's no tab completion for CREATE SCHEMA\n> AUTHORIZATION, nor for anything after CREATE SCHEMA <name>.\n>\n> Please find attached a patch that adds this.\n\nAdded to the 2021-09 commit fest: https://commitfest.postgresql.org/34/3252/\n\n- ilmari\n\n\n",
"msg_date": "Thu, 15 Jul 2021 12:13:46 +0100",
"msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)",
"msg_from_op": true,
"msg_subject": "Re: Tab completion for CREATE SCHEMAAUTHORIZATION"
},
{
"msg_contents": "ilmari@ilmari.org (Dagfinn Ilmari Mannsåker) writes:\n\n> ilmari@ilmari.org (Dagfinn Ilmari Mannsåker) writes:\n>\n>> Hi Hackers,\n>>\n>> I just noticed there's no tab completion for CREATE SCHEMA\n>> AUTHORIZATION, nor for anything after CREATE SCHEMA <name>.\n>>\n>> Please find attached a patch that adds this.\n>\n> Added to the 2021-09 commit fest: https://commitfest.postgresql.org/34/3252/\n\nHere's an updated version that also reduces the duplication between the\nvarious role list queries.\n\n- ilmari",
"msg_date": "Sat, 07 Aug 2021 22:09:19 +0100",
"msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)",
"msg_from_op": true,
"msg_subject": "Re: Tab completion for CREATE SCHEMAAUTHORIZATION"
},
{
"msg_contents": "Hello Dagfinn,\n\nI had a look at your patch and below are my review comments.\nPlease correct me if I am missing something.\n\n 1. For me the patch does not apply cleanly. I have been facing the error\n of trailing whitespaces.\n surajkhamkar@localhost:postgres$ git apply\n v2-0001-Add-tab-completion-for-CREATE-SCHEMA.patch\n v2-0001-Add-tab-completion-for-CREATE-SCHEMA.patch:25: trailing\n whitespace.\n #define Query_for_list_of_schema_roles \\\n v2-0001-Add-tab-completion-for-CREATE-SCHEMA.patch:26: trailing\n whitespace.\n Query_for_list_of_roles \\\n v2-0001-Add-tab-completion-for-CREATE-SCHEMA.patch:30: trailing\n whitespace.\n #define Query_for_list_of_grant_roles \\\n v2-0001-Add-tab-completion-for-CREATE-SCHEMA.patch:31: trailing\n whitespace.\n Query_for_list_of_schema_roles \\\n v2-0001-Add-tab-completion-for-CREATE-SCHEMA.patch:32: trailing\n whitespace.\n \" UNION ALL SELECT 'PUBLIC'\"\\\n error: patch failed: src/bin/psql/tab-complete.c:758\n error: src/bin/psql/tab-complete.c: patch does not apply\n\n 2. We can remove space in before \\ and below change\n +\" UNION ALL SELECT 'PUBLIC'\" \\\n\n Should be,\n +\" UNION ALL SELECT 'PUBLIC' \"\\\n\n 3. role_specification has CURRENT_ROLE, CURRENT_USER and SESSION_USER.\n But current changes are missing CURRENT_ROLE.\n postgres@53724=#CREATE SCHEMA AUTHORIZATION\n CURRENT_USER pg_execute_server_program pg_read_all_data\n\n pg_read_all_stats pg_signal_backend pg_write_all_data\n\n SESSION_USER pg_database_owner pg_monitor\n\n pg_read_all_settings pg_read_server_files\n pg_stat_scan_tables\n pg_write_server_files surajkhamkar\n\n 4. I'm not sure about this but do we need to enable tab completion for IF\n NOT EXIST?\n\n 5. I think we are not handling IF NOT EXIST that's why it's not\n completing tab completion\n for AUTHORIZATION.\n\n 6. 
As we are here we can also enable missing tab completion for ALTER\n SCHEMA.\n After OWNER TO we should also get CURRENT_ROLE, CURRENT_USER and\n SESSION_USER.\n postgres@53724=#ALTER SCHEMA sch owner to\n pg_database_owner pg_monitor\n pg_read_all_settings\n pg_read_server_files pg_stat_scan_tables\n pg_write_server_files\n pg_execute_server_program pg_read_all_data pg_read_all_stats\n\n pg_signal_backend pg_write_all_data surajkhamkar\n\n 7. Similarly, as we can drop multiple schemas' simultaneously, we should\n enable tab completion for\n comma with CASCADE and RESTRICT\n postgres@53724=#DROP SCHEMA sch\n CASCADE RESTRICT\n\n\nThanks.\n\nOn Sun, Aug 8, 2021 at 2:39 AM Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>\nwrote:\n\n> ilmari@ilmari.org (Dagfinn Ilmari Mannsåker) writes:\n>\n> > ilmari@ilmari.org (Dagfinn Ilmari Mannsåker) writes:\n> >\n> >> Hi Hackers,\n> >>\n> >> I just noticed there's no tab completion for CREATE SCHEMA\n> >> AUTHORIZATION, nor for anything after CREATE SCHEMA <name>.\n> >>\n> >> Please find attached a patch that adds this.\n> >\n> > Added to the 2021-09 commit fest:\n> https://commitfest.postgresql.org/34/3252/\n>\n> Here's an updated version that also reduces the duplication between the\n> various role list queries.\n>\n> - ilmari\n>\n>\n\nHello Dagfinn,I had a look at your patch and below are my review comments.Please correct me if I am missing something.For me the patch does not apply cleanly. 
I have been facing the error of trailing whitespaces.surajkhamkar@localhost:postgres$ git apply v2-0001-Add-tab-completion-for-CREATE-SCHEMA.patchv2-0001-Add-tab-completion-for-CREATE-SCHEMA.patch:25: trailing whitespace.#define Query_for_list_of_schema_roles \\v2-0001-Add-tab-completion-for-CREATE-SCHEMA.patch:26: trailing whitespace.Query_for_list_of_roles \\v2-0001-Add-tab-completion-for-CREATE-SCHEMA.patch:30: trailing whitespace.#define Query_for_list_of_grant_roles \\v2-0001-Add-tab-completion-for-CREATE-SCHEMA.patch:31: trailing whitespace.Query_for_list_of_schema_roles \\v2-0001-Add-tab-completion-for-CREATE-SCHEMA.patch:32: trailing whitespace.\" UNION ALL SELECT 'PUBLIC'\"\\error: patch failed: src/bin/psql/tab-complete.c:758error: src/bin/psql/tab-complete.c: patch does not applyWe can remove space in before \\ and below change+\" UNION ALL SELECT 'PUBLIC'\" \\Should be,+\" UNION ALL SELECT 'PUBLIC' \"\\role_specification has CURRENT_ROLE, CURRENT_USER and SESSION_USER. But current changes are missing CURRENT_ROLE.postgres@53724=#CREATE SCHEMA AUTHORIZATION CURRENT_USER pg_execute_server_program pg_read_all_data pg_read_all_stats pg_signal_backend pg_write_all_data SESSION_USER pg_database_owner pg_monitor pg_read_all_settings pg_read_server_files pg_stat_scan_tables pg_write_server_files surajkhamkarI'm not sure about this but do we need to enable tab completion for IF NOT EXIST?I think we are not handling IF NOT EXIST that's why it's not completing tab completionfor AUTHORIZATION.As we are here we can also enable missing tab completion for ALTER SCHEMA.After OWNER TO we should also get CURRENT_ROLE, CURRENT_USER and SESSION_USER.postgres@53724=#ALTER SCHEMA sch owner to pg_database_owner pg_monitor pg_read_all_settings pg_read_server_files pg_stat_scan_tables pg_write_server_files pg_execute_server_program pg_read_all_data pg_read_all_stats pg_signal_backend pg_write_all_data surajkhamkar\n Similarly, as we can drop multiple schemas' simultaneously, we 
should enable tab completion forcomma with CASCADE and RESTRICTpostgres@53724=#DROP SCHEMA sch CASCADE RESTRICTThanks.On Sun, Aug 8, 2021 at 2:39 AM Dagfinn Ilmari Mannsåker <ilmari@ilmari.org> wrote:ilmari@ilmari.org (Dagfinn Ilmari Mannsåker) writes:\n\n> ilmari@ilmari.org (Dagfinn Ilmari Mannsåker) writes:\n>\n>> Hi Hackers,\n>>\n>> I just noticed there's no tab completion for CREATE SCHEMA\n>> AUTHORIZATION, nor for anything after CREATE SCHEMA <name>.\n>>\n>> Please find attached a patch that adds this.\n>\n> Added to the 2021-09 commit fest: https://commitfest.postgresql.org/34/3252/\n\nHere's an updated version that also reduces the duplication between the\nvarious role list queries.\n\n- ilmari",
"msg_date": "Mon, 9 Aug 2021 14:53:31 +0530",
"msg_from": "Suraj Khamkar <khamkarsuraj.b@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Tab completion for CREATE SCHEMAAUTHORIZATION"
},
{
"msg_contents": "Hi Suraj,\n\nSuraj Khamkar <khamkarsuraj.b@gmail.com> writes:\n\n> Hello Dagfinn,\n>\n> I had a look at your patch and below are my review comments.\n> Please correct me if I am missing something.\n>\n> 1. For me the patch does not apply cleanly. I have been facing the error\n> of trailing whitespaces.\n\nI do not get these errors, neither with the patch file I still have\nlocally, or by saving the attachment from my original email. Are you\nsure something in your download process hasn't converted it to Windows\nline endings (\\r\\n), or otherwise mangled the whitespace?\n\n> 2. We can remove space in before \\ and below change\n> +\" UNION ALL SELECT 'PUBLIC'\" \\\n>\n> Should be,\n> +\" UNION ALL SELECT 'PUBLIC' \"\\\n\nThe patch doesn't add any lines that end with quote-space-backslash.\nAs for the space before the quote, that is not necessary either, since the\nnext line starts with a space after the quote. Either way, the updated\nversion of the patch doesn't add any new lines with continuation after a\nstring constant, so the point is moot.\n\n> 3. role_specification has CURRENT_ROLE, CURRENT_USER and SESSION_USER.\n> But current changes are missing CURRENT_ROLE.\n\nAh, I was looking at the documentation for 13, but CURRENT_ROLE is only\nallowed in this context as of 14. Fixed.\n\n> 4. I'm not sure about this but do we need to enable tab completion for IF\n> NOT EXIST?\n>\n> 5. I think we are not handling IF NOT EXIST that's why it's not\n> completing tab completion\n> for AUTHORIZATION.\n\nAs you note, psql currently doesn't currently tab-complete IF NOT EXISTS\nfor any command, so that would be a subject for a separate patch.\n\n> 6. 
As we are here we can also enable missing tab completion for ALTER\n> SCHEMA.\n> After OWNER TO we should also get CURRENT_ROLE, CURRENT_USER and\n> SESSION_USER.\n\nI did an audit of all the uses of Query_for_list_of_roles, and there\nturned out be several more that accept CURRENT_ROLE, CURRENT_USER and\nSESSION_USER that they weren't tab-completed for. I also renamed the\nconstant to Query_for_list_of_owner_roles, but I'm not 100% happy with\nthat name either.\n\n> 7. Similarly, as we can drop multiple schemas' simultaneously, we should\n> enable tab completion for\n> comma with CASCADE and RESTRICT\n> postgres@53724=#DROP SCHEMA sch\n> CASCADE RESTRICT\n\nThe tab completion code for DROP is generic for all object types (see\nthe words_after_create array and the create_or_drop_command_generator\nfunction), so that should be done genericallly, and is thus outside the\nscope for this patch.\n\n> Thanks.\n\nThanks for the review. Updated patch attached, with the CURRENT/SESSION\nROLE/USER changes for other commands separated out.\n\n- ilmari",
"msg_date": "Mon, 09 Aug 2021 19:00:02 +0100",
"msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)",
"msg_from_op": true,
"msg_subject": "Re: Tab completion for CREATE SCHEMAAUTHORIZATION"
},
{
"msg_contents": "On Mon, Aug 09, 2021 at 07:00:02PM +0100, Dagfinn Ilmari Mannsåker wrote:\n> Thanks for the review. Updated patch attached, with the CURRENT/SESSION\n> ROLE/USER changes for other commands separated out.\n\n+#define Query_for_list_of_owner_roles \\\n+Query_for_list_of_roles \\\n \" UNION ALL SELECT 'CURRENT_ROLE'\"\\\n \" UNION ALL SELECT 'CURRENT_USER'\"\\\n \" UNION ALL SELECT 'SESSION_USER'\"\nI don't object to the refactoring you are doing here with three\nQuery_for_list_of_*_roles each one depending on the other for clarity.\nNeither do I really object to not using COMPLETE_WITH_QUERY() with\nsome extra UNION ALL hardcoded in each code path as there 6 cases for\n_owner_, 6 for _grant_ and 6 for _roles if my count is right. Still,\nif I may ask, wouldn't it be better to document a bit what's the\nexpectation behind each one of them? Perhaps the names of the queries\nare too generic for the purposes where they are used (say _grant_ for\nCREATE USER MAPPING looks confusing)?\n\n+ else if (Matches(\"CREATE\", \"SCHEMA\", \"AUTHORIZATION\"))\n+ COMPLETE_WITH_QUERY(Query_for_list_of_owner_roles);\n+ else if (Matches(\"CREATE\", \"SCHEMA\", MatchAny, \"AUTHORIZATION\"))\n+ COMPLETE_WITH_QUERY(Query_for_list_of_owner_roles);\n+ else if (Matches(\"CREATE\", \"SCHEMA\", MatchAny, \"AUTHORIZATION\", MatchAny))\n+ COMPLETE_WITH(\"CREATE\", \"GRANT\");\n+ else if (Matches(\"CREATE\", \"SCHEMA\", MatchAny))\n+ COMPLETE_WITH(\"AUTHORIZATION\", \"CREATE\", \"GRANT\");\nLooks like you forgot the case \"CREATE SCHEMA AUTHORIZATION MatchAny\"\nthat should be completed by GRANT and CREATE.\n--\nMichael",
"msg_date": "Wed, 11 Aug 2021 10:16:15 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Tab completion for CREATE SCHEMAAUTHORIZATION"
},
{
"msg_contents": "Thanks Dagfinn for the updated patches.\n\nI do not get these errors, neither with the patch file I still have\n> locally, or by saving the attachment from my original email. Are you\n> sure something in your download process hasn't converted it to Windows\n> line endings (\\r\\n), or otherwise mangled the whitespace?\n\n\nNo. I have downloaded patch on linux and patch also doesn't have any\ntrailing\nspaces, though it is throwing an error.\n\nHere are few other comments,\n\n 1. USER MAPPING does not have SESSION_USER as username in syntax\n (though it works) but your changes provide the same. Also, we have USER\n in\n list which is missing in current code changes.\n CREATE USER MAPPING [ IF NOT EXISTS ] FOR { user_name | USER |\n CURRENT_ROLE | CURRENT_USER | PUBLIC }\n SERVER server_name\n [ OPTIONS ( option 'value' [ , ... ] ) ]\n 2. It might not be a scope of this ticket but as we are changing the\n query for ALTER GROUP,\n we should complete the role_specification after ALTER GROUP.\n postgres@17077=#ALTER GROUP\n pg_database_owner pg_monitor\n pg_read_all_settings\n pg_read_server_files pg_stat_scan_tables\n pg_write_server_files\n surajkhamkar. pg_execute_server_program pg_read_all_data\n\n pg_read_all_stats pg_signal_backend pg_write_all_data\n\n 3. Missing schema_elements after CREATE SCHEMA AUTHORIZATION username to\n tab-complete .\n schema_elements might be CREATE, GRAND etc.\n\n\nThanks..\n\nOn Mon, Aug 9, 2021 at 11:30 PM Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>\nwrote:\n\n> Hi Suraj,\n>\n> Suraj Khamkar <khamkarsuraj.b@gmail.com> writes:\n>\n> > Hello Dagfinn,\n> >\n> > I had a look at your patch and below are my review comments.\n> > Please correct me if I am missing something.\n> >\n> > 1. For me the patch does not apply cleanly. I have been facing the\n> error\n> > of trailing whitespaces.\n>\n> I do not get these errors, neither with the patch file I still have\n> locally, or by saving the attachment from my original email. 
Are you\n> sure something in your download process hasn't converted it to Windows\n> line endings (\\r\\n), or otherwise mangled the whitespace?\n>\n> > 2. We can remove space in before \\ and below change\n> > +\" UNION ALL SELECT 'PUBLIC'\" \\\n> >\n> > Should be,\n> > +\" UNION ALL SELECT 'PUBLIC' \"\\\n>\n> The patch doesn't add any lines that end with quote-space-backslash.\n> As for the space before the quote, that is not necessary either, since the\n> next line starts with a space after the quote. Either way, the updated\n> version of the patch doesn't add any new lines with continuation after a\n> string constant, so the point is moot.\n>\n> > 3. role_specification has CURRENT_ROLE, CURRENT_USER and SESSION_USER.\n> > But current changes are missing CURRENT_ROLE.\n>\n> Ah, I was looking at the documentation for 13, but CURRENT_ROLE is only\n> allowed in this context as of 14. Fixed.\n>\n> > 4. I'm not sure about this but do we need to enable tab completion\n> for IF\n> > NOT EXIST?\n> >\n> > 5. I think we are not handling IF NOT EXIST that's why it's not\n> > completing tab completion\n> > for AUTHORIZATION.\n>\n> As you note, psql currently doesn't currently tab-complete IF NOT EXISTS\n> for any command, so that would be a subject for a separate patch.\n>\n> > 6. As we are here we can also enable missing tab completion for ALTER\n> > SCHEMA.\n> > After OWNER TO we should also get CURRENT_ROLE, CURRENT_USER and\n> > SESSION_USER.\n>\n> I did an audit of all the uses of Query_for_list_of_roles, and there\n> turned out be several more that accept CURRENT_ROLE, CURRENT_USER and\n> SESSION_USER that they weren't tab-completed for. I also renamed the\n> constant to Query_for_list_of_owner_roles, but I'm not 100% happy with\n> that name either.\n>\n> > 7. 
Similarly, as we can drop multiple schemas' simultaneously, we\n> should\n> enable tab completion for\n> comma with CASCADE and RESTRICT\n> postgres@53724=#DROP SCHEMA sch\n> CASCADE RESTRICT\n>\n> The tab completion code for DROP is generic for all object types (see\n> the words_after_create array and the create_or_drop_command_generator\n> function), so that should be done genericallly, and is thus outside the\n> scope for this patch.\n>\n> > Thanks.\n>\n> Thanks for the review. Updated patch attached, with the CURRENT/SESSION\n> ROLE/USER changes for other commands separated out.\n>\n> - ilmari\n>\n>",
"msg_date": "Wed, 11 Aug 2021 23:04:10 +0530",
"msg_from": "Suraj Khamkar <khamkarsuraj.b@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Tab completion for CREATE SCHEMAAUTHORIZATION"
},
{
"msg_contents": "On Wed, Aug 11, 2021 at 10:16:15AM +0900, Michael Paquier wrote:\n> + else if (Matches(\"CREATE\", \"SCHEMA\", \"AUTHORIZATION\"))\n> + COMPLETE_WITH_QUERY(Query_for_list_of_owner_roles);\n> + else if (Matches(\"CREATE\", \"SCHEMA\", MatchAny, \"AUTHORIZATION\"))\n> + COMPLETE_WITH_QUERY(Query_for_list_of_owner_roles);\n> + else if (Matches(\"CREATE\", \"SCHEMA\", MatchAny, \"AUTHORIZATION\", MatchAny))\n> + COMPLETE_WITH(\"CREATE\", \"GRANT\");\n> + else if (Matches(\"CREATE\", \"SCHEMA\", MatchAny))\n> + COMPLETE_WITH(\"AUTHORIZATION\", \"CREATE\", \"GRANT\");\n> Looks like you forgot the case \"CREATE SCHEMA AUTHORIZATION MatchAny\"\n> that should be completed by GRANT and CREATE.\n\nThis patch has been waiting on author for more than a couple of weeks,\nso I have marked it as returned with feedback in the CF app. Please\nfeel free to resubmit if you are able to work more on that.\n--\nMichael",
"msg_date": "Fri, 3 Dec 2021 10:19:23 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Tab completion for CREATE SCHEMAAUTHORIZATION"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n\n> On Wed, Aug 11, 2021 at 10:16:15AM +0900, Michael Paquier wrote:\n>> + else if (Matches(\"CREATE\", \"SCHEMA\", \"AUTHORIZATION\"))\n>> + COMPLETE_WITH_QUERY(Query_for_list_of_owner_roles);\n>> + else if (Matches(\"CREATE\", \"SCHEMA\", MatchAny, \"AUTHORIZATION\"))\n>> + COMPLETE_WITH_QUERY(Query_for_list_of_owner_roles);\n>> + else if (Matches(\"CREATE\", \"SCHEMA\", MatchAny, \"AUTHORIZATION\", MatchAny))\n>> + COMPLETE_WITH(\"CREATE\", \"GRANT\");\n>> + else if (Matches(\"CREATE\", \"SCHEMA\", MatchAny))\n>> + COMPLETE_WITH(\"AUTHORIZATION\", \"CREATE\", \"GRANT\");\n>> Looks like you forgot the case \"CREATE SCHEMA AUTHORIZATION MatchAny\"\n>> that should be completed by GRANT and CREATE.\n>\n> This patch has been waiting on author for more than a couple of weeks,\n> so I have marked it as returned with feedback in the CF app. Please\n> feel free to resubmit if you are able to work more on that.\n\nLooks like I completely dropped the ball on this one, sorry. Here's a\nrebased patch which uses the new COMPLETE_WITH_QUERY_PLUS functionality\nadded in commit 02b8048ba5dc36238f3e7c3c58c5946220298d71.\n\n\n- ilmari",
"msg_date": "Fri, 14 Apr 2023 17:04:35 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": false,
"msg_subject": "Re: Tab completion for CREATE SCHEMAAUTHORIZATION"
},
{
"msg_contents": "On Fri, Apr 14, 2023 at 05:04:35PM +0100, Dagfinn Ilmari Mannsåker wrote:\n> Looks like I completely dropped the ball on this one, sorry.\n\nSo did I ;)\n\n> Here's a\n> rebased patch which uses the new COMPLETE_WITH_QUERY_PLUS functionality\n> added in commit 02b8048ba5dc36238f3e7c3c58c5946220298d71.\n\nThanks, I'll look at it.\n--\nMichael",
"msg_date": "Sat, 15 Apr 2023 11:06:25 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Tab completion for CREATE SCHEMAAUTHORIZATION"
},
{
"msg_contents": "On Sat, Apr 15, 2023 at 11:06:25AM +0900, Michael Paquier wrote:\n> Thanks, I'll look at it.\n\n+ else if (Matches(\"CREATE\", \"SCHEMA\", \"AUTHORIZATION\", MatchAny) ||\n+ Matches(\"CREATE\", \"SCHEMA\", MatchAny, \"AUTHORIZATION\", MatchAny))\n+ COMPLETE_WITH(\"CREATE\", \"GRANT\");\n+ else if (Matches(\"CREATE\", \"SCHEMA\", MatchAny))\n+ COMPLETE_WITH(\"AUTHORIZATION\", \"CREATE\", \"GRANT\");\n\nI had this grammar under my eyes a few days ago for a different patch,\nand there are much more objects types that can be appended to a CREATE\nSCHEMA, like triggers, sequences, tables or views, so this is\nincomplete, isn't it?\n--\nMichael",
"msg_date": "Tue, 2 May 2023 11:09:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Tab completion for CREATE SCHEMAAUTHORIZATION"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n\n> On Sat, Apr 15, 2023 at 11:06:25AM +0900, Michael Paquier wrote:\n>> Thanks, I'll look at it.\n>\n> + else if (Matches(\"CREATE\", \"SCHEMA\", \"AUTHORIZATION\", MatchAny) ||\n> + Matches(\"CREATE\", \"SCHEMA\", MatchAny, \"AUTHORIZATION\", MatchAny))\n> + COMPLETE_WITH(\"CREATE\", \"GRANT\");\n> + else if (Matches(\"CREATE\", \"SCHEMA\", MatchAny))\n> + COMPLETE_WITH(\"AUTHORIZATION\", \"CREATE\", \"GRANT\");\n>\n> I had this grammar under my eyes a few days ago for a different patch,\n> and there are much more objects types that can be appended to a CREATE\n> SCHEMA, like triggers, sequences, tables or views, so this is\n> incomplete, isn't it?\n\nThis is for completing the word CREATE itself after CREATE SCHEMA\n[[<name>] AUTHORIZATION] <name>. The things that can come after that\nare already handled generically earlier in the function:\n\n/* CREATE */\n /* complete with something you can create */\n else if (TailMatches(\"CREATE\"))\n matches = rl_completion_matches(text, create_command_generator);\n\ncreate_command_generator uses the words_after_create array, which lists\nall the things that can be created.\n\n- ilmari\n\n\n",
"msg_date": "Tue, 02 May 2023 11:48:43 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": false,
"msg_subject": "Re: Tab completion for CREATE SCHEMAAUTHORIZATION"
},
{
"msg_contents": "Dagfinn Ilmari Mannsåker <ilmari@ilmari.org> writes:\n\n> Michael Paquier <michael@paquier.xyz> writes:\n>\n>> On Sat, Apr 15, 2023 at 11:06:25AM +0900, Michael Paquier wrote:\n>>> Thanks, I'll look at it.\n>>\n>> + else if (Matches(\"CREATE\", \"SCHEMA\", \"AUTHORIZATION\", MatchAny) ||\n>> + Matches(\"CREATE\", \"SCHEMA\", MatchAny, \"AUTHORIZATION\", MatchAny))\n>> + COMPLETE_WITH(\"CREATE\", \"GRANT\");\n>> + else if (Matches(\"CREATE\", \"SCHEMA\", MatchAny))\n>> + COMPLETE_WITH(\"AUTHORIZATION\", \"CREATE\", \"GRANT\");\n>>\n>> I had this grammar under my eyes a few days ago for a different patch,\n>> and there are much more objects types that can be appended to a CREATE\n>> SCHEMA, like triggers, sequences, tables or views, so this is\n>> incomplete, isn't it?\n>\n> This is for completing the word CREATE itself after CREATE SCHEMA\n> [[<name>] AUTHORIZATION] <name>. The things that can come after that\n> are already handled generically earlier in the function:\n>\n> /* CREATE */\n> /* complete with something you can create */\n> else if (TailMatches(\"CREATE\"))\n> matches = rl_completion_matches(text, create_command_generator);\n>\n> create_command_generator uses the words_after_create array, which lists\n> all the things that can be created.\n\nBut, looking closer at the docs, only tables, views, indexes, sequences\nand triggers can be created as part of a CREATE SCHEMA statement. Maybe\nwe should add a HeadMatches(\"CREATE\", \"SCHEMA\") exception in the above?\n\n- ilmari\n\n\n",
"msg_date": "Tue, 02 May 2023 13:19:49 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": false,
"msg_subject": "Re: Tab completion for CREATE SCHEMAAUTHORIZATION"
},
{
"msg_contents": "On Tue, May 02, 2023 at 01:19:49PM +0100, Dagfinn Ilmari Mannsåker wrote:\n> Dagfinn Ilmari Mannsåker <ilmari@ilmari.org> writes:\n>> This is for completing the word CREATE itself after CREATE SCHEMA\n>> [[<name>] AUTHORIZATION] <name>. The things that can come after that\n>> are already handled generically earlier in the function:\n>>\n>> /* CREATE */\n>> /* complete with something you can create */\n>> else if (TailMatches(\"CREATE\"))\n>> matches = rl_completion_matches(text, create_command_generator);\n>>\n>> create_command_generator uses the words_after_create array, which lists\n>> all the things that can be created.\n\nYou are right. I have completely forgotten that this code path would\nappend everything that supports CREATE for a CREATE SCHEMA command :)\n\n> But, looking closer at the docs, only tables, views, indexes, sequences\n> and triggers can be created as part of a CREATE SCHEMA statement. Maybe\n> we should add a HeadMatches(\"CREATE\", \"SCHEMA\") exception in the above?\n\nYes, it looks like we are going to need an exception and append only\nthe keywords that are supported, or we will end up recommending mostly\nthings that are not accepted by the parser.\n--\nMichael",
"msg_date": "Sat, 6 May 2023 11:58:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Tab completion for CREATE SCHEMAAUTHORIZATION"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n\n> On Tue, May 02, 2023 at 01:19:49PM +0100, Dagfinn Ilmari Mannsåker wrote:\n>> Dagfinn Ilmari Mannsåker <ilmari@ilmari.org> writes:\n>>> This is for completing the word CREATE itself after CREATE SCHEMA\n>>> [[<name>] AUTHORIZATION] <name>. The things that can come after that\n>>> are already handled generically earlier in the function:\n>>>\n>>> /* CREATE */\n>>> /* complete with something you can create */\n>>> else if (TailMatches(\"CREATE\"))\n>>> matches = rl_completion_matches(text, create_command_generator);\n>>>\n>>> create_command_generator uses the words_after_create array, which lists\n>>> all the things that can be created.\n>\n> You are right. I have completely forgotten that this code path would\n> append everything that supports CREATE for a CREATE SCHEMA command :)\n>\n>> But, looking closer at the docs, only tables, views, indexes, sequences\n>> and triggers can be created as part of a CREATE SCHEMA statement. Maybe\n>> we should add a HeadMatches(\"CREATE\", \"SCHEMA\") exception in the above?\n>\n> Yes, it looks like we are going to need an exception and append only\n> the keywords that are supported, or we will end up recommending mostly\n> things that are not accepted by the parser.\n\nHere's an updated v3 patch with that. While adding that, I noticed that\nCREATE UNLOGGED only tab-completes TABLE and MATERIALIZED VIEW, not\nSEQUENCE, so I added that (and removed MATERIALIZED VIEW when part of\nCREATE SCHEMA).\n\n- ilmari",
"msg_date": "Mon, 08 May 2023 17:36:27 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": false,
"msg_subject": "Re: Tab completion for CREATE SCHEMAAUTHORIZATION"
},
{
"msg_contents": "On Mon, May 08, 2023 at 05:36:27PM +0100, Dagfinn Ilmari Mannsåker wrote:\n> Here's an updated v3 patch with that. While adding that, I noticed that\n> CREATE UNLOGGED only tab-completes TABLE and MATERIALIZED VIEW, not\n> SEQUENCE, so I added that (and removed MATERIALIZED VIEW when part of\n> CREATE SCHEMA).\n\n+ /* but not MATVIEW in CREATE SCHEMA */\n+ if (HeadMatches(\"CREATE\", \"SCHEMA\"))\n+ COMPLETE_WITH(\"TABLE\", \"SEQUENCE\");\n+ else\n+ COMPLETE_WITH(\"TABLE\", \"SEQUENCE\", \"MATERIALIZED VIEW\");\n\nThis may look strange at first glance, but the grammar is what it\nis.. Perhaps matviews could be part of that at some point. Or not. :)\n\n+ /* only some object types can be created as part of CREATE SCHEMA */\n+ if (HeadMatches(\"CREATE\", \"SCHEMA\"))\n+ COMPLETE_WITH(\"TABLE\", \"VIEW\", \"INDEX\", \"SEQUENCE\", \"TRIGGER\",\n+ /* for INDEX and TABLE/SEQUENCE, respectively */\n+ \"UNIQUE\", \"UNLOGGED\");\n\nNot including TEMPORARY is OK here as the grammar does not allow a\ndirectly to create a temporary schema. The (many) code paths that\nhave TailMatches() to cope with CREATE SCHEMA would continue the\ncompletion of added, but at least this approach avoids the\nrecommendation if possible.\n\nThat looks pretty much OK to me. One tiny comment I have is that this\nlacks brackets for the inner blocks, so I have added some in the v4\nattached.\n--\nMichael",
"msg_date": "Tue, 9 May 2023 12:26:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Tab completion for CREATE SCHEMAAUTHORIZATION"
},
{
"msg_contents": "On Tue, May 09, 2023 at 12:26:16PM +0900, Michael Paquier wrote:\n> That looks pretty much OK to me. One tiny comment I have is that this\n> lacks brackets for the inner blocks, so I have added some in the v4\n> attached.\n\nThe indentation was a bit wrong, so fixed it, and applied on HEAD.\n--\nMichael",
"msg_date": "Fri, 30 Jun 2023 10:36:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Tab completion for CREATE SCHEMAAUTHORIZATION"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n\n> On Tue, May 09, 2023 at 12:26:16PM +0900, Michael Paquier wrote:\n>> That looks pretty much OK to me. One tiny comment I have is that this\n>> lacks brackets for the inner blocks, so I have added some in the v4\n>> attached.\n>\n> The indentation was a bit wrong, so fixed it, and applied on HEAD.\n\nThanks!\n\n- ilmari\n\n\n",
"msg_date": "Fri, 30 Jun 2023 10:33:22 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": false,
"msg_subject": "Re: Tab completion for CREATE SCHEMAAUTHORIZATION"
}
] |
[
{
"msg_contents": "The following bug has been logged on the website:\n\nBug reference: 17098\nLogged by: Alexander Lakhin\nEmail address: exclusion@gmail.com\nPostgreSQL version: 14beta2\nOperating system: Ubuntu 20.04\nDescription: \n\nWhen trying to add to an extension a type that is already exists in the\nextension while the extension is being dropped I get a failed assertion with\nthe following stack:\r\nCore was generated by `postgres: law regression [local] ALTER EXTENSION \n '.\r\nProgram terminated with signal SIGABRT, Aborted.\r\n#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50\r\n50 ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.\r\n(gdb) bt\r\n#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50\r\n#1 0x00007f59c4321859 in __GI_abort () at abort.c:79\r\n#2 0x000055c3c74912e0 in ExceptionalCondition\n(conditionName=conditionName@entry=0x55c3c76a024c \"strvalue != NULL\", \r\n errorType=errorType@entry=0x55c3c74f000b \"FailedAssertion\",\nfileName=0x7ffc8db80390 \"\\301\\022I\\307\\303U\", \r\n fileName@entry=0x55c3c76a0241 \"snprintf.c\",\nlineNumber=lineNumber@entry=442) at assert.c:69\r\n#3 0x000055c3c74ee489 in dopr (target=target@entry=0x7ffc8db80970,\nformat=0x55c3c7521ee7 \"\\\"\", \r\n format@entry=0x55c3c7521ec0 \"%s is already a member of extension\n\\\"%s\\\"\", args=0x7ffc8db80a20) at snprintf.c:442\r\n#4 0x000055c3c74eeb5c in pg_vsnprintf (str=<optimized out>,\ncount=<optimized out>, count@entry=1024, \r\n fmt=fmt@entry=0x55c3c7521ec0 \"%s is already a member of extension\n\\\"%s\\\"\", args=args@entry=0x7ffc8db80a20)\r\n at snprintf.c:195\r\n#5 0x000055c3c74e4e33 in pvsnprintf (buf=<optimized out>,\nlen=len@entry=1024, \r\n fmt=fmt@entry=0x55c3c7521ec0 \"%s is already a member of extension\n\\\"%s\\\"\", args=args@entry=0x7ffc8db80a20)\r\n at psprintf.c:110\r\n#6 0x000055c3c74e6221 in appendStringInfoVA (str=str@entry=0x7ffc8db80a00,\n\r\n fmt=fmt@entry=0x55c3c7521ec0 \"%s is already a 
member of extension\n\\\"%s\\\"\", args=args@entry=0x7ffc8db80a20)\r\n at stringinfo.c:149\r\n#7 0x000055c3c7495fbe in errmsg (fmt=fmt@entry=0x55c3c7521ec0 \"%s is\nalready a member of extension \\\"%s\\\"\")\r\n at elog.c:919\r\n#8 0x000055c3c7120cda in ExecAlterExtensionContentsStmt\n(stmt=stmt@entry=0x55c3c7810f60, \r\n objAddr=objAddr@entry=0x7ffc8db80c14) at extension.c:3342\r\n#9 0x000055c3c7357837 in ProcessUtilitySlow\n(pstate=pstate@entry=0x55c3c7831d60, pstmt=pstmt@entry=0x55c3c7811270, \r\n queryString=queryString@entry=0x55c3c7810450 \"ALTER EXTENSION cube ADD\nTYPE side;\", \r\n context=context@entry=PROCESS_UTILITY_TOPLEVEL, params=params@entry=0x0,\nqueryEnv=queryEnv@entry=0x0, \r\n dest=0x55c3c7811340, qc=0x7ffc8db81130) at utility.c:1550\r\n#10 0x000055c3c7356b48 in standard_ProcessUtility (pstmt=0x55c3c7811270, \r\n queryString=0x55c3c7810450 \"ALTER EXTENSION cube ADD TYPE side;\",\nreadOnlyTree=<optimized out>, \r\n context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0,\ndest=0x55c3c7811340, qc=0x7ffc8db81130)\r\n at utility.c:1049\r\n#11 0x000055c3c7356c31 in ProcessUtility (pstmt=pstmt@entry=0x55c3c7811270,\nqueryString=<optimized out>, \r\n readOnlyTree=<optimized out>,\ncontext=context@entry=PROCESS_UTILITY_TOPLEVEL, params=<optimized out>, \r\n queryEnv=<optimized out>, dest=0x55c3c7811340, qc=0x7ffc8db81130) at\nutility.c:527\r\n#12 0x000055c3c7354157 in PortalRunUtility\n(portal=portal@entry=0x55c3c78725b0, pstmt=pstmt@entry=0x55c3c7811270, \r\n isTopLevel=isTopLevel@entry=true,\nsetHoldSnapshot=setHoldSnapshot@entry=false, dest=dest@entry=0x55c3c7811340,\n\r\n qc=qc@entry=0x7ffc8db81130) at pquery.c:1147\r\n#13 0x000055c3c7354459 in PortalRunMulti\n(portal=portal@entry=0x55c3c78725b0, isTopLevel=isTopLevel@entry=true, \r\n setHoldSnapshot=setHoldSnapshot@entry=false,\ndest=dest@entry=0x55c3c7811340, altdest=altdest@entry=0x55c3c7811340, \r\n qc=qc@entry=0x7ffc8db81130) at pquery.c:1304\r\n#14 0x000055c3c735488d in PortalRun 
(portal=portal@entry=0x55c3c78725b0,\ncount=count@entry=9223372036854775807, \r\n isTopLevel=isTopLevel@entry=true, run_once=run_once@entry=true,\ndest=dest@entry=0x55c3c7811340, \r\n altdest=altdest@entry=0x55c3c7811340, qc=0x7ffc8db81130) at\npquery.c:786\r\n#15 0x000055c3c7350ada in exec_simple_query (\r\n query_string=query_string@entry=0x55c3c7810450 \"ALTER EXTENSION cube ADD\nTYPE side;\") at postgres.c:1214\r\n#16 0x000055c3c7352aac in PostgresMain (argc=argc@entry=1,\nargv=argv@entry=0x7ffc8db81320, dbname=<optimized out>, \r\n username=<optimized out>) at postgres.c:4486\r\n#17 0x000055c3c72ada9c in BackendRun (port=port@entry=0x55c3c7831220) at\npostmaster.c:4507\r\n#18 0x000055c3c72b0cb1 in BackendStartup (port=port@entry=0x55c3c7831220) at\npostmaster.c:4229\r\n#19 0x000055c3c72b0ef8 in ServerLoop () at postmaster.c:1745\r\n#20 0x000055c3c72b2445 in PostmasterMain (argc=3, argv=<optimized out>) at\npostmaster.c:1417\r\n#21 0x000055c3c71f309e in main (argc=3, argv=0x55c3c780a4c0) at main.c:209\r\n\r\nReproduced with the following script:\r\necho \"\r\nCREATE EXTENSION cube;\r\nDROP EXTENSION cube;\r\n\" >/tmp/ce.sql\r\n\r\necho \"\r\nCREATE TYPE side AS ENUM('front');\r\nALTER EXTENSION cube ADD TYPE side;\r\n\" >/tmp/ae.sql\r\n\r\nfor n in `seq 10`; do\r\n echo \"iteration $n\"\r\n ( { for f in `seq 1000`; do cat /tmp/ce.sql; done } | psql ) >psql-1.log\n2>&1 &\r\n ( { for f in `seq 1000`; do cat /tmp/ae.sql; done } | psql ) >psql-2.log\n2>&1 &\r\n wait\r\n coredumpctl --no-pager && break\r\ndone\r\n\r\nReproduced on REL_12_STABLE..master. The assert was added by 6d842be6c, so\non REL_11_STABLE I see just:\r\nERROR: type side is already a member of extension \"(null)\"",
"msg_date": "Fri, 09 Jul 2021 20:00:01 +0000",
"msg_from": "PG Bug reporting form <noreply@postgresql.org>",
"msg_from_op": true,
"msg_subject": "BUG #17098: Assert failed on composing an error message when adding a\n type to an extension being dropped"
},
{
"msg_contents": "PG Bug reporting form <noreply@postgresql.org> writes:\n> When trying to add to an extension a type that is already exists in the\n> extension while the extension is being dropped I get a failed assertion with\n> the following stack:\n\nI think that the root issue here is that ExecAlterExtensionContentsStmt\nfails to acquire any sort of lock on the extension. Considering that\nit *does* lock the object to be added/dropped, that's a rather glaring\noversight. Fortunately it seems easily fixable ... though I wonder\nhow many other similar oversights we have.\n\nHowever, that root issue is converted from a relatively minor bug into\na server crash because snprintf.c treats a NULL pointer passed to %s\nas a crash-worthy error. I have advocated for that behavior in the\npast, but I'm starting to wonder if it wouldn't be wiser to change\nover to the glibc-ish behavior of printing \"(null)\" or the like.\nIt seems like we've long since found all the interesting bugs that\nthe assert-or-crash behavior could reveal, and now we're down to\nweird corner cases where its main effect is to weaken our robustness.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 11 Jul 2021 11:28:58 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17098: Assert failed on composing an error message when\n adding a type to an extension being dropped"
},
{
"msg_contents": "[ moved from -bugs list for more visibility ]\n\nI wrote:\n> However, that root issue is converted from a relatively minor bug into\n> a server crash because snprintf.c treats a NULL pointer passed to %s\n> as a crash-worthy error. I have advocated for that behavior in the\n> past, but I'm starting to wonder if it wouldn't be wiser to change\n> over to the glibc-ish behavior of printing \"(null)\" or the like.\n> It seems like we've long since found all the interesting bugs that\n> the assert-or-crash behavior could reveal, and now we're down to\n> weird corner cases where its main effect is to weaken our robustness.\n\nI did a little more thinking about this. I believe the strongest\nargument for having snprintf.c crash on NULL is that it keeps us\nfrom relying on having more-forgiving behavior in case we're using\nplatform-supplied *printf functions (cf commit 0c62356cc). However,\nthat is only relevant for code that's meant to go into pre-v12 branches,\nsince we stopped using libc's versions of these functions in v12.\n\nSo one plausible way to approach this is to say that we should wait\nuntil v11 is EOL and then change it.\n\nHowever, that feels overly conservative to me. I doubt that anyone\nis *intentionally* relying on *printf not crashing on a NULL pointer.\nFor example, in the case that started this thread:\n\n if (OidIsValid(oldExtension))\n ereport(ERROR,\n (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n errmsg(\"%s is already a member of extension \\\"%s\\\"\",\n getObjectDescription(&object, false),\n get_extension_name(oldExtension))));\n\nthe problem is failure to consider the possibility that\nget_extension_name could return NULL due to a just-committed\nconcurrent DROP EXTENSION. 
I'm afraid there are a lot of\ncorner cases like that still lurking.\n\nSo my feeling about this is that switching snprintf.c's behavior\nwould produce some net gain in robustness for v12 and up, while\nnot making things any worse for the older branches. I still hold\nto the opinion that we've already flushed out all the cases of\npassing NULL that we're likely to find via ordinary testing.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 12 Jul 2021 13:20:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "printf %s with NULL pointer (was Re: BUG #17098: Assert failed on\n composing an error message when adding a type to an extension being dropped)"
},
{
"msg_contents": "On Mon, 2021-07-12 at 13:20 -0400, Tom Lane wrote:\n> > However, that root issue is converted from a relatively minor bug into\n> > a server crash because snprintf.c treats a NULL pointer passed to %s\n> > as a crash-worthy error. I have advocated for that behavior in the\n> > past, but I'm starting to wonder if it wouldn't be wiser to change\n> > over to the glibc-ish behavior of printing \"(null)\" or the like.\n> \n> So my feeling about this is that switching snprintf.c's behavior\n> would produce some net gain in robustness for v12 and up, while\n> not making things any worse for the older branches. I still hold\n> to the opinion that we've already flushed out all the cases of\n> passing NULL that we're likely to find via ordinary testing.\n\nNew cases could be introduced in the future and might remain undetected.\n\nWhat about adding an Assert that gags on NULLs, but still printing them\nas \"(null)\"? That would help find such problems in a debug build.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Tue, 13 Jul 2021 07:52:20 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: printf %s with NULL pointer (was Re: BUG #17098: Assert failed\n on composing an error message when adding a type to an extension being\n dropped)"
},
{
"msg_contents": "Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> On Mon, 2021-07-12 at 13:20 -0400, Tom Lane wrote:\n>> So my feeling about this is that switching snprintf.c's behavior\n>> would produce some net gain in robustness for v12 and up, while\n>> not making things any worse for the older branches. I still hold\n>> to the opinion that we've already flushed out all the cases of\n>> passing NULL that we're likely to find via ordinary testing.\n\n> New cases could be introduced in the future and might remain undetected.\n> What about adding an Assert that gags on NULLs, but still printing them\n> as \"(null)\"? That would help find such problems in a debug build.\n\nI think you're missing my main point, which is that it seems certain that\nthere are corner cases that do this *now*. I'm proposing that we redefine\nthis as not being a crash case, full stop.\n\nNow, what we don't have control of is what will happen in pre-v12\nbranches on platforms where we use the system's *printf. However,\nnote what I wrote in the log for 0c62356cc:\n\n Per commit e748e902d, we appear to have little or no coverage in the\n buildfarm of machines that will dump core when asked to printf a\n null string pointer.\n\nThus it appears that a large fraction of the world is already either\nusing glibc or following glibc's lead on this point. If we do likewise,\nit will remove some crash cases and not introduce any new ones.\n\nIn hindsight I feel like 0c62356cc was an overreaction to the unusual\nproperty e748e902d's bug had, namely that \"(null)\" was getting printed\nin a place where it would not show up in any visible output. Since\nwe certainly wouldn't consider that behavior OK if we saw it, you'd\nreally have to assume that there are more lurking bugs with that same\nproperty in order to conclude that the Assert is worth its keep.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 13 Jul 2021 10:29:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: printf %s with NULL pointer (was Re: BUG #17098: Assert failed on\n composing an error message when adding a type to an extension being dropped)"
},
{
"msg_contents": "I wrote:\n> Now, what we don't have control of is what will happen in pre-v12\n> branches on platforms where we use the system's *printf. However,\n> note what I wrote in the log for 0c62356cc:\n> Per commit e748e902d, we appear to have little or no coverage in the\n> buildfarm of machines that will dump core when asked to printf a\n> null string pointer.\n> Thus it appears that a large fraction of the world is already either\n> using glibc or following glibc's lead on this point.\n\nFurther to that point: I just ran around and verified that the system\nprintf prints \"(null)\" rather than crashing on FreeBSD 12.2, NetBSD 8.99,\nOpenBSD 6.8, macOS 11.4, and Solaris 11.3. AIX 7.2 and HPUX 10.20 print\n\"\", but still don't crash. If we change snprintf.c then we will also be\nokay on Windows, because we've always used our own snprintf on that\nplatform. In short, the only place I can find where there is actually\nany hazard is Solaris 10 [1]. I do not think we should let the risk of\nobscure bugs in pre-v12 versions on one obsolete OS drive our\ndecision-making about this.\n\n\t\t\tregards, tom lane\n\n[1] Per experimentation locally and on the GCC compile farm, using\nthe attached.",
"msg_date": "Tue, 13 Jul 2021 11:52:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: printf %s with NULL pointer (was Re: BUG #17098: Assert failed on\n composing an error message when adding a type to an extension being dropped)"
},
{
"msg_contents": "On Tue, Jul 13, 2021 at 11:29 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> > On Mon, 2021-07-12 at 13:20 -0400, Tom Lane wrote:\n> >> So my feeling about this is that switching snprintf.c's behavior\n> >> would produce some net gain in robustness for v12 and up, while\n> >> not making things any worse for the older branches. I still hold\n> >> to the opinion that we've already flushed out all the cases of\n> >> passing NULL that we're likely to find via ordinary testing.\n>\n> > New cases could be introduced in the future and might remain undetected.\n> > What about adding an Assert that gags on NULLs, but still printing them\n> > as \"(null)\"? That would help find such problems in a debug build.\n>\n> I think you're missing my main point, which is that it seems certain that\n> there are corner cases that do this *now*. I'm proposing that we redefine\n> this as not being a crash case, full stop.\n>\nI agree with Laurenz Albe that on debug builds, *printf with NULL must\ncrash.\nOn production builds, fine, printing (null).\nThis will put a little more pressure on support (\"Hey, what does (null) mean in\nmy logs?\"),\nbut it's ok.\n\nregards,\nRanier Vilela",
"msg_date": "Tue, 13 Jul 2021 15:04:19 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: printf %s with NULL pointer (was Re: BUG #17098: Assert failed on\n composing an error message when adding a type to an extension being dropped)"
},
{
"msg_contents": "Ranier Vilela <ranier.vf@gmail.com> writes:\n> Em ter., 13 de jul. de 2021 às 11:29, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n>> I think you're missing my main point, which is that it seems certain that\n>> there are corner cases that do this *now*. I'm proposing that we redefine\n>> this as not being a crash case, full stop.\n\n> I agree with Laurenz Albe, that on Debug builds, *printf with NULL, must\n> crash.\n\nDid you see my followup? The vast majority of live systems do not do\nthat, so we are accomplishing nothing of value by insisting it's a\ncrash-worthy bug.\n\nI flat out don't agree that \"crash on debug builds but it's okay on\nproduction\" is a useful way to define this. I spend way too much\ntime already on bug reports that only manifest with asserts enabled.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 13 Jul 2021 14:26:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: printf %s with NULL pointer (was Re: BUG #17098: Assert failed on\n composing an error message when adding a type to an extension being dropped)"
},
{
"msg_contents": "On Tue, Jul 13, 2021 at 3:26 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Ranier Vilela <ranier.vf@gmail.com> writes:\n> > On Tue, Jul 13, 2021 at 11:29 AM Tom Lane <tgl@sss.pgh.pa.us>\n> wrote:\n> >> I think you're missing my main point, which is that it seems certain\n> that\n> >> there are corner cases that do this *now*. I'm proposing that we\n> redefine\n> >> this as not being a crash case, full stop.\n>\n> > I agree with Laurenz Albe, that on Debug builds, *printf with NULL, must\n> > crash.\n>\n> Did you see my followup?\n\nI am trying.\n\n\n> The vast majority of live systems do not do\n> that, so we are accomplishing nothing of value by insisting it's a\n> crash-worthy bug.\n>\nI agreed.\n\n\n> I flat out don't agree that \"crash on debug builds but it's okay on\n> production\" is a useful way to define this. I spend way too much\n> time already on bug reports that only manifest with asserts enabled.\n>\nI understand.\nBug reports will decrease because of that: people will lose interest and\nmotivation to report,\nsince \"(null)\" doesn't seem like a serious error and the server didn't\ncrash.\n\nIt's a tricky tradeoff.\n\nregards,\nRanier Vilela",
"msg_date": "Tue, 13 Jul 2021 15:38:02 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: printf %s with NULL pointer (was Re: BUG #17098: Assert failed on\n composing an error message when adding a type to an extension being dropped)"
},
{
"msg_contents": "On Tue, 2021-07-13 at 14:26 -0400, Tom Lane wrote:\n> Did you see my followup? The vast majority of live systems do not do\n> that, so we are accomplishing nothing of value by insisting it's a\n> crash-worthy bug.\n> \n> I flat out don't agree that \"crash on debug builds but it's okay on\n> production\" is a useful way to define this. I spend way too much\n> time already on bug reports that only manifest with asserts enabled.\n\nYou convinced me that printing \"(null)\" is better than crashing.\nHaving a \"(null)\" show up in a weird place is certainly a minor inconvenience.\n\nBut I don't buy your second point: if it is like that, why do we have\nAsserts at all?\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Wed, 14 Jul 2021 07:52:39 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: printf %s with NULL pointer (was Re: BUG #17098: Assert failed\n on composing an error message when adding a type to an extension being\n dropped)"
},
{
"msg_contents": "On 13.07.21 20:26, Tom Lane wrote:\n> Did you see my followup? The vast majority of live systems do not do\n> that, so we are accomplishing nothing of value by insisting it's a\n> crash-worthy bug.\n\nBut there are no guarantees that that will be maintained in the future. \nIn the past, it has often come back to bite us when we relied on \nimplementation-dependent behavior in the C library or the compiler, \nbecause a new optimization might invalidate old assumptions.\n\nIn this particular case, I would for example be quite curious how those \nalternative minimal C libraries such as musl-libc handle this.\n\n\n\n",
"msg_date": "Wed, 14 Jul 2021 08:05:54 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: printf %s with NULL pointer (was Re: BUG #17098: Assert failed on\n composing an error message when adding a type to an extension being dropped)"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> In this particular case, I would for example be quite curious how those \n> alternative minimal C libraries such as musl-libc handle this.\n\nInteresting question, so I took a look:\n\nhttps://git.musl-libc.org/cgit/musl/tree/src/stdio/vfprintf.c#n593\n\n case 's':\n\t\t\ta = arg.p ? arg.p : \"(null)\";\n\t\t\t...\n\nAny others you'd like to consider?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 14 Jul 2021 12:26:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: printf %s with NULL pointer (was Re: BUG #17098: Assert failed on\n composing an error message when adding a type to an extension being dropped)"
},
{
"msg_contents": "I wrote:\n> Interesting question, so I took a look:\n> https://git.musl-libc.org/cgit/musl/tree/src/stdio/vfprintf.c#n593\n> case 's':\n> \t\t\ta = arg.p ? arg.p : \"(null)\";\n\nBTW, the adjacent code shows that musl is also supporting glibc's\n\"%m\" extension, so I imagine that they are endeavoring to be\ncompatible with glibc, and this goes along with that. But that\njust supports my larger point: printing \"(null)\" is clearly the\nde facto standard now, whether or not POSIX has caught up with it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 14 Jul 2021 12:41:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: printf %s with NULL pointer (was Re: BUG #17098: Assert failed on\n composing an error message when adding a type to an extension being dropped)"
},
{
"msg_contents": "On 14.07.21 18:26, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> In this particular case, I would for example be quite curious how those\n>> alternative minimal C libraries such as musl-libc handle this.\n> \n> Interesting question, so I took a look:\n> \n> https://git.musl-libc.org/cgit/musl/tree/src/stdio/vfprintf.c#n593\n> \n> case 's':\n> \t\t\ta = arg.p ? arg.p : \"(null)\";\n> \t\t\t...\n> \n> Any others you'd like to consider?\n\nSimilar here: \nhttps://github.com/ensc/dietlibc/blob/master/lib/__v_printf.c#L188\n\nI think unless we get counterexamples, this is all good.\n\n\n",
"msg_date": "Thu, 22 Jul 2021 08:20:28 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: printf %s with NULL pointer (was Re: BUG #17098: Assert failed on\n composing an error message when adding a type to an extension being dropped)"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 14.07.21 18:26, Tom Lane wrote:\n>> https://git.musl-libc.org/cgit/musl/tree/src/stdio/vfprintf.c#n593\n>> case 's':\n>> a = arg.p ? arg.p : \"(null)\";\n\n> Similar here: \n> https://github.com/ensc/dietlibc/blob/master/lib/__v_printf.c#L188\n\nI also took a look at μClibc, as well as glibc itself, and learned some\nadditional facts. glibc's behavior is not just 'print \"(null)\" instead'.\nIt is 'print \"(null)\" if the field width allows at least six characters,\notherwise print nothing'. μClibc is bug-compatible with this, but other\nimplementations seem to generally just substitute \"(null)\" for the input\nstring and run with that. I'm inclined to side with the latter camp.\nI'd rather see something like \"(nu\" than empty because the latter looks\ntoo much like it might be correct output; so I think glibc is expending\nextra code to produce a less-good result.\n\n> I think unless we get counterexamples, this is all good.\n\nBarring objections, I'll press ahead with changing snprintf.c to do this.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 22 Jul 2021 10:24:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: printf %s with NULL pointer (was Re: BUG #17098: Assert failed on\n composing an error message when adding a type to an extension being dropped)"
}
] |
[
{
"msg_contents": "Fix numeric_mul() overflow due to too many digits after decimal point.\n\nThis fixes an overflow error when using the numeric * operator if the\nresult has more than 16383 digits after the decimal point by rounding\nthe result. Overflow errors should only occur if the result has too\nmany digits *before* the decimal point.\n\nDiscussion: https://postgr.es/m/CAEZATCUmeFWCrq2dNzZpRj5+6LfN85jYiDoqm+ucSXhb9U2TbA@mail.gmail.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/e7fc488ad67caaad33f6d5177081884495cb81cb\n\nModified Files\n--------------\nsrc/backend/utils/adt/numeric.c | 10 +++++++++-\nsrc/test/regress/expected/numeric.out | 6 ++++++\nsrc/test/regress/sql/numeric.sql | 2 ++\n3 files changed, 17 insertions(+), 1 deletion(-)",
"msg_date": "Sat, 10 Jul 2021 11:54:15 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "pgsql: Fix numeric_mul() overflow due to too many digits after\n decimal "
},
{
"msg_contents": "Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> This fixes an overflow error when using the numeric * operator if the\n> result has more than 16383 digits after the decimal point by rounding\n> the result. Overflow errors should only occur if the result has too\n> many digits *before* the decimal point.\n\nI think this needs a bit more thought. Before, a case like\n\tselect 1e-16000 * 1e-16000;\nproduced\n\tERROR: value overflows numeric format\nNow you get an exact zero (with a lot of trailing zeroes, but still\nit's just zero). Doesn't that represent catastrophic loss of\nprecision?\n\nIn general, I'm disturbed that we just threw away the previous\npromise that numeric multiplication results were exact. That\nseems like a pretty fundamental property --- which is stated\nin so many words in the manual, btw --- and I'm not sure I want\nto give it up.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 10 Jul 2021 11:01:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix numeric_mul() overflow due to too many digits after\n decimal"
},
{
"msg_contents": "On Sat, 10 Jul 2021 at 16:01, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I think this needs a bit more thought. Before, a case like\n> select 1e-16000 * 1e-16000;\n> produced\n> ERROR: value overflows numeric format\n> Now you get an exact zero (with a lot of trailing zeroes, but still\n> it's just zero). Doesn't that represent catastrophic loss of\n> precision?\n\nHmm, \"overflow\" isn't a great result for that case either. Zero is the\nclosest we can get to the exact result with a fixed number of digits\nafter the decimal point.\n\n> In general, I'm disturbed that we just threw away the previous\n> promise that numeric multiplication results were exact. That\n> seems like a pretty fundamental property --- which is stated\n> in so many words in the manual, btw --- and I'm not sure I want\n> to give it up.\n\nPerhaps we should amend the statement about numeric multiplication to\nsay that it's exact within the limits of the numeric type's supported\nscale, which we also document in the manual as 16383.\n\nThat seems a lot better than throwing an overflow error for a result\nthat isn't very big, which limits what's possible with numeric\nmultiplication to much less than 16383 digits.\n\nRegards,\nDean\n\n\n",
"msg_date": "Sat, 10 Jul 2021 18:03:54 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Fix numeric_mul() overflow due to too many digits after\n decimal"
},
{
"msg_contents": "[ moving to pghackers for wider visibility ]\n\nDean Rasheed <dean.a.rasheed@gmail.com> writes:\n> On Sat, 10 Jul 2021 at 16:01, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> In general, I'm disturbed that we just threw away the previous\n>> promise that numeric multiplication results were exact. That\n>> seems like a pretty fundamental property --- which is stated\n>> in so many words in the manual, btw --- and I'm not sure I want\n>> to give it up.\n\n> Perhaps we should amend the statement about numeric multiplication to\n> say that it's exact within the limits of the numeric type's supported\n> scale, which we also document in the manual as 16383.\n> That seems a lot better than throwing an overflow error for a result\n> that isn't very big, which limits what's possible with numeric\n> multiplication to much less than 16383 digits.\n\nTBH, I don't agree. I think this is strictly worse than what we\ndid before, and we should just revert it. It's no longer possible\nto reason about what numeric multiplication will do. I think\nthrowing an error if we can't represent the result exactly is a\npreferable behavior. If you don't want exact results, use float8.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 10 Jul 2021 13:30:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix numeric_mul() overflow due to too many digits after\n decimal"
},
{
"msg_contents": "On Sat, 10 Jul 2021 at 18:30, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> [ moving to pghackers for wider visibility ]\n>\n> Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> > On Sat, 10 Jul 2021 at 16:01, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> In general, I'm disturbed that we just threw away the previous\n> >> promise that numeric multiplication results were exact. That\n> >> seems like a pretty fundamental property --- which is stated\n> >> in so many words in the manual, btw --- and I'm not sure I want\n> >> to give it up.\n>\n> > Perhaps we should amend the statement about numeric multiplication to\n> > say that it's exact within the limits of the numeric type's supported\n> > scale, which we also document in the manual as 16383.\n> > That seems a lot better than throwing an overflow error for a result\n> > that isn't very big, which limits what's possible with numeric\n> > multiplication to much less than 16383 digits.\n>\n> TBH, I don't agree. I think this is strictly worse than what we\n> did before, and we should just revert it. It's no longer possible\n> to reason about what numeric multiplication will do. I think\n> throwing an error if we can't represent the result exactly is a\n> preferable behavior. If you don't want exact results, use float8.\n\nActually, I think it makes it a lot easier to reason about what it\nwill do -- it'll return the exact result, or the exact result\ncorrectly rounded to 16383 digits.\n\nThat seems perfectly reasonable to me, since almost every numeric\noperation ends up having to round at some point, and almost all of\nthem don't produce exact results.\n\nThe previous behaviour might seem like a principled stance to take,\nfrom a particular perspective, but it's pretty hard to work with in\npractice.\n\nFor example, say you had 2 numbers, each with 10000 digits after the\ndecimal point, that you wanted to multiply. 
If it throws an overflow\nerror for results with more than 16383 digits after the decimal point,\nthen you'd have to round one or both input numbers before multiplying.\nTo get the most accurate result possible, you'd actually have to round\neach number to have the same number of *significant digits*, rather\nthan the same number of digits after the decimal point. In general,\nthat's quite complicated -- suppose, after rounding, you had\n\n x = [x1 digits] . [x2 digits]\n y = [y1 digits] . [y2 digits]\n\nwhere x1+x2 = y1+y2 to minimise the error in the final result, and\nx2+y2 = 16383 to maximise the result scale. x1 and y1 are known\n(though we don't have a convenient way to obtain them), so that's a\npair of simultaneous equations to solve to decide the optimal amount\nof rounding to apply before multiplying. And after all that, the\nresult will only be accurate to around x1+x2 (or y1+y2) significant\ndigits, and around x1+y1 of those will be before the decimal point, so\neven though it will return a result with 16383 digits after the\ndecimal point, lots of those digits will be completely wrong.\n\nWith the new code, you just multiply the numbers and the result is\ncorrectly rounded to 16383 digits.\n\nIn general, I'd argue that using numeric isn't about getting exact\nresults, since nearly all real computations don't have an exact\nresult. Really, it's about getting high-precision results that would\nbe impossible with float8. (And if you don't care about numbers with\n16383 digits after the decimal point, this change won't affect you.)\n\nRegards,\nDean\n\n\n",
"msg_date": "Sat, 10 Jul 2021 19:19:12 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Fix numeric_mul() overflow due to too many digits after\n decimal"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile analyzing a possible use of an uninitialized variable, I found that\n*_bt_restore_page* can lead to memory corruption,\nbecause it does not check the maximum number of array items, which is\nMaxIndexTuplesPerPage.\n\nIt can also create a dangling pointer by incrementing it beyond the\nlimits it can point to.\n\nWhile there, I reduced the scope and adapted the type of\nthe *len* parameter to match the XLogRecGetBlockData function.\n\nThe patch passes regress check on Windows and check-world on Linux.\n\nregards,\nRanier Vilela",
"msg_date": "Sun, 11 Jul 2021 16:51:04 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Protect against possible memory corruption\n (src/backend/access/nbtree/nbtxlog.c)"
},
{
"msg_contents": "On 11/07/2021 22:51, Ranier Vilela wrote:\n> Hi,\n> \n> While analyzing a possible use of an uninitialized variable, I checked that\n> *_bt_restore_page* can lead to memory corruption,\n> by not checking the maximum limit of array items which is\n> MaxIndexTuplesPerPage.\n\n> +\t/* Protect against corrupted recovery file */\n> +\tnitems = (len / sizeof(IndexTupleData));\n> +\tif (nitems < 0 || nitems > MaxIndexTuplesPerPage)\n> +\t\telog(PANIC, \"_bt_restore_page: cannot restore %d items to page\", nitems);\n> +\n\nThat's not right. You don't get the number of items by dividing like \nthat. 'len' includes the tuple data as well, not just the IndexTupleData \nheader.\n\n> @@ -73,12 +79,9 @@ _bt_restore_page(Page page, char *from, int len)\n> \tnitems = i;\n> \n> \tfor (i = nitems - 1; i >= 0; i--)\n> -\t{\n> \t\tif (PageAddItem(page, items[i], itemsizes[i], nitems - i,\n> \t\t\t\t\t\tfalse, false) == InvalidOffsetNumber)\n> \t\t\telog(PANIC, \"_bt_restore_page: cannot add item to page\");\n> -\t\tfrom += itemsz;\n> -\t}\n> }\n\nI agree with this change (except that I would leave the braces in \nplace). The 'from' that's calculated here is plain wrong; oversight in \ncommit 7e30c186da. Fortunately it's not used, so it can just be removed.\n\n- Heikki\n\n\n",
"msg_date": "Mon, 12 Jul 2021 01:19:47 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Protect against possible memory corruption\n (src/backend/access/nbtree/nbtxlog.c)"
},
{
"msg_contents": "On Sun, Jul 11, 2021 at 7:19 PM Heikki Linnakangas <hlinnaka@iki.fi>\nwrote:\n\n> On 11/07/2021 22:51, Ranier Vilela wrote:\n> > Hi,\n> >\n> > While analyzing a possible use of an uninitialized variable, I checked\n> that\n> > *_bt_restore_page* can lead to memory corruption,\n> > by not checking the maximum limit of array items which is\n> > MaxIndexTuplesPerPage.\n>\n> > + /* Protect against corrupted recovery file */\n> > + nitems = (len / sizeof(IndexTupleData));\n> > + if (nitems < 0 || nitems > MaxIndexTuplesPerPage)\n> > + elog(PANIC, \"_bt_restore_page: cannot restore %d items to\n> page\", nitems);\n> > +\n>\n> That's not right. You don't get the number of items by dividing like\n> that. 'len' includes the tuple data as well, not just the IndexTupleData\n> header.\n>\nThanks for the quick review.\n\nNot totally wrong.\nIf it is not possible to know the upper limit before the loop,\nthen the check needs to be done inside the loop.\n\nAttached is v1 of the patch.\n\nregards,\nRanier Vilela",
"msg_date": "Sun, 11 Jul 2021 20:34:15 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Protect against possible memory corruption\n (src/backend/access/nbtree/nbtxlog.c)"
},
{
"msg_contents": "On 12/07/2021 02:34, Ranier Vilela wrote:\n> If it is not possible, know the upper limits, before the loop.\n> It is necessary to do this inside the loop.\n\n> @@ -49,10 +47,14 @@ _bt_restore_page(Page page, char *from, int len)\n> \t * To get the items back in the original order, we add them to the page in\n> \t * reverse. To figure out where one tuple ends and another begins, we\n> \t * have to scan them in forward order first.\n> +\t * Check the array upper limit to not overtake him.\n> \t */\n> \ti = 0;\n> -\twhile (from < end)\n> +\twhile (from < end && i <= MaxIndexTuplesPerPage)\n> \t{\n> +\t\tIndexTupleData itupdata;\n> +\t\tSize\t\titemsz;\n> +\n> \t\t/*\n> \t\t * As we step through the items, 'from' won't always be properly\n> \t\t * aligned, so we need to use memcpy(). Further, we use Item (which\n\nIf we bother checking it, we should throw an error if the check fails, \nnot just silently soldier on. Also, shouldn't it be '<', not '<='? In \ngeneral though, we don't do much checking on WAL records, we assume that \nthe contents are sane. It would be nice to add more checks and make WAL \nredo routines more robust to corrupt records, but this seems like an odd \nplace to start.\n\nI committed the removal of bogus assignment to 'from'. Thanks!\n\n- Heikki\n\n\n",
"msg_date": "Mon, 12 Jul 2021 11:20:57 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Protect against possible memory corruption\n (src/backend/access/nbtree/nbtxlog.c)"
},
{
"msg_contents": "On Mon, Jul 12, 2021 at 5:20 AM Heikki Linnakangas <hlinnaka@iki.fi>\nwrote:\n\n> On 12/07/2021 02:34, Ranier Vilela wrote:\n> > If it is not possible, know the upper limits, before the loop.\n> > It is necessary to do this inside the loop.\n>\n> > @@ -49,10 +47,14 @@ _bt_restore_page(Page page, char *from, int len)\n> > * To get the items back in the original order, we add them to the\n> page in\n> > * reverse. To figure out where one tuple ends and another\n> begins, we\n> > * have to scan them in forward order first.\n> > + * Check the array upper limit to not overtake him.\n> > */\n> > i = 0;\n> > - while (from < end)\n> > + while (from < end && i <= MaxIndexTuplesPerPage)\n> > {\n> > + IndexTupleData itupdata;\n> > + Size itemsz;\n> > +\n> > /*\n> > * As we step through the items, 'from' won't always be\n> properly\n> > * aligned, so we need to use memcpy(). Further, we use\n> Item (which\n>\n> If we bother checking it, we should throw an error if the check fails,\n> not just silently soldier on. Also, shouldn't it be '<', not '<='?\n\nShould be '<', you are right.\n\n> In\n> general though, we don't do much checking on WAL records, we assume that\n> the contents are sane. It would be nice to add more checks and make WAL\n> redo routines more robust to corrupt records, but this seems like an odd\n> place to start.\n>\nIf WAL records can't be corrupted at _bt_restore_page, that's ok, it's safe.\n\n\n> I committed the removal of bogus assignment to 'from'. Thanks!\n>\nThanks for the commit.\n\nregards,\nRanier Vilela",
"msg_date": "Mon, 12 Jul 2021 13:17:19 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Protect against possible memory corruption\n (src/backend/access/nbtree/nbtxlog.c)"
}
] |
[
{
"msg_contents": "Sorry I have sent a duplicate email. I will first continue discussion\nin the other thread and then submit it after we have a conclusion.\nThanks.\n\nPeifeng",
"msg_date": "Mon, 12 Jul 2021 02:07:29 +0000",
"msg_from": "Peifeng Qiu <peifengq@vmware.com>",
"msg_from_op": true,
"msg_subject": "Re: Support kerberos authentication for postgres_fdw"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile I’m reading source codes related to vacuum, I found comments which\ndon’t seem to fit the reality. I think the commit[1] just forgot to fix them.\nWhat do you think?\n\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=dc7420c2c9274a283779ec19718d2d16323640c0\n\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION",
"msg_date": "Mon, 12 Jul 2021 20:13:47 +0900",
"msg_from": "ikedamsh@oss.nttdata.com",
"msg_from_op": true,
"msg_subject": "Fix comments of heap_prune_chain()"
},
{
"msg_contents": "On 2021-Jul-12, ikedamsh@oss.nttdata.com wrote:\n\n> While I’m reading source codes related to vacuum, I found comments which\n> don’t seem to fit the reality. I think the commit[1] just forgot to fix them.\n> What do you think?\n> \n> [1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=dc7420c2c9274a283779ec19718d2d16323640c0\n\nThat sounds believable, but can you be more specific?\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 12 Jul 2021 16:31:47 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Fix comments of heap_prune_chain()"
},
{
"msg_contents": "On Mon, 12 Jul 2021 at 13:14, <ikedamsh@oss.nttdata.com> wrote:\n>\n> Hi,\n>\n> While I’m reading source codes related to vacuum, I found comments which\n> don’t seem to fit the reality. I think the commit[1] just forgot to fix\nthem.\n> What do you think?\n\nHmm, yes, those are indeed some leftovers.\n\nSome comments on the suggested changes:\n\n\n- * caused by HeapTupleSatisfiesVacuum. We just add entries to the arrays\nin\n+ * caused by heap_prune_satisfies_vacuum. We just add entries to the\narrays in\n\nI think that HeapTupleSatisfiesVacuumHorizon might be an alternative\ncorrect replacement here.\n\n\n- elog(ERROR, \"unexpected HeapTupleSatisfiesVacuum result\");\n+ elog(ERROR, \"unexpected heap_prune_satisfies_vacuum\nresult\");\n\nThe type of the value is HTSV_Result; where HTSV stands for\nHeapTupleSatisfiesVacuum, so if we were to replace this, I'd go for\n\"unexpected result from heap_prune_satisfies_vacuum\" as a message instead.\n\n\n\nKind regards,\n\nMatthias van de Meent",
"msg_date": "Mon, 12 Jul 2021 22:57:15 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix comments of heap_prune_chain()"
},
{
"msg_contents": "On 2021-Jul-12, Alvaro Herrera wrote:\n\n> On 2021-Jul-12, ikedamsh@oss.nttdata.com wrote:\n> \n> > While I’m reading source codes related to vacuum, I found comments which\n> > don’t seem to fit the reality. I think the commit[1] just forgot to fix them.\n> > What do you think?\n> > \n> > [1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=dc7420c2c9274a283779ec19718d2d16323640c0\n> \n> That sounds believable, but can you be more specific?\n\nOh, apologies, I didn't realize there was an attachment. That seems\nspecific enough :-)\n\nIn my defense, the archives don't show the attachment either:\nhttps://www.postgresql.org/message-id/5CB29811-2B1D-4244-8DE2-B1E02495426B%40oss.nttdata.com\nI think we've seen this kind of problem before -- the MIME structure of\nthe message is quite unusual, which is why neither my MUA nor the\narchives show it.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Los trabajadores menos efectivos son sistematicamente llevados al lugar\ndonde pueden hacer el menor daño posible: gerencia.\" (El principio Dilbert)\n\n\n",
"msg_date": "Mon, 12 Jul 2021 20:17:55 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Fix comments of heap_prune_chain()"
},
{
"msg_contents": "(This is out of topic)\n\nAt Mon, 12 Jul 2021 20:17:55 -0400, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in \n> Oh, apologies, I didn't realize there was an attachment. That seems\n> specific enough :-)\n> \n> In my defense, the archives don't show the attachment either:\n> https://www.postgresql.org/message-id/5CB29811-2B1D-4244-8DE2-B1E02495426B%40oss.nttdata.com\n> I think we've seen this kind of problem before -- the MIME structure of\n> the message is quite unusual, which is why neither my MUA nor the\n> archives show it.\n\nThe same for me. The multipart structure of that mail looks odd:\n\nmultipart/alternative\n text/plain - mail body quoted-printable\n multipart/mixed\n text/html - HTML alternative body\n application/octet-stream - the patch\n text/html - garbage\n\n\nI found an issue in Bugzilla about this behavior\n\nhttps://bugzilla.mozilla.org/show_bug.cgi?id=1362539\n\nThe primary issue is Apple Mail's strange MIME composition.\nI'm not sure whether it is avoidable by some settings.\n\n(I don't think the alternative HTML body is useful at least for this\n mailing list.)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 13 Jul 2021 10:22:02 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix comments of heap_prune_chain()"
},
{
"msg_contents": "On 2021/07/13 5:57, Matthias van de Meent wrote:\n> \n> \n> On Mon, 12 Jul 2021 at 13:14, <ikedamsh@oss.nttdata.com\n> <mailto:ikedamsh@oss.nttdata.com>> wrote:\n>>\n>> Hi,\n>>\n>> While I’m reading source codes related to vacuum, I found comments which\n>> don’t seem to fit the reality. I think the commit[1] just forgot to fix them.\n>> What do you think?\n> \n> Hmm, yes, those are indeed some leftovers.\n> \n> Some comments on the suggested changes:\n> \n> \n> - * caused by HeapTupleSatisfiesVacuum. We just add entries to the arrays in\n> + * caused by heap_prune_satisfies_vacuum. We just add entries to the arrays in\n> \n> I think that HeapTupleSatisfiesVacuumHorizon might be an alternative correct\n> replacement here.\n> \n> \n> - elog(ERROR, \"unexpected HeapTupleSatisfiesVacuum result\");\n> + elog(ERROR, \"unexpected heap_prune_satisfies_vacuum result\");\n> \n> The type of the value is HTSV_Result; where HTSV stands for\n> HeapTupleSatisfiesVacuum, so if we were to replace this, I'd go for\n> \"unexpected result from heap_prune_satisfies_vacuum\" as a message instead.\n\nThanks for your comments. I agree with your suggestions.\n\nI also updated prstate->vistest to heap_prune_satisfies_vacuum in the v1 patch\nbecause heap_prune_satisfies_vacuum() tests with not only prstate->vistest but\nalso prstate->old_snap_xmin. I think it's a more accurate representation.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION",
"msg_date": "Tue, 13 Jul 2021 19:05:10 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix comments of heap_prune_chain()"
},
{
"msg_contents": "\n\nOn 2021/07/13 10:22, Kyotaro Horiguchi wrote:\n> (This is out of topic)\n> \n> At Mon, 12 Jul 2021 20:17:55 -0400, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in \n>> Oh, apologies, I didn't realize there was an attachment. That seems\n>> specific enough :-)\n>>\n>> In my defense, the archives don't show the attachment either:\n>> https://www.postgresql.org/message-id/5CB29811-2B1D-4244-8DE2-B1E02495426B%40oss.nttdata.com\n>> I think we've seen this kind of problem before -- the MIME structure of\n>> the message is quite unusual, which is why neither my MUA nor the\n>> archives show it.\n> \n> The same for me. Multipart structure of that mail looks like odd.\n> \n> multipart/laternative\n> text/plain - mail body quoted-printable\n> multipart/mixed\n> text/html - HTML alternative body\n> appliation/octet-stream - the patch\n> text/html - garbage\n> \n> \n> I found an issue in bugzilla about this behavior\n> \n> https://bugzilla.mozilla.org/show_bug.cgi?id=1362539\n> \n> The primary issue is Apple-mail's strange mime-composition.\n> I'm not sure whether it is avoidable by some settings.\n> \n> (I don't think the alternative HTML body is useful at least for this\n> mailling list.)\n\nThanks for replying and sorry for the above.\nThe reason is that I sent from MacBook PC as Horiguchi-san said.\n\nI changed my email client and I confirmed that I could send an\nemail with a new patch. So, please check it.\nhttps://www.postgresql.org/message-id/1aa07e2a-b715-5649-6c62-4fff96304d18%40oss.nttdata.com\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 13 Jul 2021 19:13:17 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix comments of heap_prune_chain()"
}
] |
[
{
"msg_contents": "While looking into one of the pg_upgrade issue, I found it\nchallenging to find out the database that has the datallowconn set to\n'false' that was throwing following error:\n\n*\"All non-template0 databases must allow connections, i.e. their\npg_database.datallowconn must be true\"*\n\nedb=# create database mydb;\nCREATE DATABASE\nedb=# update pg_database set datallowconn='false' where datname like 'mydb';\nUPDATE 1\n\nNow, when I try to upgrade the server, without the patch we get above\nerror, which leaves no clue behind about which database has datallowconn\nset to 'false'. It can be argued that we can query the pg_database\ncatalog and find that out easily, but at times it is challenging to get\nthat from the customer environment. But, anyways I feel we have scope to\nimprove the error message here per the attached patch.\n\nWith attached patch, now I get following error:\n\n*\"All non-template0 databases must allow connections, i.e. their\npg_database.datallowconn must be true; database \"mydb\" has datallowconn set\nto false.\"*\n\nRegards,\nJeevan Ladhe",
"msg_date": "Mon, 12 Jul 2021 16:58:54 +0530",
"msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] improve the pg_upgrade error message"
},
{
"msg_contents": "On Mon, 2021-07-12 at 16:58 +0530, Jeevan Ladhe wrote:\n> While looking into one of the pg_upgrade issue, I found it\n> challenging to find out the database that has the datallowconn set to\n> 'false' that was throwing following error:\n> \n> \"All non-template0 databases must allow connections, i.e. their pg_database.datallowconn must be true\"\n> \n> It can be argued that we can query the pg_database\n> catalog and find that out easily, but at times it is challenging to get\n> that from the customer environment.\n>\n> With attached patch, now I get following error:\n> \"All non-template0 databases must allow connections, i.e. their pg_database.datallowconn must be true; database \"mydb\" has datallowconn set to false.\"\n\nI am in favor of that in principle, but I think that additional information\nshould be on a separate line.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Mon, 12 Jul 2021 14:06:31 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] improve the pg_upgrade error message"
},
{
"msg_contents": "+1 for the change. Patch looks good to me.\n\nOn Mon, Jul 12, 2021 at 4:59 PM Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>\nwrote:\n\n> While looking into one of the pg_upgrade issue, I found it\n>\n> challenging to find out the database that has the datallowconn set to\n>\n> 'false' that was throwing following error:\n>\n>\n> *\"All non-template0 databases must allow connections, i.e. their\n> pg_database.datallowconn must be true\"*\n>\n>\n> edb=# create database mydb;\n>\n> CREATE DATABASE\n>\n> edb=# update pg_database set datallowconn='false' where datname like\n> 'mydb';\n>\n> UPDATE 1\n>\n>\n> Now, when I try to upgrade the server, without the patch we get above\n>\n> error, which leaves no clue behind about which database has datallowconn\n>\n> set to 'false'. It can be argued that we can query the pg_database\n>\n> catalog and find that out easily, but at times it is challenging to get\n>\n> that from the customer environment. But, anyways I feel we have scope to\n>\n> improve the error message here per the attached patch.\n>\n>\n> With attached patch, now I get following error:\n>\n> *\"All non-template0 databases must allow connections, i.e. their\n> pg_database.datallowconn must be true; database \"mydb\" has datallowconn set\n> to false.\"*\n>\n>\n>\n> Regards,\n>\n> Jeevan Ladhe\n>\n\n\n-- \n--\n\nThanks & Regards,\nSuraj kharage,\n\n\n\nedbpostgres.com",
"msg_date": "Mon, 12 Jul 2021 17:43:30 +0530",
"msg_from": "Suraj Kharage <suraj.kharage@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] improve the pg_upgrade error message"
},
{
"msg_contents": "On Mon, Jul 12, 2021 at 02:06:31PM +0200, Laurenz Albe wrote:\n> On Mon, 2021-07-12 at 16:58 +0530, Jeevan Ladhe wrote:\n> > While looking into one of the pg_upgrade issue, I found it\n> > challenging to find out the database that has the datallowconn set to\n> > 'false' that was throwing following error:\n> > \n> > \"All non-template0 databases must allow connections, i.e. their pg_database.datallowconn must be true\"\n> > \n> > It can be argued that we can query the pg_database\n> > catalog and find that out easily, but at times it is challenging to get\n> > that from the customer environment.\n> >\n> > With attached patch, now I get following error:\n> > \"All non-template0 databases must allow connections, i.e. their pg_database.datallowconn must be true; database \"mydb\" has datallowconn set to false.\"\n> \n> I am in favor of that in principle, but I think that additional information\n> should be separate line.\n\nI think the style for this kind of error is established by commit 1634d3615.\nThis shows \"In database: ...\"\n\nMore importantly, if you're going to show the name of the problematic DB, you\nshould show the name of *all* the problem DBs. Otherwise it gives the\nimpression the upgrade will progress if you fix just that one.\n\nThe admin might fix DB123, restart their upgrade procedure, spend 5 or 15\nminutes with that, only to have it then fail on DB1234.\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 12 Jul 2021 07:20:02 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] improve the pg_upgrade error message"
},
{
"msg_contents": "> The admin might fix DB123, restart their upgrade procedure, spend 5 or 15\n> minutes with that, only to have it then fail on DB1234.\n>\n\nAgree with this observation.\n\nHere is a patch that writes the list of all the databases other than\ntemplate0\nthat are having their pg_database.datallowconn to false in a file. Similar\napproach is seen in other functions like check_for_data_types_usage(),\ncheck_for_data_types_usage() etc. Thanks Suraj Kharage for the offline\nsuggestion.\n\nPFA patch.\n\nFor experiment, here is how it turns out after the fix.\n\npostgres=# update pg_database set datallowconn='false' where datname in\n('mydb', 'mydb1', 'mydb2');\nUPDATE 3\n\n$ pg_upgrade -d /tmp/v96/data -D /tmp/v13/data -b $HOME/v96/install/bin -B\n$HOME/v13/install/bin\nPerforming Consistency Checks\n-----------------------------\nChecking cluster versions ok\nChecking database user is the install user ok\nChecking database connection settings fatal\n\nAll non-template0 databases must allow connections, i.e. their\npg_database.datallowconn must be true. Your installation contains\nnon-template0 databases with their pg_database.datallowconn set to\nfalse. Consider allowing connection for all non-template0 databases\nusing:\n UPDATE pg_catalog.pg_database SET datallowconn='true' WHERE datname NOT\nLIKE 'template0';\nA list of databases with the problem is given in the file:\n databases_with_datallowconn_false.txt\n\nFailure, exiting\n\n$ cat databases_with_datallowconn_false.txt\nmydb\nmydb1\nmydb2\n\n\nRegards,\nJeevan Ladhe",
"msg_date": "Tue, 13 Jul 2021 18:57:00 +0530",
"msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] improve the pg_upgrade error message"
},
{
"msg_contents": "Thanks Jeevan for working on this.\nOverall patch looks good to me.\n\n+ pg_fatal(\"All non-template0 databases must allow connections, i.e.\ntheir\\n\"\n+ \"pg_database.datallowconn must be true. Your installation contains\\n\"\n+ \"non-template0 databases with their pg_database.datallowconn set to\\n\"\n+ \"false. Consider allowing connection for all non-template0 databases\\n\"\n+ \"using:\\n\"\n+ \" UPDATE pg_catalog.pg_database SET datallowconn='true' WHERE datname\nNOT LIKE 'template0';\\n\"\n+ \"A list of databases with the problem is given in the file:\\n\"\n+ \" %s\\n\\n\", output_path);\n\nInstead of giving suggestion about updating the pg_database catalog, can we\ngive \"ALTER DATABASE <datname> ALLOW_CONNECTIONS true;\" command?\nAlso, it would be good if we give 2 spaces after full stop in an error\nmessage.\n\nOn Tue, Jul 13, 2021 at 6:57 PM Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>\nwrote:\n\n>\n> The admin might fix DB123, restart their upgrade procedure, spend 5 or 15\n>> minutes with that, only to have it then fail on DB1234.\n>>\n>\n> Agree with this observation.\n>\n> Here is a patch that writes the list of all the databases other than\n> template0\n> that are having their pg_database.datallowconn to false in a file. Similar\n> approach is seen in other functions like check_for_data_types_usage(),\n> check_for_data_types_usage() etc. Thanks Suraj Kharage for the offline\n> suggestion.\n>\n> PFA patch.\n>\n> For experiment, here is how it turns out after the fix.\n>\n> postgres=# update pg_database set datallowconn='false' where datname in\n> ('mydb', 'mydb1', 'mydb2');\n> UPDATE 3\n>\n> $ pg_upgrade -d /tmp/v96/data -D /tmp/v13/data -b $HOME/v96/install/bin -B\n> $HOME/v13/install/bin\n> Performing Consistency Checks\n> -----------------------------\n> Checking cluster versions ok\n> Checking database user is the install user ok\n> Checking database connection settings fatal\n>\n> All non-template0 databases must allow connections, i.e. their\n> pg_database.datallowconn must be true. Your installation contains\n> non-template0 databases with their pg_database.datallowconn set to\n> false. Consider allowing connection for all non-template0 databases\n> using:\n> UPDATE pg_catalog.pg_database SET datallowconn='true' WHERE datname\n> NOT LIKE 'template0';\n> A list of databases with the problem is given in the file:\n> databases_with_datallowconn_false.txt\n>\n> Failure, exiting\n>\n> $ cat databases_with_datallowconn_false.txt\n> mydb\n> mydb1\n> mydb2\n>\n>\n> Regards,\n> Jeevan Ladhe\n>\n\n\n-- \n--\n\nThanks & Regards,\nSuraj kharage,\n\n\n\nedbpostgres.com",
"msg_date": "Wed, 14 Jul 2021 10:57:33 +0530",
"msg_from": "Suraj Kharage <suraj.kharage@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] improve the pg_upgrade error message"
},
{
"msg_contents": "> On 14 Jul 2021, at 07:27, Suraj Kharage <suraj.kharage@enterprisedb.com> wrote:\n\n> Overall patch looks good to me.\n\nAgreed, I think this is a good change and in line with how the check functions\nwork in general.\n\n> Instead of giving suggestion about updating the pg_database catalog, can we give \"ALTER DATABASE <datname> ALLOW_CONNECTIONS true;\" command?\n\nI would actually prefer to not give any suggestions at all, we typically don't\nin these error messages. Since there are many ways to do it (dropping the\ndatabase being one) I think leaving that to the user is per application style.\n\n> Also, it would be good if we give 2 spaces after full stop in an error message.\n\nCorrect, fixed in the attached which also tweaks the language slightly to match\nother errors.\n\nI propose to commit the attached, which also adds a function comment while\nthere, unless there are objections.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/",
"msg_date": "Thu, 21 Oct 2021 12:21:47 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] improve the pg_upgrade error message"
},
{
"msg_contents": "Hi Daniel,\n\nWas wondering if we had any barriers to getting this committed.\nI believe it will be good to have this change and also it will be more in\nline\nwith other check functions also.\n\nRegards,\nJeevan\n\nOn Thu, Oct 21, 2021 at 3:51 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> > On 14 Jul 2021, at 07:27, Suraj Kharage <suraj.kharage@enterprisedb.com>\n> wrote:\n>\n> > Overall patch looks good to me.\n>\n> Agreed, I think this is a good change and in line with how the check\n> functions\n> work in general.\n>\n> > Instead of giving suggestion about updating the pg_database catalog, can\n> we give \"ALTER DATABASE <datname> ALLOW_CONNECTIONS true;\" command?\n>\n> I would actually prefer to not give any suggestions at all, we typically\n> don't\n> in these error messages. Since there are many ways to do it (dropping the\n> database being one) I think leaving that to the user is per application\n> style.\n>\n> > Also, it would be good if we give 2 spaces after full stop in an error\n> message.\n>\n> Correct, fixed in the attached which also tweaks the language slightly to\n> match\n> other errors.\n>\n> I propose to commit the attached, which also adds a function comment while\n> there, unless there are objections.\n>\n> --\n> Daniel Gustafsson https://vmware.com/\n>\n>",
"msg_date": "Wed, 1 Dec 2021 15:29:44 +0530",
"msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] improve the pg_upgrade error message"
},
{
"msg_contents": "> On 1 Dec 2021, at 10:59, Jeevan Ladhe <jeevan.ladhe@enterprisedb.com> wrote:\n\n> Was wondering if we had any barriers to getting this committed.\n\nNo barrier other than available time to, I will try to get to it shortly.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Wed, 1 Dec 2021 11:15:11 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] improve the pg_upgrade error message"
},
{
"msg_contents": "On Wed, Dec 1, 2021 at 3:45 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> > On 1 Dec 2021, at 10:59, Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>\n> wrote:\n>\n> > Was wondering if we had any barriers to getting this committed.\n>\n> No barrier other than available time to, I will try to get to it shortly.\n>\n\nGreat! Thank you.\n\n\n> --\n> Daniel Gustafsson https://vmware.com/\n>\n>",
"msg_date": "Thu, 2 Dec 2021 12:28:08 +0530",
"msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] improve the pg_upgrade error message"
},
{
"msg_contents": "> On 1 Dec 2021, at 11:15, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n>> On 1 Dec 2021, at 10:59, Jeevan Ladhe <jeevan.ladhe@enterprisedb.com> wrote:\n> \n>> Was wondering if we had any barriers to getting this committed.\n> \n> No barrier other than available time to, I will try to get to it shortly.\n\nThe \"shortly\" aspect wasn't really fulfilled, but I got around to taking\nanother look at this today and pushed it now with a few small changes to\nreflect how pg_upgrade has changed.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Thu, 24 Mar 2022 22:48:24 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] improve the pg_upgrade error message"
}
] |
[
{
"msg_contents": "Hackers,\n\nThe Commitfest 2021-07 is now in progress. It is one of the biggest ones.\nThe total number of patches in this commitfest is 342.\n\nNeeds review: 204.\nWaiting on Author: 40.\nReady for Committer: 18.\nCommitted: 57.\nMoved to next CF: 3.\nWithdrawn: 15.\nRejected: 3.\nReturned with Feedback: 2.\nTotal: 342.\n\nIf you are a patch author, please check http://commitfest.cputube.org to be\nsure your patch still applies, compiles, and passes tests.\n\nWe need your involvement and participation in reviewing the patches. Let's\ntry and make this happen.\n\n--\nRegards.\nIbrar Ahmed",
"msg_date": "Mon, 12 Jul 2021 16:59:41 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": true,
"msg_subject": "2021-07 CF now in progress"
},
{
"msg_contents": "On Mon, Jul 12, 2021 at 4:59 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n\n>\n> Hackers,\n>\n> The Commitfest 2021-07 is now in progress. It is one of the biggest one.\n> Total number of patches of this commitfest is 342.\n>\n> Needs review: 204.\n> Waiting on Author: 40.\n> Ready for Committer: 18.\n> Committed: 57.\n> Moved to next CF: 3.\n> Withdrawn: 15.\n> Rejected: 3.\n> Returned with Feedback: 2.\n> Total: 342.\n>\n> If you are a patch author, please check http://commitfest.cputube.org to\n> be sure your patch still applies, compiles, and passes tests.\n>\n> We need your involvement and participation in reviewing the patches. Let's\n> try and make this happen.\n>\n> --\n> Regards.\n> Ibrar Ahmed\n>\n\n\nOver the past one week, statuses of 47 patches have been changed from\n\"Needs review\". This still leaves us with 157 patches\nrequiring reviews. As always, your continuous support is appreciated to get\nus over the line.\n\nPlease look at the patches requiring review in the current commitfest.\nTest, provide feedback where needed, and update the patch status.\n\nTotal: 342.\n\nNeeds review: 157.\nWaiting on Author: 74.\nReady for Committer: 15.\nCommitted: 68.\nMoved to next CF: 3.\nWithdrawn: 19.\nRejected: 4.\nReturned with Feedback: 2.\n\n\n-- \nIbrar Ahmed\n\nOn Mon, Jul 12, 2021 at 4:59 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:Hackers,The Commitfest 2021-07 is now in progress. It is one of the biggest one. Total number of patches of this commitfest is 342.Needs review: 204. Waiting on Author: 40. Ready for Committer: 18. Committed: 57. Moved to next CF: 3. Withdrawn: 15. Rejected: 3. Returned with Feedback: 2. Total: 342.If you are a patch author, please check http://commitfest.cputube.org to be sure your patch still applies, compiles, and passes tests.We need your involvement and participation in reviewing the patches. 
Let's try and make this happen.--Regards.Ibrar AhmedOver the past one week, statuses of 47 patches have been changed from \"Needs review\". This still leaves us with 157 patchesrequiring reviews. As always, your continuous support is appreciated to get us over the line.Please look at the patches requiring review in the current commitfest. Test, provide feedback where needed, and update the patch status.Total: 342.Needs review: 157. Waiting on Author: 74.Ready for Committer: 15. Committed: 68. Moved to next CF: 3. Withdrawn: 19. Rejected: 4. Returned with Feedback: 2.-- Ibrar Ahmed",
"msg_date": "Mon, 19 Jul 2021 16:37:18 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: 2021-07 CF now in progress"
},
{
"msg_contents": "On Mon, Jul 19, 2021 at 4:37 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n\n>\n>\n> On Mon, Jul 12, 2021 at 4:59 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n>\n>>\n>> Hackers,\n>>\n>> The Commitfest 2021-07 is now in progress. It is one of the biggest one.\n>> Total number of patches of this commitfest is 342.\n>>\n>> Needs review: 204.\n>> Waiting on Author: 40.\n>> Ready for Committer: 18.\n>> Committed: 57.\n>> Moved to next CF: 3.\n>> Withdrawn: 15.\n>> Rejected: 3.\n>> Returned with Feedback: 2.\n>> Total: 342.\n>>\n>> If you are a patch author, please check http://commitfest.cputube.org to\n>> be sure your patch still applies, compiles, and passes tests.\n>>\n>> We need your involvement and participation in reviewing the patches.\n>> Let's try and make this happen.\n>>\n>> --\n>> Regards.\n>> Ibrar Ahmed\n>>\n>\n>\n> Over the past one week, statuses of 47 patches have been changed from\n> \"Needs review\". This still leaves us with 157 patches\n> requiring reviews. As always, your continuous support is appreciated to\n> get us over the line.\n>\n> Please look at the patches requiring review in the current commitfest.\n> Test, provide feedback where needed, and update the patch status.\n>\n> Total: 342.\n>\n> Needs review: 157.\n> Waiting on Author: 74.\n> Ready for Committer: 15.\n> Committed: 68.\n> Moved to next CF: 3.\n> Withdrawn: 19.\n> Rejected: 4.\n> Returned with Feedback: 2.\n>\n>\n> --\n> Ibrar Ahmed\n>\n\nOver the past one week, some progress was made, however, there are still\n155 patches in total that require reviews. 
Time to continue pushing for\nmaximising patch reviews and getting stuff committed in PostgreSQL.\n\nTotal: 342.\nNeeds review: 155.\nWaiting on Author: 67.\nReady for Committer: 20.\nCommitted: 70.\nMoved to next CF: 3.\nWithdrawn: 20.\nRejected: 5.\nReturned with Feedback: 2.\n\nThank you for your continued effort and support.\n\n-- \nIbrar Ahmed\n\nOn Mon, Jul 19, 2021 at 4:37 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:On Mon, Jul 12, 2021 at 4:59 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:Hackers,The Commitfest 2021-07 is now in progress. It is one of the biggest one. Total number of patches of this commitfest is 342.Needs review: 204. Waiting on Author: 40. Ready for Committer: 18. Committed: 57. Moved to next CF: 3. Withdrawn: 15. Rejected: 3. Returned with Feedback: 2. Total: 342.If you are a patch author, please check http://commitfest.cputube.org to be sure your patch still applies, compiles, and passes tests.We need your involvement and participation in reviewing the patches. Let's try and make this happen.--Regards.Ibrar AhmedOver the past one week, statuses of 47 patches have been changed from \"Needs review\". This still leaves us with 157 patchesrequiring reviews. As always, your continuous support is appreciated to get us over the line.Please look at the patches requiring review in the current commitfest. Test, provide feedback where needed, and update the patch status.Total: 342.Needs review: 157. Waiting on Author: 74.Ready for Committer: 15. Committed: 68. Moved to next CF: 3. Withdrawn: 19. Rejected: 4. Returned with Feedback: 2.-- Ibrar Ahmed Over the past one week, some progress was made, however, there are still 155 patches in total that require reviews. Time to continue pushing for maximising patch reviews and getting stuff committed in PostgreSQL.Total: 342.Needs review: 155. Waiting on Author: 67. Ready for Committer: 20. Committed: 70. Moved to next CF: 3. Withdrawn: 20. Rejected: 5. Returned with Feedback: 2. 
Thank you for your continued effort and support.-- Ibrar Ahmed",
"msg_date": "Mon, 26 Jul 2021 17:52:39 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: 2021-07 CF now in progress"
},
{
"msg_contents": "On Mon, Jul 26, 2021 at 5:52 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n\n>\n>\n> On Mon, Jul 19, 2021 at 4:37 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n>\n>>\n>>\n>> On Mon, Jul 12, 2021 at 4:59 PM Ibrar Ahmed <ibrar.ahmad@gmail.com>\n>> wrote:\n>>\n>>>\n>>> Hackers,\n>>>\n>>> The Commitfest 2021-07 is now in progress. It is one of the biggest one.\n>>> Total number of patches of this commitfest is 342.\n>>>\n>>> Needs review: 204.\n>>> Waiting on Author: 40.\n>>> Ready for Committer: 18.\n>>> Committed: 57.\n>>> Moved to next CF: 3.\n>>> Withdrawn: 15.\n>>> Rejected: 3.\n>>> Returned with Feedback: 2.\n>>> Total: 342.\n>>>\n>>> If you are a patch author, please check http://commitfest.cputube.org to\n>>> be sure your patch still applies, compiles, and passes tests.\n>>>\n>>> We need your involvement and participation in reviewing the patches.\n>>> Let's try and make this happen.\n>>>\n>>> --\n>>> Regards.\n>>> Ibrar Ahmed\n>>>\n>>\n>>\n>> Over the past one week, statuses of 47 patches have been changed from\n>> \"Needs review\". This still leaves us with 157 patches\n>> requiring reviews. As always, your continuous support is appreciated to\n>> get us over the line.\n>>\n>> Please look at the patches requiring review in the current commitfest.\n>> Test, provide feedback where needed, and update the patch status.\n>>\n>> Total: 342.\n>>\n>> Needs review: 157.\n>> Waiting on Author: 74.\n>> Ready for Committer: 15.\n>> Committed: 68.\n>> Moved to next CF: 3.\n>> Withdrawn: 19.\n>> Rejected: 4.\n>> Returned with Feedback: 2.\n>>\n>>\n>> --\n>> Ibrar Ahmed\n>>\n>\n> Over the past one week, some progress was made, however, there are still\n> 155 patches in total that require reviews. 
Time to continue pushing for\n> maximising patch reviews and getting stuff committed in PostgreSQL.\n>\n> Total: 342.\n> Needs review: 155.\n> Waiting on Author: 67.\n> Ready for Committer: 20.\n> Committed: 70.\n> Moved to next CF: 3.\n> Withdrawn: 20.\n> Rejected: 5.\n> Returned with Feedback: 2.\n>\n> Thank you for your continued effort andI support.\n>\n> --\n> Ibrar Ahmed\n>\n\nHere is the current state of the commitfest. It looks like it should be\nclosed now. I don't have permission to do that.\n\nNeeds review: 148.\nWaiting on Author: 61.\nReady for Committer: 22.\nCommitted: 79.\nMoved to next CF: 4.\nReturned with Feedback: 2.\nRejected: 6.\nWithdrawn: 20.\n\nThanks to everyone who worked on the commitfest.\n\n\n-- \nIbrar Ahmed\n\nOn Mon, Jul 26, 2021 at 5:52 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:On Mon, Jul 19, 2021 at 4:37 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:On Mon, Jul 12, 2021 at 4:59 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:Hackers,The Commitfest 2021-07 is now in progress. It is one of the biggest one. Total number of patches of this commitfest is 342.Needs review: 204. Waiting on Author: 40. Ready for Committer: 18. Committed: 57. Moved to next CF: 3. Withdrawn: 15. Rejected: 3. Returned with Feedback: 2. Total: 342.If you are a patch author, please check http://commitfest.cputube.org to be sure your patch still applies, compiles, and passes tests.We need your involvement and participation in reviewing the patches. Let's try and make this happen.--Regards.Ibrar AhmedOver the past one week, statuses of 47 patches have been changed from \"Needs review\". This still leaves us with 157 patchesrequiring reviews. As always, your continuous support is appreciated to get us over the line.Please look at the patches requiring review in the current commitfest. Test, provide feedback where needed, and update the patch status.Total: 342.Needs review: 157. Waiting on Author: 74.Ready for Committer: 15. Committed: 68. Moved to next CF: 3. 
Withdrawn: 19. Rejected: 4. Returned with Feedback: 2.-- Ibrar Ahmed Over the past one week, some progress was made, however, there are still 155 patches in total that require reviews. Time to continue pushing for maximising patch reviews and getting stuff committed in PostgreSQL.Total: 342.Needs review: 155. Waiting on Author: 67. Ready for Committer: 20. Committed: 70. Moved to next CF: 3. Withdrawn: 20. Rejected: 5. Returned with Feedback: 2. Thank you for your continued effort andI support.-- Ibrar Ahmed\nHere is the current state of the commitfest. It looks like it should be closed now. I don't have permission to do that.Needs review: 148. Waiting on Author: 61. Ready for Committer: 22. Committed: 79. Moved to next CF: 4. Returned with Feedback: 2. Rejected: 6. Withdrawn: 20. Thanks to everyone who worked on the commitfest.-- Ibrar Ahmed",
"msg_date": "Mon, 2 Aug 2021 20:33:53 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: 2021-07 CF now in progress"
},
{
"msg_contents": "On Tue, Aug 3, 2021 at 12:34 AM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n>\n>\n>\n> On Mon, Jul 26, 2021 at 5:52 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n>>\n>>\n>>\n>> On Mon, Jul 19, 2021 at 4:37 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n>>>\n>>>\n>>>\n>>> On Mon, Jul 12, 2021 at 4:59 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n>>>>\n>>>>\n>>>> Hackers,\n>>>>\n>>>> The Commitfest 2021-07 is now in progress. It is one of the biggest one. Total number of patches of this commitfest is 342.\n>>>>\n>>>> Needs review: 204.\n>>>> Waiting on Author: 40.\n>>>> Ready for Committer: 18.\n>>>> Committed: 57.\n>>>> Moved to next CF: 3.\n>>>> Withdrawn: 15.\n>>>> Rejected: 3.\n>>>> Returned with Feedback: 2.\n>>>> Total: 342.\n>>>>\n>>>> If you are a patch author, please check http://commitfest.cputube.org to be sure your patch still applies, compiles, and passes tests.\n>>>>\n>>>> We need your involvement and participation in reviewing the patches. Let's try and make this happen.\n>>>>\n>>>> --\n>>>> Regards.\n>>>> Ibrar Ahmed\n>>>\n>>>\n>>>\n>>> Over the past one week, statuses of 47 patches have been changed from \"Needs review\". This still leaves us with 157 patches\n>>> requiring reviews. As always, your continuous support is appreciated to get us over the line.\n>>>\n>>> Please look at the patches requiring review in the current commitfest. Test, provide feedback where needed, and update the patch status.\n>>>\n>>> Total: 342.\n>>>\n>>> Needs review: 157.\n>>> Waiting on Author: 74.\n>>> Ready for Committer: 15.\n>>> Committed: 68.\n>>> Moved to next CF: 3.\n>>> Withdrawn: 19.\n>>> Rejected: 4.\n>>> Returned with Feedback: 2.\n>>>\n>>>\n>>> --\n>>> Ibrar Ahmed\n>>\n>>\n>> Over the past one week, some progress was made, however, there are still 155 patches in total that require reviews. 
Time to continue pushing for maximising patch reviews and getting stuff committed in PostgreSQL.\n>>\n>> Total: 342.\n>> Needs review: 155.\n>> Waiting on Author: 67.\n>> Ready for Committer: 20.\n>> Committed: 70.\n>> Moved to next CF: 3.\n>> Withdrawn: 20.\n>> Rejected: 5.\n>> Returned with Feedback: 2.\n>>\n>> Thank you for your continued effort andI support.\n>>\n>> --\n>> Ibrar Ahmed\n>\n\nThank you for working as a commitfest manager.\n\n> Here is the current state of the commitfest. It looks like it should be closed now. I don't have permission to do that.\n\nI can close this commitfest. But should we mark uncommitted patches as\n\"Moved to next CF\" or \"Returned with Feedback\" beforehand?\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 3 Aug 2021 10:08:28 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: 2021-07 CF now in progress"
},
{
"msg_contents": "Masahiko Sawada <sawada.mshk@gmail.com> writes:\n> I can close this commitfest. But should we mark uncommitted patches as\n> \"Moved to next CF\" or \"Returned with Feedback\" beforehand?\n\nShould be moved to next CF, or at least most of them should be.\n(But I doubt anyone wants to try to kill off patches that\naren't going anywhere right now.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 02 Aug 2021 21:43:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 2021-07 CF now in progress"
},
{
"msg_contents": "On Tue, Aug 3, 2021 at 10:43 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Masahiko Sawada <sawada.mshk@gmail.com> writes:\n> > I can close this commitfest. But should we mark uncommitted patches as\n> > \"Moved to next CF\" or \"Returned with Feedback\" beforehand?\n>\n> Should be moved to next CF, or at least most of them should be.\n> (But I doubt anyone wants to try to kill off patches that\n> aren't going anywhere right now.)\n\nAgreed. I'll close this commitfest soon. The patches are automatically\nmarked as \"Moved to next CF\" by closing the commitfest?\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 3 Aug 2021 10:50:47 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: 2021-07 CF now in progress"
},
{
"msg_contents": "On Tue, Aug 3, 2021 at 10:50 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Aug 3, 2021 at 10:43 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Masahiko Sawada <sawada.mshk@gmail.com> writes:\n> > > I can close this commitfest. But should we mark uncommitted patches as\n> > > \"Moved to next CF\" or \"Returned with Feedback\" beforehand?\n> >\n> > Should be moved to next CF, or at least most of them should be.\n> > (But I doubt anyone wants to try to kill off patches that\n> > aren't going anywhere right now.)\n>\n> Agreed. I'll close this commitfest soon. The patches are automatically\n> marked as \"Moved to next CF\" by closing the commitfest?\n\nIt seems not.\n\nAnyway, I've closed this commitfest. I'll move uncommitted patch\nentries to the next commitfest.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 3 Aug 2021 11:14:39 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: 2021-07 CF now in progress"
}
] |
[
{
"msg_contents": "Over on [1], Ronan is working on allowing Datum sorts for nodeSort.c\nwhen we're just sorting a single Datum.\n\nI was looking at his v4 patch and noticed that he'd modified\nfree_sort_tuple() to conditionally only free the sort tuple if it's\nnon-NULL. Without this change, the select.sql regression test fails\non:\n\nselect * from onek,\n (values ((select i from\n (values(10000), (2), (389), (1000), (2000), ((select 10029))) as foo(i)\n order by i asc limit 1))) bar (i)\n where onek.unique1 = bar.i;\n\nThe limit 1 makes this a bounded sort and we call free_sort_tuple()\nduring make_bounded_heap().\n\nIt looks like this has likely never come up before because the only\ntime we use tuplesort_set_bound() is in nodeSort.c and\nnodeIncrementalSort.c, none of those currently use datum sorts.\n\nHowever, I'm thinking this is still a bug that should be fixed\nseparately from Ronan's main patch.\n\nDoes anyone else have thoughts on this?\n\nThe fragment in question is:\n\n@@ -4773,6 +4773,14 @@ leader_takeover_tapes(Tuplesortstate *state)\n static void\n free_sort_tuple(Tuplesortstate *state, SortTuple *stup)\n {\n- FREEMEM(state, GetMemoryChunkSpace(stup->tuple));\n- pfree(stup->tuple);\n+ /*\n+ * If the SortTuple is actually only a single Datum, which was not copied\n+ * as it is a byval type, do not try to free it nor account for it in\n+ * memory used.\n+ */\n+ if (stup->tuple)\n+ {\n+ FREEMEM(state, GetMemoryChunkSpace(stup->tuple));\n+ pfree(stup->tuple);\n+ }\n }\n\nDavid\n\n[1] https://www.postgresql.org/message-id/3060002.hb0XKQ11pn@aivenronan\n\n\n",
"msg_date": "Tue, 13 Jul 2021 01:22:26 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Is tuplesort meant to support bounded datum sorts?"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> It looks like this has likely never come up before because the only\n> time we use tuplesort_set_bound() is in nodeSort.c and\n> nodeIncrementalSort.c, none of those currently use datum sorts.\n> However, I'm thinking this is still a bug that should be fixed\n> separately from Ronan's main patch.\n\nYeah, I think you're right. The comment seems a little confused\nthough. Maybe there's no need for it at all --- there's equivalent\ncode in e.g. writetup_datum that has no comment.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 12 Jul 2021 12:10:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Is tuplesort meant to support bounded datum sorts?"
},
{
"msg_contents": "On Tue, 13 Jul 2021 at 04:10, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > It looks like this has likely never come up before because the only\n> > time we use tuplesort_set_bound() is in nodeSort.c and\n> > nodeIncrementalSort.c, none of those currently use datum sorts.\n> > However, I'm thinking this is still a bug that should be fixed\n> > separately from Ronan's main patch.\n>\n> Yeah, I think you're right. The comment seems a little confused\n> though. Maybe there's no need for it at all --- there's equivalent\n> code in e.g. writetup_datum that has no comment.\n\nThanks for looking at this. I've pushed a fix and backpatched.\n\nDavid\n\n\n",
"msg_date": "Tue, 13 Jul 2021 13:36:21 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Is tuplesort meant to support bounded datum sorts?"
}
] |
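The guarded free_sort_tuple() from the diff in this thread can be exercised outside the server. The sketch below is a minimal, self-contained illustration of the NULL check; the SortTuple and Tuplesortstate structs and the chunk_space() helper are simplified stand-ins for PostgreSQL's tuplesort internals, not the real definitions.

```c
#include <assert.h>
#include <stdlib.h>
#include <stddef.h>

/* Simplified stand-ins for tuplesort internals (hypothetical, for illustration). */
typedef struct SortTuple
{
	void	   *tuple;		/* NULL when a by-value Datum was not palloc'd */
	long		datum1;		/* stand-in for the Datum itself */
} SortTuple;

typedef struct Tuplesortstate
{
	long		availMem;	/* stand-in for the accounting FREEMEM() updates */
} Tuplesortstate;

/* Stand-in for GetMemoryChunkSpace(): pretend every chunk occupies 64 bytes. */
static size_t
chunk_space(void *chunk)
{
	(void) chunk;
	return 64;
}

/*
 * Guarded version of free_sort_tuple() as proposed in the diff above: if the
 * SortTuple carries only a by-value Datum (stup->tuple == NULL), neither free
 * it nor account for it in memory used.
 */
static void
free_sort_tuple(Tuplesortstate *state, SortTuple *stup)
{
	if (stup->tuple)
	{
		state->availMem += chunk_space(stup->tuple);	/* FREEMEM() */
		free(stup->tuple);								/* pfree() */
		stup->tuple = NULL;
	}
}
```

With the guard in place, the call is a no-op for a by-value Datum entry instead of handing NULL to pfree(), which is the case a bounded datum sort would hit in make_bounded_heap().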
[
{
"msg_contents": "Hi,\n\nAs suggested in [1], starting a new thread for discussing $subject\nseparately. {pre, post}_auth_delay waiting logic currently uses\npg_usleep which can't detect postmaster death. So, there are chances\nthat some of the backends still stay in the system even when a\npostmaster crashes (for whatever reasons it may be). Please have a\nlook at the attached patch that does $subject. I pulled out some of\nthe comments from the other thread related to the $subject, [2], [3],\n[4], [5].\n\n[1] - https://www.postgresql.org/message-id/YOv8Yxd5zrbr3k%2BH%40paquier.xyz\n[2] - https://www.postgresql.org/message-id/162764.1624892517%40sss.pgh.pa.us\n[3] - https://www.postgresql.org/message-id/20210705.145251.462698229911576780.horikyota.ntt%40gmail.com\n[4] - https://www.postgresql.org/message-id/flat/20210705155553.GD20766%40tamriel.snowman.net\n[5] - https://www.postgresql.org/message-id/YOOnlP4NtWVzfsyb%40paquier.xyz\n\nRegards,\nBharath Rupireddy.",
"msg_date": "Mon, 12 Jul 2021 21:26:19 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Use WaitLatch for {pre, post}_auth_delay instead of pg_usleep"
},
{
"msg_contents": "On Mon, Jul 12, 2021 at 9:26 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Hi,\n>\n> As suggested in [1], starting a new thread for discussing $subject\n> separately. {pre, post}_auth_delay waiting logic currently uses\n> pg_usleep which can't detect postmaster death. So, there are chances\n> that some of the backends still stay in the system even when a\n> postmaster crashes (for whatever reasons it may be). Please have a\n> look at the attached patch that does $subject. I pulled out some of\n> the comments from the other thread related to the $subject, [2], [3],\n> [4], [5].\n>\n> [1] - https://www.postgresql.org/message-id/YOv8Yxd5zrbr3k%2BH%40paquier.xyz\n> [2] - https://www.postgresql.org/message-id/162764.1624892517%40sss.pgh.pa.us\n> [3] - https://www.postgresql.org/message-id/20210705.145251.462698229911576780.horikyota.ntt%40gmail.com\n> [4] - https://www.postgresql.org/message-id/flat/20210705155553.GD20766%40tamriel.snowman.net\n> [5] - https://www.postgresql.org/message-id/YOOnlP4NtWVzfsyb%40paquier.xyz\n\nI added this to the commitfest - https://commitfest.postgresql.org/34/3255/\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Thu, 15 Jul 2021 19:56:47 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Use WaitLatch for {pre, post}_auth_delay instead of pg_usleep"
},
{
"msg_contents": "On 7/12/21, 9:00 AM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\r\n> As suggested in [1], starting a new thread for discussing $subject\r\n> separately. {pre, post}_auth_delay waiting logic currently uses\r\n> pg_usleep which can't detect postmaster death. So, there are chances\r\n> that some of the backends still stay in the system even when a\r\n> postmaster crashes (for whatever reasons it may be). Please have a\r\n> look at the attached patch that does $subject. I pulled out some of\r\n> the comments from the other thread related to the $subject, [2], [3],\r\n> [4], [5].\r\n\r\n+ <row>\r\n+ <entry><literal>PostAuthDelay</literal></entry>\r\n+ <entry>Waiting on connection startup after authentication to allow attach\r\n+ from a debugger.</entry>\r\n+ </row>\r\n+ <row>\r\n+ <entry><literal>PreAuthDelay</literal></entry>\r\n+ <entry>Waiting on connection startup before authentication to allow\r\n+ attach from a debugger.</entry>\r\n+ </row>\r\n\r\nI would suggest changing \"attach from a debugger\" to \"attaching with a\r\ndebugger.\"\r\n\r\nif (PreAuthDelay > 0)\r\n-\t\tpg_usleep(PreAuthDelay * 1000000L);\r\n+\t{\r\n+\t\t/*\r\n+\t\t * Do not use WL_LATCH_SET during backend initialization because the\r\n+\t\t * MyLatch may point to shared latch later.\r\n+\t\t */\r\n+\t\t(void) WaitLatch(MyLatch,\r\n+\t\t\t\t\t\t WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,\r\n+\t\t\t\t\t\t PreAuthDelay * 1000L,\r\n+\t\t\t\t\t\t WAIT_EVENT_PRE_AUTH_DELAY);\r\n+\t}\r\n\r\nIIUC you want to use the same set of flags as PostAuthDelay for\r\nPreAuthDelay, but the stated reason in this comment for leaving out\r\nWL_LATCH_SET suggests otherwise. It's not clear to me why the latch\r\npossibly pointing to a shared latch in the future is an issue. Should\r\nthis instead say that we leave out WL_LATCH_SET for consistency with\r\nPostAuthDelay?\r\n\r\nNathan\r\n\r\n",
"msg_date": "Thu, 22 Jul 2021 23:10:14 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: Use WaitLatch for {pre, post}_auth_delay instead of pg_usleep"
},
{
"msg_contents": "On Fri, Jul 23, 2021 at 4:40 AM Bossart, Nathan <bossartn@amazon.com> wrote:\n> I would suggest changing \"attach from a debugger\" to \"attaching with a\n> debugger.\"\n\nThanks. IMO, the following looks better:\n <entry>Waiting on connection startup before authentication to allow\n attaching a debugger to the process.</entry>\n <entry>Waiting on connection startup after authentication to allow\n attaching a debugger to the process.</entry>\n\n> IIUC you want to use the same set of flags as PostAuthDelay for\n> PreAuthDelay, but the stated reason in this comment for leaving out\n> WL_LATCH_SET suggests otherwise. It's not clear to me why the latch\n> possibly pointing to a shared latch in the future is an issue. Should\n> this instead say that we leave out WL_LATCH_SET for consistency with\n> PostAuthDelay?\n\nIf WL_LATCH_SET is used for PostAuthDelay, the waiting doesn't happen\nbecause the MyLatch which is a shared latch would be set by\nSwitchToSharedLatch. More details at [1].\nIf WL_LATCH_SET is used for PreAuthDelay, actually there's no problem\nbecause MyLatch is still not initialized properly in BackendInitialize\nwhen waiting for PreAuthDelay, it still points to local latch, but\nlater gets pointed to shared latch and gets set SwitchToSharedLatch.\nBut relying on MyLatch there seems to me somewhat relying on an\nuninitialized variable. More details at [1].\n\nFor PreAuthDelay, with the comment I wanted to say that the MyLatch is\nnot the correct one we would want to wait for. Since there is no\nproblem in using it there, I changed the comment to following:\n /*\n * Let's not use WL_LATCH_SET for PreAuthDelay to be consistent with\n * PostAuthDelay.\n */\n\n[1] - https://www.postgresql.org/message-id/flat/CALj2ACVF8AZi1bK8oH-Qoz3tYVpqFuzxcDRPdF-3p5BvF6GTxA%40mail.gmail.com\n\nRegards,\nBharath Rupireddy.",
"msg_date": "Sat, 24 Jul 2021 21:46:02 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Use WaitLatch for {pre, post}_auth_delay instead of pg_usleep"
},
{
"msg_contents": "On 7/24/21, 9:16 AM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\r\n> On Fri, Jul 23, 2021 at 4:40 AM Bossart, Nathan <bossartn@amazon.com> wrote:\r\n>> I would suggest changing \"attach from a debugger\" to \"attaching with a\r\n>> debugger.\"\r\n>\r\n> Thanks. IMO, the following looks better:\r\n> <entry>Waiting on connection startup before authentication to allow\r\n> attaching a debugger to the process.</entry>\r\n> <entry>Waiting on connection startup after authentication to allow\r\n> attaching a debugger to the process.</entry>\r\n\r\nYour phrasing looks good to me.\r\n\r\n>> IIUC you want to use the same set of flags as PostAuthDelay for\r\n>> PreAuthDelay, but the stated reason in this comment for leaving out\r\n>> WL_LATCH_SET suggests otherwise. It's not clear to me why the latch\r\n>> possibly pointing to a shared latch in the future is an issue. Should\r\n>> this instead say that we leave out WL_LATCH_SET for consistency with\r\n>> PostAuthDelay?\r\n>\r\n> If WL_LATCH_SET is used for PostAuthDelay, the waiting doesn't happen\r\n> because the MyLatch which is a shared latch would be set by\r\n> SwitchToSharedLatch. More details at [1].\r\n> If WL_LATCH_SET is used for PreAuthDelay, actually there's no problem\r\n> because MyLatch is still not initialized properly in BackendInitialize\r\n> when waiting for PreAuthDelay, it still points to local latch, but\r\n> later gets pointed to shared latch and gets set SwitchToSharedLatch.\r\n> But relying on MyLatch there seems to me somewhat relying on an\r\n> uninitialized variable. More details at [1].\r\n>\r\n> For PreAuthDelay, with the comment I wanted to say that the MyLatch is\r\n> not the correct one we would want to wait for. 
Since there is no\r\n> problem in using it there, I changed the comment to following:\r\n> /*\r\n> * Let's not use WL_LATCH_SET for PreAuthDelay to be consistent with\r\n> * PostAuthDelay.\r\n> */\r\n\r\nHow about we elaborate a bit?\r\n\r\n WL_LATCH_SET is not used for consistency with PostAuthDelay.\r\n MyLatch isn't fully initialized for the backend at this point,\r\n anyway.\r\n\r\n+\t\t/*\r\n+\t\t * PostAuthDelay will not get applied, if WL_LATCH_SET is used. This\r\n+\t\t * is because the latch could have been set initially.\r\n+\t\t */\r\n\r\nI would suggest the following:\r\n\r\n If WL_LATCH_SET is used, PostAuthDelay may not be applied,\r\n since the latch might already be set.\r\n\r\nOtherwise, this patch looks good and could probably be marked ready-\r\nfor-committer.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Mon, 26 Jul 2021 17:33:03 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: Use WaitLatch for {pre, post}_auth_delay instead of pg_usleep"
},
{
"msg_contents": "On Mon, Jul 26, 2021 at 11:03 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n> > For PreAuthDelay, with the comment I wanted to say that the MyLatch is\n> > not the correct one we would want to wait for. Since there is no\n> > problem in using it there, I changed the comment to following:\n> > /*\n> > * Let's not use WL_LATCH_SET for PreAuthDelay to be consistent with\n> > * PostAuthDelay.\n> > */\n>\n> How about we elaborate a bit?\n>\n> WL_LATCH_SET is not used for consistency with PostAuthDelay.\n> MyLatch isn't fully initialized for the backend at this point,\n> anyway.\n\n+1.\n\n> + /*\n> + * PostAuthDelay will not get applied, if WL_LATCH_SET is used. This\n> + * is because the latch could have been set initially.\n> + */\n>\n> I would suggest the following:\n>\n> If WL_LATCH_SET is used, PostAuthDelay may not be applied,\n> since the latch might already be set.\n\n+1.\n\n> Otherwise, this patch looks good and could probably be marked ready-\n> for-committer.\n\nPSA v3 patch.\n\nRegards,\nBharath Rupireddy.",
"msg_date": "Tue, 27 Jul 2021 11:04:09 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Use WaitLatch for {pre, post}_auth_delay instead of pg_usleep"
},
{
"msg_contents": "On 7/26/21, 10:34 PM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\r\n> PSA v3 patch.\r\n\r\nLGTM. The pre/post_auth_delay parameters seem to work as intended,\r\nand they are responsive to postmaster crashes. I didn't find any\r\nexamples of calling WaitLatch() without WL_LATCH_SET, but the function\r\nappears to have support for that. (In fact, it just sets the latch\r\nvariable to NULL in that case, so perhaps we should replace MyLatch\r\nwith NULL in the patch.) I do see that WaitLatchOrSocket() is\r\nsometimes called without WL_LATCH_SET, though.\r\n\r\nI am marking this patch as ready-for-committer.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Tue, 27 Jul 2021 17:23:18 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: Use WaitLatch for {pre, post}_auth_delay instead of pg_usleep"
},
{
"msg_contents": "Hello.\n\nAt Tue, 27 Jul 2021 11:04:09 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> On Mon, Jul 26, 2021 at 11:03 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n> PSA v3 patch.\n\nI have some comments.\n\n- No harm, but it's pointless to feed MyLatch to WaitLatch when\n WL_LATCH_SET is not set (or rather misleading).\n\n- It seems that the additional wait-event is effectively useless for\n most of the processes. Considering that this feature is for debugging\n purpose, it'd be better to use ps display instead (or additionally)\n if we want to see the wait event anywhere.\n\nThe events of autovacuum workers can be seen in pg_stat_activity properly.\n\nFor client-backends, that state cannot be seen in\npg_stat_activity. That seems inevitable since backends aren't\nallocated a PGPROC entry yet at that time. (So the wait event is set\nto local memory as a safety measure in this case.) On the other hand,\nI find it inconvenient that the ps display is shown as just \"postgres\"\nwhile in that state. I think we can show \"postgres: preauth waiting\"\nor something. (It is shown as \"postgres: username dbname [conn]\ninitializing\" while PostAuthDelay)\n\nBackground workers behave the same way to client backends for the same\nreason to the above. We might be able to *fix* that but I'm not sure\nit's worth doing that only for this specific case.\n\nAutovacuum launcher is seen in pg_stat_activity but clients cannot\nstart connection before autovac launcher starts unless unless process\nstartup time is largely fluctuated. So the status is effectively\nuseless in regard to the process.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 28 Jul 2021 11:42:15 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Use WaitLatch for {pre, post}_auth_delay instead of pg_usleep"
},
{
"msg_contents": "On Wed, Jul 28, 2021 at 8:12 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> Hello.\n>\n> At Tue, 27 Jul 2021 11:04:09 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in\n> > On Mon, Jul 26, 2021 at 11:03 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n> > PSA v3 patch.\n>\n> I have some comments.\n>\n> - No harm, but it's pointless to feed MyLatch to WaitLatch when\n> WL_LATCH_SET is not set (or rather misleading).\n\n+1. I can send NULL to WaitLatch.\n\n> - It seems that the additional wait-event is effectively useless for\n> most of the processes. Considering that this feature is for debugging\n> purpose, it'd be better to use ps display instead (or additionally)\n> if we want to see the wait event anywhere.\n\nHm. That's a good idea to show up in the ps display.\n\n> The events of autovacuum workers can be seen in pg_stat_activity properly.\n>\n> For client-backends, that state cannot be seen in\n> pg_stat_activity. That seems inevitable since backends aren't\n> allocated a PGPROC entry yet at that time. (So the wait event is set\n> to local memory as a safety measure in this case.) On the other hand,\n> I find it inconvenient that the ps display is shown as just \"postgres\"\n> while in that state. I think we can show \"postgres: preauth waiting\"\n> or something. (It is shown as \"postgres: username dbname [conn]\n> initializing\" while PostAuthDelay)\n\nHm. Is n't it better to show something like below in the ps display?\nfor pre_auth_delay: \"postgres: pre auth delay\"\nfor post_auth_delay: \"postgres: <<existing message>> post auth delay\"\n\nBut, I'm not sure whether this ps display thing will add any value to\nthe end user who always can't see the ps display. So, how about having\nboth i.e. ps display (useful for pre auth delay cases) and wait event\n(useful for post auth delay)?\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Wed, 28 Jul 2021 21:10:35 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Use WaitLatch for {pre, post}_auth_delay instead of pg_usleep"
},
{
"msg_contents": "On Wed, Jul 28, 2021 at 09:10:35PM +0530, Bharath Rupireddy wrote:\n> On Wed, Jul 28, 2021 at 8:12 AM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> >\n> > - It seems that the additional wait-event is effectively useless for\n> > most of the processes. Considering that this feature is for debugging\n> > purpose, it'd be better to use ps display instead (or additionally)\n> > if we want to see the wait event anywhere.\n> \n> Hm. That's a good idea to show up in the ps display.\n\nKeep in mind that ps is apparently so expensive under windows that the GUC\ndefaults to off.\n\nThe admin can leave the ps display off, but I wonder if it's of any concern\nthat something so expensive can be caused by an unauthenticated connection.\n\n> > The events of autovacuum workers can be seen in pg_stat_activity properly.\n> >\n> > For client-backends, that state cannot be seen in\n> > pg_stat_activity. That seems inevitable since backends aren't\n> > allocated a PGPROC entry yet at that time. (So the wait event is set\n> > to local memory as a safety measure in this case.) On the other hand,\n> > I find it inconvenient that the ps display is shown as just \"postgres\"\n> > while in that state. I think we can show \"postgres: preauth waiting\"\n> > or something. (It is shown as \"postgres: username dbname [conn]\n> > initializing\" while PostAuthDelay)\n> \n> Hm. Is n't it better to show something like below in the ps display?\n> for pre_auth_delay: \"postgres: pre auth delay\"\n> for post_auth_delay: \"postgres: <<existing message>> post auth delay\"\n> \n> But, I'm not sure whether this ps display thing will add any value to\n> the end user who always can't see the ps display. So, how about having\n> both i.e. ps display (useful for pre auth delay cases) and wait event\n> (useful for post auth delay)?\n\n\n",
"msg_date": "Wed, 28 Jul 2021 13:16:41 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Use WaitLatch for {pre, post}_auth_delay instead of pg_usleep"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Wed, Jul 28, 2021 at 09:10:35PM +0530, Bharath Rupireddy wrote:\n>> Hm. That's a good idea to show up in the ps display.\n\n> Keep in mind that ps is apparently so expensive under windows that the GUC\n> defaults to off.\n> The admin can leave the ps display off, but I wonder if it's of any concern\n> that something so expensive can be caused by an unauthenticated connection.\n\nI'm detecting a certain amount of lily-gilding here. Neither of these\ndelays are meant for anything except debugging purposes, and nobody as\nfar as I've heard has ever expressed great concern about identifying\nwhich process they need to attach to for that purpose. So I think it\nis a *complete* waste of time to add any cycles to connection startup\nto make these delays more visible.\n\nI follow the idea of using WaitLatch to ensure that the delays are\ninterruptible by postmaster signals, but even that isn't worth a\nlot given the expected use of these things. I don't see a need to\nexpend any extra effort on wait-reporting.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 28 Jul 2021 14:32:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Use WaitLatch for {pre, post}_auth_delay instead of pg_usleep"
},
{
"msg_contents": "On 7/28/21, 11:32 AM, \"Tom Lane\" <tgl@sss.pgh.pa.us> wrote:\r\n> I'm detecting a certain amount of lily-gilding here. Neither of these\r\n> delays are meant for anything except debugging purposes, and nobody as\r\n> far as I've heard has ever expressed great concern about identifying\r\n> which process they need to attach to for that purpose. So I think it\r\n> is a *complete* waste of time to add any cycles to connection startup\r\n> to make these delays more visible.\r\n>\r\n> I follow the idea of using WaitLatch to ensure that the delays are\r\n> interruptible by postmaster signals, but even that isn't worth a\r\n> lot given the expected use of these things. I don't see a need to\r\n> expend any extra effort on wait-reporting.\r\n\r\n+1. The proposed patch doesn't make the delay visibility any worse\r\nthan what's already there.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Wed, 28 Jul 2021 20:28:12 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: Use WaitLatch for {pre, post}_auth_delay instead of pg_usleep"
},
{
"msg_contents": "On Wed, Jul 28, 2021 at 08:28:12PM +0000, Bossart, Nathan wrote:\n> On 7/28/21, 11:32 AM, \"Tom Lane\" <tgl@sss.pgh.pa.us> wrote:\n>> I follow the idea of using WaitLatch to ensure that the delays are\n>> interruptible by postmaster signals, but even that isn't worth a\n>> lot given the expected use of these things. I don't see a need to\n>> expend any extra effort on wait-reporting.\n> \n> +1. The proposed patch doesn't make the delay visibility any worse\n> than what's already there.\n\nAgreed to just drop the patch (my opinion about this patch is\nunchanged). Not to mention that wait events are not available at SQL\nlevel at this stage yet.\n--\nMichael",
"msg_date": "Thu, 29 Jul 2021 09:52:08 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Use WaitLatch for {pre, post}_auth_delay instead of pg_usleep"
},
{
"msg_contents": "At Thu, 29 Jul 2021 09:52:08 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Wed, Jul 28, 2021 at 08:28:12PM +0000, Bossart, Nathan wrote:\n> > On 7/28/21, 11:32 AM, \"Tom Lane\" <tgl@sss.pgh.pa.us> wrote:\n> >> I follow the idea of using WaitLatch to ensure that the delays are\n> >> interruptible by postmaster signals, but even that isn't worth a\n> >> lot given the expected use of these things. I don't see a need to\n> >> expend any extra effort on wait-reporting.\n> > \n> > +1. The proposed patch doesn't make the delay visibility any worse\n> > than what's already there.\n> \n> Agreed to just drop the patch (my opinion about this patch is\n> unchanged). Not to mention that wait events are not available at SQL\n> level at this stage yet.\n\nI'm +1 to not adding wait event stuff at all. So the only advantage\nthis patch would offer is interruptivity. I vote +-0.0 for adding that\ninterruptivity (+1.0 from the previous opinion of mine:p).\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 29 Jul 2021 16:59:21 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Use WaitLatch for {pre, post}_auth_delay instead of pg_usleep"
},
{
"msg_contents": "On 7/29/21, 12:59 AM, \"Kyotaro Horiguchi\" <horikyota.ntt@gmail.com> wrote:\r\n> At Thu, 29 Jul 2021 09:52:08 +0900, Michael Paquier <michael@paquier.xyz> wrote in\r\n>> On Wed, Jul 28, 2021 at 08:28:12PM +0000, Bossart, Nathan wrote:\r\n>> > On 7/28/21, 11:32 AM, \"Tom Lane\" <tgl@sss.pgh.pa.us> wrote:\r\n>> >> I follow the idea of using WaitLatch to ensure that the delays are\r\n>> >> interruptible by postmaster signals, but even that isn't worth a\r\n>> >> lot given the expected use of these things. I don't see a need to\r\n>> >> expend any extra effort on wait-reporting.\r\n>> >\r\n>> > +1. The proposed patch doesn't make the delay visibility any worse\r\n>> > than what's already there.\r\n>>\r\n>> Agreed to just drop the patch (my opinion about this patch is\r\n>> unchanged). Not to mention that wait events are not available at SQL\r\n>> level at this stage yet.\r\n>\r\n> I'm +1 to not adding wait event stuff at all. So the only advantage\r\n> this patch would offer is interruptivity. I vote +-0.0 for adding that\r\n> interruptivity (+1.0 from the previous opinion of mine:p).\r\n\r\nI'm still in favor of moving to WaitLatch() for pre/post_auth_delay,\r\nbut I don't think we should worry about the wait reporting stuff. The\r\npatch doesn't add a tremendous amount of complexity, it improves the\r\nbehavior on postmaster crashes, and it follows the best practice\r\ndescribed in pgsleep.c of using WaitLatch() for long sleeps.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Thu, 29 Jul 2021 23:27:32 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: Use WaitLatch for {pre, post}_auth_delay instead of pg_usleep"
}
] |
[
{
"msg_contents": "Autoconf's AC_CHECK_DECLS always defines HAVE_DECL_whatever\nas 1 or 0, but some of the entries in msvc/Solution.pm show\nsuch symbols as \"undef\" instead. Shouldn't we fix it as\nper attached? This is probably only cosmetic at the moment,\nbut it could bite us someday if someone wrote a complex\nconditional using one of these symbols.\n\nThese apparently-bogus values date to Peter's 8f4fb4c64,\nwhich created that table; but AFAICS it was just faithfully\nemulating the previous confused state of affairs.\n\n\t\t\tregards, tom lane",
"msg_date": "Mon, 12 Jul 2021 19:46:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Bogus HAVE_DECL_FOO entries in msvc/Solution.pm"
},
{
"msg_contents": "On Mon, Jul 12, 2021 at 07:46:32PM -0400, Tom Lane wrote:\n> Autoconf's AC_CHECK_DECLS always defines HAVE_DECL_whatever\n> as 1 or 0, but some of the entries in msvc/Solution.pm show\n> such symbols as \"undef\" instead. Shouldn't we fix it as\n> per attached? This is probably only cosmetic at the moment,\n> but it could bite us someday if someone wrote a complex\n> conditional using one of these symbols.\n\nHmm. I have not tested, but agreed that this is inconsistent. I\nwould tend to vote for a backpatch to keep some consistency across the\nbranches as changes in this area could easily lead to rather conflicts\nharder to parse.\n--\nMichael",
"msg_date": "Tue, 13 Jul 2021 10:56:42 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Bogus HAVE_DECL_FOO entries in msvc/Solution.pm"
},
{
"msg_contents": "On 13.07.21 01:46, Tom Lane wrote:\n> Autoconf's AC_CHECK_DECLS always defines HAVE_DECL_whatever\n> as 1 or 0, but some of the entries in msvc/Solution.pm show\n> such symbols as \"undef\" instead. Shouldn't we fix it as\n> per attached? This is probably only cosmetic at the moment,\n> but it could bite us someday if someone wrote a complex\n> conditional using one of these symbols.\n\nYes, I think that is correct.\n\n\n",
"msg_date": "Tue, 13 Jul 2021 06:17:01 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Bogus HAVE_DECL_FOO entries in msvc/Solution.pm"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Mon, Jul 12, 2021 at 07:46:32PM -0400, Tom Lane wrote:\n>> Autoconf's AC_CHECK_DECLS always defines HAVE_DECL_whatever\n>> as 1 or 0, but some of the entries in msvc/Solution.pm show\n>> such symbols as \"undef\" instead. Shouldn't we fix it as\n>> per attached? This is probably only cosmetic at the moment,\n>> but it could bite us someday if someone wrote a complex\n>> conditional using one of these symbols.\n\n> Hmm. I have not tested, but agreed that this is inconsistent. I\n> would tend to vote for a backpatch to keep some consistency across the\n> branches as changes in this area could easily lead to rather conflicts\n> harder to parse.\n\nThat's easy enough in v13 and up, which have 8f4fb4c64 so that\nSolution.pm looks like this. We could make it consistent in older\nbranches by manually hacking pg_config.h.win32 ... but I'm wondering\nif the smarter plan wouldn't be to back-patch 8f4fb4c64. Without\nthat, we're at risk of messing up anytime we back-patch something\nthat involves a change in the set of configure-defined symbols, which\nwe do with some regularity.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 13 Jul 2021 00:25:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Bogus HAVE_DECL_FOO entries in msvc/Solution.pm"
},
{
"msg_contents": "On Tue, Jul 13, 2021 at 12:25:06AM -0400, Tom Lane wrote:\n> That's easy enough in v13 and up, which have 8f4fb4c64 so that\n> Solution.pm looks like this. We could make it consistent in older\n> branches by manually hacking pg_config.h.win32 ... but I'm wondering\n> if the smarter plan wouldn't be to back-patch 8f4fb4c64. Without\n> that, we're at risk of messing up anytime we back-patch something\n> that involves a change in the set of configure-defined symbols, which\n> we do with some regularity.\n\nI was thinking to just do the easiest move and fix this issue down to\n13, not bothering about older branches :p\n\nLooking at the commit, a backpatch is not that complicated and it is\npossible to check the generation of pg_config.h on non-MSVC\nenvironments if some objects are missing. Still, I think that it\nwould be better to be careful and test this code properly on Windows\nwith a real build. It means that.. Err... Andrew or I should look\nat that. I am not sure that the potential maintenance gain is worth\npoking at the stable branches, to be honest.\n--\nMichael",
"msg_date": "Tue, 13 Jul 2021 16:53:21 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Bogus HAVE_DECL_FOO entries in msvc/Solution.pm"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Tue, Jul 13, 2021 at 12:25:06AM -0400, Tom Lane wrote:\n>> That's easy enough in v13 and up, which have 8f4fb4c64 so that\n>> Solution.pm looks like this. We could make it consistent in older\n>> branches by manually hacking pg_config.h.win32 ... but I'm wondering\n>> if the smarter plan wouldn't be to back-patch 8f4fb4c64.\n\n> ... I am not sure that the potential maintenance gain is worth\n> poking at the stable branches, to be honest.\n\nFair enough. I wasn't very eager to do the legwork on that, either,\ngiven that the issue is (so far) only cosmetic.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 13 Jul 2021 09:58:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Bogus HAVE_DECL_FOO entries in msvc/Solution.pm"
},
{
"msg_contents": "On 13.07.21 09:53, Michael Paquier wrote:\n> I was thinking to just do the easiest move and fix this issue down to\n> 13, not bothering about older branches :p\n> \n> Looking at the commit, a backpatch is not that complicated and it is\n> possible to check the generation of pg_config.h on non-MSVC\n> environments if some objects are missing. Still, I think that it\n> would be better to be careful and test this code properly on Windows\n> with a real build. It means that.. Err... Andrew or I should look\n> at that. I am not sure that the potential maintenance gain is worth\n> poking at the stable branches, to be honest.\n\nWe have lived with the previous system for a decade, so I think \nbackpatching this would be a bit excessive.\n\n\n",
"msg_date": "Tue, 13 Jul 2021 17:01:53 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Bogus HAVE_DECL_FOO entries in msvc/Solution.pm"
}
] |
[
{
"msg_contents": "Hello,\n\nDuring reading the documentation of libpq [1] , I found the following\ndescription:\n\n In the nonblocking state, calls to PQsendQuery, PQputline, PQputnbytes,\n PQputCopyData, and PQendcopy will not block but instead return an error\n if they need to be called again.\n\n[1] https://www.postgresql.org/docs/devel/libpq-async.html\n\nHowever, looking into the code, PQsendQuery seems not to return an error\nin non-bloking mode even if unable to send all data. In such cases,\npqSendSome will return 1 but it doesn't cause an error. Moreover,\nwe would not need to call PQsendQuery again. Indead, we need to call\nPQflush until it returns 0, as documented with regard to PQflush.\n\nDo we need to fix the description of PQsetnonblocking?\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Tue, 13 Jul 2021 11:59:49 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Question about non-blocking mode in libpq"
},
{
"msg_contents": "On Tue, 13 Jul 2021 11:59:49 +0900\nYugo NAGATA <nagata@sraoss.co.jp> wrote:\n\n> Hello,\n> \n> During reading the documentation of libpq [1] , I found the following\n> description:\n> \n> In the nonblocking state, calls to PQsendQuery, PQputline, PQputnbytes,\n> PQputCopyData, and PQendcopy will not block but instead return an error\n> if they need to be called again.\n> \n> [1] https://www.postgresql.org/docs/devel/libpq-async.html\n> \n> However, looking into the code, PQsendQuery seems not to return an error\n> in non-bloking mode even if unable to send all data. In such cases,\n> pqSendSome will return 1 but it doesn't cause an error. Moreover,\n> we would not need to call PQsendQuery again. Indead, we need to call\n> PQflush until it returns 0, as documented with regard to PQflush.\n> \n> Do we need to fix the description of PQsetnonblocking?\n\nI have further questions. Reading the following statement:\n\n \"In the nonblocking state, calls to PQsendQuery, PQputline, PQputnbytes,\n PQputCopyData, and PQendcopy will not block\" \n\nthis seems to me that this is a list of functions that could block in blocking\nmode, but I wander PQflush also could block because it calls pqSendSome, right?\n\nAlso, in the last paragraph of the section, I can find the following:\n\n \"After sending any command or data on a nonblocking connection, call PQflush. ...\"\n\nHowever, ISTM we don't need to call PQflush in non-bloking mode and we can\ncall PQgetResult immediately because PQgetResult internally calls pqFlush\nuntil it returns 0 (or -1).\n\n /*\n * If data remains unsent, send it. Else we might be waiting for the\n * result of a command the backend hasn't even got yet.\n */\n while ((flushResult = pqFlush(conn)) > 0) \n {\n if (pqWait(false, true, conn))\n {\n flushResult = -1;\n break;\n }\n }\n\nTherefore, I wander the last paragraph of this section is\nnow unnecessary. right?\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Mon, 19 Jul 2021 23:11:29 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Question about non-blocking mode in libpq"
},
{
"msg_contents": "On 2021-Jul-19, Yugo NAGATA wrote:\n\n> On Tue, 13 Jul 2021 11:59:49 +0900\n> Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n\n> > However, looking into the code, PQsendQuery seems not to return an error\n> > in non-bloking mode even if unable to send all data. In such cases,\n> > pqSendSome will return 1 but it doesn't cause an error. Moreover,\n> > we would not need to call PQsendQuery again. Indead, we need to call\n> > PQflush until it returns 0, as documented with regard to PQflush.\n> > \n> > Do we need to fix the description of PQsetnonblocking?\n\nYeah, I think you're right -- these functions don't error out, the\ncommands are just stored locally in the output buffer.\n\n> \"In the nonblocking state, calls to PQsendQuery, PQputline, PQputnbytes,\n> PQputCopyData, and PQendcopy will not block\" \n> \n> this seems to me that this is a list of functions that could block in blocking\n> mode, but I wander PQflush also could block because it calls pqSendSome, right?\n\nI don't see that. If pqSendSome can't write anything, it'll just return 1.\n\n> Also, in the last paragraph of the section, I can find the following:\n> \n> \"After sending any command or data on a nonblocking connection, call PQflush. ...\"\n> \n> However, ISTM we don't need to call PQflush in non-bloking mode and we can\n> call PQgetResult immediately because PQgetResult internally calls pqFlush\n> until it returns 0 (or -1).\n\nWell, maybe you don't *need* to PQflush(); but if you don't call it,\nthen the commands will sit in the output buffer indefinitely, which\nmeans the server won't execute them. 
So even if it works to just call\nPQgetResult and have it block, surely you would like to only call\nPQgetResult when the query has already been executed and the result\nalready been received and processed; that is, so that you can call\nPQgetResult and obtain the result immediately, and avoid (say) blocking\na GUI interface while PQgetResult flushes the commands out, the server\nexecutes the query and sends the results back.\n\n> Therefore, I wander the last paragraph of this section is\n> now unnecessary. right?\n\nDoesn't seem so to me.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 20 Jul 2021 12:05:11 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Question about non-blocking mode in libpq"
},
{
"msg_contents": "Hello Alvaro,\n\nOn Tue, 20 Jul 2021 12:05:11 -0400\nAlvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> On 2021-Jul-19, Yugo NAGATA wrote:\n> \n> > On Tue, 13 Jul 2021 11:59:49 +0900\n> > Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> \n> > > However, looking into the code, PQsendQuery seems not to return an error\n> > > in non-bloking mode even if unable to send all data. In such cases,\n> > > pqSendSome will return 1 but it doesn't cause an error. Moreover,\n> > > we would not need to call PQsendQuery again. Indead, we need to call\n> > > PQflush until it returns 0, as documented with regard to PQflush.\n> > > \n> > > Do we need to fix the description of PQsetnonblocking?\n> \n> Yeah, I think you're right -- these functions don't error out, the\n> commands are just stored locally in the output buffer.\n\nThank you for your explanation!\nI attached a patch fix the description.\n\n> > \"In the nonblocking state, calls to PQsendQuery, PQputline, PQputnbytes,\n> > PQputCopyData, and PQendcopy will not block\" \n> > \n> > this seems to me that this is a list of functions that could block in blocking\n> > mode, but I wander PQflush also could block because it calls pqSendSome, right?\n> \n> I don't see that. If pqSendSome can't write anything, it'll just return 1.\n\nWell, is this the case of non-blocking mode, nor? If I understood correctly,\npqSendSome could block in blocking mode, so PQflush could block, too. I thought\nwe should add PQflush to the list in the description to enphasis that this would\nnot block in non-blocking mode. However, now I don't think so because PQflush\nseems useful only in non-blocking mode.\n\n> > Also, in the last paragraph of the section, I can find the following:\n> > \n> > \"After sending any command or data on a nonblocking connection, call PQflush. 
...\"\n> > \n> > However, ISTM we don't need to call PQflush in non-bloking mode and we can\n> > call PQgetResult immediately because PQgetResult internally calls pqFlush\n> > until it returns 0 (or -1).\n> \n> Well, maybe you don't *need* to PQflush(); but if you don't call it,\n> then the commands will sit in the output buffer indefinitely, which\n> means the server won't execute them. So even if it works to just call\n> PQgetResult and have it block, surely you would like to only call\n> PQgetResult when the query has already been executed and the result\n> already been received and processed; that is, so that you can call\n> PQgetResult and obtain the result immediately, and avoid (say) blocking\n> a GUI interface while PQgetResult flushes the commands out, the server\n> executes the query and sends the results back.\n\nI understood that, although PQgetResult() also flushes the buffer, we still\nshould call PQflush() beforehand because we would not like get blocked after\ncalling PQgetResult(). Thanks.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>",
"msg_date": "Wed, 21 Jul 2021 10:15:09 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Question about non-blocking mode in libpq"
},
{
"msg_contents": "On Wed, Jul 21, 2021 at 10:15:09AM +0900, Yugo NAGATA wrote:\n> I understood that, although PQgetResult() also flushes the buffer, we still\n> should call PQflush() beforehand because we would not like get blocked after\n> calling PQgetResult(). Thanks.\n\nI modified your patch, attached, that I would like to apply to all\nsupported versions.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.",
"msg_date": "Tue, 31 Oct 2023 12:58:18 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Question about non-blocking mode in libpq"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> I modified your patch, attached, that I would like to apply to all\n> supported versions.\n\nThis seems to have lost the information about what to do if these\nfunctions fail. I think probably the only possible failure cause\nin nonblock mode is \"unable to enlarge the buffer because OOM\",\nbut that's certainly not the same thing as \"cannot fail\".\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 31 Oct 2023 13:58:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Question about non-blocking mode in libpq"
},
{
"msg_contents": "On Tue, Oct 31, 2023 at 01:58:34PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > I modified your patch, attached, that I would like to apply to all\n> > supported versions.\n> \n> This seems to have lost the information about what to do if these\n> functions fail. I think probably the only possible failure cause\n> in nonblock mode is \"unable to enlarge the buffer because OOM\",\n> but that's certainly not the same thing as \"cannot fail\".\n\nOkay, I added \"_successful_ calls\", attached. I am not sure what else\nto add.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.",
"msg_date": "Tue, 31 Oct 2023 17:16:56 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Question about non-blocking mode in libpq"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> Okay, I added \"_successful_ calls\", attached. I am not sure what else\n> to add.\n\nWhat I'm objecting to is removal of the bit about \"if they need to be\ncalled again\". That provides a hint that retry is the appropriate\nresponse to a failure. Admittedly, it's not 100% clear, but your\nversion makes it 0% clear.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 31 Oct 2023 21:11:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Question about non-blocking mode in libpq"
},
{
"msg_contents": "On Tue, Oct 31, 2023 at 09:11:06PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > Okay, I added \"_successful_ calls\", attached. I am not sure what else\n> > to add.\n> \n> What I'm objecting to is removal of the bit about \"if they need to be\n> called again\". That provides a hint that retry is the appropriate\n> response to a failure. Admittedly, it's not 100% clear, but your\n> version makes it 0% clear.\n\nI thought the original docs said you had to re-call on failure (it would\nnot block but it would fail if it could not be sent), while we are now\nsaying that it will be queued in the input buffer.\n\nIs retry really something we need to mention now? If out of memory is\nour only failure case now (\"unable to enlarge the buffer because OOM\"),\nis retry really a realistic option?\n\nAm I missing something?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Tue, 31 Oct 2023 21:55:49 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Question about non-blocking mode in libpq"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Tue, Oct 31, 2023 at 09:11:06PM -0400, Tom Lane wrote:\n>> What I'm objecting to is removal of the bit about \"if they need to be\n>> called again\". That provides a hint that retry is the appropriate\n>> response to a failure. Admittedly, it's not 100% clear, but your\n>> version makes it 0% clear.\n\n> I thought the original docs said you had to re-call on failure (it would\n> not block but it would fail if it could not be sent), while we are now\n> saying that it will be queued in the input buffer.\n\nFor these functions in nonblock mode, failure means \"we didn't queue it\".\n\n> Is retry really something we need to mention now? If out of memory is\n> our only failure case now (\"unable to enlarge the buffer because OOM\"),\n> is retry really a realistic option?\n\nWell, ideally the application would do something to alleviate the\nOOM problem before retrying. I don't know if we want to go so far\nas to discuss that. I do object to giving the impression that\nfailure is impossible, which I think your proposed wording does.\n\nAn orthogonal issue with your latest wording is that it's unclear\nwhether *unsuccessful* calls to these functions will block.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 31 Oct 2023 22:16:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Question about non-blocking mode in libpq"
},
{
"msg_contents": "On Tue, Oct 31, 2023 at 10:16:07PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > On Tue, Oct 31, 2023 at 09:11:06PM -0400, Tom Lane wrote:\n> >> What I'm objecting to is removal of the bit about \"if they need to be\n> >> called again\". That provides a hint that retry is the appropriate\n> >> response to a failure. Admittedly, it's not 100% clear, but your\n> >> version makes it 0% clear.\n> \n> > I thought the original docs said you had to re-call on failure (it would\n> > not block but it would fail if it could not be sent), while we are now\n> > saying that it will be queued in the input buffer.\n> \n> For these functions in nonblock mode, failure means \"we didn't queue it\".\n> \n> > Is retry really something we need to mention now? If out of memory is\n> > our only failure case now (\"unable to enlarge the buffer because OOM\"),\n> > is retry really a realistic option?\n> \n> Well, ideally the application would do something to alleviate the\n> OOM problem before retrying. I don't know if we want to go so far\n> as to discuss that. I do object to giving the impression that\n> failure is impossible, which I think your proposed wording does.\n> \n> An orthogonal issue with your latest wording is that it's unclear\n> whether *unsuccessful* calls to these functions will block.\n\nOkay, I see your point now. Here is an updated patch that addresses\nboth issues.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.",
"msg_date": "Wed, 1 Nov 2023 08:47:33 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Question about non-blocking mode in libpq"
},
{
"msg_contents": "On Wed, Nov 1, 2023 at 08:47:33AM -0400, Bruce Momjian wrote:\n> On Tue, Oct 31, 2023 at 10:16:07PM -0400, Tom Lane wrote:\n> > Bruce Momjian <bruce@momjian.us> writes:\n> > > On Tue, Oct 31, 2023 at 09:11:06PM -0400, Tom Lane wrote:\n> > >> What I'm objecting to is removal of the bit about \"if they need to be\n> > >> called again\". That provides a hint that retry is the appropriate\n> > >> response to a failure. Admittedly, it's not 100% clear, but your\n> > >> version makes it 0% clear.\n> > \n> > > I thought the original docs said you had to re-call on failure (it would\n> > > not block but it would fail if it could not be sent), while we are now\n> > > saying that it will be queued in the input buffer.\n> > \n> > For these functions in nonblock mode, failure means \"we didn't queue it\".\n> > \n> > > Is retry really something we need to mention now? If out of memory is\n> > > our only failure case now (\"unable to enlarge the buffer because OOM\"),\n> > > is retry really a realistic option?\n> > \n> > Well, ideally the application would do something to alleviate the\n> > OOM problem before retrying. I don't know if we want to go so far\n> > as to discuss that. I do object to giving the impression that\n> > failure is impossible, which I think your proposed wording does.\n> > \n> > An orthogonal issue with your latest wording is that it's unclear\n> > whether *unsuccessful* calls to these functions will block.\n> \n> Okay, I see your point now. Here is an updated patch that addresses\n> both issues.\n\nPatch applied to master.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Mon, 13 Nov 2023 13:01:32 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Question about non-blocking mode in libpq"
},
{
"msg_contents": "On Mon, Nov 13, 2023 at 01:01:32PM -0500, Bruce Momjian wrote:\n> On Wed, Nov 1, 2023 at 08:47:33AM -0400, Bruce Momjian wrote:\n> > On Tue, Oct 31, 2023 at 10:16:07PM -0400, Tom Lane wrote:\n> > > Bruce Momjian <bruce@momjian.us> writes:\n> > > > On Tue, Oct 31, 2023 at 09:11:06PM -0400, Tom Lane wrote:\n> > > >> What I'm objecting to is removal of the bit about \"if they need to be\n> > > >> called again\". That provides a hint that retry is the appropriate\n> > > >> response to a failure. Admittedly, it's not 100% clear, but your\n> > > >> version makes it 0% clear.\n> > > \n> > > > I thought the original docs said you had to re-call on failure (it would\n> > > > not block but it would fail if it could not be sent), while we are now\n> > > > saying that it will be queued in the input buffer.\n> > > \n> > > For these functions in nonblock mode, failure means \"we didn't queue it\".\n> > > \n> > > > Is retry really something we need to mention now? If out of memory is\n> > > > our only failure case now (\"unable to enlarge the buffer because OOM\"),\n> > > > is retry really a realistic option?\n> > > \n> > > Well, ideally the application would do something to alleviate the\n> > > OOM problem before retrying. I don't know if we want to go so far\n> > > as to discuss that. I do object to giving the impression that\n> > > failure is impossible, which I think your proposed wording does.\n> > > \n> > > An orthogonal issue with your latest wording is that it's unclear\n> > > whether *unsuccessful* calls to these functions will block.\n> > \n> > Okay, I see your point now. Here is an updated patch that addresses\n> > both issues.\n> \n> Patch applied to master.\n\nMy apologies, I forgot this needed to backpatched, so done now.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Mon, 13 Nov 2023 14:05:23 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Question about non-blocking mode in libpq"
}
] |
[
{
"msg_contents": "Hi all\n\nIf you're trying to build postgres with Visual Studio Build Tools 16 2019\nusing the optional v140 toolset that installs the Visual Studio 14 2019\nC/C++ toolchain to get binaries that're fully compatible with the EDB\npostgres builds, you may run into some confusing issues.\n\nUse this incantation in cmd.exe (not a powershell.exe or pwsh.exe session)\nto select the VS 16 msbuild with vs 14 compiler:\n\n \"%PROGRAMFILES(x86)%\\Microsoft Visual\nStudio\\2019\\BuildTools\\VC\\Auxiliary\\Build\\vcvarsall.bat\" amd64\n-vcvars_ver=14.0\n\nall on one line, then run src\\tools\\msvc\\build.bat as normal.\n\nIf you instead attempt to use the vcvarsall.bat from the v140 toolchain\nthat VS Build Tools 2019 installed for you, it'll appear to work, but\ncompilation will then fail by spamming:\n\n some.vcxproj(17,3): error MSB4019: The imported project\n\"C:\\Microsoft.Cpp.Default.props\" was not found. Confirm that the path in\nthe <Import> declaration is correct, and that the file exists on disk.\n\nThis is because the v140 toolset does not include the v140 msbuild. You're\nexpected to use the v160 msbuild and configure it to use the v140 toolset\ninstead.\n\nSimilar issues occur when you try to use the CMake generator \"Visual Studio\n14 2015\" with a VS Build Tools 2019-installed version of the 140 toolchain;\nyou have to instead use -G \"Visual Studio 16 2019\" -T \"v140\" to select the\nVS 16 msbuild and tell it to use the v140 toolchain. Crazy stuff.\n\nIf you instead just run:\n\n \"%PROGRAMFILES(x86)%\\Microsoft Visual\nStudio\\2019\\BuildTools\\VC\\Auxiliary\\Build\\vcvarsall.bat\" amd64\n\nThen compilation will run fine, but the resulting binary will use the\nversion 16 MSVC compilers, runtime library and redist, etc.\n\n\nNote that all these builds will target the default Windows 10 SDK. 
That\nshould be fine; we're very conservative in postgres about new Windows\nfeatures and functions, and we do dynamic lookups for a few symbols when\nwe're not sure if they'll be available. But you can force it to compile for\nWindows 7 and higher with by editing Mk.pm and adding the definitions\n\n WINVER=0x0601\n _WIN32_WINNT=0x0601\n\nto your project. I didn't find a way to add custom preprocessor definitions\nin config.pl so for testing purposes I hacked it into MSBuildProject.pm in\nthe <PreprocessorDefinitions> clause as\n\n ;WINVER=0x0601;_WIN32_WINNT=0x0601\n\nI've attached a patch that teaches config.pl about a new 'definitions'\noption to make this more graceful.\n\nSee\nhttps://docs.microsoft.com/en-us/cpp/porting/modifying-winver-and-win32-winnt?view=msvc-160\n\n\nIf you don't have the toolchain installed, you can install Chocolatey\n(there's a one-liner on their website) then:\n\n choco install -y visualstudio2019buildtools\n\n choco install -y visualstudio2019-vc++ --packageparameters \"--add\nMicrosoft.VisualStudio.Component.VC.140\"\n\nYou may also want\n\n choco install -y winflexbison\n\n(I've attached a patch that teaches pgflex.pl and pgbision.pl to use\nwin_flex.exe and win_bison.exe if they're found, and to accept full paths\nfor these tools in buildenv.pl).",
"msg_date": "Tue, 13 Jul 2021 14:10:04 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Quick tip on building pg with Visual Studio Build Tools 2019 + small\n patches"
},
{
"msg_contents": "\nOn 7/13/21 2:10 AM, Craig Ringer wrote:\n>\n>\n> If you don't have the toolchain installed, you can install Chocolatey\n> (there's a one-liner on their website) then:\n>\n> choco install -y visualstudio2019buildtools\n>\n> choco install -y visualstudio2019-vc++ --packageparameters \"--add\n> Microsoft.VisualStudio.Component.VC.140\"\n\n\nThe first of these is probably redundant, and the second might install\nmore than required. Here's my recipe for what I use in testing patches\nwith MSVC:\n\n\nchoco install -y --no-progress --limit-output\nvisualstudio2019-workload-vctools --install-args=\"--add\nMicrosoft.VisualStudio.Component.VC.CLI.Support\"\n\n\nThat gives you the normal command line compilers. After that these\npackages are installed:\n\n\nvcredist140 14.29.30037\nvisualstudio-installer 2.0.1\nvisualstudio2019-workload-vctools 1.0.1\nvisualstudio2019buildtools 16.10.1.0\n\n\n>\n> You may also want\n>\n> choco install -y winflexbison\n>\n> (I've attached a patch that teaches pgflex.pl <http://pgflex.pl> and\n> pgbision.pl <http://pgbision.pl> to use win_flex.exe and win_bison.exe\n> if they're found, and to accept full paths for these tools in\n> buildenv.pl <http://buildenv.pl>).\n\n\n\nA simpler alternative is just to rename the chocolatey shims. Here's a\nps1 fragment I use:\n\n\n $cbin = \"c:\\ProgramData\\chocolatey\\bin\"\n Rename-Item -Path $cbin\\win_bison.exe -NewName bison.exe\n Rename-Item -Path $cbin\\win_flex.exe -NewName flex.exe\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 14 Jul 2021 16:28:13 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Quick tip on building pg with Visual Studio Build Tools 2019 +\n small patches"
}
] |
[
{
"msg_contents": "Hi,\n\nI found a few functions making unnecessary repeated calls to\nPQserverVersion(conn); instead of just calling once and assigning to a\nlocal variable.\n\nPSA a little patch which culls those extra calls.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Tue, 13 Jul 2021 19:02:27 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Remove repeated calls to PQserverVersion"
},
{
"msg_contents": "On Tue, Jul 13, 2021 at 07:02:27PM +1000, Peter Smith wrote:\n> I found a few functions making unnecessary repeated calls to\n> PQserverVersion(conn); instead of just calling once and assigning to a\n> local variable.\n\nDoes it really matter? PQserverVersion() does a simple lookup at the\ninternals of PGconn.\n--\nMichael",
"msg_date": "Tue, 13 Jul 2021 19:36:55 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Remove repeated calls to PQserverVersion"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Tue, Jul 13, 2021 at 07:02:27PM +1000, Peter Smith wrote:\n>> I found a few functions making unnecessary repeated calls to\n>> PQserverVersion(conn); instead of just calling once and assigning to a\n>> local variable.\n\n> Does it really matter? PQserverVersion() does a simple lookup at the\n> internals of PGconn.\n\nYeah, it'd have to be mighty hot code to be worth caring about that,\nand none of these spots look like it could be worth it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 13 Jul 2021 10:15:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Remove repeated calls to PQserverVersion"
},
{
"msg_contents": "On Wed, Jul 14, 2021 at 12:15 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Michael Paquier <michael@paquier.xyz> writes:\n> > On Tue, Jul 13, 2021 at 07:02:27PM +1000, Peter Smith wrote:\n> >> I found a few functions making unnecessary repeated calls to\n> >> PQserverVersion(conn); instead of just calling once and assigning to a\n> >> local variable.\n>\n> > Does it really matter? PQserverVersion() does a simple lookup at the\n> > internals of PGconn.\n>\n> Yeah, it'd have to be mighty hot code to be worth caring about that,\n> and none of these spots look like it could be worth it.\n\nI agree there would be no observable performance improvements.\n\nBut I never made any claims about performance; my motivation for this\ntrivial patch was more like just \"code tidy\" or \"refactor\", so\napplying performance as the only worthiness criteria for a \"code tidy\"\npatch seemed like a misrepresentation here.\n\nOf course you can judge the patch is still not worthwhile for other\nreasons. So be it.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 14 Jul 2021 09:57:27 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Remove repeated calls to PQserverVersion"
},
{
"msg_contents": "On 2021-Jul-14, Peter Smith wrote:\n\n> But I never made any claims about performance; my motivation for this\n> trivial patch was more like just \"code tidy\" or \"refactor\", so\n> applying performance as the only worthiness criteria for a \"code tidy\"\n> patch seemed like a misrepresentation here.\n> \n> Of course you can judge the patch is still not worthwhile for other\n> reasons. So be it.\n\nPersonally, I like the simplicity of the function call in those places,\nbecause when reading just that line one immediately knows where the\nvalue is coming from. If you assign it to a variable, the line is not\nstandalone and I have to find out where the assignment is, and verify\nthat there hasn't been any other assignment in between.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"Pido que me den el Nobel por razones humanitarias\" (Nicanor Parra)\n\n\n",
"msg_date": "Tue, 13 Jul 2021 20:19:30 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Remove repeated calls to PQserverVersion"
},
{
"msg_contents": "> On 14 Jul 2021, at 02:19, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> Personally, I like the simplicity of the function call in those places,\n\n+1\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Wed, 14 Jul 2021 09:02:44 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Remove repeated calls to PQserverVersion"
}
] |
[
{
"msg_contents": "Hello,\n\nI would like to know if there is any interest in working to reduce the usage \nand propagation of resjunk columns in the planner.\n\nI think this topic is worth investigating, because most of the time when we \nrequest a sorted path without ever needing the sort key afterwards, we still \ncarry the sort key to the final tuple, where the JunkFilter will finally get rid \nof it.\n\nRationale\n========\n\nThis would allow several optimizations.\n\n1) Index not needing to output the column \n\nThis one was mentioned as far back as 2009 [1] and is still relevant today. \nIf we query SELECT a FROM t1 ORDER BY b; and we have an index, on b, we \nshouldn't output b at all since we don't need it in the upper nodes. This \nmight not look like a huge problem by itself, but as noted in [1] it becomes \nvery expensive in the case of a functional index. This is alleviated for \nIndexOnlyScan because it is capable of fetching the value from the index \nitself, but it is still a problem.\n\nTake this query as an example:\n\nregression=# explain (verbose) select two from tenk2 order by hundred;\n QUERY PLAN \n-----------------------------------------------------------------------------------------\n Index Scan using tenk2_hundred on public.tenk2 (cost=0.29..1574.20 \nrows=10000 width=8)\n Output: two, hundred\n(2 rows)\n\n\nWe should be able to transform it into:\n\nregression=# explain (verbose) select two from tenk2 order by hundred;\n QUERY PLAN \n-----------------------------------------------------------------------------------------\n Index Scan using tenk2_hundred on public.tenk2 (cost=0.29..1574.20 \nrows=10000 width=4)\n Output: two\n(2 rows)\n\n\n2) Other nodes\n\nOther nodes might benefit from it, for exemple in FDW. 
Right now the sort key \nis always returned from the underlying FDW, but if the data can be sorted that \ncould be a net win.\n\n3) Incremental Sort\n\nWhile working on the patch to allow Sort nodes to use the datumOnly \noptimization, a suggestion came up to also use it in the IncrementalSort. This \nis not possible today because even if we don't need the previously-sorted \ncolumns anymore, we still need to output them as resjunk columns.\n\n4) Narrower tuples in dynamic shared memory.\n\nDSM bandwidth is quite expensive, so if we can avoid exchanging some \nattributes here it could be a net win.\n\n\nProposal\n=======\n\nI've been trying to test this idea using a very simple approach. If that is of \ninterest, I can clean up my branch and post a simple patch to discuss \nspecifics, but I'd like to keep a high level discussion first. \n\nThe idea would be to:\n - \"tag\" resjunk TargetEntries according to why they were added. So a column \nadded as sort group clause would be tagged as such, and be recognisable\n - in the planner, instead of using the whole processed target list to build \nthe finaltarget, we would remove resjunk entries we don't actually need (those \nadded only as sortgroup clauses as of now, but there may be other kind of \nresjunk entries we can safely omit).\n - inject those columns only when generating the input targets needed for \nsorting, grouping, window functions and the likes. \n\nUsing only this already allows optimization number 1), because if no Sort node \nneeds to be added the pathtarget just cascade to the bottom of the path.\n\nThere is one big downside to this: it introduces a mismatch between the \nfinaltarget and the output of the previous node (for example sort). 
This adds a \ncostly Result node everywhere, performing an expensive projection instead of \nthe much simpler JunkFilter we currently have:\n\nregression=# explain (verbose) select two from tenk2 order by four;\n QUERY PLAN \n------------------------------------------------------------------------------\n Result (cost=1109.39..1234.39 rows=10000 width=4)\n Output: two\n -> Sort (cost=1109.39..1134.39 rows=10000 width=8)\n Output: two, four\n Sort Key: tenk2.four\n -> Seq Scan on public.tenk2 (cost=0.00..445.00 rows=10000 width=8)\n Output: two, four\n(7 rows)\n\n\nI think this is something that could easily be solved, either by teaching some \nnodes to do simple projections, consisting only of removing / reordering some \nattributes. This would match what ExecJunkFilter does, generalized to any \nkind of \"subset of attributes\" projection.\n\nAlternatively, we could also perform that at the Result level, leaving \nindividual nodes alone, by implementing a simpler result node using the \nJunkFilter mechanism when it's possible (either with a full-blown \n\"SimpleResult\" specific node, or a different execprocnode in the Result).\n\nIf the idea seems worthy, I'll keep working on it and send you a patch \ndemonstrating the idea.\n\n[1] https://www.postgresql.org/message-id/flat/9957.1250956747%40sss.pgh.pa.us\n\nBest regards,\n\n-- \nRonan Dunklau\n\n\n\n\n",
"msg_date": "Tue, 13 Jul 2021 16:19:27 +0200",
"msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>",
"msg_from_op": true,
"msg_subject": "Early Sort/Group resjunk column elimination."
},
{
"msg_contents": "On Tue, Jul 13, 2021 at 10:19 AM Ronan Dunklau <ronan.dunklau@aiven.io> wrote:\n>\n> Hello,\n>\n> I would like to know if there is any interest in working to reduce the usage\n> and propagation of resjunk columns in the planner.\n>\n> I think this topic is worth investigating, because most of the time when we\n> request a sorted path without ever needing the sort key afterwards, we still\n> carry the sort key to the final tuple, where the JunkFilter will finally get rid\n> of it.\n>\n> Rationale\n> ========\n>\n> This would allow several optimizations.\n>\n> 1) Index not needing to output the column\n>\n> This one was mentioned as far back as 2009 [1] and is still relevant today.\n> If we query SELECT a FROM t1 ORDER BY b; and we have an index, on b, we\n> shouldn't output b at all since we don't need it in the upper nodes. This\n> might not look like a huge problem by itself, but as noted in [1] it becomes\n> very expensive in the case of a functional index. This is alleviated for\n> IndexOnlyScan because it is capable of fetching the value from the index\n> itself, but it is still a problem.\n>\n> Take this query as an example:\n>\n> regression=# explain (verbose) select two from tenk2 order by hundred;\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------\n> Index Scan using tenk2_hundred on public.tenk2 (cost=0.29..1574.20\n> rows=10000 width=8)\n> Output: two, hundred\n> (2 rows)\n>\n>\n> We should be able to transform it into:\n>\n> regression=# explain (verbose) select two from tenk2 order by hundred;\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------\n> Index Scan using tenk2_hundred on public.tenk2 (cost=0.29..1574.20\n> rows=10000 width=4)\n> Output: two\n> (2 rows)\n>\n>\n> 2) Other nodes\n>\n> Other nodes might benefit from it, for exemple in FDW. 
Right now the sort key\n> is always returned from the underlying FDW, but if the data can be sorted that\n> could be a net win.\n>\n> 3) Incremental Sort\n>\n> While working on the patch to allow Sort nodes to use the datumOnly\n> optimization, a suggestion came up to also use it in the IncrementalSort. This\n> is not possible today because even if we don't need the previously-sorted\n> columns anymore, we still need to output them as resjunk columns.\n>\n> 4) Narrower tuples in dynamic shared memory.\n>\n> DSM bandwidth is quite expensive, so if we can avoid exchanging some\n> attributes here it could be a net win.\n>\n>\n> Proposal\n> =======\n>\n> I've been trying to test this idea using a very simple approach. If that is of\n> interest, I can clean up my branch and post a simple patch to discuss\n> specifics, but I'd like to keep a high level discussion first.\n>\n> The idea would be to:\n> - \"tag\" resjunk TargetEntries according to why they were added. So a column\n> added as sort group clause would be tagged as such, and be recognisable\n> - in the planner, instead of using the whole processed target list to build\n> the finaltarget, we would remove resjunk entries we don't actually need (those\n> added only as sortgroup clauses as of now, but there may be other kind of\n> resjunk entries we can safely omit).\n> - inject those columns only when generating the input targets needed for\n> sorting, grouping, window functions and the likes.\n>\n> Using only this already allows optimization number 1), because if no Sort node\n> needs to be added the pathtarget just cascade to the bottom of the path.\n>\n> There is one big downside to this: it introduces a mismatch between the\n> finaltarget and the output of the previous node (for example sort). 
This adds a\n> costly Result node everywhere, performing an expensive projection instead of\n> the much simpler JunkFilter we currently have:\n>\n> regression=# explain (verbose) select two from tenk2 order by four;\n> QUERY PLAN\n> ------------------------------------------------------------------------------\n> Result (cost=1109.39..1234.39 rows=10000 width=4)\n> Output: two\n> -> Sort (cost=1109.39..1134.39 rows=10000 width=8)\n> Output: two, four\n> Sort Key: tenk2.four\n> -> Seq Scan on public.tenk2 (cost=0.00..445.00 rows=10000 width=8)\n> Output: two, four\n> (7 rows)\n>\n>\n> I think this is something that could easily be solved, either by teaching some\n> nodes to do simple projections, consisting only of removing / reordering some\n> attributes. This would match what ExecJunkFilter does, generalized to any\n> kind of \"subset of attributes\" projection.\n>\n> Alternatively, we could also perform that at the Result level, leaving\n> individual nodes alone, by implementing a simpler result node using the\n> JunkFilter mechanism when it's possible (either with a full-blown\n> \"SimpleResult\" specific node, or a different execprocnode in the Result).\n>\n> If the idea seems worthy, I'll keep working on it and send you a patch\n> demonstrating the idea.\n>\n> [1] https://www.postgresql.org/message-id/flat/9957.1250956747%40sss.pgh.pa.us\n\nThanks for hacking on this; as you're not surprised given I made the\noriginal suggestion, I'm particularly interested in this for\nincremental sort benefits, but I find the other examples you gave\ncompelling also.\n\nOf course I haven't seen code yet, but my first intuition is to try to\navoid adding extra nodes and teach the (hopefully few) relevant nodes\nto remove the resjunk entries themselves. 
Presumably in this case that\nwould mostly be the sort nodes (including gather merge).\n\nOne thing to pay attention to here is that we can't necessarily remove\nresjunk entries every time in a sort node since, for example, in\nparallel mode the gather merge node above it will need those entries\nto complete the sort.\n\nI'm interested to see what you're working on with a patch.\n\nThanks,\nJames Coleman\n\n\n",
"msg_date": "Fri, 16 Jul 2021 11:37:15 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Early Sort/Group resjunk column elimination."
},
{
"msg_contents": "Le vendredi 16 juillet 2021, 17:37:15 CEST James Coleman a écrit :\n> Thanks for hacking on this; as you're not surprised given I made the\n> original suggestion, I'm particularly interested in this for\n> incremental sort benefits, but I find the other examples you gave\n> compelling also.\n> \n> Of course I haven't seen code yet, but my first intuition is to try to\n> avoid adding extra nodes and teach the (hopefully few) relevant nodes\n> to remove the resjunk entries themselves. Presumably in this case that\n> would mostly be the sort nodes (including gather merge).\n> \n> One thing to pay attention to here is that we can't necessarily remove\n> resjunk entries every time in a sort node since, for example, in\n> parallel mode the gather merge node above it will need those entries\n> to complete the sort.\n\nYes that is actually a concern, especially as the merge node is already \nhandled specially when applying a projection. \n\n> \n> I'm interested to see what you're working on with a patch.\n\nI am posting this proof-of-concept, for the record, but I don't think the \nnumerous problems can be solved easily. I tried to teach Sort to use a limited \nsort of projection, but it brings its own slate of problems...\n\nQuick list of problems with the current implementation, leaving aside the fact \nthat it's quite hacky in a few places:\n\n* result nodes are added for numerous types of non-projection-capable paths, \nsince the above (final) target includes resjunk columns which should be \neliminated. \n* handling of appendrel seems difficult, as both ordered and unordered appends \nare generated at the same time against the same target\n* I'm having trouble understanding the usefulness of a building physical \ntlists for SubqueryScans\n\nThe second patch is a very hacky way to try to eliminate some generated result \nnodes. 
The idea is to bypass the whole interpreter when using a \"simple\" \nprojection which is just a reduction of the number of columns, and teach Sort \nand Result to perform it. To do this, I added a parameter to \nis_projection_capable_path to make the test depend on the actual asked target: \nfor a sort node, only a \"simple\" projection. \n\nThe implementation uses a junkfilter which assumes nothing else than Const and \nouter var will be present. \n\nI don't feel like this is going anywhere, but at least it's here for \ndiscussion and posterity, if someone is interested.\n\n\n-- \nRonan Dunklau",
"msg_date": "Tue, 20 Jul 2021 17:47:57 +0200",
"msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>",
"msg_from_op": true,
"msg_subject": "Re: Early Sort/Group resjunk column elimination."
}
] |
[
{
"msg_contents": "Hi,\nI was looking at index_drop() in PG 11 branch.\nIn if (concurrent)block, the heap and index relations are overwritten since\nthey were opened a few lines above the concurrent check.\n\nShouldn't the two relations be closed first ?\n\nthanks\n\ndiff --git a/src/backend/catalog/index.c b/src/backend/catalog/index.c\nindex 9d8f873944..625b72ae85 100644\n--- a/src/backend/catalog/index.c\n+++ b/src/backend/catalog/index.c\n@@ -1641,6 +1641,9 @@ index_drop(Oid indexId, bool concurrent)\n * conflicts with existing predicate locks, so now is the\ntime to move\n * them to the heap relation.\n */\n+ heap_close(userHeapRelation, NoLock);\n+ index_close(userIndexRelation, NoLock);\n+\n userHeapRelation = heap_open(heapId,\nShareUpdateExclusiveLock);\n userIndexRelation = index_open(indexId,\nShareUpdateExclusiveLock);\n TransferPredicateLocksToHeapRelation(userIndexRelation);\n\nHi,I was looking at index_drop() in PG 11 branch.In if (concurrent)block, the heap and index relations are overwritten since they were opened a few lines above the concurrent check.Shouldn't the two relations be closed first ?thanksdiff --git a/src/backend/catalog/index.c b/src/backend/catalog/index.cindex 9d8f873944..625b72ae85 100644--- a/src/backend/catalog/index.c+++ b/src/backend/catalog/index.c@@ -1641,6 +1641,9 @@ index_drop(Oid indexId, bool concurrent) * conflicts with existing predicate locks, so now is the time to move * them to the heap relation. */+ heap_close(userHeapRelation, NoLock);+ index_close(userIndexRelation, NoLock);+ userHeapRelation = heap_open(heapId, ShareUpdateExclusiveLock); userIndexRelation = index_open(indexId, ShareUpdateExclusiveLock); TransferPredicateLocksToHeapRelation(userIndexRelation);",
"msg_date": "Tue, 13 Jul 2021 15:13:57 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "closing heap relation"
},
{
"msg_contents": "Hi,\n\nOn Tue, Jul 13, 2021 at 3:13 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n\n> Hi,\n> I was looking at index_drop() in PG 11 branch.\n> In if (concurrent)block, the heap and index relations are overwritten\n> since they were opened a few lines above the concurrent check.\n>\n> Shouldn't the two relations be closed first ?\n>\n> thanks\n>\n> diff --git a/src/backend/catalog/index.c b/src/backend/catalog/index.c\n> index 9d8f873944..625b72ae85 100644\n> --- a/src/backend/catalog/index.c\n> +++ b/src/backend/catalog/index.c\n> @@ -1641,6 +1641,9 @@ index_drop(Oid indexId, bool concurrent)\n> * conflicts with existing predicate locks, so now is the\n> time to move\n> * them to the heap relation.\n> */\n> + heap_close(userHeapRelation, NoLock);\n> + index_close(userIndexRelation, NoLock);\n> +\n> userHeapRelation = heap_open(heapId,\n> ShareUpdateExclusiveLock);\n> userIndexRelation = index_open(indexId,\n> ShareUpdateExclusiveLock);\n> TransferPredicateLocksToHeapRelation(userIndexRelation);\n>\nPlease disregard the above.\n\nThe relations were closed a bit earlier.\n\nHi,On Tue, Jul 13, 2021 at 3:13 PM Zhihong Yu <zyu@yugabyte.com> wrote:Hi,I was looking at index_drop() in PG 11 branch.In if (concurrent)block, the heap and index relations are overwritten since they were opened a few lines above the concurrent check.Shouldn't the two relations be closed first ?thanksdiff --git a/src/backend/catalog/index.c b/src/backend/catalog/index.cindex 9d8f873944..625b72ae85 100644--- a/src/backend/catalog/index.c+++ b/src/backend/catalog/index.c@@ -1641,6 +1641,9 @@ index_drop(Oid indexId, bool concurrent) * conflicts with existing predicate locks, so now is the time to move * them to the heap relation. 
*/+ heap_close(userHeapRelation, NoLock);+ index_close(userIndexRelation, NoLock);+ userHeapRelation = heap_open(heapId, ShareUpdateExclusiveLock); userIndexRelation = index_open(indexId, ShareUpdateExclusiveLock); TransferPredicateLocksToHeapRelation(userIndexRelation);Please disregard the above.The relations were closed a bit earlier.",
"msg_date": "Tue, 13 Jul 2021 15:21:18 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: closing heap relation"
}
] |
[
{
"msg_contents": "\\db+ and \\l+ show sizes of tablespaces and databases, so I was surprised in the\npast that \\dn+ didn't show sizes of schemas. I would find that somewhat\nconvenient, and I assume other people would use it even more useful.\n\n\\db+ and \\l+ seem to walk the filesystem, and this is distinguished from those\ncases. (Also, schemas are per-DB, not global).\n\nMaybe it's an issue if \\dn+ is slow and expensive, since that's how to display\nACL. But \\db+ has the same issue. Maybe there should be a \\db++ and \\dn++ to\nallow \\dn+ to showing the ACL but not the size.\n\npg_relation_size() only includes one fork, and the other functions include\ntoast, which should be in its separate schema, so it has to be summed across\nforks.\n\npostgres=# \\dnS+\n child | postgres | | | 946 MB\n information_schema | postgres | postgres=UC/postgres+| | 88 kB\n | | =U/postgres | | \n pg_catalog | postgres | postgres=UC/postgres+| system catalog schema | 42 MB\n | | =U/postgres | | \n pg_toast | postgres | | reserved schema for TOAST tables | 3908 MB\n public | postgres | postgres=UC/postgres+| standard public schema | 5627 MB\n | | =UC/postgres | | \n\n From c2d68eb54f785c759253d4100460aa1af9cbc676 Mon Sep 17 00:00:00 2001\nFrom: Justin Pryzby <pryzbyj@telsasoft.com>\nDate: Tue, 13 Jul 2021 21:25:48 -0500\nSubject: [PATCH] psql: \\dn+ to show size of each schema..\n\nSee also: 358a897fa, 528ac10c7\n---\n src/bin/psql/describe.c | 5 +++++\n 1 file changed, 5 insertions(+)\n\ndiff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c\nindex 2abf255798..6b9b6ea34a 100644\n--- a/src/bin/psql/describe.c\n+++ b/src/bin/psql/describe.c\n@@ -5036,6 +5036,11 @@ listSchemas(const char *pattern, bool verbose, bool showSystem)\n \t\tappendPQExpBuffer(&buf,\n \t\t\t\t\t\t \",\\n pg_catalog.obj_description(n.oid, 'pg_namespace') AS \\\"%s\\\"\",\n \t\t\t\t\t\t gettext_noop(\"Description\"));\n+\n+\t\tappendPQExpBuffer(&buf,\n+\t\t\t\t\t\t \",\\n (SELECT 
pg_catalog.pg_size_pretty(sum(pg_relation_size(oid,fork))) FROM pg_catalog.pg_class c,\\n\"\n+\t\t\t\t\t\t \" (VALUES('main'),('fsm'),('vm'),('init')) AS fork(fork) WHERE c.relnamespace = n.oid) AS \\\"%s\\\"\",\n+\t\t\t\t\t\t gettext_noop(\"Size\"));\n \t}\n \n \tappendPQExpBufferStr(&buf,\n-- \n2.17.0\n\n\n",
"msg_date": "Tue, 13 Jul 2021 22:07:25 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] psql: \\dn+ to show size of each schema.."
},
{
"msg_contents": "Hi\n\n2021年7月14日(水) 12:07 Justin Pryzby <pryzby@telsasoft.com>:\n>\n> \\db+ and \\l+ show sizes of tablespaces and databases, so I was surprised in the\n> past that \\dn+ didn't show sizes of schemas. I would find that somewhat\n> convenient, and I assume other people would use it even more useful.\n\nIt's something which would be useful to have. But see this previous proposal:\n\n https://www.postgresql.org/message-id/flat/2d6d2ebf-4dbc-4f74-17d8-05461f4782e2%40dalibo.com\n\n\nRegards\n\nIan Barwick\n\n-- \nEnterpriseDB: https://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 14 Jul 2021 14:05:29 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] psql: \\dn+ to show size of each schema.."
},
{
"msg_contents": "On Wed, 2021-07-14 at 14:05 +0900, Ian Lawrence Barwick wrote:\n> 2021年7月14日(水) 12:07 Justin Pryzby <pryzby@telsasoft.com>:\n> > \\db+ and \\l+ show sizes of tablespaces and databases, so I was surprised in the\n> > past that \\dn+ didn't show sizes of schemas. I would find that somewhat\n> > convenient, and I assume other people would use it even more useful.\n> \n> It's something which would be useful to have. But see this previous proposal:\n> \n> https://www.postgresql.org/message-id/flat/2d6d2ebf-4dbc-4f74-17d8-05461f4782e2%40dalibo.com\n\nRight, I would not like to cause a lot of I/O activity just to look at the\npermissions on a schema...\n\nBesides, schemas are not physical, but logical containers. So I see a point in\nmeasuring the storage used in a certain tablespace, but not so much by all objects\nin a certain schema. It might be useful for accounting purposes, though.\nBut I don't expect it to be in frequent enough demand to add a psql command.\n\nWhat about inventing a function pg_schema_size(regnamespace)?\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Wed, 14 Jul 2021 07:42:33 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] psql: \\dn+ to show size of each schema.."
},
{
"msg_contents": "st 14. 7. 2021 v 7:42 odesílatel Laurenz Albe <laurenz.albe@cybertec.at>\nnapsal:\n\n> On Wed, 2021-07-14 at 14:05 +0900, Ian Lawrence Barwick wrote:\n> > 2021年7月14日(水) 12:07 Justin Pryzby <pryzby@telsasoft.com>:\n> > > \\db+ and \\l+ show sizes of tablespaces and databases, so I was\n> surprised in the\n> > > past that \\dn+ didn't show sizes of schemas. I would find that\n> somewhat\n> > > convenient, and I assume other people would use it even more useful.\n> >\n> > It's something which would be useful to have. But see this previous\n> proposal:\n> >\n> >\n> https://www.postgresql.org/message-id/flat/2d6d2ebf-4dbc-4f74-17d8-05461f4782e2%40dalibo.com\n>\n> Right, I would not like to cause a lot of I/O activity just to look at the\n> permissions on a schema...\n>\n> Besides, schemas are not physical, but logical containers. So I see a\n> point in\n> measuring the storage used in a certain tablespace, but not so much by all\n> objects\n> in a certain schema. It might be useful for accounting purposes, though.\n> But I don't expect it to be in frequent enough demand to add a psql\n> command.\n>\n> What about inventing a function pg_schema_size(regnamespace)?\n>\n\n+1 good idea\n\nPavel\n\n\n> Yours,\n> Laurenz Albe\n>\n>\n>\n>\n\nst 14. 7. 2021 v 7:42 odesílatel Laurenz Albe <laurenz.albe@cybertec.at> napsal:On Wed, 2021-07-14 at 14:05 +0900, Ian Lawrence Barwick wrote:\n> 2021年7月14日(水) 12:07 Justin Pryzby <pryzby@telsasoft.com>:\n> > \\db+ and \\l+ show sizes of tablespaces and databases, so I was surprised in the\n> > past that \\dn+ didn't show sizes of schemas. I would find that somewhat\n> > convenient, and I assume other people would use it even more useful.\n> \n> It's something which would be useful to have. 
But see this previous proposal:\n> \n> https://www.postgresql.org/message-id/flat/2d6d2ebf-4dbc-4f74-17d8-05461f4782e2%40dalibo.com\n\nRight, I would not like to cause a lot of I/O activity just to look at the\npermissions on a schema...\n\nBesides, schemas are not physical, but logical containers. So I see a point in\nmeasuring the storage used in a certain tablespace, but not so much by all objects\nin a certain schema. It might be useful for accounting purposes, though.\nBut I don't expect it to be in frequent enough demand to add a psql command.\n\nWhat about inventing a function pg_schema_size(regnamespace)?+1 good ideaPavel\n\nYours,\nLaurenz Albe",
"msg_date": "Wed, 14 Jul 2021 07:44:28 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] psql: \\dn+ to show size of each schema.."
},
{
"msg_contents": "On Wed, Jul 14, 2021 at 02:05:29PM +0900, Ian Lawrence Barwick wrote:\n> 2021年7月14日(水) 12:07 Justin Pryzby <pryzby@telsasoft.com>:\n> >\n> > \\db+ and \\l+ show sizes of tablespaces and databases, so I was surprised in the\n> > past that \\dn+ didn't show sizes of schemas. I would find that somewhat\n> > convenient, and I assume other people would use it even more useful.\n> \n> It's something which would be useful to have. But see this previous proposal:\n> \n> https://www.postgresql.org/message-id/flat/2d6d2ebf-4dbc-4f74-17d8-05461f4782e2%40dalibo.com\n\nThanks for finding that.\n\nIt sounds like the objections were:\n1) it may be too slow - I propose the size should be shown only with \\n++;\nI think \\db and \\l should get the same treatment, and probably everywhere\nshould change to use the \"int verbose\". I moved the ++ columns to the\nright-most column.\n\n2) it may fail or be misleading if user lacks permissions. \nI think Tom's concern was that at some point we might decide to avoid showing a\nrelation's size to a user who has no access to the rel, and then \\dn+ would\nshow misleading information, or fail.\nI implemented this a server-side function for super-user/monitoring role only. \n\nI think \\dn++ is also a reasonable way to address the second concern - if\nsomeone asksk for \"very verbose\" outpu, they get more of an internal,\nimplementation dependant output, which might be more likely to change in future\nreleases. For example, if we move the ++ columns to the right, someone might\njusifiably think that the \\n and \\n+ columns would be less likely to change in\nthe future than the \\n++ columns.\n\nI imagine ++ would find more uses in the future. Like, say, size of an access\nmethods \\dA++. 
I'll add that in a future revision - I hope that PG15 will also\nhave create table like (INCLUDE ACCESS METHOD), ALTER TABLE SET ACCESS METHOD,\nand pg_restore --no-tableam.\n\n++ may also allow improved testing of psql features - platform dependent stuff\nlike size can be in ++, allowing better/easier/testing of +.\n\n-- \nJustin",
"msg_date": "Thu, 15 Jul 2021 14:07:19 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] psql: \\dn+ to show size of each schema.."
},
{
"msg_contents": "On Wed, Jul 14, 2021 at 07:42:33AM +0200, Laurenz Albe wrote:\n> Besides, schemas are not physical, but logical containers. So I see a point in\n> measuring the storage used in a certain tablespace, but not so much by all objects\n> in a certain schema. It might be useful for accounting purposes, though.\n\nWe use only a few schemas, 1) to hide child tables; 2) to exclude some extended\nstats from backups, and 1-2 other things. But it's useful to be able to see\nhow storage is used by schema, and better to do it conveniently.\n\nI think it'd be even more useful for people who use schemas more widely than we\ndo:\n \"Who's using all our space?\"\n \\dn++\n \"Oh, it's that one - let me clean that up...\"\n\nOr, \"what's the pg_toast stuff, and do I need to do something about it?\"\n\n> But I don't expect it to be in frequent enough demand to add a psql command.\n> \n> What about inventing a function pg_schema_size(regnamespace)?\n\nBut for \"physical\" storage it's also possible to get the size from the OS, much\nmore efficiently, using /bin/df or zfs list (assuming nothing else is using\nthose filesystems). 
The pg_*_size functions are inefficient, but psql \\db+ and\n\\l+ already call them anyway.\n\nFor schemas, there's no way to get the size from the OS, so it's nice to make\nthe size available from psql, conveniently.\n\nv3 patch:\n - fixes an off by one in forkNum loop;\n - removes an unnecessary subquery in describe.c;\n - returns 0 rather than NULL if the schema is empty;\n - adds pg_am_size;\n\nregression=# \\dA++\n List of access methods\n Name | Type | Handler | Description | Size \n--------+-------+----------------------+----------------------------------------+---------\n brin | Index | brinhandler | block range index (BRIN) access method | 744 kB\n btree | Index | bthandler | b-tree index access method | 21 MB\n gin | Index | ginhandler | GIN index access method | 2672 kB\n gist | Index | gisthandler | GiST index access method | 2800 kB\n hash | Index | hashhandler | hash index access method | 2112 kB\n heap | Table | heap_tableam_handler | heap table access method | 60 MB\n heap2 | Table | heap_tableam_handler | | 120 kB\n spgist | Index | spghandler | SP-GiST index access method | 5840 kB\n(8 rows)\n\nregression=# \\dn++\n List of schemas\n Name | Owner | Access privileges | Description | Size \n--------------------+---------+--------------------+------------------------+---------\n fkpart3 | pryzbyj | | | 168 kB\n fkpart4 | pryzbyj | | | 104 kB\n fkpart5 | pryzbyj | | | 40 kB\n fkpart6 | pryzbyj | | | 48 kB\n mvtest_mvschema | pryzbyj | | | 16 kB\n public | pryzbyj | pryzbyj=UC/pryzbyj+| standard public schema | 69 MB\n | | =UC/pryzbyj | | \n regress_indexing | pryzbyj | | | 48 kB\n regress_rls_schema | pryzbyj | | | 0 bytes\n regress_schema_2 | pryzbyj | | | 0 bytes\n testxmlschema | pryzbyj | | | 24 kB\n(10 rows)\n\n-- \nJustin",
"msg_date": "Thu, 15 Jul 2021 20:16:39 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] psql: \\dn+ to show size of each schema (and \\dA+ for AMs)"
},
{
"msg_contents": "On Thu, 2021-07-15 at 20:16 -0500, Justin Pryzby wrote:\n> On Wed, Jul 14, 2021 at 07:42:33AM +0200, Laurenz Albe wrote:\n> > Besides, schemas are not physical, but logical containers. So I see a point in\n> > measuring the storage used in a certain tablespace, but not so much by all objects\n> > in a certain schema. It might be useful for accounting purposes, though.\n> >\n> > But I don't expect it to be in frequent enough demand to add a psql command.\n> \n> But for \"physical\" storage it's also possible to get the size from the OS, much\n> more efficiently, using /bin/df or zfs list (assuming nothing else is using\n> those filesystems). The pg_*_size functions are inefficient, but psql \\db+ and\n> \\l+ already call them anyway.\n\nHm, yes, the fact that \\l+ does something similar detracts from my argument.\nIt seems somewhat inconsistent to have the size in \\l+, but not in \\dn+.\n\nStill, there is a difference: I never need \\l+, because \\l already shows\nthe permissions on the database, but I often need \\dn+ to see the permissions\non schemas. And I don't want to measure the size when I do that.\n\nThe problem is that our backslash commands are not totally consistent in\nthat respect, and we can hardly fix that.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Fri, 16 Jul 2021 11:10:33 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] psql: \\dn+ to show size of each schema (and \\dA+ for\n AMs)"
},
{
"msg_contents": "pá 17. 9. 2021 v 11:10 odesílatel Justin Pryzby <pryzby@telsasoft.com>\nnapsal:\n\n> On Wed, Jul 14, 2021 at 07:42:33AM +0200, Laurenz Albe wrote:\n> > Besides, schemas are not physical, but logical containers. So I see a\n> point in\n> > measuring the storage used in a certain tablespace, but not so much by\n> all objects\n> > in a certain schema. It might be useful for accounting purposes, though.\n>\n> We use only a few schemas, 1) to hide child tables; 2) to exclude some\n> extended\n> stats from backups, and 1-2 other things. But it's useful to be able to\n> see\n> how storage is used by schema, and better to do it conveniently.\n>\n> I think it'd be even more useful for people who use schemas more widely\n> than we\n> do:\n> \"Who's using all our space?\"\n> \\dn++\n> \"Oh, it's that one - let me clean that up...\"\n>\n> Or, \"what's the pg_toast stuff, and do I need to do something about it?\"\n>\n> > But I don't expect it to be in frequent enough demand to add a psql\n> command.\n> >\n> > What about inventing a function pg_schema_size(regnamespace)?\n>\n> But for \"physical\" storage it's also possible to get the size from the OS,\n> much\n> more efficiently, using /bin/df or zfs list (assuming nothing else is using\n> those filesystems). 
The pg_*_size functions are inefficient, but psql\n> \\db+ and\n> \\l+ already call them anyway.\n>\n> For schemas, there's no way to get the size from the OS, so it's nice to\n> make\n> the size available from psql, conveniently.\n>\n> v3 patch:\n> - fixes an off by one in forkNum loop;\n> - removes an unnecessary subquery in describe.c;\n> - returns 0 rather than NULL if the schema is empty;\n> - adds pg_am_size;\n>\n> regression=# \\dA++\n> List of access methods\n> Name | Type | Handler | Description\n> | Size\n>\n> --------+-------+----------------------+----------------------------------------+---------\n> brin | Index | brinhandler | block range index (BRIN) access\n> method | 744 kB\n> btree | Index | bthandler | b-tree index access method\n> | 21 MB\n> gin | Index | ginhandler | GIN index access method\n> | 2672 kB\n> gist | Index | gisthandler | GiST index access method\n> | 2800 kB\n> hash | Index | hashhandler | hash index access method\n> | 2112 kB\n> heap | Table | heap_tableam_handler | heap table access method\n> | 60 MB\n> heap2 | Table | heap_tableam_handler |\n> | 120 kB\n> spgist | Index | spghandler | SP-GiST index access method\n> | 5840 kB\n> (8 rows)\n>\n> regression=# \\dn++\n> List of schemas\n> Name | Owner | Access privileges | Description\n> | Size\n>\n> --------------------+---------+--------------------+------------------------+---------\n> fkpart3 | pryzbyj | |\n> | 168 kB\n> fkpart4 | pryzbyj | |\n> | 104 kB\n> fkpart5 | pryzbyj | |\n> | 40 kB\n> fkpart6 | pryzbyj | |\n> | 48 kB\n> mvtest_mvschema | pryzbyj | |\n> | 16 kB\n> public | pryzbyj | pryzbyj=UC/pryzbyj+| standard public\n> schema | 69 MB\n> | | =UC/pryzbyj |\n> |\n> regress_indexing | pryzbyj | |\n> | 48 kB\n> regress_rls_schema | pryzbyj | |\n> | 0 bytes\n> regress_schema_2 | pryzbyj | |\n> | 0 bytes\n> testxmlschema | pryzbyj | |\n> | 24 kB\n> (10 rows)\n>\n>\nI tested this patch. It looks well. The performance is good enough. 
I got\nthe result for a schema with 100K tables in 3 seconds.\n\nI am not sure if using \\dt+ and \\dP+ without change is a good idea. I can\nimagine \\dt+ and \\dt++. \\dP can exist just only in ++ form or we can ignore\nit (like now, and support \\dP+ and \\dP++) with same result\n\nI can live with the proposed patch, and I understand why ++ was\nintroduced. But I am still not sure it is really user friendly. I prefer to\nextend \\dA and \\dn with some columns (\\dA has only two columns and \\dn has\ntwo columns too), and then we don't need special ++ variants for sizes.\nUsing three levels of detail looks not too practical (more when the basic\nreports \\dA and \\dn) are really very simple).\n\nRegards\n\nPavel\n\n\n\n-- \n> Justin\n>\n\npá 17. 9. 2021 v 11:10 odesílatel Justin Pryzby <pryzby@telsasoft.com> napsal:On Wed, Jul 14, 2021 at 07:42:33AM +0200, Laurenz Albe wrote:\n> Besides, schemas are not physical, but logical containers. So I see a point in\n> measuring the storage used in a certain tablespace, but not so much by all objects\n> in a certain schema. It might be useful for accounting purposes, though.\n\nWe use only a few schemas, 1) to hide child tables; 2) to exclude some extended\nstats from backups, and 1-2 other things. But it's useful to be able to see\nhow storage is used by schema, and better to do it conveniently.\n\nI think it'd be even more useful for people who use schemas more widely than we\ndo:\n \"Who's using all our space?\"\n \\dn++\n \"Oh, it's that one - let me clean that up...\"\n\nOr, \"what's the pg_toast stuff, and do I need to do something about it?\"\n\n> But I don't expect it to be in frequent enough demand to add a psql command.\n> \n> What about inventing a function pg_schema_size(regnamespace)?\n\nBut for \"physical\" storage it's also possible to get the size from the OS, much\nmore efficiently, using /bin/df or zfs list (assuming nothing else is using\nthose filesystems). 
The pg_*_size functions are inefficient, but psql \\db+ and\n\\l+ already call them anyway.\n\nFor schemas, there's no way to get the size from the OS, so it's nice to make\nthe size available from psql, conveniently.\n\nv3 patch:\n - fixes an off by one in forkNum loop;\n - removes an unnecessary subquery in describe.c;\n - returns 0 rather than NULL if the schema is empty;\n - adds pg_am_size;\n\nregression=# \\dA++\n List of access methods\n Name | Type | Handler | Description | Size \n--------+-------+----------------------+----------------------------------------+---------\n brin | Index | brinhandler | block range index (BRIN) access method | 744 kB\n btree | Index | bthandler | b-tree index access method | 21 MB\n gin | Index | ginhandler | GIN index access method | 2672 kB\n gist | Index | gisthandler | GiST index access method | 2800 kB\n hash | Index | hashhandler | hash index access method | 2112 kB\n heap | Table | heap_tableam_handler | heap table access method | 60 MB\n heap2 | Table | heap_tableam_handler | | 120 kB\n spgist | Index | spghandler | SP-GiST index access method | 5840 kB\n(8 rows)\n\nregression=# \\dn++\n List of schemas\n Name | Owner | Access privileges | Description | Size \n--------------------+---------+--------------------+------------------------+---------\n fkpart3 | pryzbyj | | | 168 kB\n fkpart4 | pryzbyj | | | 104 kB\n fkpart5 | pryzbyj | | | 40 kB\n fkpart6 | pryzbyj | | | 48 kB\n mvtest_mvschema | pryzbyj | | | 16 kB\n public | pryzbyj | pryzbyj=UC/pryzbyj+| standard public schema | 69 MB\n | | =UC/pryzbyj | | \n regress_indexing | pryzbyj | | | 48 kB\n regress_rls_schema | pryzbyj | | | 0 bytes\n regress_schema_2 | pryzbyj | | | 0 bytes\n testxmlschema | pryzbyj | | | 24 kB\n(10 rows)\nI tested this patch. It looks well. The performance is good enough. I got the result for a schema with 100K tables in 3 seconds.I am not sure if using \\dt+ and \\dP+ without change is a good idea. I can imagine \\dt+ and \\dt++. 
\\dP can exist just only in ++ form or we can ignore it (like now, and support \\dP+ and \\dP++) with same resultI can live with the proposed patch, and I understand why ++ was introduced. But I am still not sure it is really user friendly. I prefer to extend \\dA and \\dn with some columns (\\dA has only two columns and \\dn has two columns too), and then we don't need special ++ variants for sizes. Using three levels of detail looks not too practical (more when the basic reports \\dA and \\dn) are really very simple). RegardsPavel \n-- \nJustin",
"msg_date": "Fri, 17 Sep 2021 12:05:04 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] psql: \\dn+ to show size of each schema (and \\dA+ for AMs)"
},
{
"msg_contents": "On Fri, Sep 17, 2021 at 12:05:04PM +0200, Pavel Stehule wrote:\n> I can live with the proposed patch, and I understand why ++ was\n> introduced. But I am still not sure it is really user friendly. I prefer to\n> extend \\dA and \\dn with some columns (\\dA has only two columns and \\dn has\n> two columns too), and then we don't need special ++ variants for sizes.\n> Using three levels of detail looks not too practical (more when the basic\n> reports \\dA and \\dn) are really very simple).\n\nYou're suggesting to include the ACL+description in \\dn and handler+description\nand \\dA.\n\nAnother option is to add pg_schema_size() and pg_am_size() without shortcuts in\npsql. That would avoid showing a potentially huge ACL when all one wants is\nthe schema size, and would serve my purposes well enough to write\n| SELECT pg_namespace_size(oid), nspname FROM pg_namespace ORDER BY 1 DESC LIMIT 9;\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 27 Sep 2021 21:46:20 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] psql: \\dn+ to show size of each schema (and \\dA+ for AMs)"
},
{
"msg_contents": "út 28. 9. 2021 v 4:46 odesílatel Justin Pryzby <pryzby@telsasoft.com>\nnapsal:\n\n> On Fri, Sep 17, 2021 at 12:05:04PM +0200, Pavel Stehule wrote:\n> > I can live with the proposed patch, and I understand why ++ was\n> > introduced. But I am still not sure it is really user friendly. I prefer\n> to\n> > extend \\dA and \\dn with some columns (\\dA has only two columns and \\dn\n> has\n> > two columns too), and then we don't need special ++ variants for sizes.\n> > Using three levels of detail looks not too practical (more when the basic\n> > reports \\dA and \\dn) are really very simple).\n>\n> You're suggesting to include the ACL+description in \\dn and\n> handler+description\n> and \\dA.\n>\n\nyes\n\n\n> Another option is to add pg_schema_size() and pg_am_size() without\n> shortcuts in\n> psql. That would avoid showing a potentially huge ACL when all one wants\n> is\n> the schema size, and would serve my purposes well enough to write\n> | SELECT pg_namespace_size(oid), nspname FROM pg_namespace ORDER BY 1 DESC\n> LIMIT 9;\n>\n\nIt can work too.\n\nI think the long ACL is a customer design issue, but can be. But the same\nproblem is in \\dt+, and I don't see an objection against this design.\n\nMaybe I am too subjective, because 4 years I use pspg, and wide reports are\nnot a problem for me. When the size is on the end, then it is easy to see\nit in pspg.\n\nI like to see size in \\dn+ report, and I like to use pg_namespace_size\nseparately too. Both can be very practical functionality.\n\nI think so \\dt+ and \\l+ is working very well now, and I am not too happy to\nbreak it (partially break it). Although the proposed change is very\nminimalistic.\n\nBut your example \"SELECT pg_namespace_size(oid), nspname FROM pg_namespace\nORDER BY 1 DESC LIMIT 9\" navigates me to the second idea (that just\nenhances the previous). 
Can be nice if you can have prepared views on the\nserver side that are +/- equivalent to psql reports, and anybody can simply\nwrite their own custom reports.\n\nsome like\n\nSELECT schema, tablename, owner, pg_size_pretty(size) FROM\npg_description.tables ORDER BY size DESC LIMIT 10\nSELECT schema, owner, pg_size_pretty(size) FROM pg_description.schemas\nORDER BY size DESC LIMIT 10\n\nIn the future, it can simplify psql code, and it allows pretty nice\ncustomization in any client for a lot of purposes.\n\nRegards\n\nPavel\n\n\n\n\n\n> --\n> Justin\n>\n\nút 28. 9. 2021 v 4:46 odesílatel Justin Pryzby <pryzby@telsasoft.com> napsal:On Fri, Sep 17, 2021 at 12:05:04PM +0200, Pavel Stehule wrote:\n> I can live with the proposed patch, and I understand why ++ was\n> introduced. But I am still not sure it is really user friendly. I prefer to\n> extend \\dA and \\dn with some columns (\\dA has only two columns and \\dn has\n> two columns too), and then we don't need special ++ variants for sizes.\n> Using three levels of detail looks not too practical (more when the basic\n> reports \\dA and \\dn) are really very simple).\n\nYou're suggesting to include the ACL+description in \\dn and handler+description\nand \\dA.yes \n\nAnother option is to add pg_schema_size() and pg_am_size() without shortcuts in\npsql. That would avoid showing a potentially huge ACL when all one wants is\nthe schema size, and would serve my purposes well enough to write\n| SELECT pg_namespace_size(oid), nspname FROM pg_namespace ORDER BY 1 DESC LIMIT 9;It can work too. I think the long ACL is a customer design issue, but can be. But the same problem is in \\dt+, and I don't see an objection against this design.Maybe I am too subjective, because 4 years I use pspg, and wide reports are not a problem for me. When the size is on the end, then it is easy to see it in pspg.I like to see size in \\dn+ report, and I like to use pg_namespace_size separately too. 
Both can be very practical functionality.I think so \\dt+ and \\l+ is working very well now, and I am not too happy to break it (partially break it). Although the proposed change is very minimalistic. But your example \"SELECT pg_namespace_size(oid), nspname FROM pg_namespace ORDER BY 1 DESC LIMIT 9\" navigates me to the second idea (that just enhances the previous). Can be nice if you can have prepared views on the server side that are +/- equivalent to psql reports, and anybody can simply write their own custom reports. some likeSELECT schema, tablename, owner, pg_size_pretty(size) FROM pg_description.tables ORDER BY size DESC LIMIT 10SELECT schema, owner, pg_size_pretty(size) FROM pg_description.schemas ORDER BY size DESC LIMIT 10In the future, it can simplify psql code, and it allows pretty nice customization in any client for a lot of purposes. RegardsPavel \n\n-- \nJustin",
"msg_date": "Tue, 28 Sep 2021 06:33:58 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] psql: \\dn+ to show size of each schema (and \\dA+ for AMs)"
},
{
"msg_contents": "Remove bogus ACL check for AMs.\nRebased on cf0cab868.\nUse ForkNumber rather than int.\nUpdate comments and commit message.\nAlso move the Size column of \\l and \\dt",
"msg_date": "Sat, 18 Dec 2021 16:10:42 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] psql: \\dn+ to show size of each schema (and \\dA+ for AMs)"
},
{
"msg_contents": "Rebased before Julian asks.",
"msg_date": "Fri, 14 Jan 2022 10:35:48 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] psql: \\dn+ to show size of each schema (and \\dA+ for AMs)"
},
{
"msg_contents": "Hi\n\nI like this feature, but I don't like the introduction of double + too\nmuch. I think it is confusing.\n\nIs it really necessary? Cannot be enough just reorganization of \\dn and\n\\dn+.\n\nRegards\n\nPavel\n\npá 14. 1. 2022 v 17:35 odesílatel Justin Pryzby <pryzby@telsasoft.com>\nnapsal:\n\n> Rebased before Julian asks.\n>\n\nHiI like this feature, but I don't like the introduction of double + too much. I think it is confusing. Is it really necessary? Cannot be enough just reorganization of \\dn and \\dn+.RegardsPavelpá 14. 1. 2022 v 17:35 odesílatel Justin Pryzby <pryzby@telsasoft.com> napsal:Rebased before Julian asks.",
"msg_date": "Fri, 14 Jan 2022 17:46:08 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] psql: \\dn+ to show size of each schema (and \\dA+ for AMs)"
},
{
"msg_contents": "Rebased",
"msg_date": "Fri, 10 Jun 2022 06:48:45 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] psql: \\dn+ to show size of each schema (and \\dA+ for AMs)"
},
{
"msg_contents": "rebased",
"msg_date": "Sat, 30 Jul 2022 00:35:38 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] psql: \\dn+ to show size of each schema (and \\dA+ for AMs)"
},
{
"msg_contents": "As discussed in [1], we're taking this opportunity to return some\npatchsets that don't appear to be getting enough reviewer interest.\n\nThis is not a rejection, since we don't necessarily think there's\nanything unacceptable about the entry, but it differs from a standard\n\"Returned with Feedback\" in that there's probably not much actionable\nfeedback at all. Rather than code changes, what this patch needs is more\ncommunity interest. You might\n\n- ask people for help with your approach,\n- see if there are similar patches that your code could supplement,\n- get interested parties to agree to review your patch in a CF, or\n- possibly present the functionality in a way that's easier to review\n overall.\n\n(Doing these things is no guarantee that there will be interest, but\nit's hopefully better than endlessly rebasing a patchset that is not\nreceiving any feedback from the community.)\n\nOnce you think you've built up some community support and the patchset\nis ready for review, you (or any interested party) can resurrect the\npatch entry by visiting\n\n https://commitfest.postgresql.org/38/3256/\n\nand changing the status to \"Needs Review\", and then changing the\nstatus again to \"Move to next CF\". (Don't forget the second step;\nhopefully we will have streamlined this in the near future!)\n\nThanks,\n--Jacob\n\n[1] https://postgr.es/m/86140760-8ba5-6f3a-3e6e-5ca6c060bd24@timescale.com\n\n\n\n",
"msg_date": "Mon, 1 Aug 2022 13:47:59 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] psql: \\dn+ to show size of each schema (and \\dA+ for AMs)"
},
{
"msg_contents": "Rebased on c727f511b.\n\nThis patch record was \"closed for lack of interest\", but I think what's\nactually needed is committer review of which approach to take.\n\n - add backend functions but do not modify psql ?\n - add to psql slash-plus commnds ?\n - introduce psql double-plus commands for new options ?\n - change pre-existing psql plus commands to only show size with\n double-plus ?\n - go back to the original, two-line client-side sum() ?\n\nUntil then, the patchset is organized with those questions in mind.\n\n-- \nJustin",
"msg_date": "Thu, 15 Dec 2022 10:13:23 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] psql: \\dn+ to show size of each schema (and \\dA+ for AMs)"
},
{
"msg_contents": "čt 15. 12. 2022 v 17:13 odesílatel Justin Pryzby <pryzby@telsasoft.com>\nnapsal:\n\n> Rebased on c727f511b.\n>\n> This patch record was \"closed for lack of interest\", but I think what's\n> actually needed is committer review of which approach to take.\n>\n> - add backend functions but do not modify psql ?\n> - add to psql slash-plus commnds ?\n> - introduce psql double-plus commands for new options ?\n> - change pre-existing psql plus commands to only show size with\n> double-plus ?\n> - go back to the original, two-line client-side sum() ?\n>\n> Until then, the patchset is organized with those questions in mind.\n>\n\n+1\n\nThis format makes sense to me.\n\nRegards\n\nPavel\n\n>\n> --\n> Justin\n>\n\nčt 15. 12. 2022 v 17:13 odesílatel Justin Pryzby <pryzby@telsasoft.com> napsal:Rebased on c727f511b.\n\nThis patch record was \"closed for lack of interest\", but I think what's\nactually needed is committer review of which approach to take.\n\n - add backend functions but do not modify psql ?\n - add to psql slash-plus commnds ?\n - introduce psql double-plus commands for new options ?\n - change pre-existing psql plus commands to only show size with\n double-plus ?\n - go back to the original, two-line client-side sum() ?\n\nUntil then, the patchset is organized with those questions in mind.+1This format makes sense to me.RegardsPavel\n\n-- \nJustin",
"msg_date": "Fri, 16 Dec 2022 09:05:24 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] psql: \\dn+ to show size of each schema (and \\dA+ for AMs)"
},
{
"msg_contents": "On Thu, Dec 15, 2022 at 10:13:23AM -0600, Justin Pryzby wrote:\n> Rebased on c727f511b.\n\nRebased on 30a53b792.\nWith minor changes including fixes to an intermediate patch.\n\n> This patch record was \"closed for lack of interest\", but I think what's\n> actually needed is committer review of which approach to take.\n> \n> - add backend functions but do not modify psql ?\n> - add to psql slash-plus commnds ?\n> - introduce psql double-plus commands for new options ?\n> - change pre-existing psql plus commands to only show size with\n> double-plus ?\n> - go back to the original, two-line client-side sum() ?\n> \n> Until then, the patchset is organized with those questions in mind.",
"msg_date": "Sat, 11 Mar 2023 16:55:03 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] psql: \\dn+ to show size of each schema (and \\dA+ for AMs)"
},
{
"msg_contents": "I added documentation for the SQL functions in 001.\nAnd updated to say 170000\n\nI'm planning to set this patch as ready - it has not changed\nsignificantly in 18 months. Not for the first time, I've implemented a\nworkaround at a higher layer.\n\n-- \nJustin",
"msg_date": "Wed, 24 May 2023 16:05:22 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] psql: \\dn+ to show size of each schema (and \\dA+ for AMs)"
},
{
"msg_contents": "> On 24 May 2023, at 23:05, Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> I'm planning to set this patch as ready\n\nThis is marked RfC so I'm moving this to the next CF, but the patch no longer\napplies so it needs a rebase.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Tue, 1 Aug 2023 09:54:34 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] psql: \\dn+ to show size of each schema (and \\dA+ for AMs)"
},
{
"msg_contents": "On Thu, Dec 15, 2022 at 10:13:23AM -0600, Justin Pryzby wrote:\n> This patch record was \"closed for lack of interest\", but I think what's\n> actually needed is committer review of which approach to take.\n\nOn Tue, Aug 01, 2023 at 09:54:34AM +0200, Daniel Gustafsson wrote:\n> > On 24 May 2023, at 23:05, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> \n> > I'm planning to set this patch as ready\n> \n> This is marked RfC so I'm moving this to the next CF, but the patch no longer\n> applies so it needs a rebase.\n\nI was still hoping to receive some feedback on which patches to squish.\n\n-- \nJustin",
"msg_date": "Mon, 14 Aug 2023 10:03:16 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] psql: \\dn+ to show size of each schema (and \\dA+ for AMs)"
},
{
"msg_contents": "2024-01 Commitfest.\n\nHi, This patch had a CF status of \"Ready for Committer\", but the\nthread has been inactive for 5+ months.\n\nSince the last post from Justin said \"hoping to receive some feedback\"\nI have changed the CF status back to \"Needs Review\" [1].\n\n======\n[1] https://commitfest.postgresql.org/46/3256/\n\nKind Regards,\nPeter Smith.\n\n\n",
"msg_date": "Mon, 22 Jan 2024 11:10:48 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] psql: \\dn+ to show size of each schema (and \\dA+ for AMs)"
},
{
"msg_contents": "Hi Justin,\n\nThanks for the patch and the work on it. In reviewing the basic\nfeature, I think this is something that has utility and is worthwhile\nat the high level.\n\nA few more specific notes:\n\nThe pg_namespace_size() function can stand on its own, and probably\nhas some utility for the released Postgres versions.\n\nI do think the psql implementation for the \\dn+ or \\dA+ commands\nshouldn't need to use this same function; it's a straightforward\nexpansion of the SQL query that can be run in a way that will be\nbackwards-compatible with any connected postgres version, so no reason\nto exclude this information for this cases. (This may have been in an\nearlier revision of the patchset; I didn't check everything.)\n\nI think the \\dX++ command versions add code complexity without a real\nneed for it. We have precedence with \\l(+) to show permissions on the\nbasic display and size on the extended display, and I think this is\nsufficient in this case here. While moving the permissions to \\dn is\na behavior change, it's adding information, not taking it away, and as\nan interactive command it is unlikely to introduce significant\nbreakage in any scripting scenario.\n\n(In reviewing the patch we've also seen a bit of odd behavior/possible\nbug with the existing extended + commands, which introducing\nsignificant ++ overloading might be confusing, but not the\nfault/concern of this patch.)\n\nQuickie summary:\n\n0001-Add-pg_am_size-pg_namespace_size.patch\n- fine, but needs rebase to work\n0002-psql-add-convenience-commands-dA-and-dn.patch\n- split into just + variant; remove \\l++\n- make the \\dn show permission and \\dn+ show size\n0003-f-convert-the-other-verbose-to-int-too.patch\n- unneeded\n0004-Move-the-double-plus-Size-columns-to-the-right.patch\n- unneeded\n\nDocs on the first patch seemed fine; I do think we'll need docs\nchanges for the psql changes.\n\nBest,\n\nDavid\n\n\n",
"msg_date": "Thu, 30 May 2024 10:59:06 -0700",
"msg_from": "David Christensen <david+pg@pgguru.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] psql: \\dn+ to show size of each schema (and \\dA+ for AMs)"
},
{
"msg_contents": "On Thu, May 30, 2024 at 10:59:06AM -0700, David Christensen wrote:\n> Hi Justin,\n> \n> Thanks for the patch and the work on it. In reviewing the basic\n> feature, I think this is something that has utility and is worthwhile\n> at the high level.\n\nThanks for looking.\n\n> A few more specific notes:\n> \n> The pg_namespace_size() function can stand on its own, and probably\n> has some utility for the released Postgres versions.\n\nAre you suggesting to add the C function retroactively in back branches?\nI don't think anybody would consider doing that.\n\nIt wouldn't be used by anything internally, and any module that wanted\nto use it would have to check the minor version, instead of just the\nmajor version, which is wrong.\n\n> I do think the psql implementation for the \\dn+ or \\dA+ commands\n> shouldn't need to use this same function; it's a straightforward\n> expansion of the SQL query that can be run in a way that will be\n> backwards-compatible with any connected postgres version, so no reason\n> to exclude this information for this cases. (This may have been in an\n> earlier revision of the patchset; I didn't check everything.)\n\nI think you're suggesting to write the query in SQL rather than in C.\n\nBut I did that in the first version of the patch, and the response was\nthat maybe in the future someone would want to add permission checks\nthat would compromize the ability to get correct results from SQL, so\nthen I presented the functionality writen in C.\n\nI recommend that reviewers try to read the existing communication on the\nthread, otherwise we end up going back and forth about the same things.\n\n> I think the \\dX++ command versions add code complexity without a real\n> need for it.\n\nIf you view this as a way to \"show schema sizes\", then you're right,\nthere's no need. But I don't want this patch to necessary further\nembrace the idea that it's okay for \"plus commands to be slow and show\nnonportable results\". 
If there were a consensus that it'd be fine in a\nplus command, I would be okay with that, though.\n\n> We have precedence with \\l(+) to show permissions on the\n> basic display and size on the extended display, and I think this is\n> sufficient in this case here.\n\nYou also have the precedence that \\db doesn't show the ACL, and you\ncan't get it without also computing the sizes. That's 1) inconsistent\nwith \\l and 2) pretty inconvenient for someone who wants to show the\nACL (as mentioned in the first message on this thread).\n\n> 0001-Add-pg_am_size-pg_namespace_size.patch\n> - fine, but needs rebase to work\n\nI suggest reviewers to consider sending a rebased patch, optionally with\nany proposed changes in a separate patch.\n\n> 0002-psql-add-convenience-commands-dA-and-dn.patch\n> - split into just + variant; remove \\l++\n> - make the \\dn show permission and \\dn+ show size\n\n> 0003-f-convert-the-other-verbose-to-int-too.patch\n> - unneeded\n> 0004-Move-the-double-plus-Size-columns-to-the-right.patch\n> - unneeded\n\nYou say they're unneeded, but what I've been hoping for is a committer\ninterested enough to at least suggest whether to run with 001, 001+002,\n001+002+003, or 001+002+003+004. They're intentionally presented as\nsuch.\n\nI've also thought about submitting a patch specifically dedicated to\n\"moving size out of + and into ++\". I find the idea compelling, for the\nreasons I wrote in the the patch description. That'd be like presenting\n003+004 first.\n\nI'm opened to changing the behavior or the implementation. But changing\nthe patch as I've presented it based on one suggestion I think will lead\nto incoherent code trashing. I need to hear a wider agreement.\n\n-- \nJustin",
"msg_date": "Mon, 3 Jun 2024 09:10:56 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] psql: \\dn+ to show size of each schema (and \\dA+ for AMs)"
},
{
"msg_contents": "po 3. 6. 2024 v 16:10 odesílatel Justin Pryzby <pryzby@telsasoft.com>\nnapsal:\n\n> On Thu, May 30, 2024 at 10:59:06AM -0700, David Christensen wrote:\n> > Hi Justin,\n> >\n> > Thanks for the patch and the work on it. In reviewing the basic\n> > feature, I think this is something that has utility and is worthwhile\n> > at the high level.\n>\n> Thanks for looking.\n>\n> > A few more specific notes:\n> >\n> > The pg_namespace_size() function can stand on its own, and probably\n> > has some utility for the released Postgres versions.\n>\n> Are you suggesting to add the C function retroactively in back branches?\n> I don't think anybody would consider doing that.\n>\n> It wouldn't be used by anything internally, and any module that wanted\n> to use it would have to check the minor version, instead of just the\n> major version, which is wrong.\n>\n> > I do think the psql implementation for the \\dn+ or \\dA+ commands\n> > shouldn't need to use this same function; it's a straightforward\n> > expansion of the SQL query that can be run in a way that will be\n> > backwards-compatible with any connected postgres version, so no reason\n> > to exclude this information for this cases. (This may have been in an\n> > earlier revision of the patchset; I didn't check everything.)\n>\n> I think you're suggesting to write the query in SQL rather than in C.\n>\n> But I did that in the first version of the patch, and the response was\n> that maybe in the future someone would want to add permission checks\n> that would compromize the ability to get correct results from SQL, so\n> then I presented the functionality writen in C.\n>\n> I recommend that reviewers try to read the existing communication on the\n> thread, otherwise we end up going back and forth about the same things.\n>\n> > I think the \\dX++ command versions add code complexity without a real\n> > need for it.\n>\n> If you view this as a way to \"show schema sizes\", then you're right,\n> there's no need. 
But I don't want this patch to necessary further\n> embrace the idea that it's okay for \"plus commands to be slow and show\n> nonportable results\". If there were a consensus that it'd be fine in a\n> plus command, I would be okay with that, though.\n>\n\nI think showing size in \\dX+ command is consistent with any other +\ncommands and the introduction ++ variant is inconsistent and not too\nintuitive.\n\nSo I personally vote just for \\dX+ without the introduction ++ command. Any\ntime, in this case we can introduce ++ in future when we see some\nperformance problems.\n\n\n> > We have precedence with \\l(+) to show permissions on the\n> > basic display and size on the extended display, and I think this is\n> > sufficient in this case here.\n>\n> You also have the precedence that \\db doesn't show the ACL, and you\n> can't get it without also computing the sizes. That's 1) inconsistent\n> with \\l and 2) pretty inconvenient for someone who wants to show the\n> ACL (as mentioned in the first message on this thread).\n>\n> > 0001-Add-pg_am_size-pg_namespace_size.patch\n> > - fine, but needs rebase to work\n>\n> I suggest reviewers to consider sending a rebased patch, optionally with\n> any proposed changes in a separate patch.\n>\n> > 0002-psql-add-convenience-commands-dA-and-dn.patch\n> > - split into just + variant; remove \\l++\n> > - make the \\dn show permission and \\dn+ show size\n>\n> > 0003-f-convert-the-other-verbose-to-int-too.patch\n> > - unneeded\n> > 0004-Move-the-double-plus-Size-columns-to-the-right.patch\n> > - unneeded\n>\n> You say they're unneeded, but what I've been hoping for is a committer\n> interested enough to at least suggest whether to run with 001, 001+002,\n> 001+002+003, or 001+002+003+004. They're intentionally presented as\n> such.\n>\n> I've also thought about submitting a patch specifically dedicated to\n> \"moving size out of + and into ++\". I find the idea compelling, for the\n> reasons I wrote in the the patch description. 
That'd be like presenting\n> 003+004 first.\n>\n> I'm opened to changing the behavior or the implementation. But changing\n> the patch as I've presented it based on one suggestion I think will lead\n> to incoherent code trashing. I need to hear a wider agreement.\n>\n> --\n> Justin\n>\n\npo 3. 6. 2024 v 16:10 odesílatel Justin Pryzby <pryzby@telsasoft.com> napsal:On Thu, May 30, 2024 at 10:59:06AM -0700, David Christensen wrote:\n> Hi Justin,\n> \n> Thanks for the patch and the work on it. In reviewing the basic\n> feature, I think this is something that has utility and is worthwhile\n> at the high level.\n\nThanks for looking.\n\n> A few more specific notes:\n> \n> The pg_namespace_size() function can stand on its own, and probably\n> has some utility for the released Postgres versions.\n\nAre you suggesting to add the C function retroactively in back branches?\nI don't think anybody would consider doing that.\n\nIt wouldn't be used by anything internally, and any module that wanted\nto use it would have to check the minor version, instead of just the\nmajor version, which is wrong.\n\n> I do think the psql implementation for the \\dn+ or \\dA+ commands\n> shouldn't need to use this same function; it's a straightforward\n> expansion of the SQL query that can be run in a way that will be\n> backwards-compatible with any connected postgres version, so no reason\n> to exclude this information for this cases. 
(This may have been in an\n> earlier revision of the patchset; I didn't check everything.)\n\nI think you're suggesting to write the query in SQL rather than in C.\n\nBut I did that in the first version of the patch, and the response was\nthat maybe in the future someone would want to add permission checks\nthat would compromize the ability to get correct results from SQL, so\nthen I presented the functionality writen in C.\n\nI recommend that reviewers try to read the existing communication on the\nthread, otherwise we end up going back and forth about the same things.\n\n> I think the \\dX++ command versions add code complexity without a real\n> need for it.\n\nIf you view this as a way to \"show schema sizes\", then you're right,\nthere's no need. But I don't want this patch to necessary further\nembrace the idea that it's okay for \"plus commands to be slow and show\nnonportable results\". If there were a consensus that it'd be fine in a\nplus command, I would be okay with that, though.I think showing size in \\dX+ command is consistent with any other + commands and the introduction ++ variant is inconsistent and not too intuitive.So I personally vote just for \\dX+ without the introduction ++ command. Any time, in this case we can introduce ++ in future when we see some performance problems. \n\n> We have precedence with \\l(+) to show permissions on the\n> basic display and size on the extended display, and I think this is\n> sufficient in this case here.\n\nYou also have the precedence that \\db doesn't show the ACL, and you\ncan't get it without also computing the sizes. 
That's 1) inconsistent\nwith \\l and 2) pretty inconvenient for someone who wants to show the\nACL (as mentioned in the first message on this thread).\n\n> 0001-Add-pg_am_size-pg_namespace_size.patch\n> - fine, but needs rebase to work\n\nI suggest reviewers to consider sending a rebased patch, optionally with\nany proposed changes in a separate patch.\n\n> 0002-psql-add-convenience-commands-dA-and-dn.patch\n> - split into just + variant; remove \\l++\n> - make the \\dn show permission and \\dn+ show size\n\n> 0003-f-convert-the-other-verbose-to-int-too.patch\n> - unneeded\n> 0004-Move-the-double-plus-Size-columns-to-the-right.patch\n> - unneeded\n\nYou say they're unneeded, but what I've been hoping for is a committer\ninterested enough to at least suggest whether to run with 001, 001+002,\n001+002+003, or 001+002+003+004. They're intentionally presented as\nsuch.\n\nI've also thought about submitting a patch specifically dedicated to\n\"moving size out of + and into ++\". I find the idea compelling, for the\nreasons I wrote in the the patch description. That'd be like presenting\n003+004 first.\n\nI'm opened to changing the behavior or the implementation. But changing\nthe patch as I've presented it based on one suggestion I think will lead\nto incoherent code trashing. I need to hear a wider agreement.\n\n-- \nJustin",
"msg_date": "Mon, 3 Jun 2024 16:49:43 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] psql: \\dn+ to show size of each schema (and \\dA+ for AMs)"
},
{
"msg_contents": "On Mon, Jun 3, 2024 at 9:10 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Thu, May 30, 2024 at 10:59:06AM -0700, David Christensen wrote:\n> > Hi Justin,\n> >\n> > Thanks for the patch and the work on it. In reviewing the basic\n> > feature, I think this is something that has utility and is worthwhile\n> > at the high level.\n>\n> Thanks for looking.\n>\n> > A few more specific notes:\n> >\n> > The pg_namespace_size() function can stand on its own, and probably\n> > has some utility for the released Postgres versions.\n>\n> Are you suggesting to add the C function retroactively in back branches?\n> I don't think anybody would consider doing that.\n>\n> It wouldn't be used by anything internally, and any module that wanted\n> to use it would have to check the minor version, instead of just the\n> major version, which is wrong.\n\nAh, I meant once released it would be useful going forward, but\nre-reading it can see the ambiguity. No, definitely not suggesting\nadding to back branches.\n\n> > I do think the psql implementation for the \\dn+ or \\dA+ commands\n> > shouldn't need to use this same function; it's a straightforward\n> > expansion of the SQL query that can be run in a way that will be\n> > backwards-compatible with any connected postgres version, so no reason\n> > to exclude this information for this cases. 
(This may have been in an\n> > earlier revision of the patchset; I didn't check everything.)\n>\n> I think you're suggesting to write the query in SQL rather than in C.\n\nYes.\n\n> But I did that in the first version of the patch, and the response was\n> that maybe in the future someone would want to add permission checks\n> that would compromize the ability to get correct results from SQL, so\n> then I presented the functionality writen in C.\n>\n> I recommend that reviewers try to read the existing communication on the\n> thread, otherwise we end up going back and forth about the same things.\n\nYes, I reviewed the whole thread, just not all of the patch contents.\nI'd agree that suggestion flapping is not useful, but just presenting\nmy take on this at this time (as well as several others in the Patch\nReview workshop at PGConf.dev, which is where this one got revived).\n\n> > I think the \\dX++ command versions add code complexity without a real\n> > need for it.\n>\n> If you view this as a way to \"show schema sizes\", then you're right,\n> there's no need. But I don't want this patch to necessary further\n> embrace the idea that it's okay for \"plus commands to be slow and show\n> nonportable results\". If there were a consensus that it'd be fine in a\n> plus command, I would be okay with that, though.\n\nI think this is a separate discussion in terms of introducing\nverbosity levels and not needed at this point for this feature.\nCertainly revamping all inconsistencies with the psql command\ninterfaces is out of scope, just thinking about which one we'd want to\nmodel going forward and providing an opinion there (for what it's\nworth... :D)\n\n> > We have precedence with \\l(+) to show permissions on the\n> > basic display and size on the extended display, and I think this is\n> > sufficient in this case here.\n>\n> You also have the precedence that \\db doesn't show the ACL, and you\n> can't get it without also computing the sizes. 
That's 1) inconsistent\n> with \\l and 2) pretty inconvenient for someone who wants to show the\n> ACL (as mentioned in the first message on this thread).\n\nI can't speak to this one, just think that adding more command options\nadds to the overall complexity so probably to be avoided unless we\ncan't.\n\n> > 0001-Add-pg_am_size-pg_namespace_size.patch\n> > - fine, but needs rebase to work\n>\n> I suggest reviewers to consider sending a rebased patch, optionally with\n> any proposed changes in a separate patch.\n\nSure, if you don't have time to do this; agree with your later remarks\nthat consensus is important before moving forward, so same applies\nhere.\n\n> > 0002-psql-add-convenience-commands-dA-and-dn.patch\n> > - split into just + variant; remove \\l++\n> > - make the \\dn show permission and \\dn+ show size\n>\n> > 0003-f-convert-the-other-verbose-to-int-too.patch\n> > - unneeded\n> > 0004-Move-the-double-plus-Size-columns-to-the-right.patch\n> > - unneeded\n>\n> You say they're unneeded, but what I've been hoping for is a committer\n> interested enough to at least suggest whether to run with 001, 001+002,\n> 001+002+003, or 001+002+003+004. They're intentionally presented as\n> such.\n>\n> I've also thought about submitting a patch specifically dedicated to\n> \"moving size out of + and into ++\". I find the idea compelling, for the\n> reasons I wrote in the the patch description. That'd be like presenting\n> 003+004 first.\n>\n> I'm opened to changing the behavior or the implementation. But changing\n> the patch as I've presented it based on one suggestion I think will lead\n> to incoherent code trashing. I need to hear a wider agreement.\n\nAgreed that some sort of consensus is important.\n\n\n",
"msg_date": "Mon, 3 Jun 2024 15:48:36 -0500",
"msg_from": "David Christensen <david+pg@pgguru.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] psql: \\dn+ to show size of each schema (and \\dA+ for AMs)"
}
] |
[
{
"msg_contents": "Hi\n\nThe description for \"pg_database\" [1] mentions the function\n\"pg_encoding_to_char()\", but this is not described anywhere in the docs. Given\nthat that it (and the corresponding \"pg_char_to_encoding()\") have been around\nsince 7.0 [2], it's probably not a burning issue, but it seems not entirely\nunreasonable to add short descriptions for both (and link from \"pg_conversion\"\nwhile we're at it); see attached patch. \"System Catalog Information Functions\"\nseems the most logical place to put these.\n\n[1] https://www.postgresql.org/docs/current/catalog-pg-database.html\n[2] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=5eb1d0deb15f2b7cd0051bef12f3e091516c723b\n\nWill add to the next commitfest.\n\nRegards\n\nIan Barwick\n\n-- \nEnterpriseDB: https://www.enterprisedb.com",
"msg_date": "Wed, 14 Jul 2021 14:43:52 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] document"
},
{
"msg_contents": "On Wed, 2021-07-14 at 14:43 +0900, Ian Lawrence Barwick wrote:\n> Hi\n> \n> The description for \"pg_database\" [1] mentions the function\n> \"pg_encoding_to_char()\", but this is not described anywhere in the docs. Given\n> that that it (and the corresponding \"pg_char_to_encoding()\") have been around\n> since 7.0 [2], it's probably not a burning issue, but it seems not entirely\n> unreasonable to add short descriptions for both (and link from \"pg_conversion\"\n> while we're at it); see attached patch. \"System Catalog Information Functions\"\n> seems the most logical place to put these.\n> \n> [1] https://www.postgresql.org/docs/current/catalog-pg-database.html\n> [2] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=5eb1d0deb15f2b7cd0051bef12f3e091516c723b\n> \n> Will add to the next commitfest.\n\n+1\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Wed, 14 Jul 2021 07:45:52 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] document"
},
{
"msg_contents": "2021年7月14日(水) 14:43 Ian Lawrence Barwick <barwick@gmail.com>:\n>\n> Hi\n>\n> The description for \"pg_database\" [1] mentions the function\n> \"pg_encoding_to_char()\", but this is not described anywhere in the docs. Given\n> that that it (and the corresponding \"pg_char_to_encoding()\") have been around\n> since 7.0 [2], it's probably not a burning issue, but it seems not entirely\n> unreasonable to add short descriptions for both (and link from \"pg_conversion\"\n> while we're at it); see attached patch. \"System Catalog Information Functions\"\n> seems the most logical place to put these.\n>\n> [1] https://www.postgresql.org/docs/current/catalog-pg-database.html\n> [2] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=5eb1d0deb15f2b7cd0051bef12f3e091516c723b\n>\n> Will add to the next commitfest.\n\nAdded; apologies, the subject line of the original mail was\nunintentionally truncated.\n\nRegards\n\nIan Barwick\n\n-- \nEnterpriseDB: https://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 14 Jul 2021 16:13:57 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] document pg_encoding_to_char() and pg_char_to_encoding()"
},
{
"msg_contents": "\n\nOn 2021/07/14 14:45, Laurenz Albe wrote:\n> On Wed, 2021-07-14 at 14:43 +0900, Ian Lawrence Barwick wrote:\n>> Hi\n>>\n>> The description for \"pg_database\" [1] mentions the function\n>> \"pg_encoding_to_char()\", but this is not described anywhere in the docs. Given\n>> that that it (and the corresponding \"pg_char_to_encoding()\") have been around\n>> since 7.0 [2], it's probably not a burning issue, but it seems not entirely\n>> unreasonable to add short descriptions for both (and link from \"pg_conversion\"\n>> while we're at it); see attached patch. \"System Catalog Information Functions\"\n>> seems the most logical place to put these.\n>>\n>> [1] https://www.postgresql.org/docs/current/catalog-pg-database.html\n>> [2] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=5eb1d0deb15f2b7cd0051bef12f3e091516c723b\n>>\n>> Will add to the next commitfest.\n> \n> +1\n\n+1\n\nWhen I applied the patch to the master, I found that the table entries for\nthose function were added into the table for aclitem functions in the docs.\nI think this is not valid position and needs to be moved to the proper one\n(maybe the table for system catalog information functions?).\n\n+ <returnvalue>int</returnvalue>\n+ <function>pg_encoding_to_char</function> ( <parameter>encoding</parameter> <type>int</type> )\n\nIt's better to s/int/integer because the entries for other functions\nin func.sgml use \"integer\".\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 25 Aug 2021 22:21:36 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] document"
},
{
"msg_contents": "Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n> When I applied the patch to the master, I found that the table entries for\n> those function were added into the table for aclitem functions in the docs.\n> I think this is not valid position and needs to be moved to the proper one\n> (maybe the table for system catalog information functions?).\n\nYou have to be very careful these days when applying stale patches to\nfunc.sgml --- there's enough duplicate boilerplate that \"patch' can easily\nbe fooled into dumping an addition into the wrong place. I doubt that\nthe submitter meant the doc addition to go there.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 25 Aug 2021 09:50:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] document"
},
{
"msg_contents": "On Wed, Aug 25, 2021 at 09:50:13AM -0400, Tom Lane wrote:\n> Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n> > When I applied the patch to the master, I found that the table entries for\n> > those function were added into the table for aclitem functions in the docs.\n> > I think this is not valid position and needs to be moved to the proper one\n> > (maybe the table for system catalog information functions?).\n> \n> You have to be very careful these days when applying stale patches to\n> func.sgml --- there's enough duplicate boilerplate that \"patch' can easily\n> be fooled into dumping an addition into the wrong place. I doubt that\n> the submitter meant the doc addition to go there.\n\nI suppose one solution to this is to use git format-patch -U11 or similar, at\nleast for doc/\n\nOr write the \"duplicate boilerplate\" across fewer lines.\n\nAnd another is to add <!-- function() --> comments before and/or after each.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 25 Aug 2021 11:39:59 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] document"
},
{
"msg_contents": "On 2021/08/26 1:39, Justin Pryzby wrote:\n> On Wed, Aug 25, 2021 at 09:50:13AM -0400, Tom Lane wrote:\n>> Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n>>> When I applied the patch to the master, I found that the table entries for\n>>> those function were added into the table for aclitem functions in the docs.\n>>> I think this is not valid position and needs to be moved to the proper one\n>>> (maybe the table for system catalog information functions?).\n>>\n>> You have to be very careful these days when applying stale patches to\n>> func.sgml --- there's enough duplicate boilerplate that \"patch' can easily\n>> be fooled into dumping an addition into the wrong place. I doubt that\n>> the submitter meant the doc addition to go there.\n> \n> I suppose one solution to this is to use git format-patch -U11 or similar, at\n> least for doc/\n\nYes. I moved the desriptions of the function into the table for\nsystem catalog information functions, and made the patch by using\ngit diff -U6. Patch attached. Barring any objection, I'm thinking\nto commit it.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Mon, 4 Oct 2021 15:18:04 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] document"
},
{
"msg_contents": "\n\nOn 2021/10/04 15:18, Fujii Masao wrote:\n> \n> \n> On 2021/08/26 1:39, Justin Pryzby wrote:\n>> On Wed, Aug 25, 2021 at 09:50:13AM -0400, Tom Lane wrote:\n>>> Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n>>>> When I applied the patch to the master, I found that the table entries for\n>>>> those function were added into the table for aclitem functions in the docs.\n>>>> I think this is not valid position and needs to be moved to the proper one\n>>>> (maybe the table for system catalog information functions?).\n>>>\n>>> You have to be very careful these days when applying stale patches to\n>>> func.sgml --- there's enough duplicate boilerplate that \"patch' can easily\n>>> be fooled into dumping an addition into the wrong place. I doubt that\n>>> the submitter meant the doc addition to go there.\n>>\n>> I suppose one solution to this is to use git format-patch -U11 or similar, at\n>> least for doc/\n> \n> Yes. I moved the desriptions of the function into the table for\n> system catalog information functions, and made the patch by using\n> git diff -U6. Patch attached. Barring any objection, I'm thinking\n> to commit it.\n\nPushed. Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 5 Oct 2021 12:55:13 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] document"
}
] |
[
{
"msg_contents": "Hi,\n\nWhen reviewing some pg_dump related code, I found some existing code\n(getTableAttrs() and dumpEnumType()) invokes PQfnumber() repeatedly, which seems\nunnecessary.\n\nExample\n-----\n\t\tfor (int j = 0; j < ntups; j++)\n\t\t{\n\t\t\tif (j + 1 != atoi(PQgetvalue(res, j, PQfnumber(res, \"attnum\"))))\n-----\n\nSince PQfnumber() is not a cheap function, I think we'd better invoke\nPQfnumber() outside of the loop, as in the attached patch.\n\nAfter applying this change, I can see about 8% performance gain in my test environment\nwhen dumping table definitions which have many columns.\n\nBest regards,\nHou zhijie",
"msg_date": "Wed, 14 Jul 2021 08:54:32 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Avoid repeated PQfnumber() in pg_dump"
},
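The fix discussed in this thread — computing the column number once instead of once per row — can be sketched with a self-contained analogue. This is illustrative Python, not pg_dump code: `find_column()` is a hypothetical stand-in for libpq's `PQfnumber()`, which likewise scans the result's column names on every call.

```python
def find_column(column_names, name):
    # Linear scan standing in for libpq's PQfnumber(): cost grows with
    # the number of columns, so calling it once per row adds up.
    for i, col in enumerate(column_names):
        if col == name:
            return i
    return -1

def attnums_per_row(columns, rows):
    # Anti-pattern from the quoted pg_dump loop: one lookup per row.
    return [row[find_column(columns, "attnum")] for row in rows]

def attnums_hoisted(columns, rows):
    # The patch's approach: one lookup before the loop.
    i_attnum = find_column(columns, "attnum")
    return [row[i_attnum] for row in rows]

columns = ["attname", "atttypid", "attnum"]
rows = [("a", 23, 1), ("b", 25, 2)]
assert attnums_per_row(columns, rows) == attnums_hoisted(columns, rows) == [1, 2]
```

Both versions return the same values; the hoisted one simply does the name-to-index work once, which is where the reported ~8% gain on very wide tables comes from.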
{
"msg_contents": "On Wed, Jul 14, 2021 at 08:54:32AM +0000, houzj.fnst@fujitsu.com wrote:\n> Since PQfnumber() is not a cheap function, I think we'd better invoke\n> PQfnumber() out of the loop like the attatched patch.\n\n+1\n\nPlease add to the next CF\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 14 Jul 2021 10:21:35 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid repeated PQfnumber() in pg_dump"
},
{
"msg_contents": "> On 14 Jul 2021, at 10:54, houzj.fnst@fujitsu.com wrote:\n\n> Since PQfnumber() is not a cheap function, I think we'd better invoke\n> PQfnumber() out of the loop like the attatched patch.\n\nLooks good on a quick readthrough, and I didn't see any other similar codepaths\nin pg_dump on top of what you've fixed.\n\n> After applying this change, I can see about 8% performance gain in my test environment\n> when dump table definitions which have many columns.\n\nOut of curiosity, how many columns are \"many columns\"?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Wed, 14 Jul 2021 23:34:51 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Avoid repeated PQfnumber() in pg_dump"
},
{
"msg_contents": "On July 15, 2021 5:35 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n> > On 14 Jul 2021, at 10:54, houzj.fnst@fujitsu.com wrote:\n> \n> > Since PQfnumber() is not a cheap function, I think we'd better invoke\n> > PQfnumber() out of the loop like the attatched patch.\n> \n> Looks good on a quick readthrough, and I didn't see any other similar\n> codepaths in pg_dump on top of what you've fixed.\n\nThanks for reviewing the patch.\nAdded to the CF: https://commitfest.postgresql.org/34/3254/\n\n> > After applying this change, I can see about 8% performance gain in my\n> > test environment when dump table definitions which have many columns.\n> \n> Out of curiosity, how many columns are \"many columns\"?\n\nI tried dump 10 table definitions while each table has 1000 columns\n(maybe not real world case).\n\nBest regards,\nhouzj\n\n\n",
"msg_date": "Thu, 15 Jul 2021 02:51:02 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Avoid repeated PQfnumber() in pg_dump"
},
{
"msg_contents": "> On 15 Jul 2021, at 04:51, houzj.fnst@fujitsu.com wrote:\n> On July 15, 2021 5:35 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n\n>> Out of curiosity, how many columns are \"many columns\"?\n> \n> I tried dump 10 table definitions while each table has 1000 columns\n> (maybe not real world case).\n\nWhile unlikely to be common, very wide tables aren’t unheard of. Either way, I\nthink it has merit to pull out the PQfnumber before the loop even if it’s a\nwash performance wise for many users, as it’s a pattern used elsewhere in\npg_dump.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Thu, 15 Jul 2021 21:46:36 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Avoid repeated PQfnumber() in pg_dump"
},
{
"msg_contents": "On 7/15/21, 12:48 PM, \"Daniel Gustafsson\" <daniel@yesql.se> wrote:\r\n> While unlikely to be common, very wide tables aren’t unheard of. Either way, I\r\n> think it has merit to pull out the PQfnumber before the loop even if it’s a\r\n> wash performance wise for many users, as it’s a pattern used elsewhere in\r\n> pg_dump.\r\n\r\n+1\r\n\r\nThe patch looks good to me. I am marking it as ready-for-committer.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Thu, 22 Jul 2021 23:39:33 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid repeated PQfnumber() in pg_dump"
},
{
"msg_contents": "> On 23 Jul 2021, at 01:39, Bossart, Nathan <bossartn@amazon.com> wrote:\n\n> The patch looks good to me. I am marking it as ready-for-committer.\n\nI took another look at this today and pushed it after verifying with a pgindent\nrun. Thanks! \n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Fri, 27 Aug 2021 16:26:58 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Avoid repeated PQfnumber() in pg_dump"
},
{
"msg_contents": "On 8/27/21, 7:27 AM, \"Daniel Gustafsson\" <daniel@yesql.se> wrote:\r\n> I took another look at this today and pushed it after verifying with a pgindent\r\n> run. Thanks!\r\n\r\nThank you!\r\n\r\nNathan\r\n\r\n",
"msg_date": "Fri, 27 Aug 2021 16:35:44 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid repeated PQfnumber() in pg_dump"
}
] |
[
{
"msg_contents": "Hi all.\n\nI've come up with a proof-of-concept patch using the delegation/proxy approach.\n\nLet's say we have two DBs, one for the FDW and one for the real server. When a client\nconnects to the FDW server using Kerberos authentication, we can obtain a \"proxy\"\ncredential and store it in the global variable \"MyProcPort->gss->proxy\". This can\nthen be passed to gssapi calls during libpq Kerberos setup when the foreign table\nis queried.\n\nThis removes the need for a keytab file on the FDW server. We will also have to\nrelax the password requirement for the user mapping.\n\nThe big problem here is how to pass the proxy credential from the backend to libpq-fe\nsafely. Because libpq called in postgres_fdw is compiled as a frontend binary, we'd\nbetter not include any backend-related stuff in libpq-fe.\nIn this patch I use a very ugly hack to work around this. First take the pointer address\nof the variable MyProcPort->gss->proxy, convert it to a hex string, and then pass\nit as the libpq option \"gss_proxy_cred\". Any ideas about how to do this in a more\nelegant way?\n\nBest regards,\nPeifeng",
"msg_date": "Wed, 14 Jul 2021 09:30:18 +0000",
"msg_from": "Peifeng Qiu <peifengq@vmware.com>",
"msg_from_op": true,
"msg_subject": "Re: Support kerberos authentication for postgres_fdw"
}
] |
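The hex-pointer hand-off described in the message above can be illustrated with a CPython-specific sketch. This is not the patch's C code: it only shows the round trip of encoding an in-process address as a string option and recovering the object from it (in CPython, `id()` happens to return the object's memory address, which is what makes the trick work here; the credential dict is a hypothetical stand-in).

```python
import ctypes

# Hypothetical stand-in for the credential struct held by the backend.
proxy_cred = {"principal": "client@EXAMPLE.COM"}

# Backend side: encode the in-process address as a hex string, as the
# patch does for its "gss_proxy_cred" libpq option.
gss_proxy_cred_option = format(id(proxy_cred), "x")

# libpq side (same process): parse the hex string back into an address
# and dereference it to reach the original object.
addr = int(gss_proxy_cred_option, 16)
recovered = ctypes.cast(addr, ctypes.py_object).value

assert recovered is proxy_cred  # the same credential, no copy made
```

This only works because both ends share one address space — which is exactly why the thread calls it an ugly hack: the scheme breaks down the moment the string crosses a process boundary.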
[
{
"msg_contents": "Hi\r\n\r\nWhen reading the code of FinishPreparedTransaction, I found that SI messages are sent\r\nwhen executing the ROLLBACK PREPARED command.\r\n\r\nBut according to the AtEOXact_Inval function, we send the SI messages only when committing the transaction.\r\n\r\nSo, I think we needn't send SI messages when rolling back the two-phase transaction.\r\nOr does it have something special because of two-phase transactions?\r\n\r\n\r\nRegards\r\n\r\n",
"msg_date": "Wed, 14 Jul 2021 12:11:15 +0000",
"msg_from": "\"liuhuailing@fujitsu.com\" <liuhuailing@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "SI messages sent when executing ROLLBACK PREPARED command"
},
{
"msg_contents": "\"liuhuailing@fujitsu.com\" <liuhuailing@fujitsu.com> writes:\n> So, I think we needn't send SI messags when rollbacking the two-phase transaction.\n> Or Does it has something special because of two-phase transaction?\n\nHmmm, yeah, I think you're right. It probably doesn't make a big\ndifference in the real world --- anyone who's dependent on the\nperformance of 2PC rollbacks is Doing It Wrong. But we'd have\nalready done LocalExecuteInvalidationMessage when getting out of\nthe prepared transaction, so no other SI invals should be needed.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 14 Jul 2021 13:36:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SI messages sent when executing ROLLBACK PREPARED command"
},
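The commit-only behavior of AtEOXact_Inval that Tom confirms above can be modeled with a toy sketch. This is illustrative Python, not PostgreSQL's inval.c: queued invalidation messages reach the shared queue only when the transaction commits, so a rollback has nothing left to send.

```python
class Transaction:
    # Toy model of end-of-transaction invalidation: messages queued during
    # the transaction are broadcast on commit and discarded on abort,
    # mirroring the commit-only path discussed in this thread.
    def __init__(self, shared_queue):
        self.shared_queue = shared_queue  # stands in for the SI queue
        self.pending = []

    def register_invalidation(self, msg):
        self.pending.append(msg)

    def end(self, is_commit):
        if is_commit:
            self.shared_queue.extend(self.pending)
        self.pending = []  # aborts simply drop the queued messages

queue = []
t1 = Transaction(queue)
t1.register_invalidation("relcache: some_table")
t1.end(is_commit=False)  # rollback: nothing is sent
assert queue == []

t2 = Transaction(queue)
t2.register_invalidation("catcache: pg_proc")
t2.end(is_commit=True)   # commit: the message reaches other backends
assert queue == ["catcache: pg_proc"]
```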
{
"msg_contents": "Hi, tom\r\n\r\nThanks for your reply.\r\n\r\n>Hmmm, yeah, I think you're right. It probably doesn't make a big difference in the real world --- anyone who's dependent on the performance of 2PC rollbaxks is Doing It Wrong. \r\n> But we'd have already done LocalExecuteInvalidationMessage when getting out of the prepared transaction, so no other SI invals should be needed.\r\nYes, it does not cause any error.\r\n\r\nBut for beginners reading the code, it may be confusing.\r\nAnd for developers building on this code, it may lead to unnecessary handling being added.\r\n\r\nSo, I think it is better to optimize the code.\r\n\r\nHere is the patch.\r\n\r\nRegards, liuhl\r\n\r\n-----Original Message-----\r\nFrom: Tom Lane <tgl@sss.pgh.pa.us> \r\nSent: Thursday, July 15, 2021 1:36 AM\r\nTo: Liu, Huailing/刘 怀玲 <liuhuailing@fujitsu.com>\r\nCc: pgsql-hackers@postgresql.org\r\nSubject: Re: SI messages sent when excuting ROLLBACK PREPARED command\r\n\r\n\"liuhuailing@fujitsu.com\" <liuhuailing@fujitsu.com> writes:\r\n> So, I think we needn't send SI messags when rollbacking the two-phase transaction.\r\n> Or Does it has something special because of two-phase transaction?\r\n\r\nHmmm, yeah, I think you're right. It probably doesn't make a big difference in the real world --- anyone who's dependent on the performance of 2PC rollbaxks is Doing It Wrong. But we'd have already done LocalExecuteInvalidationMessage when getting out of the prepared transaction, so no other SI invals should be needed.\r\n\r\n\t\t\tregards, tom lane",
"msg_date": "Thu, 15 Jul 2021 03:04:31 +0000",
"msg_from": "\"liuhuailing@fujitsu.com\" <liuhuailing@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: SI messages sent when executing ROLLBACK PREPARED command"
},
{
"msg_contents": "Hi, tom\r\n\r\n> >Hmmm, yeah, I think you're right. It probably doesn't make a big difference in\r\n> the real world --- anyone who's dependent on the performance of 2PC rollbaxks\r\n> is Doing It Wrong.\r\n> > But we'd have already done LocalExecuteInvalidationMessage when getting\r\n> out of the prepared transaction, so no other SI invals should be needed.\r\n> Yes, it does not make any error.\r\n> \r\n> But for the beginner, when understanding the code, it may make confused.\r\n> And for the developer, when developing based on this code, it may make\r\n> unnecessary handling added.\r\n> \r\n> So, I think it is better to optimize the code.\r\n> \r\n> Here is the patch.\r\nThere was a problem with the before patch when testing. \r\nSo resubmit it.\r\n\r\nRegards, Liu Huailing",
"msg_date": "Tue, 3 Aug 2021 09:29:48 +0000",
"msg_from": "\"liuhuailing@fujitsu.com\" <liuhuailing@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: SI messages sent when executing ROLLBACK PREPARED command"
},
{
"msg_contents": "On Tue, Aug 03, 2021 at 09:29:48AM +0000, liuhuailing@fujitsu.com wrote:\n> There was a problem with the before patch when testing. \n> So resubmit it.\n\nFWIW, I see no problems with patch version 1 or 2, as long as you\napply patch version 1 with a command like patch -p2. One thing of\npatch 2 is that git diff --check complains because of a whitespace.\n\nAnyway, I also think that you are right here and that there is no need\nto run this code path with ROLLBACK PREPARED. It is worth noting that\nthe point of Tom about local invalidation messages in PREPARE comes\nfrom PostPrepare_Inval().\n\nI would just tweak the comment block at the top of what's being\nchanged, as per the attached. Please let me know if there are any\nobjections. \n--\nMichael",
"msg_date": "Wed, 11 Aug 2021 15:14:11 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: SI messages sent when executing ROLLBACK PREPARED command"
},
{
"msg_contents": "On Wed, Aug 11, 2021 at 03:14:11PM +0900, Michael Paquier wrote:\n> I would just tweak the comment block at the top of what's being\n> changed, as per the attached. Please let me know if there are any\n> objections. \n\nAnd applied as of 710796f.\n--\nMichael",
"msg_date": "Fri, 13 Aug 2021 21:50:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: SI messages sent when executing ROLLBACK PREPARED command"
},
{
"msg_contents": "> On Wed, Aug 11, 2021 at 03:14:11PM +0900, Michael Paquier wrote:\r\n> > I would just tweak the comment block at the top of what's being\r\n> > changed, as per the attached. Please let me know if there are any\r\n> > objections.\r\n> \r\n> And applied as of 710796f.\r\nThanks for your comment and commit.\r\nI've changed the patch's commit fest status to 'committed'.\r\nhttps://commitfest.postgresql.org/34/3257/\r\n\r\nRegards,\r\nLiu\r\n",
"msg_date": "Thu, 19 Aug 2021 07:52:01 +0000",
"msg_from": "\"liuhuailing@fujitsu.com\" <liuhuailing@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: SI messages sent when executing ROLLBACK PREPARED command"
}
] |
[
{
"msg_contents": "Zhihong Yu <zyu@yugabyte.com> writes:\n> I was looking at fmgr_internal_validator().\n> It seems prosrc is only used internally.\n> The patch frees the C string prosrc points to, prior to returning.\n\nThere's really very little point in adding such code. Our memory\ncontext mechanisms take care of minor leaks like this, with less\ncode and fewer cycles expended than explicit pfree calls require.\nIt's worth trying to clean up explicitly in code that might get\nexecuted many times in a row, or might be allocating very big\ntemporary chunks; but fmgr_internal_validator hardly falls in\nthat category.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 14 Jul 2021 13:17:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: free C string"
},
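Tom's memory-context argument above can be modeled with a minimal sketch. This is illustrative Python, not PostgreSQL's palloc machinery: allocations are owned by a context and reclaimed in bulk at reset, which is why a missing pfree in a short-lived path is a minor, self-cleaning leak rather than a real one.

```python
class MemoryContext:
    # Toy region allocator: every chunk is tracked by its owning context
    # and released in bulk at reset, so a skipped per-chunk free is
    # cleaned up anyway -- the point made about minor leaks above.
    def __init__(self):
        self.chunks = []

    def alloc(self, obj):
        self.chunks.append(obj)
        return obj

    def reset(self):
        freed = len(self.chunks)
        self.chunks.clear()  # one bulk release instead of many pfrees
        return freed

ctx = MemoryContext()
prosrc = ctx.alloc("function body text")  # never explicitly freed
ctx.alloc("scratch buffer")
assert ctx.reset() == 2  # both chunks reclaimed at context reset
assert ctx.chunks == []
```

Explicit per-chunk frees only pay off in hot paths or for very large temporary allocations; everywhere else the bulk reset is both simpler and cheaper.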
{
"msg_contents": "Hi,\nI was looking at fmgr_internal_validator().\n\nIt seems prosrc is only used internally.\n\nThe patch frees the C string prosrc points to, prior to returning.\n\nPlease take a look.\n\nThanks",
"msg_date": "Wed, 14 Jul 2021 10:19:14 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "free C string"
},
{
"msg_contents": "On Wed, Jul 14, 2021 at 10:17 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Zhihong Yu <zyu@yugabyte.com> writes:\n> > I was looking at fmgr_internal_validator().\n> > It seems prosrc is only used internally.\n> > The patch frees the C string prosrc points to, prior to returning.\n>\n> There's really very little point in adding such code. Our memory\n> context mechanisms take care of minor leaks like this, with less\n> code and fewer cycles expended than explicit pfree calls require.\n> It's worth trying to clean up explicitly in code that might get\n> executed many times in a row, or might be allocating very big\n> temporary chunks; but fmgr_internal_validator hardly falls in\n> that category.\n>\n> regards, tom lane\n>\nHi,\nHow about this occurrence which is in a loop ?\n\nThanks",
"msg_date": "Wed, 14 Jul 2021 11:29:23 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: free C string"
},
{
"msg_contents": "Zhihong Yu <zyu@yugabyte.com> writes:\n> On Wed, Jul 14, 2021 at 10:17 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> There's really very little point in adding such code. Our memory\n>> context mechanisms take care of minor leaks like this, with less\n>> code and fewer cycles expended than explicit pfree calls require.\n>> It's worth trying to clean up explicitly in code that might get\n>> executed many times in a row, or might be allocating very big\n>> temporary chunks; but fmgr_internal_validator hardly falls in\n>> that category.\n\n> How about this occurrence which is in a loop ?\n\nI'd say the burden is on you to prove that it's worth worrying\nabout, not vice versa. If we added pfree everywhere we possibly\ncould, the code would be larger and slower, not faster.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 14 Jul 2021 15:36:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: free C string"
}
] |
[
{
"msg_contents": "Hi,\n\nIt looks like the commit d75288fb [1] added an unnecessary\nAssert(PgArchPID == 0); in PostmasterStateMachine as the if block code\ngets hit only when PgArchPID == 0. PSA small patch.\n\n[1]\ncommit d75288fb27b8fe0a926aaab7d75816f091ecdc27\nAuthor: Fujii Masao <fujii@postgresql.org>\nDate: Mon Mar 15 13:13:14 2021 +0900\n\n Make archiver process an auxiliary process.\n\nRegards,\nBharath Rupireddy.",
"msg_date": "Wed, 14 Jul 2021 23:38:59 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Remove redundant Assert(PgArchPID == 0); in PostmasterStateMachine"
},
{
"msg_contents": "On Wed, Jul 14, 2021 at 11:38:59PM +0530, Bharath Rupireddy wrote:\n> It looks like the commit d75288fb [1] added an unnecessary\n> Assert(PgArchPID == 0); in PostmasterStateMachine as the if block code\n> gets hit only when PgArchPID == 0. PSA small patch.\n\nAgreed that there is no need to keep that around. Will fix.\n--\nMichael",
"msg_date": "Thu, 15 Jul 2021 11:21:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Remove redundant Assert(PgArchPID == 0); in\n PostmasterStateMachine"
},
{
"msg_contents": "\n\nOn 2021/07/15 11:21, Michael Paquier wrote:\n> On Wed, Jul 14, 2021 at 11:38:59PM +0530, Bharath Rupireddy wrote:\n>> It looks like the commit d75288fb [1] added an unnecessary\n>> Assert(PgArchPID == 0); in PostmasterStateMachine as the if block code\n>> gets hit only when PgArchPID == 0. PSA small patch.\n\nGood catch, Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 15 Jul 2021 11:49:33 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove redundant Assert(PgArchPID == 0); in\n PostmasterStateMachine"
},
{
"msg_contents": "On Thu, Jul 15, 2021 at 11:49:33AM +0900, Fujii Masao wrote:\n> Good catch, Thanks!\n\nDone while I was on it.\n--\nMichael",
"msg_date": "Thu, 15 Jul 2021 16:02:48 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Remove redundant Assert(PgArchPID == 0); in\n PostmasterStateMachine"
}
] |
[
{
"msg_contents": "Upgrading from\n10.5 to 13.3 using pg_upgrade -k\n\nThe following is the result of an upgrade\n\nselect * from pg_extension ;\n oid | extname | extowner | extnamespace | extrelocatable |\nextversion | extconfig | extcondition\n-------+--------------------+----------+--------------+----------------+------------+-----------+--------------\n 12910 | plpgsql | 10 | 11 | f |\n1.0 | |\n 16403 | pg_stat_statements | 10 | 2200 | t |\n1.5 | |\n(2 rows)\n\ntest=# \\df+ pg_stat_statements_reset\n\n List of functions\n Schema | Name | Result data type | Argument data types\n| Type | Volatility | Parallel | Owner | Security | Access privileges\n | Language | Source code | Description\n--------+--------------------------+------------------+---------------------+------+------------+----------+-------+----------+---------------------------+----------+--------------------------+-------------\n public | pg_stat_statements_reset | void |\n | func | volatile | safe | davec | invoker | davec=X/davec\n +| c | pg_stat_statements_reset |\n | | |\n | | | | | |\npg_read_all_stats=X/davec | | |\n(1 row)\n\nAnd this is from creating the extension in a new db on the same instance\n\nfoo=# select * from pg_extension ;\n oid | extname | extowner | extnamespace | extrelocatable |\nextversion | extconfig | extcondition\n-------+--------------------+----------+--------------+----------------+------------+-----------+--------------\n 12910 | plpgsql | 10 | 11 | f |\n1.0 | |\n 16393 | pg_stat_statements | 10 | 2200 | t |\n1.8 | |\n(2 rows)\n\nfoo=# \\df+ pg_stat_statements_reset\n\n List of functions\n Schema | Name | Result data type |\n Argument data types | Type | Volatility |\nParallel | Owner | Security | Access privileges | Language | Source\ncode | 
Description\n--------+--------------------------+------------------+--------------------------------------------------------------------+------+------------+----------+-------+----------+-------------------+----------+------------------------------+-------------\n public | pg_stat_statements_reset | void | userid oid DEFAULT\n0, dbid oid DEFAULT 0, queryid bigint DEFAULT 0 | func | volatile | safe\n | davec | invoker | davec=X/davec | c |\npg_stat_statements_reset_1_7 |\n(1 row)\n\nNotice the upgraded version is 1.5 and the new version is 1.8\n\nI would think somewhere in the upgrade of the schema there should have been\na create extension pg_stat_statements ?\n\nDave\nDave Cramer",
"msg_date": "Wed, 14 Jul 2021 14:38:49 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Wednesday, July 14, 2021, Dave Cramer <davecramer@gmail.com> wrote:\n\n>\n>\n> Notice the upgraded version is 1.5 and the new version is 1.8\n>\n> I would think somewhere in the upgrade of the schema there should have\n> been a create extension pg_stat_statements ?\n>\n\nThat would be a faulty assumption. Modules do not get upgraded during a\nserver version upgrade. This is a good thing, IMO.\n\nDavid J.",
"msg_date": "Wed, 14 Jul 2021 11:47:47 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Wed, 14 Jul 2021 at 14:47, David G. Johnston <david.g.johnston@gmail.com>\nwrote:\n\n> On Wednesday, July 14, 2021, Dave Cramer <davecramer@gmail.com> wrote:\n>\n>>\n>>\n>> Notice the upgraded version is 1.5 and the new version is 1.8\n>>\n>> I would think somewhere in the upgrade of the schema there should have\n>> been a create extension pg_stat_statements ?\n>>\n>\n> That would be a faulty assumption. Modules do not get upgraded during a\n> server version upgrade. This is a good thing, IMO.\n>\n\nThis is from the documentation of pg_upgrade\n\nInstall any custom shared object files (or DLLs) used by the old cluster\ninto the new cluster, e.g., pgcrypto.so, whether they are from contrib or\nsome other source. Do not install the schema definitions, e.g., CREATE\nEXTENSION pgcrypto, because these will be upgraded from the old cluster.\nAlso, any custom full text search files (dictionary, synonym, thesaurus,\nstop words) must also be copied to the new cluster.\n\nIf indeed modules do not get upgraded then the above is confusing at best,\nand misleading at worst.\n\nDave\n\n\n> David J.\n>\n>",
"msg_date": "Wed, 14 Jul 2021 14:58:45 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Wed, Jul 14, 2021 at 11:59 AM Dave Cramer <davecramer@gmail.com> wrote:\n\n>\n>\n> On Wed, 14 Jul 2021 at 14:47, David G. Johnston <\n> david.g.johnston@gmail.com> wrote:\n>\n>> On Wednesday, July 14, 2021, Dave Cramer <davecramer@gmail.com> wrote:\n>>\n>>>\n>>>\n>>> Notice the upgraded version is 1.5 and the new version is 1.8\n>>>\n>>> I would think somewhere in the upgrade of the schema there should have\n>>> been a create extension pg_stat_statements ?\n>>>\n>>\n>> That would be a faulty assumption. Modules do not get upgraded during a\n>> server version upgrade. This is a good thing, IMO.\n>>\n>\n> This is from the documentation of pg_upgrade\n>\n> Install any custom shared object files (or DLLs) used by the old cluster\n> into the new cluster, e.g., pgcrypto.so, whether they are from contrib or\n> some other source. Do not install the schema definitions, e.g., CREATE\n> EXTENSION pgcrypto, because these will be upgraded from the old cluster.\n> Also, any custom full text search files (dictionary, synonym, thesaurus,\n> stop words) must also be copied to the new cluster.\n>\n> If indeed modules do not get upgraded then the above is confusing at best,\n> and misleading at worst.\n>\n>\n\"Install ... files used by the old cluster\" (which must be binary\ncompatible with the new cluster as noted elsewhere on that page) supports\nthe claim that it is the old cluster's version that is going to result.\nBut I agree that saying \"because these will be upgraded from the old\ncluster\" is poorly worded and should be fixed to be more precise here.\n\nSomething like, \"... because the installed extensions will be copied from\nthe old cluster during the upgrade.\"\n\nDavid J.",
"msg_date": "Wed, 14 Jul 2021 12:09:20 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Wed, 14 Jul 2021 at 15:09, David G. Johnston <david.g.johnston@gmail.com>\nwrote:\n\n> On Wed, Jul 14, 2021 at 11:59 AM Dave Cramer <davecramer@gmail.com> wrote:\n>\n>>\n>>\n>> On Wed, 14 Jul 2021 at 14:47, David G. Johnston <\n>> david.g.johnston@gmail.com> wrote:\n>>\n>>> On Wednesday, July 14, 2021, Dave Cramer <davecramer@gmail.com> wrote:\n>>>\n>>>>\n>>>>\n>>>> Notice the upgraded version is 1.5 and the new version is 1.8\n>>>>\n>>>> I would think somewhere in the upgrade of the schema there should have\n>>>> been a create extension pg_stat_statements ?\n>>>>\n>>>\n>>> That would be a faulty assumption. Modules do not get upgraded during a\n>>> server version upgrade. This is a good thing, IMO.\n>>>\n>>\n>> This is from the documentation of pg_upgrade\n>>\n>> Install any custom shared object files (or DLLs) used by the old cluster\n>> into the new cluster, e.g., pgcrypto.so, whether they are from contrib or\n>> some other source. Do not install the schema definitions, e.g., CREATE\n>> EXTENSION pgcrypto, because these will be upgraded from the old cluster.\n>> Also, any custom full text search files (dictionary, synonym, thesaurus,\n>> stop words) must also be copied to the new cluster.\n>>\n>> If indeed modules do not get upgraded then the above is confusing at\n>> best, and misleading at worst.\n>>\n>>\n> \"Install ... files used by the old cluster\" (which must be binary\n> compatible with the new cluster as noted elsewhere on that page) supports\n> the claim that it is the old cluster's version that is going to result.\n> But I agree that saying \"because these will be upgraded from the old\n> cluster\" is poorly worded and should be fixed to be more precise here.\n>\n> Something like, \"... because the installed extensions will be copied from\n> the old cluster during the upgrade.\"\n>\n\nThis is still rather opaque. 
Without intimate knowledge of what changes\nhave occurred in each extension I have installed; how would I know what I\nhave to fix after the upgrade.\n\nSeems to me extensions should either store some information in pg_extension\nto indicate compatibility, or they should have some sort of upgrade script\nwhich pg_upgrade would call to fix any problems (yes, I realize this is\nhand waving at the moment)\n\nIn this example the older version of pg_stat_statements works fine, it only\nfails when I do a dump restore of the new database and then the error is\nrather obtuse. IIRC pg_dump wanted to revoke all from public from the\nfunction pg_stat_statements_reset() and that could not be found, yet the\nfunction is there. I don't believe we should be surprising our users like\nthis.\n\nDave\n\n>\n> David J.\n>\n>",
"msg_date": "Wed, 14 Jul 2021 15:21:40 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Wed, Jul 14, 2021 at 12:21 PM Dave Cramer <davecramer@gmail.com> wrote:\n\n>\n> On Wed, 14 Jul 2021 at 15:09, David G. Johnston <\n> david.g.johnston@gmail.com> wrote:\n>\n>>\n>> Something like, \"... because the installed extensions will be copied from\n>> the old cluster during the upgrade.\"\n>>\n>\n> This is still rather opaque. Without intimate knowledge of what changes\n> have occurred in each extension I have installed; how would I know what I\n> have to fix after the upgrade.\n>\n\nThe point of this behavior is that you don't have to fix anything after an\nupgrade - so long as your current extension version works on the new\ncluster. If you are upgrading in such a way that the current extension and\nnew cluster are not compatible you need to not do that. Upgrade instead to\na lesser version where they are compatible. Then upgrade your extension to\nits newer version, changing any required user code that such an upgrade\nrequires, then upgrade the server again.\n\nDavid J.\n\nOn Wed, Jul 14, 2021 at 12:21 PM Dave Cramer <davecramer@gmail.com> wrote:On Wed, 14 Jul 2021 at 15:09, David G. Johnston <david.g.johnston@gmail.com> wrote:Something like, \"... because the installed extensions will be copied from the old cluster during the upgrade.\"This is still rather opaque. Without intimate knowledge of what changes have occurred in each extension I have installed; how would I know what I have to fix after the upgrade. The point of this behavior is that you don't have to fix anything after an upgrade - so long as your current extension version works on the new cluster. If you are upgrading in such a way that the current extension and new cluster are not compatible you need to not do that. Upgrade instead to a lesser version where they are compatible. Then upgrade your extension to its newer version, changing any required user code that such an upgrade requires, then upgrade the server again.David J.",
"msg_date": "Wed, 14 Jul 2021 12:33:42 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "Dave Cramer <davecramer@gmail.com> writes:\n> On Wed, 14 Jul 2021 at 15:09, David G. Johnston <david.g.johnston@gmail.com>\n> wrote:\n>> \"Install ... files used by the old cluster\" (which must be binary\n>> compatible with the new cluster as noted elsewhere on that page) supports\n>> the claim that it is the old cluster's version that is going to result.\n>> But I agree that saying \"because these will be upgraded from the old\n>> cluster\" is poorly worded and should be fixed to be more precise here.\n>> \n>> Something like, \"... because the installed extensions will be copied from\n>> the old cluster during the upgrade.\"\n\n> This is still rather opaque. Without intimate knowledge of what changes\n> have occurred in each extension I have installed; how would I know what I\n> have to fix after the upgrade.\n\nThat's exactly why we don't force upgrades of extensions. It is on the\nuser's head to upgrade their extensions from time to time, but we don't\nmake them do it as part of pg_upgrade. (There are also some\nimplementation-level reasons to avoid this, IIRC, but the overall\nchoice is intentional.)\n\nI agree this documentation could be worded better. Another idea\nis that possibly pg_upgrade could produce a list of extensions\nthat are not the latest version, so people know what's left to\nbe addressed.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 14 Jul 2021 15:43:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "Dave Cramer\n\n\nOn Wed, 14 Jul 2021 at 15:43, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Dave Cramer <davecramer@gmail.com> writes:\n> > On Wed, 14 Jul 2021 at 15:09, David G. Johnston <\n> david.g.johnston@gmail.com>\n> > wrote:\n> >> \"Install ... files used by the old cluster\" (which must be binary\n> >> compatible with the new cluster as noted elsewhere on that page)\n> supports\n> >> the claim that it is the old cluster's version that is going to result.\n> >> But I agree that saying \"because these will be upgraded from the old\n> >> cluster\" is poorly worded and should be fixed to be more precise here.\n> >>\n> >> Something like, \"... because the installed extensions will be copied\n> from\n> >> the old cluster during the upgrade.\"\n>\n> > This is still rather opaque. Without intimate knowledge of what changes\n> > have occurred in each extension I have installed; how would I know what I\n> > have to fix after the upgrade.\n>\n> That's exactly why we don't force upgrades of extensions. It is on the\n> user's head to upgrade their extensions from time to time, but we don't\n> make them do it as part of pg_upgrade. (There are also some\n> implementation-level reasons to avoid this, IIRC, but the overall\n> choice is intentional.)\n>\n> I agree this documentation could be worded better.\n\n\nAs a first step I propose the following\n\ndiff --git a/doc/src/sgml/ref/pgupgrade.sgml\nb/doc/src/sgml/ref/pgupgrade.sgml\nindex a83c63cd98..f747a4473a 100644\n--- a/doc/src/sgml/ref/pgupgrade.sgml\n+++ b/doc/src/sgml/ref/pgupgrade.sgml\n@@ -305,9 +305,10 @@ make prefix=/usr/local/pgsql.new install\n Install any custom shared object files (or DLLs) used by the old\ncluster\n into the new cluster, e.g., <filename>pgcrypto.so</filename>,\n whether they are from <filename>contrib</filename>\n- or some other source. 
Do not install the schema definitions, e.g.,\n- <command>CREATE EXTENSION pgcrypto</command>, because these will be\nupgraded\n- from the old cluster.\n+ or some other source. Do not execute CREATE EXTENSION on the new\ncluster.\n+ The extensions will be upgraded from the old cluster. However it may\nbe\n+ necessary to recreate the extension on the new server after the\nupgrade\n+ to ensure compatibility with the new library.\n Also, any custom full text search files (dictionary, synonym,\n thesaurus, stop words) must also be copied to the new cluster.\n </para>\n\n\n> Another idea\n> is that possibly pg_upgrade could produce a list of extensions\n> that are not the latest version, so people know what's left to\n> be addressed.\n>\n\nIt would be possible to look at the control files in the new cluster to see\nthe default version and simply output a file with the differences.\nWe can query pg_extension for the currently installed versions.\n\nDave\n\n>\n>",
"msg_date": "Thu, 15 Jul 2021 07:40:54 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Thursday, July 15, 2021, Dave Cramer <davecramer@gmail.com> wrote:\n\n>\n> As a first step I propose the following\n>\n> diff --git a/doc/src/sgml/ref/pgupgrade.sgml b/doc/src/sgml/ref/pgupgrade.\n> sgml\n> index a83c63cd98..f747a4473a 100644\n> --- a/doc/src/sgml/ref/pgupgrade.sgml\n> +++ b/doc/src/sgml/ref/pgupgrade.sgml\n> @@ -305,9 +305,10 @@ make prefix=/usr/local/pgsql.new install\n> Install any custom shared object files (or DLLs) used by the old\n> cluster\n> into the new cluster, e.g., <filename>pgcrypto.so</filename>,\n> whether they are from <filename>contrib</filename>\n> - or some other source. Do not install the schema definitions, e.g.,\n> - <command>CREATE EXTENSION pgcrypto</command>, because these will be\n> upgraded\n> - from the old cluster.\n> + or some other source. Do not execute CREATE EXTENSION on the new\n> cluster.\n> + The extensions will be upgraded from the old cluster. However it may\n> be\n> + necessary to recreate the extension on the new server after the\n> upgrade\n> + to ensure compatibility with the new library.\n> Also, any custom full text search files (dictionary, synonym,\n> thesaurus, stop words) must also be copied to the new cluster.\n> </para>\n>\n>\n\nI think this needs some work to distinguish between core extensions where\nwe know the new server already has a library installed and external\nextensions where it’s expected that the library that is added to the new\ncluster is compatible with the version being migrated (not upgraded). In\nshort, it should never be necessary to recreate the extension. 
My\nuncertainty revolves around core extensions since it seems odd to tell the\nuser to overwrite them with versions from an older version of PostgreSQL.\n\nDavid J.",
"msg_date": "Thu, 15 Jul 2021 08:01:34 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Thu, 15 Jul 2021 at 11:01, David G. Johnston <david.g.johnston@gmail.com>\nwrote:\n\n> On Thursday, July 15, 2021, Dave Cramer <davecramer@gmail.com> wrote:\n>\n>>\n>> As a first step I propose the following\n>>\n>> diff --git a/doc/src/sgml/ref/pgupgrade.sgml\n>> b/doc/src/sgml/ref/pgupgrade.sgml\n>> index a83c63cd98..f747a4473a 100644\n>> --- a/doc/src/sgml/ref/pgupgrade.sgml\n>> +++ b/doc/src/sgml/ref/pgupgrade.sgml\n>> @@ -305,9 +305,10 @@ make prefix=/usr/local/pgsql.new install\n>> Install any custom shared object files (or DLLs) used by the old\n>> cluster\n>> into the new cluster, e.g., <filename>pgcrypto.so</filename>,\n>> whether they are from <filename>contrib</filename>\n>> - or some other source. Do not install the schema definitions, e.g.,\n>> - <command>CREATE EXTENSION pgcrypto</command>, because these will be\n>> upgraded\n>> - from the old cluster.\n>> + or some other source. Do not execute CREATE EXTENSION on the new\n>> cluster.\n>> + The extensions will be upgraded from the old cluster. However it\n>> may be\n>> + necessary to recreate the extension on the new server after the\n>> upgrade\n>> + to ensure compatibility with the new library.\n>> Also, any custom full text search files (dictionary, synonym,\n>> thesaurus, stop words) must also be copied to the new cluster.\n>> </para>\n>>\n>>\n>\n> I think this needs some work to distinguish between core extensions where\n> we know the new server already has a library installed and external\n> extensions where it’s expected that the library that is added to the new\n> cluster is compatible with the version being migrated (not upgraded). In\n> short, it should never be necessary to recreate the extension. 
My\n> uncertainty revolves around core extensions since it seems odd to tell the\n> user to overwrite them with versions from an older version of PostgreSQL.\n>\n\n\nWell clearly my suggestion was not clear if you interpreted that as over\nwriting them with versions from an older version of PostgreSQL.\n\n\nDave Cramer\n\n>\n>",
"msg_date": "Thu, 15 Jul 2021 11:09:28 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Thursday, July 15, 2021, David G. Johnston <david.g.johnston@gmail.com>\nwrote:\n\n> On Thursday, July 15, 2021, Dave Cramer <davecramer@gmail.com> wrote:\n>\n>>\n>> Install any custom shared object files (or DLLs) used by the old\n>> cluster\n>> into the new cluster, e.g., <filename>pgcrypto.so</filename>,\n>> whether they are from <filename>contrib</filename>\n>> - or some other source.\n>> However it may be\n>> + necessary to recreate the extension on the new server after the\n>> upgrade\n>> + to ensure compatibility with the new library.\n>>\n>>\n>\n> My uncertainty revolves around core extensions since it seems odd to tell\n> the user to overwrite them with versions from an older version of\n> PostgreSQL.\n>\n\nOk. Just re-read the docs a third time…no uncertainty regarding contrib\nnow…following the first part of the instructions means that before one\ncould re-run create extension they would need to restore the original\ncontrib library files to avoid the new extension code using the old\nlibrary. So that whole part about recreation is inconsistent with the\nexisting unchanged text.\n\nDavid J.\n\nOn Thursday, July 15, 2021, David G. Johnston <david.g.johnston@gmail.com> wrote:On Thursday, July 15, 2021, Dave Cramer <davecramer@gmail.com> wrote: Install any custom shared object files (or DLLs) used by the old cluster into the new cluster, e.g., <filename>pgcrypto.so</filename>, whether they are from <filename>contrib</filename>- or some other source.However it may be+ necessary to recreate the extension on the new server after the upgrade+ to ensure compatibility with the new library. My uncertainty revolves around core extensions since it seems odd to tell the user to overwrite them with versions from an older version of PostgreSQL.Ok. 
Just re-read the docs a third time…no uncertainty regarding contrib now…following the first part of the instructions means that before one could re-run create extension they would need to restore the original contrib library files to avoid the new extension code using the old library. So that whole part about recreation is inconsistent with the existing unchanged text.David J.",
"msg_date": "Thu, 15 Jul 2021 08:15:57 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Thursday, July 15, 2021, Dave Cramer <davecramer@gmail.com> wrote:\n\n> Well clearly my suggestion was not clear if you interpreted that as over\n> writing them with versions from an older version of PostgreSQL.\n>\n>>\n>>\nIgnoring my original interpretation as being moot; the section immediately\npreceding your edit says to do exactly that.\n\nDavid J.\n\nOn Thursday, July 15, 2021, Dave Cramer <davecramer@gmail.com> wrote:Well clearly my suggestion was not clear if you interpreted that as over writing them with versions from an older version of PostgreSQL.Ignoring my original interpretation as being moot; the section immediately preceding your edit says to do exactly that.David J.",
"msg_date": "Thu, 15 Jul 2021 08:17:27 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Thu, 15 Jul 2021 at 11:15, David G. Johnston <david.g.johnston@gmail.com>\nwrote:\n\n> On Thursday, July 15, 2021, David G. Johnston <david.g.johnston@gmail.com>\n> wrote:\n>\n>> On Thursday, July 15, 2021, Dave Cramer <davecramer@gmail.com> wrote:\n>>\n>>>\n>>> Install any custom shared object files (or DLLs) used by the old\n>>> cluster\n>>> into the new cluster, e.g., <filename>pgcrypto.so</filename>,\n>>> whether they are from <filename>contrib</filename>\n>>> - or some other source.\n>>> However it may be\n>>> + necessary to recreate the extension on the new server after the\n>>> upgrade\n>>> + to ensure compatibility with the new library.\n>>>\n>>>\n>>\n>> My uncertainty revolves around core extensions since it seems odd to\n>> tell the user to overwrite them with versions from an older version of\n>> PostgreSQL.\n>>\n>\n> Ok. Just re-read the docs a third time…no uncertainty regarding contrib\n> now…following the first part of the instructions means that before one\n> could re-run create extension they would need to restore the original\n> contrib library files to avoid the new extension code using the old\n> library. So that whole part about recreation is inconsistent with the\n> existing unchanged text.\n>\n>\nThe way I solved the original problem of having old function definitions\nfor pg_stat_statement functions in the *new* library was by recreating the\nextension which presumably redefines the functions correctly.\n\nI'm thinking at this point we need something a bit more sophisticated like\n\nALTER EXTENSION ... UPGRADE. And the extension knows how to upgrade itself.\n\nDave\n\n\n>\n>\n>\n\nOn Thu, 15 Jul 2021 at 11:15, David G. Johnston <david.g.johnston@gmail.com> wrote:On Thursday, July 15, 2021, David G. 
Johnston <david.g.johnston@gmail.com> wrote:On Thursday, July 15, 2021, Dave Cramer <davecramer@gmail.com> wrote: Install any custom shared object files (or DLLs) used by the old cluster into the new cluster, e.g., <filename>pgcrypto.so</filename>, whether they are from <filename>contrib</filename>- or some other source.However it may be+ necessary to recreate the extension on the new server after the upgrade+ to ensure compatibility with the new library. My uncertainty revolves around core extensions since it seems odd to tell the user to overwrite them with versions from an older version of PostgreSQL.Ok. Just re-read the docs a third time…no uncertainty regarding contrib now…following the first part of the instructions means that before one could re-run create extension they would need to restore the original contrib library files to avoid the new extension code using the old library. So that whole part about recreation is inconsistent with the existing unchanged text.The way I solved the original problem of having old function definitions for pg_stat_statement functions in the *new* library was by recreating the extension which presumably redefines the functions correctly. I'm thinking at this point we need something a bit more sophisticated likeALTER EXTENSION ... UPGRADE. And the extension knows how to upgrade itself.Dave",
"msg_date": "Thu, 15 Jul 2021 11:21:59 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Thursday, July 15, 2021, Dave Cramer <davecramer@gmail.com> wrote:\n\n>\n> I'm thinking at this point we need something a bit more sophisticated like\n>\n> ALTER EXTENSION ... UPGRADE. And the extension knows how to upgrade itself.\n>\n\nI’m not familiar with what hoops extensions jump through to facilitate\nupgrades but even if it was as simple as “create extension upgrade” I\nwouldn’t have pg_upgrade execute that command (or at least not by\ndefault). I would maybe have pg_upgrade help move the libraries over from\nthe old server (and we must be dealing with different databases having\ndifferent extension versions in some manner…).\n\nDavid J.\n\nOn Thursday, July 15, 2021, Dave Cramer <davecramer@gmail.com> wrote:I'm thinking at this point we need something a bit more sophisticated likeALTER EXTENSION ... UPGRADE. And the extension knows how to upgrade itself.I’m not familiar with what hoops extensions jump through to facilitate upgrades but even if it was as simple as “create extension upgrade” I wouldn’t have pg_upgrade execute that command (or at least not by default). I would maybe have pg_upgrade help move the libraries over from the old server (and we must be dealing with different databases having different extension versions in some manner…).David J.",
"msg_date": "Thu, 15 Jul 2021 08:29:48 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Thu, 15 Jul 2021 at 11:29, David G. Johnston <david.g.johnston@gmail.com>\nwrote:\n\n> On Thursday, July 15, 2021, Dave Cramer <davecramer@gmail.com> wrote:\n>\n>>\n>> I'm thinking at this point we need something a bit more sophisticated like\n>>\n>> ALTER EXTENSION ... UPGRADE. And the extension knows how to upgrade\n>> itself.\n>>\n>\n> I’m not familiar with what hoops extensions jump through to facilitate\n> upgrades but even if it was as simple as “create extension upgrade” I\n> wouldn’t have pg_upgrade execute that command (or at least not by\n> default). I would maybe have pg_upgrade help move the libraries over from\n> the old server (and we must be dealing with different databases having\n> different extension versions in some manner…).\n>\n\nWell IMHO the status quo is terrible. Perhaps you have a suggestion on how\nto make it better ?\n\nDave\n\n\n>\n> David J.\n>\n>\n\nOn Thu, 15 Jul 2021 at 11:29, David G. Johnston <david.g.johnston@gmail.com> wrote:On Thursday, July 15, 2021, Dave Cramer <davecramer@gmail.com> wrote:I'm thinking at this point we need something a bit more sophisticated likeALTER EXTENSION ... UPGRADE. And the extension knows how to upgrade itself.I’m not familiar with what hoops extensions jump through to facilitate upgrades but even if it was as simple as “create extension upgrade” I wouldn’t have pg_upgrade execute that command (or at least not by default). I would maybe have pg_upgrade help move the libraries over from the old server (and we must be dealing with different databases having different extension versions in some manner…).Well IMHO the status quo is terrible. Perhaps you have a suggestion on how to make it better ?Dave David J.",
"msg_date": "Thu, 15 Jul 2021 11:43:20 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On 2021-Jul-15, Dave Cramer wrote:\n\n> Well IMHO the status quo is terrible. Perhaps you have a suggestion on how\n> to make it better ?\n\nI thought the suggestion of having pg_upgrade emit a file with a list of\nall extensions needing upgrade in each database was a fairly decent one.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 15 Jul 2021 12:11:38 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On 2021-Jul-15, Alvaro Herrera wrote:\n\n> On 2021-Jul-15, Dave Cramer wrote:\n> \n> > Well IMHO the status quo is terrible. Perhaps you have a suggestion on how\n> > to make it better ?\n> \n> I thought the suggestion of having pg_upgrade emit a file with a list of\n> all extensions needing upgrade in each database was a fairly decent one.\n\nEh, and \n pg_upgrade [other switches] --upgrade-extensions\nsounds good too ...\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 15 Jul 2021 12:13:22 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Thu, Jul 15, 2021 at 8:43 AM Dave Cramer <davecramer@gmail.com> wrote:\n\n>\n> On Thu, 15 Jul 2021 at 11:29, David G. Johnston <\n> david.g.johnston@gmail.com> wrote:\n>\n>>\n>> I’m not familiar with what hoops extensions jump through to facilitate\n>> upgrades but even if it was as simple as “create extension upgrade” I\n>> wouldn’t have pg_upgrade execute that command (or at least not by\n>> default). I would maybe have pg_upgrade help move the libraries over from\n>> the old server (and we must be dealing with different databases having\n>> different extension versions in some manner…).\n>>\n>\n> Well IMHO the status quo is terrible. Perhaps you have a suggestion on how\n> to make it better ?\n>\n\nTo a certain extent it is beyond pg_upgrade's purview to care about\nextension explicitly - it considers them \"data\" on the database side and\ncopies over the schema and, within reason, punts on the filesystem by\nsaying \"ensure that the existing versions of your extensions in the old\ncluster can correctly run in the new cluster\" (which basically just takes a\nsimple file copy/install and the assumption you are upgrading to a server\nversion that is supported by the extension in question - also a reasonable\nrequirement). In short, I don't have a suggestion on how to improve that\nand don't really consider it a terrible flaw in pg_upgrade.\n\nI'll readily admit that I lack sufficient knowledge here to make such\nsuggestions as I don't hold any optionions that things are \"quite terrible\"\nand haven't been presented with concrete problems to consider alternatives\nfor.\n\nDavid J.\n\nOn Thu, Jul 15, 2021 at 8:43 AM Dave Cramer <davecramer@gmail.com> wrote:On Thu, 15 Jul 2021 at 11:29, David G. 
Johnston <david.g.johnston@gmail.com> wrote:I’m not familiar with what hoops extensions jump through to facilitate upgrades but even if it was as simple as “create extension upgrade” I wouldn’t have pg_upgrade execute that command (or at least not by default). I would maybe have pg_upgrade help move the libraries over from the old server (and we must be dealing with different databases having different extension versions in some manner…).Well IMHO the status quo is terrible. Perhaps you have a suggestion on how to make it better ?To a certain extent it is beyond pg_upgrade's purview to care about extension explicitly - it considers them \"data\" on the database side and copies over the schema and, within reason, punts on the filesystem by saying \"ensure that the existing versions of your extensions in the old cluster can correctly run in the new cluster\" (which basically just takes a simple file copy/install and the assumption you are upgrading to a server version that is supported by the extension in question - also a reasonable requirement). In short, I don't have a suggestion on how to improve that and don't really consider it a terrible flaw in pg_upgrade.I'll readily admit that I lack sufficient knowledge here to make such suggestions as I don't hold any optionions that things are \"quite terrible\" and haven't been presented with concrete problems to consider alternatives for.David J.",
"msg_date": "Thu, 15 Jul 2021 09:15:34 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Thu, 15 Jul 2021 at 12:13, Alvaro Herrera <alvherre@alvh.no-ip.org>\nwrote:\n\n> On 2021-Jul-15, Alvaro Herrera wrote:\n>\n> > On 2021-Jul-15, Dave Cramer wrote:\n> >\n> > > Well IMHO the status quo is terrible. Perhaps you have a suggestion on\n> how\n> > > to make it better ?\n> >\n> > I thought the suggestion of having pg_upgrade emit a file with a list of\n> > all extensions needing upgrade in each database was a fairly decent one.\n>\n> I think this is the minimum we should be doing.\n\n\n> Eh, and\n> pg_upgrade [other switches] --upgrade-extensions\n> sounds good too ...\n>\n\nUltimately I believe this is the solution, however we still need to teach\nextensions how to upgrade themselves or emit a message saying they can't,\nor even ignore if it truly is a NOP.\n\nDave\n\nOn Thu, 15 Jul 2021 at 12:13, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:On 2021-Jul-15, Alvaro Herrera wrote:\n\n> On 2021-Jul-15, Dave Cramer wrote:\n> \n> > Well IMHO the status quo is terrible. Perhaps you have a suggestion on how\n> > to make it better ?\n> \n> I thought the suggestion of having pg_upgrade emit a file with a list of\n> all extensions needing upgrade in each database was a fairly decent one.\nI think this is the minimum we should be doing. \nEh, and \n pg_upgrade [other switches] --upgrade-extensions\nsounds good too ...Ultimately I believe this is the solution, however we still need to teach extensions how to upgrade themselves or emit a message saying they can't, or even ignore if it truly is a NOP.Dave",
"msg_date": "Thu, 15 Jul 2021 12:16:09 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Thu, Jul 15, 2021 at 9:16 AM Dave Cramer <davecramer@gmail.com> wrote:\n\n> Eh, and\n>> pg_upgrade [other switches] --upgrade-extensions\n>> sounds good too ...\n>>\n>\n> Ultimately I believe this is the solution, however we still need to teach\n> extensions how to upgrade themselves or emit a message saying they can't,\n> or even ignore if it truly is a NOP.\n>\n>\nIf it's opt-in and simple I don't really care but I doubt I would use it as\npersonally I'd rather the upgrade not touch my application at all (to the\nextent possible) and just basically promise that I'll get a reliable\nupgrade. Then I'll go ahead and ensure I have the backups of the new\nversion and that my application works correctly, then just run the \"ALTER\nEXTENSION\" myself. But anything that will solve pain points for\nsame-PostgreSQL-version extension upgrading is great.\n\nI would say that it probably should be \"--upgrade-extension=aaa\n--upgrade_extension=bbb\" though if we are going to the effort to offer\nsomething.\n\nDavid J.\n\nOn Thu, Jul 15, 2021 at 9:16 AM Dave Cramer <davecramer@gmail.com> wrote:Eh, and \n pg_upgrade [other switches] --upgrade-extensions\nsounds good too ...Ultimately I believe this is the solution, however we still need to teach extensions how to upgrade themselves or emit a message saying they can't, or even ignore if it truly is a NOP.If it's opt-in and simple I don't really care but I doubt I would use it as personally I'd rather the upgrade not touch my application at all (to the extent possible) and just basically promise that I'll get a reliable upgrade. Then I'll go ahead and ensure I have the backups of the new version and that my application works correctly, then just run the \"ALTER EXTENSION\" myself. 
But anything that will solve pain points for same-PostgreSQL-version extension upgrading is great.I would say that it probably should be \"--upgrade-extension=aaa --upgrade_extension=bbb\" though if we are going to the effort to offer something.David J.",
"msg_date": "Thu, 15 Jul 2021 09:25:37 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Thu, Jul 15, 2021 at 8:43 AM Dave Cramer <davecramer@gmail.com<mailto:davecramer@gmail.com>> wrote:\n\nOn Thu, 15 Jul 2021 at 11:29, David G. Johnston <david.g.johnston@gmail.com<mailto:david.g.johnston@gmail.com>> wrote:\n\nI’m not familiar with what hoops extensions jump through to facilitate upgrades but even if it was as simple as “create extension upgrade” I wouldn’t have pg_upgrade execute that command (or at least not by default). I would maybe have pg_upgrade help move the libraries over from the old server (and we must be dealing with different databases having different extension versions in some manner…).\n\nWell IMHO the status quo is terrible. Perhaps you have a suggestion on how to make it better ?\n\nTo a certain extent it is beyond pg_upgrade's purview to care about extension explicitly - it considers them \"data\" on the database side and copies over the schema and, within reason, punts on the filesystem by saying \"ensure that the existing versions of your extensions in the old cluster can correctly run in the new cluster\" (which basically just takes a simple file copy/install and the assumption you are upgrading to a server version that is supported by the extension in question - also a reasonable requirement). In short, I don't have a suggestion on how to improve that and don't really consider it a terrible flaw in pg_upgrade.\n\n I don’t know if this is a terrible flaw in pg_upgrade, it is a terrible flaw in the overall Postgres experience.\n\n We are currently working through various upgrade processes and it seems like the status quo is:\n\nDrop the extension, upgrade and reinstall\nOR\nUpgrade the cluster then upgrade the extension\n\nThe issue is that it often isn’t clear which path to choose and choosing the wrong path can lead to data loss.\n\nI don’t think it is ok to expect end users to understand when it is an isn’t ok to just drop and recreate and often it\nIsn’t clear in the extension documentation itself. 
I’m not sure what core can/should do about it but it is a major pain.\n\n-- Rob\n\n\n\n\n\n\n\n\n\n\n\n\n\nOn Thu, Jul 15, 2021 at 8:43 AM Dave Cramer <davecramer@gmail.com> wrote:\n\n\n\n\n\n\n \n\n\n\nOn Thu, 15 Jul 2021 at 11:29, David G. Johnston <david.g.johnston@gmail.com> wrote:\n\n\n\n \n\n\nI’m not familiar with what hoops extensions jump through to facilitate upgrades but even if it was as simple as “create extension upgrade” I wouldn’t have pg_upgrade execute that command (or at least not by default). I would maybe have\n pg_upgrade help move the libraries over from the old server (and we must be dealing with different databases having different extension versions in some manner…).\n\n\n\n \n\n\nWell IMHO the status quo is terrible. Perhaps you have a suggestion on how to make it better ?\n\n\n\n\n\n \n\n\nTo a certain extent it is beyond pg_upgrade's purview to care about extension explicitly - it considers them \"data\" on the database side and copies over the schema and, within reason, punts on\n the filesystem by saying \"ensure that the existing versions of your extensions in the old cluster can correctly run in the new cluster\" (which basically just takes a simple file copy/install and the assumption you are upgrading to a server version that is\n supported by the extension in question - also a reasonable requirement). 
In short, I don't have a suggestion on how to improve that and don't really consider it a terrible flaw in pg_upgrade.\n \n\n\n\n\n I don’t know if this is a terrible flaw in pg_upgrade, it is a terrible flaw in the overall Postgres experience.\n\n\n We are currently working through various upgrade processes and it seems like the status quo is:\n \nDrop the extension, upgrade and reinstall\nOR\nUpgrade the cluster then upgrade the extension\n \nThe issue is that it often isn’t clear which path to choose and choosing the wrong path can lead to data loss.\n\n \nI don’t think it is ok to expect end users to understand when it is an isn’t ok to just drop and recreate and often it\nIsn’t clear in the extension documentation itself. I’m not sure what core can/should do about it but it is a major pain.\n\n \n-- Rob",
"msg_date": "Thu, 15 Jul 2021 16:31:04 +0000",
"msg_from": "Robert Eckhardt <eckhardtr@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Thu, 15 Jul 2021 at 12:25, David G. Johnston <david.g.johnston@gmail.com>\nwrote:\n\n> On Thu, Jul 15, 2021 at 9:16 AM Dave Cramer <davecramer@gmail.com> wrote:\n>\n>> Eh, and\n>>> pg_upgrade [other switches] --upgrade-extensions\n>>> sounds good too ...\n>>>\n>>\n>> Ultimately I believe this is the solution, however we still need to teach\n>> extensions how to upgrade themselves or emit a message saying they can't,\n>> or even ignore if it truly is a NOP.\n>>\n>>\n> If it's opt-in and simple I don't really care but I doubt I would use it\n> as personally I'd rather the upgrade not touch my application at all (to\n> the extent possible) and just basically promise that I'll get a reliable\n> upgrade. Then I'll go ahead and ensure I have the backups of the new\n> version and that my application works correctly, then just run the \"ALTER\n> EXTENSION\" myself. But anything that will solve pain points for\n> same-PostgreSQL-version extension upgrading is great.\n>\n\nI may have not communicated this clearly. In this case the application\nworked fine, the extension worked fine. The issue only arose when doing a\ndump and restore of the database and then the only reason it failed was due\nto trying to revoke permissions from pg_stat_statements_reset.\n\nThere may have been other things that were not working correctly but since\nit did not cause any errors it was difficult to know.\n\nAs Robert points out in the next message this is not a particularly great\nuser experience\n\nDave\n\n>\n>\n\nOn Thu, 15 Jul 2021 at 12:25, David G. 
Johnston <david.g.johnston@gmail.com> wrote:On Thu, Jul 15, 2021 at 9:16 AM Dave Cramer <davecramer@gmail.com> wrote:Eh, and \n pg_upgrade [other switches] --upgrade-extensions\nsounds good too ...Ultimately I believe this is the solution, however we still need to teach extensions how to upgrade themselves or emit a message saying they can't, or even ignore if it truly is a NOP.If it's opt-in and simple I don't really care but I doubt I would use it as personally I'd rather the upgrade not touch my application at all (to the extent possible) and just basically promise that I'll get a reliable upgrade. Then I'll go ahead and ensure I have the backups of the new version and that my application works correctly, then just run the \"ALTER EXTENSION\" myself. But anything that will solve pain points for same-PostgreSQL-version extension upgrading is great.I may have not communicated this clearly. In this case the application worked fine, the extension worked fine. The issue only arose when doing a dump and restore of the database and then the only reason it failed was due to trying to revoke permissions from pg_stat_statements_reset.There may have been other things that were not working correctly but since it did not cause any errors it was difficult to know.As Robert points out in the next message this is not a particularly great user experienceDave",
"msg_date": "Thu, 15 Jul 2021 12:35:05 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On 7/15/21 12:25 PM, David G. Johnston wrote:\n\n> I would say that it probably should be \"--upgrade-extension=aaa \n> --upgrade_extension=bbb\" though if we are going to the effort to offer \n> something.\n\nI am a bit confused here. From the previous exchange I get the feeling \nthat you haven't created and maintained a single extension that survived \na single version upgrade of itself or PostgreSQL (in the latter case \nrequiring code changes to the extension due to internal API changes \ninside the PostgreSQL version).\n\nI have. PL/Profiler to be explicit.\n\nCan you please elaborate what experience your opinion is based on?\n\n\nRegards, Jan\n\n\n-- \nJan Wieck\n\n\n",
"msg_date": "Thu, 15 Jul 2021 12:35:55 -0400",
"msg_from": "Jan Wieck <jan@wi3ck.info>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On 7/15/21 12:31 PM, Robert Eckhardt wrote:\n> �I don�t know if this is a terrible flaw in pg_upgrade, it is a \n> terrible flaw in the overall Postgres experience.\n\n+1 (that is the actual problem here)\n\n\n-- \nJan Wieck\n\n\n",
"msg_date": "Thu, 15 Jul 2021 12:38:45 -0400",
"msg_from": "Jan Wieck <jan@wi3ck.info>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Thu, Jul 15, 2021 at 9:36 AM Jan Wieck <jan@wi3ck.info> wrote:\n\n>\n> I am a bit confused here. From the previous exchange I get the feeling\n> that you haven't created and maintained a single extension that survived\n> a single version upgrade of itself or PostgreSQL (in the latter case\n> requiring code changes to the extension due to internal API changes\n> inside the PostgreSQL version).\n>\n\nCorrect.\n\n>\n> I have. PL/Profiler to be explicit.\n>\n> Can you please elaborate what experience your opinion is based on?\n>\n>\nI am an application developer who operates on the principle of \"change only\none thing at a time\".\n\nDavid J.\n\nOn Thu, Jul 15, 2021 at 9:36 AM Jan Wieck <jan@wi3ck.info> wrote:\nI am a bit confused here. From the previous exchange I get the feeling \nthat you haven't created and maintained a single extension that survived \na single version upgrade of itself or PostgreSQL (in the latter case \nrequiring code changes to the extension due to internal API changes \ninside the PostgreSQL version).Correct.\n\nI have. PL/Profiler to be explicit.\n\nCan you please elaborate what experience your opinion is based on?I am an application developer who operates on the principle of \"change only one thing at a time\".David J.",
"msg_date": "Thu, 15 Jul 2021 09:46:08 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On 7/15/21 12:46 PM, David G. Johnston wrote:\n\n> I am an application developer who operates on the principle of \"change \n> only one thing at a time\".\n\nWhich pg_upgrade by definition isn't. It is bringing ALL the changes \nfrom one major version to the target version, which may be multiple at \nonce. Including, but not limited to, catalog schema changes, SQL \nlanguage changes, extension behavior changes and utility command \nbehavior changes.\n\nOn that principle, you should advocate against using pg_upgrade in the \nfirst place.\n\n\nRegards, Jan\n\n-- \nJan Wieck\n\n\n",
"msg_date": "Thu, 15 Jul 2021 12:58:29 -0400",
"msg_from": "Jan Wieck <jan@wi3ck.info>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Thu, Jul 15, 2021 at 9:58 AM Jan Wieck <jan@wi3ck.info> wrote:\n\n> On 7/15/21 12:46 PM, David G. Johnston wrote:\n>\n> > I am an application developer who operates on the principle of \"change\n> > only one thing at a time\".\n>\n> Which pg_upgrade by definition isn't. It is bringing ALL the changes\n> from one major version to the target version, which may be multiple at\n> once. Including, but not limited to, catalog schema changes, SQL\n> language changes, extension behavior changes and utility command\n> behavior changes.\n>\n> On that principle, you should advocate against using pg_upgrade in the\n> first place.\n>\n>\nNot that I use extensions a whole lot (yes, my overall experience here is\nslim) but I would definitely prefer those that allow me to stay on a single\nPostgreSQL major version while migrating between major versions of their\nown product. Extensions that don't fit this model (i.e., choose to treat\ntheir major version as being the same as the major version of PostgreSQL\nthey were developed for) must by necessity be upgraded simultaneously with\nthe PostgreSQL server. But while PostgreSQL doesn't really have a choice\nhere - it cannot be expected to subdivide itself - extensions (at least\nexternal ones - PostGIS is one I have in mind presently) - can and often do\nattempt to support multiple versions of PostgreSQL for whatever major\nversions of their product they are offering. For these it is possible to\nadhere to the \"change one thing at a time principle\" and to treat the\nextensions as not being part of \"ALL the changes from one major version to\nthe target version...\"\n\nDavid J.\n\nOn Thu, Jul 15, 2021 at 9:58 AM Jan Wieck <jan@wi3ck.info> wrote:On 7/15/21 12:46 PM, David G. Johnston wrote:\n\n> I am an application developer who operates on the principle of \"change \n> only one thing at a time\".\n\nWhich pg_upgrade by definition isn't. 
It is bringing ALL the changes \nfrom one major version to the target version, which may be multiple at \nonce. Including, but not limited to, catalog schema changes, SQL \nlanguage changes, extension behavior changes and utility command \nbehavior changes.\n\nOn that principle, you should advocate against using pg_upgrade in the \nfirst place.Not that I use extensions a whole lot (yes, my overall experience here is slim) but I would definitely prefer those that allow me to stay on a single PostgreSQL major version while migrating between major versions of their own product. Extensions that don't fit this model (i.e., choose to treat their major version as being the same as the major version of PostgreSQL they were developed for) must by necessity be upgraded simultaneously with the PostgreSQL server. But while PostgreSQL doesn't really have a choice here - it cannot be expected to subdivide itself - extensions (at least external ones - PostGIS is one I have in mind presently) - can and often do attempt to support multiple versions of PostgreSQL for whatever major versions of their product they are offering. For these it is possible to adhere to the \"change one thing at a time principle\" and to treat the extensions as not being part of \"ALL the changes from one major version to the target version...\"David J.",
"msg_date": "Thu, 15 Jul 2021 10:10:45 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On 7/15/21 1:10 PM, David G. Johnston wrote:\n> ... But while \n> PostgreSQL doesn't really have a choice here - it cannot be expected to \n> subdivide itself - extensions (at least external ones - PostGIS is one I \n> have in mind presently) - can and often do attempt to support multiple \n> versions of PostgreSQL for whatever major versions of their product they \n> are offering. For these it is possible to adhere to the \"change one \n> thing at a time principle\" and to treat the extensions as not being part \n> of \"ALL the changes from one major version to the target version...\"\n\nYou may make that exception for an external extension like PostGIS. But \nI don't think it is valid for one distributed in sync with the core \nsystem in the contrib package, like pg_stat_statements. Which happens to \nbe the one named in the subject line of this entire discussion.\n\n\nRegards, Jan\n\n-- \nJan Wieck\n\n\n",
"msg_date": "Thu, 15 Jul 2021 14:14:28 -0400",
"msg_from": "Jan Wieck <jan@wi3ck.info>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Thu, Jul 15, 2021 at 02:14:28PM -0400, Jan Wieck wrote:\n> On 7/15/21 1:10 PM, David G. Johnston wrote:\n> > ... But while PostgreSQL doesn't really have a choice here - it cannot\n> > be expected to subdivide itself - extensions (at least external ones -\n> > PostGIS is one I have in mind presently) - can and often do attempt to\n> > support multiple versions of PostgreSQL for whatever major versions of\n> > their product they are offering. For these it is possible to adhere to\n> > the \"change one thing at a time principle\" and to treat the extensions\n> > as not being part of \"ALL the changes from one major version to the\n> > target version...\"\n> \n> You may make that exception for an external extension like PostGIS. But I\n> don't think it is valid for one distributed in sync with the core system in\n> the contrib package, like pg_stat_statements. Which happens to be the one\n> named in the subject line of this entire discussion.\n\nYes, I think one big issue is that the documentation of the new server\nmight not match the API of the extension installed on the old server.\n\nThere has been a lot of discussion from years ago about why we can't\nauto-upgrade extensions, so it might be good to revisit that.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Thu, 15 Jul 2021 14:26:24 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Thu, Jul 15, 2021 at 11:14 AM Jan Wieck <jan@wi3ck.info> wrote:\n\n> On 7/15/21 1:10 PM, David G. Johnston wrote:\n> > ... But while\n> > PostgreSQL doesn't really have a choice here - it cannot be expected to\n> > subdivide itself - extensions (at least external ones - PostGIS is one I\n> > have in mind presently) - can and often do attempt to support multiple\n> > versions of PostgreSQL for whatever major versions of their product they\n> > are offering. For these it is possible to adhere to the \"change one\n> > thing at a time principle\" and to treat the extensions as not being part\n> > of \"ALL the changes from one major version to the target version...\"\n>\n> You may make that exception for an external extension like PostGIS. But\n> I don't think it is valid for one distributed in sync with the core\n> system in the contrib package, like pg_stat_statements. Which happens to\n> be the one named in the subject line of this entire discussion.\n>\n>\nYep, and IIUC running \"CREATE EXTENSION pg_stat_statements VERSION '1.5';\"\nworks correctly in v13 as does executing \"ALTER EXTENSION\npg_stat_statements UPDATE;\" while version 1.5 is installed. So even\nwithout doing the copying of the old contrib libraries to the new server\nsuch a \"one at a time\" procedure would work just fine for this particular\ncontrib extension.\n\nAnd since the OP was unaware of the presence of the existing ALTER\nEXTENSION UPDATE command I'm not sure at what point a \"lack of features\"\ncomplaint here is due to lack of knowledge or actual problems (yes, I did\nforget too but at least this strengthens my position that one-at-a-time\nmethods are workable, even today).\n\nDavid J.\n\nOn Thu, Jul 15, 2021 at 11:14 AM Jan Wieck <jan@wi3ck.info> wrote:On 7/15/21 1:10 PM, David G. Johnston wrote:\n> ... 
But while \n> PostgreSQL doesn't really have a choice here - it cannot be expected to \n> subdivide itself - extensions (at least external ones - PostGIS is one I \n> have in mind presently) - can and often do attempt to support multiple \n> versions of PostgreSQL for whatever major versions of their product they \n> are offering. For these it is possible to adhere to the \"change one \n> thing at a time principle\" and to treat the extensions as not being part \n> of \"ALL the changes from one major version to the target version...\"\n\nYou may make that exception for an external extension like PostGIS. But \nI don't think it is valid for one distributed in sync with the core \nsystem in the contrib package, like pg_stat_statements. Which happens to \nbe the one named in the subject line of this entire discussion.Yep, and IIUC running \"CREATE EXTENSION pg_stat_statements VERSION '1.5';\" works correctly in v13 as does executing \"ALTER EXTENSION pg_stat_statements UPDATE;\" while version 1.5 is installed. So even without doing the copying of the old contrib libraries to the new server such a \"one at a time\" procedure would work just fine for this particular contrib extension.And since the OP was unaware of the presence of the existing ALTER EXTENSION UPDATE command I'm not sure at what point a \"lack of features\" complaint here is due to lack of knowledge or actual problems (yes, I did forget too but at least this strengthens my position that one-at-a-time methods are workable, even today).David J.",
"msg_date": "Thu, 15 Jul 2021 11:30:35 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Thu, 15 Jul 2021 at 14:31, David G. Johnston <david.g.johnston@gmail.com>\nwrote:\n\n> On Thu, Jul 15, 2021 at 11:14 AM Jan Wieck <jan@wi3ck.info> wrote:\n>\n>> On 7/15/21 1:10 PM, David G. Johnston wrote:\n>> > ... But while\n>> > PostgreSQL doesn't really have a choice here - it cannot be expected to\n>> > subdivide itself - extensions (at least external ones - PostGIS is one\n>> I\n>> > have in mind presently) - can and often do attempt to support multiple\n>> > versions of PostgreSQL for whatever major versions of their product\n>> they\n>> > are offering. For these it is possible to adhere to the \"change one\n>> > thing at a time principle\" and to treat the extensions as not being\n>> part\n>> > of \"ALL the changes from one major version to the target version...\"\n>>\n>> You may make that exception for an external extension like PostGIS. But\n>> I don't think it is valid for one distributed in sync with the core\n>> system in the contrib package, like pg_stat_statements. Which happens to\n>> be the one named in the subject line of this entire discussion.\n>>\n>>\n> Yep, and IIUC running \"CREATE EXTENSION pg_stat_statements VERSION '1.5';\"\n> works correctly in v13 as does executing\n>\nWhile it does work there are issues with dumping and restoring a database\nwith the old version of pg_stat_statements in it that would only be found\nby dumping and restoring.\n\n\n> \"ALTER EXTENSION pg_stat_statements UPDATE;\" while version 1.5 is\n> installed.\n>\n\n\n\n> So even without doing the copying of the old contrib libraries to the new\n> server such a \"one at a time\" procedure would work just fine for this\n> particular contrib extension.\n>\n\nYou cannot copy the old contrib libraries into the new server. 
This will\nfail due to changes in the API and various exported variables either not\nbeing there or being renamed.\n\n\n>\n> And since the OP was unaware of the presence of the existing ALTER\n> EXTENSION UPDATE command I'm not sure at what point a \"lack of features\"\n> complaint here is due to lack of knowledge or actual problems (yes, I did\n> forget too but at least this strengthens my position that one-at-a-time\n> methods are workable, even today).\n>\n\nYou are correct I was not aware of the ALTER EXTENSION UPDATE command, but\nthat doesn't change the issue.\nIt's not so much the lack of features that I am complaining about; it is\nthe incompleteness of the upgrade. pg_upgrade does a wonderful job telling\nme what extensions are incompatible with the upgrade before it does the\nupgrade, but it fails to say that the versions that are installed may need\nto be updated.\n\nDave\n\nOn Thu, 15 Jul 2021 at 14:31, David G. Johnston <david.g.johnston@gmail.com> wrote:On Thu, Jul 15, 2021 at 11:14 AM Jan Wieck <jan@wi3ck.info> wrote:On 7/15/21 1:10 PM, David G. Johnston wrote:\n> ... But while \n> PostgreSQL doesn't really have a choice here - it cannot be expected to \n> subdivide itself - extensions (at least external ones - PostGIS is one I \n> have in mind presently) - can and often do attempt to support multiple \n> versions of PostgreSQL for whatever major versions of their product they \n> are offering. For these it is possible to adhere to the \"change one \n> thing at a time principle\" and to treat the extensions as not being part \n> of \"ALL the changes from one major version to the target version...\"\n\nYou may make that exception for an external extension like PostGIS. But \nI don't think it is valid for one distributed in sync with the core \nsystem in the contrib package, like pg_stat_statements. 
Which happens to \nbe the one named in the subject line of this entire discussion.Yep, and IIUC running \"CREATE EXTENSION pg_stat_statements VERSION '1.5';\" works correctly in v13 as does executing While it does work there are issues with dumping and restoring a database with the old version of pg_stat_statements in it that would only be found by dumping and restoring. \"ALTER EXTENSION pg_stat_statements UPDATE;\" while version 1.5 is installed. So even without doing the copying of the old contrib libraries to the new server such a \"one at a time\" procedure would work just fine for this particular contrib extension.You cannot copy the old contrib libraries into the new server. This will fail due to changes in the API and various exported variables either not being there or being renamed. And since the OP was unaware of the presence of the existing ALTER EXTENSION UPDATE command I'm not sure at what point a \"lack of features\" complaint here is due to lack of knowledge or actual problems (yes, I did forget too but at least this strengthens my position that one-at-a-time methods are workable, even today).You are correct I was not aware of the ALTER EXTENSION UPDATE command, but that doesn't change the issue.It's not so much the lack of features that I am complaining about; it is the incompleteness of the upgrade. pg_upgrade does a wonderful job telling me what extensions are incompatible with the upgrade before it does the upgrade, but it fails to say that the versions that are installed may need to be updated.Dave",
"msg_date": "Thu, 15 Jul 2021 14:51:53 -0400",
"msg_from": "Dave Cramer <davecramer@postgres.rocks>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Thu, Jul 15, 2021 at 11:52 AM Dave Cramer <davecramer@postgres.rocks>\nwrote:\n\n>\n> On Thu, 15 Jul 2021 at 14:31, David G. Johnston <\n> david.g.johnston@gmail.com> wrote:\n>\n>>\n>> Yep, and IIUC running \"CREATE EXTENSION pg_stat_statements VERSION\n>> '1.5';\" works correctly in v13 as does executing\n>>\n> While it does work there are issues with dumping and restoring a database\n> with the old version of pg_stat_statements in it that would only be found\n> by dumping and restoring.\n>\n\nI'm unsure how this impacts the broader discussion. At worse you'd have a\nchance to manually run the update command on the new cluster before using\ndump/restore.\n\n\n>\n>> \"ALTER EXTENSION pg_stat_statements UPDATE;\" while version 1.5 is\n>> installed.\n>>\n>\n>\n>\n>> So even without doing the copying of the old contrib libraries to the new\n>> server such a \"one at a time\" procedure would work just fine for this\n>> particular contrib extension.\n>>\n>\n> You cannot copy the old contrib libraries into the new server. This will\n> fail due to changes in the API and various exported variables either not\n> being there or being renamed.\n>\n\nIf this is true then the docs have a bug. It sounds more like the\ndocumentation should say \"ensure that the new cluster has extension\nlibraries installed that are compatible with the version of the extension\ninstalled on the old cluster\". Whether we want to be even more specific\nwith regards to contrib I cannot say - it seems like newer versions largely\nretain backward compatibility so this is basically a non-issue for contrib\n(though maybe individual extensions have their own requirements?)\n\nbut it fails to say that the versions that are installed may need to be\n> updated.\n>\n\nOK, especially as this seems useful outside of pg_upgrade, and if done\nseparately is something pg_upgrade could just run as part of its new\ncluster evaluation scripts. 
Knowing whether an extension is outdated\ndoesn't require the old cluster.\nDavid J.",
"msg_date": "Thu, 15 Jul 2021 13:24:17 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On 7/15/21 4:24 PM, David G. Johnston wrote:\n\n> OK, especially as this seems useful outside of pg_upgrade, and if done \n> separately is something pg_upgrade could just run as part of its new \n> cluster evaluation scripts. Knowing whether an extension is outdated \n> doesn't require the old cluster.\n\nKnowing that (the extension is outdated) exactly how? Can you give us an \nexample query, maybe a few SQL snippets explaining what exactly you are \ntalking about? Because at this point you completely lost me.\n\nSorry, Jan\n\n-- \nJan Wieck\n\n\n",
"msg_date": "Thu, 15 Jul 2021 19:18:10 -0400",
"msg_from": "Jan Wieck <jan@wi3ck.info>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Thursday, July 15, 2021, Jan Wieck <jan@wi3ck.info> wrote:\n\n> On 7/15/21 4:24 PM, David G. Johnston wrote:\n>\n> OK, especially as this seems useful outside of pg_upgrade, and if done\n>> separately is something pg_upgrade could just run as part of its new\n>> cluster evaluation scripts. Knowing whether an extension is outdated\n>> doesn't require the old cluster.\n>>\n>\n> Knowing that (the extension is outdated) exactly how? Can you give us an\n> example query, maybe a few SQL snippets explaining what exactly you are\n> talking about? Because at this point you completely lost me.\n>\n\nI was mostly going off other people saying it was possible. In any case,\nlooking at pg_available_extension_versions, once you figure out how to\norder by the version text column, would let you check if any installed\nextensions are not their latest version.\n\nDavid J.",
"msg_date": "Thu, 15 Jul 2021 16:31:07 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Thu, Jul 15, 2021 at 08:15:57AM -0700, David G. Johnston wrote:\n> On Thursday, July 15, 2021, David G. Johnston <david.g.johnston@gmail.com>\n> My uncertainty revolves around core extensions since it seems odd to tell\n> the user to overwrite them with versions from an older version of\n> PostgreSQL.\n> \n> Ok. Just re-read the docs a third time…no uncertainty regarding contrib\n> now…following the first part of the instructions means that before one could\n> re-run create extension they would need to restore the original contrib library\n> files to avoid the new extension code using the old library. So that whole\n> part about recreation is inconsistent with the existing unchanged text.\n\nI came up with the attached patch.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.",
"msg_date": "Wed, 28 Jul 2021 21:52:36 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Wed, Jul 28, 2021 at 6:52 PM Bruce Momjian <bruce@momjian.us> wrote:\n\n> I came up with the attached patch.\n>\n\nThank you. It is an improvement but I think more could be done here (not\nexactly sure what - though removing the \"copy binaries for contrib modules\nfrom the old server\" seems like a decent second step.)\n\nI'm not sure it really needs a parenthetical, and I personally dislike\nusing \"Consider\" to start the sentence.\n\n\"Bringing extensions up to the newest version available on the new server\ncan be done later using ALTER EXTENSION UPGRADE (after ensuring the correct\nbinaries are installed).\"\n\nDavid J.",
"msg_date": "Wed, 28 Jul 2021 21:35:28 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Thu, 29 Jul 2021 at 00:35, David G. Johnston <david.g.johnston@gmail.com>\nwrote:\n\n> On Wed, Jul 28, 2021 at 6:52 PM Bruce Momjian <bruce@momjian.us> wrote:\n>\n>> I came up with the attached patch.\n>>\n>\n> Thank you. It is an improvement but I think more could be done here (not\n> exactly sure what - though removing the \"copy binaries for contrib modules\n> from the old server\" seems like a decent second step.)\n>\n> I'm not sure it really needs a parenthetical, and I personally dislike\n> using \"Consider\" to start the sentence.\n>\n> \"Bringing extensions up to the newest version available on the new server\n> can be done later using ALTER EXTENSION UPGRADE (after ensuring the correct\n> binaries are installed).\"\n>\n\n\nAs for the patch. What exactly is being copied ?\nThis is all very confusing. Some of the extensions will work perfectly fine\non the new server without an upgrade. At least one of the extensions will\nappear to function perfectly fine with new binaries and old function\ndefinitions.\nSeems to me we need to do more.\n\nDave",
"msg_date": "Thu, 29 Jul 2021 07:02:41 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Wed, Jul 28, 2021 at 09:35:28PM -0700, David G. Johnston wrote:\n> On Wed, Jul 28, 2021 at 6:52 PM Bruce Momjian <bruce@momjian.us> wrote:\n> \n> I came up with the attached patch.\n> \n> \n> Thank you. It is an improvement but I think more could be done here (not\n> exactly sure what - though removing the \"copy binaries for contrib modules from\n> the old server\" seems like a decent second step.)\n\nUh, I don't see that text.\n\n> I'm not sure it really needs a parenthetical, and I personally dislike using\n> \"Consider\" to start the sentence.\n> \n> \"Bringing extensions up to the newest version available on the new server can\n> be done later using ALTER EXTENSION UPGRADE (after ensuring the correct\n> binaries are installed).\"\n\nOK, I went with this new text. There is confusion over install vs copy,\nand whether this is happening at the operating system level or the SQL\nlevel. I tried to clarify that, but I am not sure I was successful. I\nalso used the word \"extension\" since this is more common than \"custom\".\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.",
"msg_date": "Thu, 29 Jul 2021 10:56:40 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": ">\n> OK, I went with this new text. There is confusion over install vs copy,\n> and whether this is happening at the operating system level or the SQL\n> level. I tried to clarify that, but I am not sure I was successful. I\n> also used the word \"extension\" since this is more common than \"custom\".\n>\n\nMuch better, however I'm unclear on whether CREATE EXTENSION is actually\nexecuted on the upgraded server.\n\n From what I could gather it is not, otherwise the new function definitions\nshould have been in place.\nI think what happens is that the function definitions are copied as part of\nthe schema and pg_extension is also copied.\n\nDave",
"msg_date": "Thu, 29 Jul 2021 11:01:43 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Thu, Jul 29, 2021 at 11:01:43AM -0400, Dave Cramer wrote:\n> \n> \n> \n> \n> OK, I went with this new text. There is confusion over install vs copy,\n> and whether this is happening at the operating system level or the SQL\n> level. I tried to clarify that, but I am not sure I was successful. I\n> also used the word \"extension\" since this is more common than \"custom\".\n> \n> \n> Much better, however I'm unclear on whether CREATE EXTENSION is actually\n> executed on the upgraded server.\n> \n> From what I could gather it is not, otherwise the new function definitions\n> should have been in place. \n> I think what happens is that the function definitions are copied as part of the\n> schema and pg_extension is also copied.\n\nYes, the _effect_ of CREATE EXTENSION in the old cluster is copied to\nthe new cluster as object definitions. CREATE EXTENSION runs the SQL\nfiles associated with the extension.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Thu, 29 Jul 2021 11:10:17 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Thu, 29 Jul 2021 at 11:10, Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Thu, Jul 29, 2021 at 11:01:43AM -0400, Dave Cramer wrote:\n> >\n> >\n> >\n> >\n> > OK, I went with this new text. There is confusion over install vs\n> copy,\n> > and whether this is happening at the operating system level or the\n> SQL\n> > level. I tried to clarify that, but I am not sure I was\n> successful. I\n> > also used the word \"extension\" since this is more common than\n> \"custom\".\n> >\n> >\n> > Much better, however I'm unclear on whether CREATE EXTENSION is actually\n> > executed on the upgraded server.\n> >\n> > From what I could gather it is not, otherwise the new function\n> definitions\n> > should have been in place.\n> > I think what happens is that the function definitions are copied as part\n> of the\n> > schema and pg_extension is also copied.\n>\n> Yes, the _effect_ of CREATE EXTENSION in the old cluster is copied to\n> the new cluster as object definitions. CREATE EXTENSION runs the SQL\n> files associated with the extension.\n>\n\nOK, I think we should be more clear as to what is happening. Saying they\nwill be recreated is misleading.\nThe extension definitions are being copied from the old server to the new\nserver.\n\nI also think we should have stronger wording in the \"The extensions may be\nupgraded ...\" statement.\nI'd suggest \"Any new versions of extensions should be upgraded using....\"\n\nDave",
"msg_date": "Thu, 29 Jul 2021 11:17:52 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Thu, Jul 29, 2021 at 11:17:52AM -0400, Dave Cramer wrote:\n> On Thu, 29 Jul 2021 at 11:10, Bruce Momjian <bruce@momjian.us> wrote:\n> OK, I think we should be more clear as to what is happening. Saying they will\n> be recreated is misleading.\n> The extension definitions are being copied from the old server to the new\n> server.\n\nI think my wording is says exactly that:\n\n\tDo not load the schema definitions, e.g., <command>CREATE EXTENSION\n\tpgcrypto</command>, because these will be recreated from the old\n\tcluster. \n\n> I also think we should have stronger wording in the \"The extensions may be\n> upgraded ...\" statement. \n> I'd suggest \"Any new versions of extensions should be upgraded using....\"\n\nI can't really comment on that since I see little mention of upgrading\nextensions in our docs.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Thu, 29 Jul 2021 11:24:49 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Thu, Jul 29, 2021 at 7:56 AM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Wed, Jul 28, 2021 at 09:35:28PM -0700, David G. Johnston wrote:\n> > On Wed, Jul 28, 2021 at 6:52 PM Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > I came up with the attached patch.\n> >\n> >\n> > Thank you. It is an improvement but I think more could be done here (not\n> > exactly sure what - though removing the \"copy binaries for contrib\n> modules from\n> > the old server\" seems like a decent second step.)\n>\n> Uh, I don't see that text.\n>\n>\n\"\"\"\n 5. Install custom shared object files\n\nInstall any custom shared object files (or DLLs) used by the old cluster\ninto the new cluster, e.g., pgcrypto.so, whether they are from contrib or\nsome other source. Do not install the schema definitions, e.g., CREATE\nEXTENSION pgcrypto, because these will be upgraded from the old cluster.\nAlso, any custom full text search files (dictionary, synonym, thesaurus,\nstop words) must also be copied to the new cluster.\n\"\"\"\nI have an issue with the fragment \"whether they are from contrib\" - my\nunderstanding at this point is that because of the way we package and\nversion contrib it should not be necessary to copy those shared object\nfiles from the old to the new server (maybe, just maybe, with a\nqualification that you are upgrading between two versions that were in\nsupport during the same time period).\n\nDavid J.",
"msg_date": "Thu, 29 Jul 2021 08:28:12 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "I have an issue with the fragment \"whether they are from contrib\" - my\n> understanding at this point is that because of the way we package and\n> version contrib it should not be necessary to copy those shared object\n> files from the old to the new server (maybe, just maybe, with a\n> qualification that you are upgrading between two versions that were in\n> support during the same time period).\n>\n\nJust to clarify. In no case are binaries copied from the old server to the\nnew server. Whether from contrib or otherwise.\n\nDave",
"msg_date": "Thu, 29 Jul 2021 11:36:19 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Thu, Jul 29, 2021 at 8:36 AM Dave Cramer <davecramer@gmail.com> wrote:\n\n>\n>\n> I have an issue with the fragment \"whether they are from contrib\" - my\n>> understanding at this point is that because of the way we package and\n>> version contrib it should not be necessary to copy those shared object\n>> files from the old to the new server (maybe, just maybe, with a\n>> qualification that you are upgrading between two versions that were in\n>> support during the same time period).\n>>\n>\n> Just to clarify. In no case are binaries copied from the old server to the\n> new server. Whether from contrib or otherwise.\n>\n>\nI had used \"binaries\" when I should have written \"shared object files\". I\njust imagine a DLL as being a binary file so it seems accurate but we use\nthe term differently I suppose?\n\nDavid J.",
"msg_date": "Thu, 29 Jul 2021 08:39:32 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Thu, Jul 29, 2021 at 08:28:12AM -0700, David G. Johnston wrote:\n> On Thu, Jul 29, 2021 at 7:56 AM Bruce Momjian <bruce@momjian.us> wrote:\n> \n> On Wed, Jul 28, 2021 at 09:35:28PM -0700, David G. Johnston wrote:\n> > On Wed, Jul 28, 2021 at 6:52 PM Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > I came up with the attached patch.\n> >\n> >\n> > Thank you. It is an improvement but I think more could be done here (not\n> > exactly sure what - though removing the \"copy binaries for contrib\n> modules from\n> > the old server\" seems like a decent second step.)\n> \n> Uh, I don't see that text.\n> \"\"\"\n> 5. Install custom shared object files\n> \n> Install any custom shared object files (or DLLs) used by the old cluster into\n> the new cluster, e.g., pgcrypto.so, whether they are from contrib or some other\n> source. Do not install the schema definitions, e.g., CREATE EXTENSION pgcrypto,\n> because these will be upgraded from the old cluster. Also, any custom full text\n> search files (dictionary, synonym, thesaurus, stop words) must also be copied\n> to the new cluster.\n> \"\"\"\n> I have an issue with the fragment \"whether they are from contrib\" - my\n> understanding at this point is that because of the way we package and version\n> contrib it should not be necessary to copy those shared object files from the\n> old to the new server (maybe, just maybe, with a qualification that you are\n> upgrading between two versions that were in support during the same time\n> period).\n\nOK, so this is the confusion I was talking about. You are supposed to\ninstall _new_ _versions_ of the extensions that are in the old cluster\nto the new cluster. You are not supposed to _copy_ the files from the\nold to new cluster. I think my new patch makes that clearer, but can it\nbe improved?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Thu, 29 Jul 2021 11:40:27 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Thu, Jul 29, 2021 at 11:36:19AM -0400, Dave Cramer wrote:\n> \n> \n> I have an issue with the fragment \"whether they are from contrib\" - my\n> understanding at this point is that because of the way we package and\n> version contrib it should not be necessary to copy those shared object\n> files from the old to the new server (maybe, just maybe, with a\n> qualification that you are upgrading between two versions that were in\n> support during the same time period).\n> \n> \n> Just to clarify. In no case are binaries copied from the old server to the new\n> server. Whether from contrib or otherwise.\n\nRight. Those are _binaries_ and therefore made to match a specific\nPostgres binary. They might work or might not, but copying them is\nnever a good idea --- they should be recompiled to match the new server\nbinary, even if the extension had no version/API changes.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Thu, 29 Jul 2021 11:42:16 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On 7/29/21 11:10 AM, Bruce Momjian wrote:\n> On Thu, Jul 29, 2021 at 11:01:43AM -0400, Dave Cramer wrote:\n>> Much better, however I'm unclear on whether CREATE EXTENSION is actually\n>> executed on the upgraded server.\n>> \n>> From what I could gather it is not, otherwise the new function definitions\n>> should have been in place. \n>> I think what happens is that the function definitions are copied as part of the\n>> schema and pg_extension is also copied.\n> \n> Yes, the _effect_ of CREATE EXTENSION in the old cluster is copied to\n> the new cluster as object definitions. CREATE EXTENSION runs the SQL\n> files associated with the extension.\n> \n\nThis assumes that the scripts executed during CREATE EXTENSION have no \nconditional code in them that depends on the server version. Running the \nsame SQL script on different server versions can have different effects.\n\nI don't have a ready example of such an extension, but if we ever would \nconvert the backend parts of Slony into an extension, it would be one.\n\n\nRegards, Jan\n\n-- \nJan Wieck\n\n\n",
"msg_date": "Thu, 29 Jul 2021 11:46:12 -0400",
"msg_from": "Jan Wieck <jan@wi3ck.info>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Thu, 29 Jul 2021 at 11:39, David G. Johnston <david.g.johnston@gmail.com>\nwrote:\n\n> On Thu, Jul 29, 2021 at 8:36 AM Dave Cramer <davecramer@gmail.com> wrote:\n>\n>>\n>>\n>> I have an issue with the fragment \"whether they are from contrib\" - my\n>>> understanding at this point is that because of the way we package and\n>>> version contrib it should not be necessary to copy those shared object\n>>> files from the old to the new server (maybe, just maybe, with a\n>>> qualification that you are upgrading between two versions that were in\n>>> support during the same time period).\n>>>\n>>\n>> Just to clarify. In no case are binaries copied from the old server to\n>> the new server. Whether from contrib or otherwise.\n>>\n>>\n> I had used \"binaries\" when I should have written \"shared object files\". I\n> just imagine a DLL as being a binary file so it seems accurate but we use\n> the term differently I suppose?\n>\n\nNo, we are using the same term. pg_upgrade does not copy anything that was\ncompiled, whether it is called a DLL or otherwise.\n\n\nDave",
"msg_date": "Thu, 29 Jul 2021 11:46:36 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Thu, Jul 29, 2021 at 8:42 AM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Thu, Jul 29, 2021 at 11:36:19AM -0400, Dave Cramer wrote:\n> >\n> >\n> > I have an issue with the fragment \"whether they are from contrib\" -\n> my\n> > understanding at this point is that because of the way we package and\n> > version contrib it should not be necessary to copy those shared\n> object\n> > files from the old to the new server (maybe, just maybe, with a\n> > qualification that you are upgrading between two versions that were\n> in\n> > support during the same time period).\n> >\n> >\n> > Just to clarify. In no case are binaries copied from the old server to\n> the new\n> > server. Whether from contrib or otherwise.\n>\n> Right. Those are _binaries_ and therefore made to match a specific\n> Postgres binary. They might work or might not, but copying them is\n> never a good idea --- they should be recompiled to match the new server\n> binary, even if the extension had no version/API changes.\n>\n\nOk, looking at the flow again, where exactly would the user even be able to\nexecute \"CREATE EXTENSION\" meaningfully? The relevant databases do not\nexist (not totally sure what happens to the postgres database created\nduring the initdb step...) so at the point where the user is \"installing\nthe extension\" all they can reasonably do is a server-level install (they\ncould maybe create extension in the postgres database, but does that even\nmatter?).\n\nSo, I'd propose simplifying this all to something like:\n\nInstall extensions on the new server\n\nAny extensions that are used by the old cluster need to be installed into\nthe new cluster. Each database in the old cluster will have its current\nversion of all extensions migrated to the new cluster as-is. 
You can use\nthe ALTER EXTENSION command, on a per-database basis, to update its\nextensions post-upgrade.\n\nDavid J.",
"msg_date": "Thu, 29 Jul 2021 09:00:36 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Thu, Jul 29, 2021 at 08:39:32AM -0700, David G. Johnston wrote:\n> On Thu, Jul 29, 2021 at 8:36 AM Dave Cramer <davecramer@gmail.com> wrote:\n> \n> \n> \n> \n> I have an issue with the fragment \"whether they are from contrib\" - my\n> understanding at this point is that because of the way we package and\n> version contrib it should not be necessary to copy those shared object\n> files from the old to the new server (maybe, just maybe, with a\n> qualification that you are upgrading between two versions that were in\n> support during the same time period).\n> \n> \n> Just to clarify. In no case are binaries copied from the old server to the\n> new server. Whether from contrib or otherwise.\n>\n> I had used \"binaries\" when I should have written \"shared object files\". I just\n> imagine a DLL as being a binary file so it seems accurate but we use the term\n> differently I suppose?\n\nUh, technically, the _executable_ binary should only use shared object /\nloadable libraries that were compiled against that binary's exported\nAPI. Sometimes mismatches work (if the API used by the shared object\nhas not changed in the binary) so people get used to it working, and\nthen one day it doesn't, but it is never a safe process.\n\nIf two people here are confused about this, obviously others will be as\nwell. I think we were trying to do too much in that first sentence, so\nI split it into two in the attached patch.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.",
"msg_date": "Thu, 29 Jul 2021 12:12:51 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Thu, Jul 29, 2021 at 11:46:12AM -0400, Jan Wieck wrote:\n> On 7/29/21 11:10 AM, Bruce Momjian wrote:\n> > On Thu, Jul 29, 2021 at 11:01:43AM -0400, Dave Cramer wrote:\n> > > Much better, however I'm unclear on whether CREATE EXTENSION is actually\n> > > executed on the upgraded server.\n> > > \n> > > From what I could gather it is not, otherwise the new function definitions\n> > > should have been in place. I think what happens is that the function\n> > > definitions are copied as part of the\n> > > schema and pg_extension is also copied.\n> > \n> > Yes, the _effect_ of CREATE EXTENSION in the old cluster is copied to\n> > the new cluster as object definitions. CREATE EXTENSION runs the SQL\n> > files associated with the extension.\n> > \n> \n> This assumes that the scripts executed during CREATE EXTENSION have no\n> conditional code in them that depends on the server version. Running the\n> same SQL script on different server versions can have different effects.\n> \n> I don't have a ready example of such an extension, but if we ever would\n> convert the backend parts of Slony into an extension, it would be one.\n\nThe bottom line is that we have _no_ mechanism to handle this except\nuninstalling the extension before the upgrade and re-installing it\nafterward, which isn't clearly spelled out for each extension, as far as\nI know, and would not work for extensions that create data types.\n\nYes, I do feel this is a big hold in our upgrade instructions.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Thu, 29 Jul 2021 12:14:32 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Thu, Jul 29, 2021 at 09:00:36AM -0700, David G. Johnston wrote:\n> On Thu, Jul 29, 2021 at 8:42 AM Bruce Momjian <bruce@momjian.us> wrote:\n> \n> On Thu, Jul 29, 2021 at 11:36:19AM -0400, Dave Cramer wrote:\n> >\n> >\n> > I have an issue with the fragment \"whether they are from contrib\" -\n> my\n> > understanding at this point is that because of the way we package and\n> > version contrib it should not be necessary to copy those shared\n> object\n> > files from the old to the new server (maybe, just maybe, with a\n> > qualification that you are upgrading between two versions that were\n> in\n> > support during the same time period).\n> >\n> >\n> > Just to clarify. In no case are binaries copied from the old server to\n> the new\n> > server. Whether from contrib or otherwise.\n> \n> Right. Those are _binaries_ and therefore made to match a specific\n> Postgres binary. They might work or might not, but copying them is\n> never a good idea --- they should be recompiled to match the new server\n> binary, even if the extension had no version/API changes.\n> \n> \n> Ok, looking at the flow again, where exactly would the user even be able to\n> execute \"CREATE EXTENSION\" meaningfully? The relevant databases do not exist\n> (not totally sure what happens to the postgres database created during the\n> initdb step...) so at the point where the user is \"installing the extension\"\n> all they can reasonably do is a server-level install (they could maybe create\n> extension in the postgres database, but does that even matter?).\n\nThey could technically start the new cluster and use \"CREATE EXTENSION\"\nbefore the upgrade, and then the ugprade would fail since there would be\nduplicate object errors.\n\n> So, I'd propose simplifying this all to something like:\n> \n> Install extensions on the new server\n> \n> Any extensions that are used by the old cluster need to be installed into the\n> new cluster. 
Each database in the old cluster will have its current version of\n> all extensions migrated to the new cluster as-is. You can use the ALTER\n> EXTENSION command, on a per-database basis, to update its extensions\n> post-upgrade.\n\nCan you review the text I just posted? Thanks. I think we are making\nprogress. ;-)\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Thu, 29 Jul 2021 12:16:49 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "Dave Cramer\n\n\nOn Thu, 29 Jul 2021 at 12:16, Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Thu, Jul 29, 2021 at 09:00:36AM -0700, David G. Johnston wrote:\n> > On Thu, Jul 29, 2021 at 8:42 AM Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > On Thu, Jul 29, 2021 at 11:36:19AM -0400, Dave Cramer wrote:\n> > >\n> > >\n> > > I have an issue with the fragment \"whether they are from\n> contrib\" -\n> > my\n> > > understanding at this point is that because of the way we\n> package and\n> > > version contrib it should not be necessary to copy those shared\n> > object\n> > > files from the old to the new server (maybe, just maybe, with a\n> > > qualification that you are upgrading between two versions that\n> were\n> > in\n> > > support during the same time period).\n> > >\n> > >\n> > > Just to clarify. In no case are binaries copied from the old\n> server to\n> > the new\n> > > server. Whether from contrib or otherwise.\n> >\n> > Right. Those are _binaries_ and therefore made to match a specific\n> > Postgres binary. They might work or might not, but copying them is\n> > never a good idea --- they should be recompiled to match the new\n> server\n> > binary, even if the extension had no version/API changes.\n> >\n> >\n> > Ok, looking at the flow again, where exactly would the user even be able\n> to\n> > execute \"CREATE EXTENSION\" meaningfully? The relevant databases do not\n> exist\n> > (not totally sure what happens to the postgres database created during\n> the\n> > initdb step...) 
so at the point where the user is \"installing the\n> extension\"\n> all they can reasonably do is a server-level install (they could maybe\n> create\n> extension in the postgres database, but does that even matter?).\n>\n> They could technically start the new cluster and use \"CREATE EXTENSION\"\n> before the upgrade, and then the ugprade would fail since there would be\n> duplicate object errors.\n>\n> > So, I'd propose simplifying this all to something like:\n> >\n> > Install extensions on the new server\n> >\n> > Any extensions that are used by the old cluster need to be installed\n> into the\n> > new cluster. Each database in the old cluster will have its current\n> version of\n> > all extensions migrated to the new cluster as-is. You can use the ALTER\n> > EXTENSION command, on a per-database basis, to update its extensions\n> > post-upgrade.\n>\n> Can you review the text I just posted? Thanks. I think we are making\n> progress. ;-)\n>\n\nI am OK with Everything except\n\nDo not load the schema definitions,\ne.g., <command>CREATE EXTENSION pgcrypto</command>, because these\nwill be recreated from the old cluster. (The extensions may be\nupgraded later using <literal>ALTER EXTENSION ... UPGRADE</literal>.)\n\n I take issue with the word \"recreated\". This implies something new is\ncreated, when in fact the old definitions are simply copied over.\n\nAs I said earlier; using the wording \"may be upgraded\" is not nearly\ncautionary enough.\n\nDave",
"msg_date": "Thu, 29 Jul 2021 12:22:59 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On 7/29/21 12:00 PM, David G. Johnston wrote:\n> Ok, looking at the flow again, where exactly would the user even be able \n> to execute \"CREATE EXTENSION\" meaningfully? The relevant databases do \n> not exist (not totally sure what happens to the postgres database \n> created during the initdb step...) so at the point where the user is \n> \"installing the extension\" all they can reasonably do is a server-level \n> install (they could maybe create extension in the postgres database, but \n> does that even matter?).\n> \n> So, I'd propose simplifying this all to something like:\n> \n> Install extensions on the new server\n\nExtensions are not installed on the server level. Their binary \ncomponents (shared objects) are, but the actual catalog modifications \nthat make them accessible are performed per database by CREATE \nEXTENSION, which executes the SQL files associated with the extension. \nAnd they can be performed differently per database, like for example \nplacing one and the same extension into different schemas in different \ndatabases.\n\npg_upgrade is not (and should not be) concerned with placing the \nextension's installation components into the new version's lib and share \ndirectories. But it is pg_upgrade's job to perform the correct catalog \nmodification per database during the upgrade.\n\n> Any extensions that are used by the old cluster need to be installed \n> into the new cluster. Each database in the old cluster will have its \n> current version of all extensions migrated to the new cluster as-is. \n> You can use the ALTER EXTENSION command, on a per-database basis, to \n> update its extensions post-upgrade.\n\nThat assumes that the extension SQL files are capable of detecting a \nserver version change and perform the necessary (if any) steps to alter \nthe extension's objects accordingly.\n\nOff the top of my head I don't remember what happens when one executes \nALTER EXTENSION ... UPGRADE ... 
when it is already on the latest version \n*of the extension*. Might be an error or a no-op.\n\nAnd to make matters worse, it is not possible to work around this with a \nDROP EXTENSION ... CREATE EXTENSION. There are extensions that create \nobjects, like user defined data types and functions, that will be \nreferenced by end user objects like tables and views.\n\n\nRegards, Jan\n\n-- \nJan Wieck\n\n\n",
"msg_date": "Thu, 29 Jul 2021 12:28:42 -0400",
"msg_from": "Jan Wieck <jan@wi3ck.info>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Thu, Jul 29, 2021 at 12:22:59PM -0400, Dave Cramer wrote:\n> On Thu, 29 Jul 2021 at 12:16, Bruce Momjian <bruce@momjian.us> wrote:\n> Can you review the text I just posted? Thanks. I think we are making\n> progress. ;-)\n> \n> \n> I am OK with Everything except\n> \n> Do not load the schema definitions,\n> e.g., <command>CREATE EXTENSION pgcrypto</command>, because these\n> will be recreated from the old cluster. (The extensions may be\n> upgraded later using <literal>ALTER EXTENSION ... UPGRADE</literal>.)\n> \n> I take issue with the word \"recreated\". This implies something new is created,\n> when in fact the old definitions are simply copied over.\n> \n> As I said earlier; using the wording \"may be upgraded\" is not nearly cautionary\n> enough.\n\nOK, I changed it to \"copy\" though I used \"recreated\" earlier since I was\nworried \"copy\" would be confused with file copy. I changed the\nrecommendation to \"should be\".\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.",
"msg_date": "Thu, 29 Jul 2021 12:34:48 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Thu, Jul 29, 2021 at 9:28 AM Jan Wieck <jan@wi3ck.info> wrote:\n\n> On 7/29/21 12:00 PM, David G. Johnston wrote:\n> > Ok, looking at the flow again, where exactly would the user even be able\n> > to execute \"CREATE EXTENSION\" meaningfully? The relevant databases do\n> > not exist (not totally sure what happens to the postgres database\n> > created during the initdb step...) so at the point where the user is\n> > \"installing the extension\" all they can reasonably do is a server-level\n> > install (they could maybe create extension in the postgres database, but\n> > does that even matter?).\n> >\n> > So, I'd propose simplifying this all to something like:\n> >\n> > Install extensions on the new server\n>\n> Extensions are not installed on the server level. Their binary\n> components (shared objects) are, but the actual catalog modifications\n> that make them accessible are performed per database by CREATE\n> EXTENSION, which executes the SQL files associated with the extension.\n> And they can be performed differently per database, like for example\n> placing one and the same extension into different schemas in different\n> databases.\n>\n> pg_upgrade is not (and should not be) concerned with placing the\n> extension's installation components into the new version's lib and share\n> directories. But it is pg_upgrade's job to perform the correct catalog\n> modification per database during the upgrade.\n>\n\nThat is exactly the point I am making. The section is informing the user\nof things to do that the server will not do. Which is \"install extension\ncode into the O/S\" and that mentioning CREATE EXTENSION at this point in\nthe process is talking about something that is simply out-of-scope.\n\n\n\n> > Any extensions that are used by the old cluster need to be installed\n> > into the new cluster. 
Each database in the old cluster will have its\n> > current version of all extensions migrated to the new cluster as-is.\n> > You can use the ALTER EXTENSION command, on a per-database basis, to\n> > update its extensions post-upgrade.\n>\n> That assumes that the extension SQL files are capable of detecting a\n> server version change and perform the necessary (if any) steps to alter\n> the extension's objects accordingly.\n>\n> Off the top of my head I don't remember what happens when one executes\n> ALTER EXTENSION ... UPGRADE ... when it is already on the latest version\n> *of the extension*. Might be an error or a no-op.\n>\n> And to make matters worse, it is not possible to work around this with a\n> DROP EXTENSION ... CREATE EXTENSION. There are extensions that create\n> objects, like user defined data types and functions, that will be\n> referenced by end user objects like tables and views.\n>\n>\nThese are all excellent points - but at present pg_upgrade simply doesn't\ncare and hopes that the extension author's documentation deals with these\npossibilities in a sane manner.\n\nDavid J.",
"msg_date": "Thu, 29 Jul 2021 09:44:30 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On 7/29/21 12:44 PM, David G. Johnston wrote:\n\n> ... - but at present pg_upgrade simply \n> doesn't care and hopes that the extension author's documentation deals \n> with these possibilities in a sane manner.\n\npg_upgrade is not able to address this in a guaranteed, consistent \nfashion. At this point there is no way to even make sure that a 3rd \nparty extension provides the scripts needed to upgrade from one \nextension version to the next. We don't have the mechanism to upgrade \nthe same extension version from one server version to the next, in case \nthere are any modifications in place necessary.\n\nHow exactly do you envision that we, the PostgreSQL project, make sure \nthat extension developers provide those mechanisms in the future?\n\n\nRegards, Jan\n\n-- \nJan Wieck\n\n\n",
"msg_date": "Thu, 29 Jul 2021 12:55:51 -0400",
"msg_from": "Jan Wieck <jan@wi3ck.info>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Thu, Jul 29, 2021 at 9:34 AM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Thu, Jul 29, 2021 at 12:22:59PM -0400, Dave Cramer wrote:\n> > On Thu, 29 Jul 2021 at 12:16, Bruce Momjian <bruce@momjian.us> wrote:\n> > Can you review the text I just posted? Thanks. I think we are\n> making\n> > progress. ;-)\n> >\n> >\n> > I am OK with Everything except\n> >\n> > Do not load the schema definitions,\n> > e.g., <command>CREATE EXTENSION pgcrypto</command>, because these\n> > will be recreated from the old cluster. (The extensions may be\n> > upgraded later using <literal>ALTER EXTENSION ... UPGRADE</literal>.)\n> >\n> > I take issue with the word \"recreated\". This implies something new is\n> created,\n> > when in fact the old definitions are simply copied over.\n> >\n> > As I said earlier; using the wording \"may be upgraded\" is not nearly\n> cautionary\n> > enough.\n>\n> OK, I changed it to \"copy\" though I used \"recreated\" earlier since I was\n> worried \"copy\" would be confused with file copy. I changed the\n> recommendation to \"should be\".\n>\n>\nI'm warming up to \"should\" but maybe add a \"why\" such as \"the old versions\nare considered unsupported on the newer server\".\n\nI dislike \"usually via operating system commands\", just offload this to the\nextension, i.e., \"must be installed in the new cluster via installation\nprocedures specific to, and documented by, each extension (for contrib it\nis usually enough to ensure the -contrib package was chosen to be installed\nalong with the -server package for your operating system.)\"\n\nI would simplify the first two sentences to just:\n\nIf the old cluster used extensions those same extensions must be installed\nin the new cluster via installation procedures specific to, and documented\nby, each extension. 
For contrib extensions it is usually enough to install\nthe -contrib package via the same method you used to install the PostgreSQL\nserver.\n\nI would consider my suggestion for \"copy as-is/alter extension\" wording in\nmy previous email in lieu of the existing third and fourth sentences, with\nthe \"should\" and \"why\" wording possibly worked in. But the existing works\nok.\n\nDavid J.",
"msg_date": "Thu, 29 Jul 2021 10:00:12 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Thu, Jul 29, 2021 at 9:55 AM Jan Wieck <jan@wi3ck.info> wrote:\n\n> How exactly do you envision that we, the PostgreSQL project, make sure\n> that extension developers provide those mechanisms in the future?\n>\n>\nI have no suggestions here, and don't really plan to get deeply involved in\nthis area of the project anytime soon. But it is definitely a topic worthy\nof discussion on a new thread.\n\nDavid J.",
"msg_date": "Thu, 29 Jul 2021 10:05:21 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": ">\n>\n>\n> I would simplify the first two sentences to just:\n>\n> If the old cluster used extensions those same extensions must be installed\n> in the new cluster via installation procedures specific to, and documented\n> by, each extension. For contrib extensions it is usually enough to install\n> the -contrib package via the same method you used to install the PostgreSQL\n> server.\n>\n\nWell this is not strictly true. There are many extensions that would work\njust fine with the current pg_upgrade. It may not even be necessary to\nrecompile them.\n\nDave",
"msg_date": "Thu, 29 Jul 2021 13:10:01 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On 2021-Jul-29, Dave Cramer wrote:\n\n> > If the old cluster used extensions those same extensions must be\n> > installed in the new cluster via installation procedures specific\n> > to, and documented by, each extension. For contrib extensions it is\n> > usually enough to install the -contrib package via the same method\n> > you used to install the PostgreSQL server.\n> \n> Well this is not strictly true. There are many extensions that would\n> work just fine with the current pg_upgrade. It may not even be\n> necessary to recompile them.\n\nIt is always necessary to recompile because of the PG_MODULE_MAGIC\ndeclaration, which is mandatory and contains a check that the major\nversion matches. Copying the original shared object never works.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"But static content is just dynamic content that isn't moving!\"\n http://smylers.hates-software.com/2007/08/15/fe244d0c.html\n\n\n",
"msg_date": "Thu, 29 Jul 2021 13:13:48 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Thu, 29 Jul 2021 at 13:13, Alvaro Herrera <alvherre@alvh.no-ip.org>\nwrote:\n\n> On 2021-Jul-29, Dave Cramer wrote:\n>\n> > > If the old cluster used extensions those same extensions must be\n> > > installed in the new cluster via installation procedures specific\n> > > to, and documented by, each extension. For contrib extensions it is\n> > > usually enough to install the -contrib package via the same method\n> > > you used to install the PostgreSQL server.\n> >\n> > Well this is not strictly true. There are many extensions that would\n> > work just fine with the current pg_upgrade. It may not even be\n> > necessary to recompile them.\n>\n> It is always necessary to recompile because of the PG_MODULE_MAGIC\n> declaration, which is mandatory and contains a check that the major\n> version matches. Copying the original shared object never works.\n>\n\nThx, I knew I was on shaky ground with that last statement :)\n\nDave",
"msg_date": "Thu, 29 Jul 2021 13:15:30 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On 2021-Jul-29, Bruce Momjian wrote:\n\n> + If the old cluster used extensions, whether from\n> + <filename>contrib</filename> or some other source, it used\n> + shared object files (or DLLs) to implement these extensions, e.g.,\n> + <filename>pgcrypto.so</filename>. Now, shared object files matching\n> + the new server binary must be installed in the new cluster, usually\n> + via operating system commands. Do not load the schema definitions,\n> + e.g., <command>CREATE EXTENSION pgcrypto</command>, because these\n> + will be copied from the old cluster. (Extensions should be upgraded\n> + later using <literal>ALTER EXTENSION ... UPGRADE</literal>.)\n\nI propose this:\n\n<para>\n If the old cluster used shared-object files (or DLLs) for extensions\n or other loadable modules, install recompiled versions of those files\n onto the new cluster.\n Do not install the extension themselves (i.e., do not run\n <command>CREATE EXTENSION</command>), because extension definitions\n will be carried forward from the old cluster.\n</para>\n\n<para>\n Extensions can be upgraded after pg_upgrade completes using\n <command>ALTER EXTENSION ... UPGRADE</command>, on a per-database\n basis.\n</para>\n\nI suggest \" ... for extensions or other loadable modules\" because\nloadable modules aren't necessarily for extensions. Also, it's\nperfectly possible to have extension that don't have a loadable module.\n\nI suggest \"extension definitions ... carried forward\" instead of\n\"extensions ... copied\" (your proposed text) to avoid the idea that\nfiles are copied; use it instead of \"schema definitions ... 
upgraded\"\n(the current docs) to avoid the idea that they are actually upgraded;\nalso, \"schema definition\" seems a misleading term to use here.\n\nI suggest \"can be upgraded\" rather than \"should be upgraded\" because\nwe're not making a recommendation, merely stating the fact that it is\npossible to do so.\n\nI suggest using the imperative mood, to be consistent with the\nsurrounding text. (Applies to the first para; the second para is\ninformative.)\n\n\nI haven't seen it mentioned in the thread, but I think the final phrase\nof this <step> should be a separate step,\n\n<step>\n <title>Copy custom full-text search files</title>\n <para>\n Copy any custom full text search file (dictionary, synonym, thesaurus,\n stop word list) to the new server.\n </para>\n</step>\n\nWhile this is closely related to extensions, it's completely different.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\"Sallah, I said NO camels! That's FIVE camels; can't you count?\"\n(Indiana Jones)\n\n\n",
"msg_date": "Thu, 29 Jul 2021 13:43:09 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
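The per-database step Alvaro's second paragraph describes can be sketched in SQL. Note that the actual SQL keyword is UPDATE, not UPGRADE — the command is ALTER EXTENSION name UPDATE [TO version] — and pgcrypto below is only an illustration:

```sql
-- Run in each database of the new cluster. Lists installed extensions
-- whose version lags the newest version shipped with the new binaries.
SELECT name, installed_version, default_version
FROM pg_available_extensions
WHERE installed_version IS NOT NULL
  AND installed_version IS DISTINCT FROM default_version;

-- Then, for each such extension:
ALTER EXTENSION pgcrypto UPDATE;
```

The pg_available_extensions view reflects the control files shipped with the new server, so default_version is the version the recompiled binaries expect.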
{
"msg_contents": "On Fri, Jul 30, 2021 at 12:14 AM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Thu, Jul 29, 2021 at 11:46:12AM -0400, Jan Wieck wrote:\n> >\n> > This assumes that the scripts executed during CREATE EXTENSION have no\n> > conditional code in them that depends on the server version. Running the\n> > same SQL script on different server versions can have different effects.\n> >\n> > I don't have a ready example of such an extension, but if we ever would\n> > convert the backend parts of Slony into an extension, it would be one.\n>\n> The bottom line is that we have _no_ mechanism to handle this except\n> uninstalling the extension before the upgrade and re-installing it\n> afterward, which isn't clearly spelled out for each extension, as far as\n> I know, and would not work for extensions that create data types.\n>\n> Yes, I do feel this is a big hold in our upgrade instructions.\n\nFWIW I have an example of such an extension: powa-archivist extension\nscript runs an anonymous block code and creates if needed a custom\nwrapper to emulate the current_setting(text, boolean) variant that\ndoesn't exist on pre-pg96 servers.\n\n\n",
"msg_date": "Fri, 30 Jul 2021 02:04:16 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
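As a hedged illustration of the version-conditional scripting Julien describes — this is a hypothetical sketch, not powa-archivist's actual code — an extension script can inspect the server version and create a compatibility wrapper only where needed:

```sql
-- Hypothetical sketch: emulate current_setting(text, boolean) on servers
-- older than 9.6, where only the one-argument form exists.
DO $do$
BEGIN
  IF current_setting('server_version_num')::int < 90600 THEN
    CREATE FUNCTION public.current_setting(setting_name text, missing_ok boolean)
    RETURNS text LANGUAGE plpgsql AS $f$
    BEGIN
      RETURN pg_catalog.current_setting(setting_name);
    EXCEPTION WHEN undefined_object THEN
      -- an unknown parameter raises ERRCODE_UNDEFINED_OBJECT
      IF missing_ok THEN
        RETURN NULL;
      END IF;
      RAISE;
    END
    $f$;
  END IF;
END
$do$;
```

Because the same script creates different objects depending on the server it runs on, carrying the old definitions forward with pg_upgrade and running CREATE EXTENSION fresh on the new server are not guaranteed to be equivalent, which is Jan's point.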
{
"msg_contents": "Dave Cramer\n\n\nOn Thu, 29 Jul 2021 at 13:43, Alvaro Herrera <alvherre@alvh.no-ip.org>\nwrote:\n\n> On 2021-Jul-29, Bruce Momjian wrote:\n>\n> > + If the old cluster used extensions, whether from\n> > + <filename>contrib</filename> or some other source, it used\n> > + shared object files (or DLLs) to implement these extensions, e.g.,\n> > + <filename>pgcrypto.so</filename>. Now, shared object files\n> matching\n> > + the new server binary must be installed in the new cluster, usually\n> > + via operating system commands. Do not load the schema definitions,\n> > + e.g., <command>CREATE EXTENSION pgcrypto</command>, because these\n> > + will be copied from the old cluster. (Extensions should be\n> upgraded\n> > + later using <literal>ALTER EXTENSION ... UPGRADE</literal>.)\n>\n> I propose this:\n>\n> <para>\n> If the old cluster used shared-object files (or DLLs) for extensions\n> or other loadable modules, install recompiled versions of those files\n> onto the new cluster.\n> Do not install the extension themselves (i.e., do not run\n> <command>CREATE EXTENSION</command>), because extension definitions\n> will be carried forward from the old cluster.\n> </para>\n>\n> <para>\n> Extensions can be upgraded after pg_upgrade completes using\n> <command>ALTER EXTENSION ... UPGRADE</command>, on a per-database\n> basis.\n> </para>\n>\n> I suggest \" ... for extensions or other loadable modules\" because\n> loadable modules aren't necessarily for extensions. Also, it's\n> perfectly possible to have extension that don't have a loadable module.\n>\n> I suggest \"extension definitions ... carried forward\" instead of\n> \"extensions ... copied\" (your proposed text) to avoid the idea that\n> files are copied; use it instead of \"schema definitions ... 
upgraded\"\n> (the current docs) to avoid the idea that they are actually upgraded;\n> also, \"schema definition\" seems a misleading term to use here.\n>\n\nI like \"carried forward\", however it presumes quite a bit of knowledge of\nwhat is going on inside pg_upgrade.\nThat said I don't have a better option short of explaining the whole thing\nwhich is clearly out of scope.\n\n>\n> I suggest \"can be upgraded\" rather than \"should be upgraded\" because\n> we're not making a recommendation, merely stating the fact that it is\n> possible to do so.\n>\n> Why not recommend it? I was going to suggest that we actually run alter\nextension upgrade ... on all of them as a solution.\n\nWhat are the downsides to upgrading them all by default ? AFAIK if they\nneed upgrading this should run all of the SQL scripts, if they don't then\nthis should be a NO-OP.\n\nDave",
"msg_date": "Thu, 29 Jul 2021 14:06:10 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On 7/29/21 2:04 PM, Julien Rouhaud wrote:\n>> On Thu, Jul 29, 2021 at 11:46:12AM -0400, Jan Wieck wrote:\n\n>> > I don't have a ready example of such an extension, but if we ever would\n>> > convert the backend parts of Slony into an extension, it would be one.\n\n> FWIW I have an example of such an extension: powa-archivist extension\n> script runs an anonymous block code and creates if needed a custom\n> wrapper to emulate the current_setting(text, boolean) variant that\n> doesn't exist on pre-pg96 servers.\n> \n\nThank you!\n\nI presume that pg_upgrade on a database with that extension installed \nwould silently succeed and have the pg_catalog as well as public (or \nwherever) version of that function present.\n\n\nRegards, Jan\n\n-- \nJan Wieck\n\n\n",
"msg_date": "Thu, 29 Jul 2021 14:14:56 -0400",
"msg_from": "Jan Wieck <jan@wi3ck.info>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On 2021-Jul-29, Dave Cramer wrote:\n\n> > I suggest \"can be upgraded\" rather than \"should be upgraded\" because\n> > we're not making a recommendation, merely stating the fact that it is\n> > possible to do so.\n>\n> Why not recommend it? I was going to suggest that we actually run alter\n> extension upgrade ... on all of them as a solution.\n> \n> What are the downsides to upgrading them all by default ? AFAIK if they\n> need upgrading this should run all of the SQL scripts, if they don't then\n> this should be a NO-OP.\n\nI'm not aware of any downsides, and I think it would be a good idea to\ndo so, but I also think that it would be good to sort out the docs\nprecisely (a backpatchable doc change, IMV) and once that is done we can\ndiscuss how to improve pg_upgrade so that users no longer need to do\nthat (a non-backpatchable code change). Incremental improvements and\nall that ...\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\"La libertad es como el dinero; el que no la sabe emplear la pierde\" (Alvarez)\n\n\n",
"msg_date": "Thu, 29 Jul 2021 14:19:17 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
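Until pg_upgrade does this itself, the bulk approach Dave proposes can be approximated with psql's \gexec, which runs each row of a query result as a statement (again, the SQL keyword is UPDATE). This is a sketch and must still be repeated in every database:

```sql
-- In psql, connected to one database of the new cluster:
SELECT format('ALTER EXTENSION %I UPDATE;', name)
FROM pg_available_extensions
WHERE installed_version IS NOT NULL
  AND installed_version <> default_version
\gexec
```

Rerunning this is harmless: once every extension is current, the query returns no rows and \gexec executes nothing, matching Dave's observation that the step is a no-op when no update is needed.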
{
"msg_contents": "On Thu, Jul 29, 2021 at 10:00:12AM -0700, David G. Johnston wrote:\n> I'm warming up to \"should\" but maybe add a \"why\" such as \"the old versions are\n> considered unsupported on the newer server\".\n> \n> I dislike \"usually via operating system commands\", just offload this to the\n> extension, i.e., \"must be installed in the new cluster via installation\n> procedures specific to, and documented by, each extension (for contrib it is\n> usually enough to ensure the -contrib package was chosen to be installed along\n> with the -server package for your operating system.)\"\n> \n> I would simplify the first two sentences to just:\n> \n> If the old cluster used extensions those same extensions must be installed in\n> the new cluster via installation procedures specific to, and documented by,\n> each extension. For contrib extensions it is usually enough to install the\n> -contrib package via the same method you used to install the PostgreSQL server.\n> \n> I would consider my suggestion for \"copy as-is/alter extension\" wording in my\n> previous email in lieu of the existing third and fourth sentences, with the\n> \"should\" and \"why\" wording possibly worked in. But the existing works ok.\n\nI am sorry but none of your suggestions are exciting me --- they seem to\nget into too much detail for the context.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Thu, 29 Jul 2021 14:28:55 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Thu, Jul 29, 2021 at 02:28:55PM -0400, Bruce Momjian wrote:\n> On Thu, Jul 29, 2021 at 10:00:12AM -0700, David G. Johnston wrote:\n> > I'm warming up to \"should\" but maybe add a \"why\" such as \"the old versions are\n> > considered unsupported on the newer server\".\n> > \n> > I dislike \"usually via operating system commands\", just offload this to the\n> > extension, i.e., \"must be installed in the new cluster via installation\n> > procedures specific to, and documented by, each extension (for contrib it is\n> > usually enough to ensure the -contrib package was chosen to be installed along\n> > with the -server package for your operating system.)\"\n> > \n> > I would simplify the first two sentences to just:\n> > \n> > If the old cluster used extensions those same extensions must be installed in\n> > the new cluster via installation procedures specific to, and documented by,\n> > each extension. For contrib extensions it is usually enough to install the\n> > -contrib package via the same method you used to install the PostgreSQL server.\n\nOh, and you can't use the same installation procedures as when you\ninstalled the extension because that probably included CREATE EXTENSION.\nThis really highlights why this is tricky to explain --- we need the\nbinaries, but not the SQL that goes with it.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Thu, 29 Jul 2021 14:35:01 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
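One hedged way to check the "binaries but not the SQL" state Bruce describes: LOAD pulls a shared object into the backend — verifying its PG_MODULE_MAGIC block against the running server — without creating any SQL-level objects. pgcrypto is only an illustration, and LOAD of arbitrary modules generally requires superuser:

```sql
-- Errors out unless a pgcrypto shared object built for this server
-- version is present on the library search path; creates no objects.
LOAD 'pgcrypto';
```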
{
"msg_contents": "On Thu, Jul 29, 2021 at 11:28 AM Bruce Momjian <bruce@momjian.us> wrote:\n\n> I am sorry but none of your suggestions are exciting me --- they seem to\n> get into too much detail for the context.\n>\n\nFair, I still need to consider Alvaro's anyway, but given the amount of\ngeneral angst surrounding performing a pg_upgrade I do not feel that being\ndetailed is necessarily a bad thing, so long as the detail is relevant.\nBut I'll keep this in mind for my next reply.\n\nDavid J.",
"msg_date": "Thu, 29 Jul 2021 11:36:58 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Thu, Jul 29, 2021 at 11:35 AM Bruce Momjian <bruce@momjian.us> wrote:\n\n>\n> Oh, and you can't use the same installation procedures as when you\n> installed the extension because that probably included CREATE EXTENSION.\n> This really highlights why this is tricky to explain --- we need the\n> binaries, but not the SQL that goes with it.\n>\n\nMaybe...but the fact that \"installation to the O/S\" is cluster-wide and\n\"CREATE EXTENSION\" is database-specific I believe this will generally take\ncare of itself in practice, especially if we leave the part (but ignore any\ninstallation instructions that advise executing create extension).\n\nDavid J.",
"msg_date": "Thu, 29 Jul 2021 11:38:57 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Thu, Jul 29, 2021 at 01:43:09PM -0400, Álvaro Herrera wrote:\n> On 2021-Jul-29, Bruce Momjian wrote:\n> \n> > + If the old cluster used extensions, whether from\n> > + <filename>contrib</filename> or some other source, it used\n> > + shared object files (or DLLs) to implement these extensions, e.g.,\n> > + <filename>pgcrypto.so</filename>. Now, shared object files matching\n> > + the new server binary must be installed in the new cluster, usually\n> > + via operating system commands. Do not load the schema definitions,\n> > + e.g., <command>CREATE EXTENSION pgcrypto</command>, because these\n> > + will be copied from the old cluster. (Extensions should be upgraded\n> > + later using <literal>ALTER EXTENSION ... UPGRADE</literal>.)\n> \n> I propose this:\n> \n> <para>\n> If the old cluster used shared-object files (or DLLs) for extensions\n> or other loadable modules, install recompiled versions of those files\n> onto the new cluster.\n> Do not install the extension themselves (i.e., do not run\n> <command>CREATE EXTENSION</command>), because extension definitions\n> will be carried forward from the old cluster.\n> </para>\n> \n> <para>\n> Extensions can be upgraded after pg_upgrade completes using\n> <command>ALTER EXTENSION ... UPGRADE</command>, on a per-database\n> basis.\n> </para>\n> \n> I suggest \" ... for extensions or other loadable modules\" because\n> loadable modules aren't necessarily for extensions. Also, it's\n> perfectly possible to have extension that don't have a loadable module.\n\nYes, good point.\n\n> I suggest \"extension definitions ... carried forward\" instead of\n> \"extensions ... copied\" (your proposed text) to avoid the idea that\n> files are copied; use it instead of \"schema definitions ... 
upgraded\"\n> (the current docs) to avoid the idea that they are actually upgraded;\n> also, \"schema definition\" seems a misleading term to use here.\n\nI used the term \"duplicated\".\n\n> I suggest \"can be upgraded\" rather than \"should be upgraded\" because\n> we're not making a recommendation, merely stating the fact that it is\n> possible to do so.\n\nAgreed. Most extensions don't have updates between major versions.\n\n> I suggest using the imperative mood, to be consistent with the\n> surrounding text. (Applies to the first para; the second para is\n> informative.)\n\nOK.\n\n> I haven't seen it mentioned in the thread, but I think the final phrase\n> of this <step> should be a separate step,\n> \n> <step>\n> <title>Copy custom full-text search files</title>\n> <para>\n> Copy any custom full text search file (dictionary, synonym, thesaurus,\n> stop word list) to the new server.\n> </para>\n> </step>\n> \n> While this is closely related to extensions, it's completely different.\n\nAgreed. See attached patch.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.",
"msg_date": "Thu, 29 Jul 2021 15:06:45 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Thu, Jul 29, 2021 at 02:19:17PM -0400, Álvaro Herrera wrote:\n> On 2021-Jul-29, Dave Cramer wrote:\n> \n> > > I suggest \"can be upgraded\" rather than \"should be upgraded\" because\n> > > we're not making a recommendation, merely stating the fact that it is\n> > > possible to do so.\n> >\n> > Why not recommend it? I was going to suggest that we actually run alter\n> > extension upgrade ... on all of them as a solution.\n> > \n> > What are the downsides to upgrading them all by default ? AFAIK if they\n> > need upgrading this should run all of the SQL scripts, if they don't then\n> > this should be a NO-OP.\n> \n> I'm not aware of any downsides, and I think it would be a good idea to\n> do so, but I also think that it would be good to sort out the docs\n> precisely (a backpatchable doc change, IMV) and once that is done we can\n> discuss how to improve pg_upgrade so that users no longer need to do\n> that (a non-backpatchable code change). Incremental improvements and\n> all that ...\n\nAgreed. I don't think we have any consistent set of steps for detecting\nand upgrading extensions --- that needs a lot more research.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Thu, 29 Jul 2021 15:08:04 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "Dave Cramer\n\n\nOn Thu, 29 Jul 2021 at 15:06, Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Thu, Jul 29, 2021 at 01:43:09PM -0400, Álvaro Herrera wrote:\n> > On 2021-Jul-29, Bruce Momjian wrote:\n> >\n> > > + If the old cluster used extensions, whether from\n> > > + <filename>contrib</filename> or some other source, it used\n> > > + shared object files (or DLLs) to implement these extensions,\n> e.g.,\n> > > + <filename>pgcrypto.so</filename>. Now, shared object files\n> matching\n> > > + the new server binary must be installed in the new cluster,\n> usually\n> > > + via operating system commands. Do not load the schema\n> definitions,\n> > > + e.g., <command>CREATE EXTENSION pgcrypto</command>, because these\n> > > + will be copied from the old cluster. (Extensions should be\n> upgraded\n> > > + later using <literal>ALTER EXTENSION ... UPGRADE</literal>.)\n> >\n> > I propose this:\n> >\n> > <para>\n> > If the old cluster used shared-object files (or DLLs) for extensions\n> > or other loadable modules, install recompiled versions of those files\n> > onto the new cluster.\n> > Do not install the extension themselves (i.e., do not run\n> > <command>CREATE EXTENSION</command>), because extension definitions\n> > will be carried forward from the old cluster.\n> > </para>\n> >\n> > <para>\n> > Extensions can be upgraded after pg_upgrade completes using\n> > <command>ALTER EXTENSION ... UPGRADE</command>, on a per-database\n> > basis.\n> > </para>\n> >\n> > I suggest \" ... for extensions or other loadable modules\" because\n> > loadable modules aren't necessarily for extensions. Also, it's\n> > perfectly possible to have extension that don't have a loadable module.\n>\n> Yes, good point.\n>\n> > I suggest \"extension definitions ... carried forward\" instead of\n> > \"extensions ... copied\" (your proposed text) to avoid the idea that\n> > files are copied; use it instead of \"schema definitions ... 
upgraded\"\n> > (the current docs) to avoid the idea that they are actually upgraded;\n> > also, \"schema definition\" seems a misleading term to use here.\n>\n> I used the term \"duplicated\".\n>\n> > I suggest \"can be upgraded\" rather than \"should be upgraded\" because\n> > we're not making a recommendation, merely stating the fact that it is\n> > possible to do so.\n>\n> Agreed. Most extensions don't have updates between major versions.\n>\n> > I suggest using the imperative mood, to be consistent with the\n> > surrounding text. (Applies to the first para; the second para is\n> > informative.)\n>\n> OK.\n>\n> > I haven't seen it mentioned in the thread, but I think the final phrase\n> > of this <step> should be a separate step,\n> >\n> > <step>\n> > <title>Copy custom full-text search files</title>\n> > <para>\n> > Copy any custom full text search file (dictionary, synonym, thesaurus,\n> > stop word list) to the new server.\n> > </para>\n> > </step>\n> >\n> > While this is closely related to extensions, it's completely different.\n>\n> Agreed. See attached patch.\n>\n\nSo back to the original motivation for bringing this up. Recall that a\ncluster was upgraded. 
Everything appeared to work fine, except that the\ndefinitions of the functions were slightly different enough to cause a\nfatal issue on restoring a dump from pg_dump.\nSince pg_upgrade is now part of the core project, we need to make sure this\nis not possible or be much more insistent that the user needs to upgrade\nany extensions that require it.\n\nI believe we should be doing more than making a recommendation.\n\nDave",
"msg_date": "Thu, 29 Jul 2021 15:27:49 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Thu, Jul 29, 2021 at 12:28 PM Dave Cramer <davecramer@gmail.com> wrote:\n\n> So back to the original motivation for bringing this up. Recall that a\n> cluster was upgraded. Everything appeared to work fine, except that the\n> definitions of the functions were slightly different enough to cause a\n> fatal issue on restoring a dump from pg_dump.\n> Since pg_upgrade is now part of the core project, we need to make sure\n> this is not possible or be much more insistent that the user needs to\n> upgrade any extensions that require it.\n>\n\nI'm missing something here because I do not recall that level of detail\nbeing provided. The first email was simply an observation that the\npg_upgraded version and the create extension version were different in the\nnew cluster - which is working as designed (opinions of said design, at\nleast right here and now, are immaterial).\n\n From your piecemeal follow-on descriptions I do see that pg_dump seems to\nbe involved - though a self-contained demonstration is not available that I\ncan find. But so far as pg_dump is concerned it just needs to export the\ncurrent version each database is running for a given extension, and\npg_restore issue a CREATE EXTENSION for the same version when prompted. I\npresume it does this correctly but admittedly haven't checked. IOW, if\npg_dump is failing here it is more likely its own bug and should be fixed\nrather than blame pg_upgrade. Or its pg_stat_statement's bug and it should\nbe fixed.\n\nIn theory the procedure and requirements imposed by pg_upgrade here seem\nreasonable. Fewer moving parts during the upgrade is strictly better. The\ndocumentation was not clear on how things worked, and so its being cleaned\nup, but the how hasn't been shown yet to be a problem nor that simply\nrunning alter extension would be an appropriate solution for this single\ncase let alone in general. 
Since running alter extension manually is\nsimple constructing such a test case and proving that the alter extension\nat least works for it should be straight-forward.\n\nWithout that I cannot support changing the behavior or even saying that\nusers must run alter extension manually to overcome a limitation in\npg_upgrade. They should do so in order to keep their code base current and\nrunning supported code - but that is a judgement we've always left to the\nDBA, with the exception of strongly discouraging not updating to the newest\npoint release and getting off unsupported major releases.\n\nDavid J.",
"msg_date": "Thu, 29 Jul 2021 12:56:46 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "Dave Cramer <davecramer@gmail.com> writes:\n> So back to the original motivation for bringing this up. Recall that a\n> cluster was upgraded. Everything appeared to work fine, except that the\n> definitions of the functions were slightly different enough to cause a\n> fatal issue on restoring a dump from pg_dump.\n> Since pg_upgrade is now part of the core project, we need to make sure this\n> is not possible or be much more insistent that the user needs to upgrade\n> any extensions that require it.\n\nTBH, this seems like mostly the fault of the extension author.\nThe established design is that the SQL-level objects will be\ncarried forward verbatim by pg_upgrade. Therefore, any newer-version\nshared library MUST be able to cope sanely with SQL objects from\nsome previous revision. The contrib modules provide multiple\nexamples of ways to do this.\n\nIf particular extension authors don't care to go to that much\ntrouble, it's on their heads to document that their extensions\nare unsafe to pg_upgrade.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 29 Jul 2021 17:01:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": " On Thu, Jul 29, 2021 at 05:01:24PM -0400, Tom Lane wrote:\n> Dave Cramer <davecramer@gmail.com> writes:\n> > So back to the original motivation for bringing this up. Recall that a\n> > cluster was upgraded. Everything appeared to work fine, except that the\n> > definitions of the functions were slightly different enough to cause a\n> > fatal issue on restoring a dump from pg_dump.\n> > Since pg_upgrade is now part of the core project, we need to make sure this\n> > is not possible or be much more insistent that the user needs to upgrade\n> > any extensions that require it.\n> \n> TBH, this seems like mostly the fault of the extension author.\n> The established design is that the SQL-level objects will be\n> carried forward verbatim by pg_upgrade. Therefore, any newer-version\n> shared library MUST be able to cope sanely with SQL objects from\n> some previous revision. The contrib modules provide multiple\n> examples of ways to do this.\n> \n> If particular extension authors don't care to go to that much\n> trouble, it's on their heads to document that their extensions\n> are unsafe to pg_upgrade.\n\nI think we need to first give clear instructions on how to find out if\nan extension update is available, and then how to update it. I am\nthinking we should supply a query which reports all extensions that can\nbe upgraded, at least for contrib.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Thu, 29 Jul 2021 18:11:03 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> I think we need to first give clear instructions on how to find out if\n> an extension update is available, and then how to update it. I am\n> thinking we should supply a query which reports all extensions that can\n> be upgraded, at least for contrib.\n\nI suggested awhile ago that pg_upgrade should look into\npg_available_extensions in the new cluster, and prepare\na script with ALTER EXTENSION UPDATE commands for\nanything that's installed but is not the (new cluster's)\ndefault version.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 29 Jul 2021 18:19:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Thu, Jul 29, 2021 at 06:19:56PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > I think we need to first give clear instructions on how to find out if\n> > an extension update is available, and then how to update it. I am\n> > thinking we should supply a query which reports all extensions that can\n> > be upgraded, at least for contrib.\n> \n> I suggested awhile ago that pg_upgrade should look into\n> pg_available_extensions in the new cluster, and prepare\n> a script with ALTER EXTENSION UPDATE commands for\n> anything that's installed but is not the (new cluster's)\n> default version.\n\nI can do that, but I would think a pg_dump/restore would also have this\nissue, so should this be more generic? If we had a doc section about\nthat, we could add it a step to run in the pg_upgrade instructions.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Thu, 29 Jul 2021 18:26:01 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Thu, Jul 29, 2021 at 06:19:56PM -0400, Tom Lane wrote:\n>> I suggested awhile ago that pg_upgrade should look into\n>> pg_available_extensions in the new cluster, and prepare\n>> a script with ALTER EXTENSION UPDATE commands for\n>> anything that's installed but is not the (new cluster's)\n>> default version.\n\n> I can do that, but I would think a pg_dump/restore would also have this\n> issue, so should this be more generic?\n\nNo, because dump/restore does not have this issue. Regular pg_dump just\nissues \"CREATE EXTENSION\" commands, so you automatically get the target\nserver's default version.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 29 Jul 2021 18:29:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Thu, Jul 29, 2021 at 06:29:11PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > On Thu, Jul 29, 2021 at 06:19:56PM -0400, Tom Lane wrote:\n> >> I suggested awhile ago that pg_upgrade should look into\n> >> pg_available_extensions in the new cluster, and prepare\n> >> a script with ALTER EXTENSION UPDATE commands for\n> >> anything that's installed but is not the (new cluster's)\n> >> default version.\n> \n> > I can do that, but I would think a pg_dump/restore would also have this\n> > issue, so should this be more generic?\n> \n> No, because dump/restore does not have this issue. Regular pg_dump just\n> issues \"CREATE EXTENSION\" commands, so you automatically get the target\n> server's default version.\n\nOh, so pg_upgrade does it differently so the oids are preserved?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Thu, 29 Jul 2021 18:38:58 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Thu, 29 Jul 2021 at 18:39, Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Thu, Jul 29, 2021 at 06:29:11PM -0400, Tom Lane wrote:\n> > Bruce Momjian <bruce@momjian.us> writes:\n> > > On Thu, Jul 29, 2021 at 06:19:56PM -0400, Tom Lane wrote:\n> > >> I suggested awhile ago that pg_upgrade should look into\n> > >> pg_available_extensions in the new cluster, and prepare\n> > >> a script with ALTER EXTENSION UPDATE commands for\n> > >> anything that's installed but is not the (new cluster's)\n> > >> default version.\n> >\n> > > I can do that, but I would think a pg_dump/restore would also have this\n> > > issue, so should this be more generic?\n> >\n> > No, because dump/restore does not have this issue. Regular pg_dump just\n> > issues \"CREATE EXTENSION\" commands, so you automatically get the target\n> > server's default version.\n>\n> Oh, so pg_upgrade does it differently so the oids are preserved?\n>\n>\nI suspect this is part of --binary_upgrade mode\n\nDave\n\n> --\n> Bruce Momjian <bruce@momjian.us> https://momjian.us\n> EDB https://enterprisedb.com\n>\n> If only the physical world exists, free will is an illusion.\n>\n>",
"msg_date": "Thu, 29 Jul 2021 19:06:25 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On 2021-Jul-29, Bruce Momjian wrote:\n\n> On Thu, Jul 29, 2021 at 06:29:11PM -0400, Tom Lane wrote:\n\n> > No, because dump/restore does not have this issue. Regular pg_dump just\n> > issues \"CREATE EXTENSION\" commands, so you automatically get the target\n> > server's default version.\n> \n> Oh, so pg_upgrade does it differently so the oids are preserved?\n\nHave a look at pg_dump --binary-upgrade output :-)\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\"Ellos andaban todos desnudos como su madre los parió, y también las mujeres,\naunque no vi más que una, harto moza, y todos los que yo vi eran todos\nmancebos, que ninguno vi de edad de más de XXX años\" (Cristóbal Colón)\n\n\n",
"msg_date": "Thu, 29 Jul 2021 19:26:46 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Thu, Jul 29, 2021 at 02:14:56PM -0400, Jan Wieck wrote:\n> \n> I presume that pg_upgrade on a database with that extension installed would\n> silently succeed and have the pg_catalog as well as public (or wherever)\n> version of that function present.\n\nI'll have to run a pg_upgrade with it to be 100% sure, but given that this is a\nplpgsql function and since the created function is part of the extension\ndependencies (and looking at pg_dump source code for binary-upgrade mode), I'm\nalmost certain that the upgraded cluster would have the pg96- version of the\nfunction even if upgrading to pg9.6+.\n\nNote that in that case the extension would appear to work normally, but the\nonly way to simulate missing_ok = true is to add a BEGIN/EXCEPTION block.\n\nSince this wrapper function is extensively used, it seems quite possible to\nlead to overflowing the snapshot subxip array, as the extension basically runs\nevery x minutes many functions in a single transaction to retrieve many\nperformance metrics. This can ruin the performance.\n\nThis was an acceptable trade off for people still using pg96- in 2021, but\nwould be silly to have on more recent versions.\n\nUnfortunately I don't see any easy way to avoid that, as there isn't any\nguarantee that a new version will be available after the upgrade. AFAICT the\nonly way to ensure that the correct version of the function is present from an\nextension point of view would be to add a dedicated function to overwrite any\nobject that depends on the server's version and document the need to call that\nafter a pg_upgrade.\n\n\n",
"msg_date": "Fri, 30 Jul 2021 10:02:57 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Thu, 29 Jul 2021 at 22:03, Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> On Thu, Jul 29, 2021 at 02:14:56PM -0400, Jan Wieck wrote:\n> >\n> > I presume that pg_upgrade on a database with that extension installed\n> would\n> > silently succeed and have the pg_catalog as well as public (or wherever)\n> > version of that function present.\n>\n> I'll have to run a pg_upgrade with it to be 100% sure, but given that this\n> is a\n> plpgsql function and since the created function is part of the extension\n> dependencies (and looking at pg_dump source code for binary-upgrade mode),\n> I'm\n> almost certain that the upgraded cluster would have the pg96- version of\n> the\n> function even if upgrading to pg9.6+.\n>\n> Note that in that case the extension would appear to work normally, but the\n> only way to simulate missing_ok = true is to add a BEGIN/EXCEPTION block.\n>\n> Since this wrapper function is extensively used, it seems quite possible to\n> lead to overflowing the snapshot subxip array, as the extension basically\n> runs\n> every x minutes many functions in a single trannsaction to retrieve many\n> performance metrics. This can ruin the performance.\n>\n> This was an acceptable trade off for people still using pg96- in 2021, but\n> would be silly to have on more recent versions.\n>\n> Unfortunately I don't see any easy way to avoid that, as there isn't any\n> guarantee that a new version will be available after the upgrade. 
AFAICT\n> the\n> only way to ensure that the correct version of the function is present\n> from an\n> extension point of view would be to add a dedicated function to overwrite\n> any\n> object that depends on the servers version and document the need to call\n> that\n> after a pg_upgrade.\n>\n\nWhat would happen if subsequent to the upgrade \"ALTER EXTENSION UPGRADE\"\nwas executed ?",
"msg_date": "Fri, 30 Jul 2021 06:03:50 -0400",
"msg_from": "Dave Cramer <davecramer@postgres.rocks>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Fri, Jul 30, 2021 at 06:03:50AM -0400, Dave Cramer wrote:\n> \n> What would happen if subsequent to the upgrade \"ALTER EXTENSION UPGRADE\"\n> was executed ?\n\nIf the extension was already up to date on the source cluster then obviously\nnothing.\n\nOtherwise, the extension would be updated. But unless I'm willing (and\nremember) to copy/paste until the end of time an anonymous block code that\nchecks the current server version to see if the wrapper function needs to be\noverwritten then nothing will happen either as far as this function is\nconcerned.\n\n\n",
"msg_date": "Fri, 30 Jul 2021 18:39:45 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Fri, 30 Jul 2021 at 06:39, Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> On Fri, Jul 30, 2021 at 06:03:50AM -0400, Dave Cramer wrote:\n> >\n> > What would happen if subsequent to the upgrade \"ALTER EXTENSION UPGRADE\"\n> > was executed ?\n>\n> If the extension was already up to date on the source cluster then\n> obviously\n> nothing.\n>\n> Otherwise, the extension would be updated. But unless I'm willing (and\n> remember) to copy/paste until the end of time an anonymous block code that\n> checks the current server version to see if the wrapper function needs to\n> be\n> overwritten then nothing will happen either as far as this function is\n> concerned.\n>\n\nWell I think that's on the extension author to fix. There's only so much\npg_upgrade can do here.\nIt seems reasonable that upgrading the extension should upgrade the\nextension to the latest version.\n\nDave",
"msg_date": "Fri, 30 Jul 2021 06:48:34 -0400",
"msg_from": "Dave Cramer <davecramer@postgres.rocks>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Fri, Jul 30, 2021 at 06:48:34AM -0400, Dave Cramer wrote:\n> \n> Well I think that's on the extension author to fix. There's only so much\n> pg_upgrade can do here.\n> It seems reasonable that upgrading the extension should upgrade the\n> extension to the latest version.\n\nThat would only work iff the extension was *not* up to date on the original\ninstance. Otherwise I fail to see how any extension script will be called at\nall, and therefore the extension has no way to fix anything.\n\n\n",
"msg_date": "Fri, 30 Jul 2021 19:07:28 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Fri, 30 Jul 2021 at 07:07, Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> On Fri, Jul 30, 2021 at 06:48:34AM -0400, Dave Cramer wrote:\n> >\n> > Well I think that's on the extension author to fix. There's only so much\n> > pg_upgrade can do here.\n> > It seems reasonable that upgrading the extension should upgrade the\n> > extension to the latest version.\n>\n> That would only work iff the extension was *not* up to date on the original\n> instance. Otherwise I fail to see how any extension script will be called\n> at\n> all, and therefore the extension was no way to fix anything.\n>\n\nSo my understanding is that upgrade is going to run all of the SQL files\nfrom whatever version the original instance was up to the current version.\n\nI'm at a loss as to how this would not work ? How do you upgrade your\nextension otherwise ?\n\nDave",
"msg_date": "Fri, 30 Jul 2021 07:18:56 -0400",
"msg_from": "Dave Cramer <davecramer@postgres.rocks>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Fri, Jul 30, 2021 at 07:18:56AM -0400, Dave Cramer wrote:\n> \n> So my understanding is that upgrade is going to run all of the SQL files\n> from whatever version the original instance was up to the current version.\n> \n> I'm at a loss as to how this would not work ? How do you upgrade your\n> extension otherwise ?\n\nYes, but as I said twice only if the currently installed version is different\nfrom the default version. Otherwise ALTER EXTENSION UPGRADE is a no-op.\n\nJust to be clear: I'm not arguing against automatically doing an ALTER\nEXTENSION UPGRADE for all extensions in all databases during pg_upgrade (I'm\nall for it), just that this specific corner case can't be solved by that\napproach.\n\n\n",
"msg_date": "Fri, 30 Jul 2021 19:28:32 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": ">\n> Yes, but as I said twice only if the currently installed version is\n> different\n> from the default version. Otherwise ALTER EXTENSION UPGRADE is a no-op.\n>\n\nAh, ok, got it, thanks. Well I'm not sure how to deal with this.\nThe only thing I can think of is that we add a post_upgrade function to the\nAPI.\n\nAll extensions would have to implement this.\n\nDave",
"msg_date": "Fri, 30 Jul 2021 07:33:55 -0400",
"msg_from": "Dave Cramer <davecramer@postgres.rocks>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Fri, Jul 30, 2021 at 07:33:55AM -0400, Dave Cramer wrote:\n> \n> Ah, ok, got it, thanks. Well I'm not sure how to deal with this.\n> The only thing I can think of is that we add a post_upgrade function to the\n> API.\n> \n> All extensions would have to implement this.\n\nIt seems like a really big hammer for a niche usage. As far as I know I'm the\nonly one who wrote an extension that can create different objects depending on\nthe server version, so I'm entirely fine with dealing with that problem in my\nextension rather than forcing everyone to implement an otherwise useless API.\n\nNow if that API can be useful for other cases or if there are other extensions\nwith similar problems that would be a different story.\n\n\n",
"msg_date": "Fri, 30 Jul 2021 19:40:52 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Thu, Jul 29, 2021 at 06:19:56PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > I think we need to first give clear instructions on how to find out if\n> > an extension update is available, and then how to update it. I am\n> > thinking we should supply a query which reports all extensions that can\n> > be upgraded, at least for contrib.\n> \n> I suggested awhile ago that pg_upgrade should look into\n> pg_available_extensions in the new cluster, and prepare\n> a script with ALTER EXTENSION UPDATE commands for\n> anything that's installed but is not the (new cluster's)\n> default version.\n\nOK, done in this patch. I am assuming that everything that shows an\nupdate in pg_available_extensions can use ALTER EXTENSION UPDATE. I\nassume this would be backpatched to 9.6.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.",
"msg_date": "Fri, 30 Jul 2021 12:40:06 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On 7/30/21 7:33 AM, Dave Cramer wrote:\n> \n> \n> \n> \n> Yes, but as I said twice only if the currently installed version is\n> different\n> from the default version. Otherwise ALTER EXTENSION UPGRADE is a no-op.\n> \n> \n> Ah, ok, got it, thanks. Well I'm not sure how to deal with this.\n> The only thing I can think of is that we add a post_upgrade function to \n> the API.\n> \n> All extensions would have to implement this.\n\nAn alternative to this would be a recommended version number scheme for \nextensions. If extensions were numbered\n\n MAJOR_SERVER.MAJOR_EXTENSION.MINOR_EXTENSION\n\nthen pg_upgrade could check the new cluster for the existence of an SQL \nscript that upgrades the extension from the old cluster's version to the \nnew current. And since an extension cannot have the same version number \non two major server versions, there is no ambiguity here.\n\nThe downside is that all the extensions that don't need anything done \nfor those upgrades (which I believe is the majority of them) would have \nto provide empty SQL files. Not necessarily a bad thing, as it actually \ndocuments \"yes, the extension developer checked this and there is \nnothing to do here.\"\n\n\nRegards, Jan\n\n-- \nJan Wieck\n\n\n",
"msg_date": "Fri, 30 Jul 2021 12:43:04 -0400",
"msg_from": "Jan Wieck <jan@wi3ck.info>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On 7/30/21 7:40 AM, Julien Rouhaud wrote:\n> On Fri, Jul 30, 2021 at 07:33:55AM -0400, Dave Cramer wrote:\n>> \n>> Ah, ok, got it, thanks. Well I'm not sure how to deal with this.\n>> The only thing I can think of is that we add a post_upgrade function to the\n>> API.\n>> \n>> All extensions would have to implement this.\n> \n> It seems like a really big hammer for a niche usage. As far as I know I'm the\n> only one who wrote an extension that can create different objects depending on\n> the server version, so I'm entirely fine with dealing with that problem in my\n> extension rather than forcing everyone to implement an otherwise useless API.\n> \n> Now if that API can be useful for other cases or if there are other extensions\n> with similar problems that would be different story.\n> \n\nI haven't worked on it for a while, but I think pl_profiler does the \nsame thing, so you are not alone.\n\n\nJan\n\n-- \nJan Wieck\n\n\n",
"msg_date": "Fri, 30 Jul 2021 12:44:17 -0400",
"msg_from": "Jan Wieck <jan@wi3ck.info>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "Jan Wieck <jan@wi3ck.info> writes:\n> An alternative to this would be a recommended version number scheme for \n> extensions. If extensions were numbered\n> MAJOR_SERVER.MAJOR_EXTENSION.MINOR_EXTENSION\n> then pg_upgrade could check the new cluster for the existence of an SQL \n> script that upgrades the extension from the old cluster's version to the \n> new current. And since an extension cannot have the same version number \n> on two major server versions, there is no ambiguity here.\n\nThat idea cannot get off the ground. We've spent ten years telling\npeople they can use whatever version-numbering scheme they like for\ntheir extensions; we can't suddenly go from that to \"you must use\nexactly this scheme\".\n\nI don't see the need for it anyway. What is different from just\nputting the update actions into an extension version upgrade\nscript created according to the current rules, and then setting\nthat new extension version as the default version in the extension\nbuild you ship for the new server version?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 30 Jul 2021 13:05:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On 7/30/21 1:05 PM, Tom Lane wrote:\n> I don't see the need for it anyway. What is different from just\n> putting the update actions into an extension version upgrade\n> script created according to the current rules, and then setting\n> that new extension version as the default version in the extension\n> build you ship for the new server version?\n\nYou are right. The real fix should actually be that an extension, that \ncreates different objects depending on the major server version it is \ninstalled on, should not use the same version number for itself on those \ntwo server versions. It is actually wrong to have DO blocks that execute \nserver version dependent sections in the CREATE EXTENSION scripts. \nHowever similar the code may be, it is intended for different server \nversions, so it is not the same version of the extension.\n\n\nRegards, Jan\n\n-- \nJan Wieck\n\n\n",
"msg_date": "Fri, 30 Jul 2021 13:28:32 -0400",
"msg_from": "Jan Wieck <jan@wi3ck.info>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Thu, Jul 29, 2021 at 03:06:45PM -0400, Bruce Momjian wrote:\n> > I haven't seen it mentioned in the thread, but I think the final phrase\n> > of this <step> should be a separate step,\n> > \n> > <step>\n> > <title>Copy custom full-text search files</title>\n> > <para>\n> > Copy any custom full text search file (dictionary, synonym, thesaurus,\n> > stop word list) to the new server.\n> > </para>\n> > </step>\n> > \n> > While this is closely related to extensions, it's completely different.\n> \n> Agreed. See attached patch.\n\nDoc patch applied to all supported versions.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Tue, 3 Aug 2021 11:39:31 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Fri, Jul 30, 2021 at 12:40:06PM -0400, Bruce Momjian wrote:\n> On Thu, Jul 29, 2021 at 06:19:56PM -0400, Tom Lane wrote:\n> > Bruce Momjian <bruce@momjian.us> writes:\n> > > I think we need to first give clear instructions on how to find out if\n> > > an extension update is available, and then how to update it. I am\n> > > thinking we should supply a query which reports all extensions that can\n> > > be upgraded, at least for contrib.\n> > \n> > I suggested awhile ago that pg_upgrade should look into\n> > pg_available_extensions in the new cluster, and prepare\n> > a script with ALTER EXTENSION UPDATE commands for\n> > anything that's installed but is not the (new cluster's)\n> > default version.\n> \n> OK, done in this patch. I am assuming that everything that shows an\n> update in pg_available_extensions can use ALTER EXTENSION UPDATE. I\n> assume this would be backpatched to 9.6.\n\nPatch applied through 9.6.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Tue, 3 Aug 2021 12:00:47 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Tue, Aug 03, 2021 at 12:00:47PM -0400, Bruce Momjian wrote:\n> Patch applied through 9.6.\n\nThe comment seems to be a leftover from a copy pasto.\n\n+ /* find hash indexes */\n+ res = executeQueryOrDie(conn,\n+ \"SELECT name \"\n+ \"FROM pg_available_extensions \"\n+ \"WHERE installed_version != default_version\"\n+ );\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 3 Aug 2021 11:13:45 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
},
{
"msg_contents": "On Tue, Aug 3, 2021 at 11:13:45AM -0500, Justin Pryzby wrote:\n> On Tue, Aug 03, 2021 at 12:00:47PM -0400, Bruce Momjian wrote:\n> > Patch applied through 9.6.\n> \n> The comment seems to be a leftover from a copy pasto.\n> \n> + /* find hash indexes */\n> + res = executeQueryOrDie(conn,\n> + \"SELECT name \"\n> + \"FROM pg_available_extensions \"\n> + \"WHERE installed_version != default_version\"\n> + );\n\nThanks much, fixed.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Tue, 3 Aug 2021 12:26:21 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade does not upgrade pg_stat_statements properly"
}
]
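The extension-update check discussed in this thread boils down to one query plus script generation: select everything from `pg_available_extensions` whose `installed_version` differs from `default_version`, and emit an `ALTER EXTENSION ... UPDATE` for each. The real patch does this in pg_upgrade's C code (via `executeQueryOrDie`); the following is only a minimal Python sketch of the same logic, with the query results supplied as plain tuples and the function name invented for illustration:

```python
def build_extension_update_script(rows):
    """Given (name, installed_version, default_version) tuples, as the
    query against pg_available_extensions would return them, emit one
    ALTER EXTENSION ... UPDATE statement per out-of-date extension."""
    statements = []
    for name, installed, default in rows:
        # Skip extensions that are not installed or are already current.
        if installed is None or installed == default:
            continue
        statements.append('ALTER EXTENSION "%s" UPDATE;' % name)
    return "\n".join(statements)

# Mocked rows mimicking an upgraded cluster where pg_stat_statements
# still carries the old cluster's extension version.
rows = [
    ("pg_stat_statements", "1.6", "1.8"),
    ("plpgsql", "1.0", "1.0"),
]
print(build_extension_update_script(rows))
# -> ALTER EXTENSION "pg_stat_statements" UPDATE;
```

As the thread notes, this assumes every extension that reports an update in `pg_available_extensions` can actually be updated with a bare `ALTER EXTENSION ... UPDATE`.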
[
{
"msg_contents": "Hi,\n\nThomas^WA bad person recently nerdsniped me (with the help of an accidental use\nof an SSL connection in a benchmark leading to poor results) into checking what\nwould be needed to benefit from SSL/TLS hardware acceleration (available with\nsuitable hardware, OS support (linux and freebsd) and openssl 3). One problem\nturns out to be the custom BIO we use.\n\nWhich made me look at why we use those.\n\nIn the backend the first reason is:\n\n * Private substitute BIO: this does the sending and receiving using send() and\n * recv() instead. This is so that we can enable and disable interrupts\n * just while calling recv(). We cannot have interrupts occurring while\n * the bulk of OpenSSL runs, because it uses malloc() and possibly other\n * non-reentrant libc facilities.\n\nI think this part has been obsolete for a while now (primarily [1]). These days\nwe always operate the backend sockets in nonblocking mode, and handle blocking\nat a higher level. Which is where we then can handle interrupts etc correctly\n(which we couldn't really before, because it still wasn't ok to jump out of\nopenssl).\n\nThe second part is\n * We also need to call send() and recv()\n * directly so it gets passed through the socket/signals layer on Win32.\n\nAnd the not stated need to set/unset pgwin32_noblock around the recv/send\ncalls.\n\nI don't think the signal handling stuff is still needed with nonblocking\nsockets? It seems we could just ensure that there's a pgwin32_poll_signals()\nsomewhere higher up in secure_read/write()? E.g. in\nProcessClientReadInterrupt()/ProcessClientWriteInterrupt() or with an explicit\ncall.\n\nAnd the pgwin32_noblock handling could just be done outside the SSL_read/write().\n\n\n\nOn the client side it looks like things would be a bit harder. The main problem\nseems to be dealing with SIGPIPE. We could possibly deal with that by moving\nthe handling of that a layer up. 
That's a thick nest of ugly stuff :(.\n\n\nA problem shared by FE & BE openssl is that openssl internally uses\nBIO_set_data() inside the BIO routines we reuse. Oops. I hacked around that\nusing BIO_set_callback_arg()/BIO_get_callback_arg(), but that's not\nparticularly pretty either, although it probably is more reliable, since\ncallbacks are intended to be set separately from the BIO implementation.\n\nA better approach might be to move the code using per-bio custom data from\npqsecure_raw_read() up to pqsecure_read() etc.\n\n\nIf we wanted to be able to use TLS acceleration while continuing to use our\ncustom socket routines we'd have to copy a good bit more functionality from\nopenssl into our BIO implementations:(.\n\n\nFWIW, I don't think hardware tls acceleration is a particularly crucial thing\nfor now. Outside of backups it's rare to have the symmetric encryption part of\ntls be the problem these days thanks, to the AES etc functions in most of the\ncommon CPUs.\n\n\nI don't plan to work on this, but Thomas encouraged me to mention this on the\nlist when I mention it to him.\n\nGreetings,\n\nAndres Freund\n\n\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=387da18874afa17156ee3af63766f17efb53c4b9\n\n\n",
"msg_date": "Wed, 14 Jul 2021 19:17:47 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Using a stock openssl BIO"
},
{
"msg_contents": "On Wed, Jul 14, 2021 at 07:17:47PM -0700, Andres Freund wrote:\n> FWIW, I don't think hardware tls acceleration is a particularly crucial thing\n> for now. Outside of backups it's rare to have the symmetric encryption part of\n> tls be the problem these days thanks, to the AES etc functions in most of the\n> common CPUs.\n> \n> I don't plan to work on this, but Thomas encouraged me to mention this on the\n> list when I mention it to him.\n\nSo, I am aware of CPU AES acceleration and I assume PG uses that. It is\nthe public key certificate verification part of TLS that we don't use\nhardware acceleration for, right?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Thu, 15 Jul 2021 13:59:26 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Using a stock openssl BIO"
},
{
"msg_contents": "Hi,\n\nOn 2021-07-15 13:59:26 -0400, Bruce Momjian wrote:\n> On Wed, Jul 14, 2021 at 07:17:47PM -0700, Andres Freund wrote:\n> > FWIW, I don't think hardware tls acceleration is a particularly crucial thing\n> > for now. Outside of backups it's rare to have the symmetric encryption part of\n> > tls be the problem these days thanks, to the AES etc functions in most of the\n> > common CPUs.\n> >\n> > I don't plan to work on this, but Thomas encouraged me to mention this on the\n> > list when I mention it to him.\n>\n> So, I am aware of CPU AES acceleration and I assume PG uses that.\n\nYes, it does so via openssl. But that still happens on the CPU. And\nwhat's more, there's a lot of related work in TLS that's fairly\nexpensive (chunking up the data into TLS records etc). Some of the\nbetter NICs can do that work in the happy path, so the CPU doesn't have\nto do encryption nor framing. In some cases that can avoid the\nto-be-sent data ever being pulled into the CPU caches, but instead it\ncan be DMA directly to the NIC.\n\nIn PG's case that's particularly interesting when sending out file data\nin bulk, say in basebackup.c or walsender.c - the data can be sent via\nsendfile(), so it never goes to userspace.\n\nHere's an overview of the kernel TLS / TLS offload\nhttps://legacy.netdevconf.info/1.2/papers/ktls.pdf\n\n\n> It is the public key certificate verification part of TLS that we\n> don't use hardware acceleration for, right?\n\nThat's true, but separate from what I was talking about. For most\ndatabase workloads the public key stuff shouldn't be a major piece,\nbecause connection establishment shouldn't be that frequent...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 15 Jul 2021 11:41:05 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Using a stock openssl BIO"
},
{
"msg_contents": "> On 15 Jul 2021, at 04:17, Andres Freund <andres@anarazel.de> wrote:\n\n> Thomas^WA bad person recently nerdsniped me (with the help of an accidental use\n> of an SSL connection in a benchmark leading to poor results) into checking what\n> would be needed to benefit from SSL/TLS hardware acceleration (available with\n> suitable hardware, OS support (linux and freebsd) and openssl 3). \n\nNow why does that sounds so familiar.. =)\n\n> In the backend the first reason is:\n> \n> * Private substitute BIO: this does the sending and receiving using send() and\n> * recv() instead. This is so that we can enable and disable interrupts\n> * just while calling recv(). We cannot have interrupts occurring while\n> * the bulk of OpenSSL runs, because it uses malloc() and possibly other\n> * non-reentrant libc facilities.\n> \n> I think this part has been obsolete for a while now\n\nI concur.\n\n> The second part is\n> * We also need to call send() and recv()\n> * directly so it gets passed through the socket/signals layer on Win32.\n> \n> And the not stated need to set/unset pgwin32_noblock around the recv/send\n> calls.\n> \n> I don't think the signal handling stuff is still needed with nonblocking\n> sockets? It seems we could just ensure that there's a pgwin32_poll_signals()\n> somewhere higher up in secure_read/write()? E.g. in\n> ProcessClientReadInterrupt()/ProcessClientWriteInterrupt() or with an explicit\n> call.\n> \n> And the pgwin32_noblock handling could just be done outside the SSL_read/write().\n\nI hadn't yet looked at the pgwin32 parts in detail, but this is along what I\nwas thinking (just more refined).\n\n> On the client side it looks like things would be a bit harder. The main problem\n> seems to be dealing with SIGPIPE. We could possibly deal with that by moving\n> the handling of that a layer up. 
That's a thick nest of ugly stuff :(.\n\nMy initial plan was to keep this for the backend, as the invasiveness of the\nfrontend patch is unlikely to be justified by the returns of the acceleration.\n\n> FWIW, I don't think hardware tls acceleration is a particularly crucial thing\n> for now.\n\nAgreed, it will most likely be of limited use to most. It might however make\nsense to \"get in on the ground floor\" to be ready in case it's expanded on in\nkernel+OpenSSL with postgres automatically just reaping the benefits. Either\nway I was hoping to get to a patch which is close enough to what it would need\nto look like so we can decide with the facts at hand.\n\n> I don't plan to work on this, but Thomas encouraged me to mention this on the\n> list when I mention it to him.\n\nI still have it on my TODO for after the vacation, and hope to reach that part\nof the list soon.\n\t\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Thu, 22 Jul 2021 00:21:16 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Using a stock openssl BIO"
},
{
"msg_contents": "Hi,\n\nOn 2021-07-22 00:21:16 +0200, Daniel Gustafsson wrote:\n> > On 15 Jul 2021, at 04:17, Andres Freund <andres@anarazel.de> wrote:\n>\n> > Thomas^WA bad person recently nerdsniped me (with the help of an accidental use\n> > of an SSL connection in a benchmark leading to poor results) into checking what\n> > would be needed to benefit from SSL/TLS hardware acceleration (available with\n> > suitable hardware, OS support (linux and freebsd) and openssl 3).\n>\n> Now why does that sounds so familiar.. =)\n\n:)\n\n\n> > On the client side it looks like things would be a bit harder. The main problem\n> > seems to be dealing with SIGPIPE. We could possibly deal with that by moving\n> > the handling of that a layer up. That's a thick nest of ugly stuff :(.\n>\n> My initial plan was to keep this for the backend, as the invasiveness of the\n> frontend patch is unlikely to be justified by the returns of the acceleration.\n\nThere's two main reasons I'd prefer not to do that:\n\n1) It makes it surprisingly hard to benchmark the single connection TLS\n throughput, because there's no client that can pull data quick enough.\n2) The main case for wanting offload imo is bulk data stuff (basebackup,\n normal backup). For that you also want to be able to receive the data\n fast. Outside of situations like that I don't think the gains are likely to\n be meaningful given AES-NI and other similar cpu level acceleration.\n\n\n> > FWIW, I don't think hardware tls acceleration is a particularly crucial thing\n> > for now.\n>\n> Agreed, it will most likely be of limited use to most. It might however make\n> sense to \"get in on the ground floor\" to be ready in case it's expanded on in\n> kernel+OpenSSL with postgres automatically just reaping the benefits. Either\n> way I was hoping to get to a patch which is close enough to what it would need\n> to look like so we can decide with the facts at hand.\n\nYea. 
I also just think getting rid of the bio stuff is good for\nmaintainability / robustness. Relying on our own socket functions being\ncompatible with the openssl bio doesn't sound very future proof... Especially\nnot combined with our use of the data field, which of course the other bio\nfunctions may use (as they do when ktls is enabled!).\n\n\n> > I don't plan to work on this, but Thomas encouraged me to mention this on the\n> > list when I mention it to him.\n>\n> I still have it on my TODO for after the vacation, and hope to reach that part\n> of the list soon.\n\nCool!\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 21 Jul 2021 16:10:25 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Using a stock openssl BIO"
}
]
[
{
"msg_contents": "I think it'd be useful to be able to identify exactly which git commit\nwas used to produce a tarball. This would be especially useful when\ndownloading snapshot tarballs where that's not entirely clear, but can\nalso be used to verify that the release tarballs matches what's\nexpected (in the extremely rare case that a tarball is rewrapped for\nexample).\n\nWhat do people think of the attached?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/",
"msg_date": "Thu, 15 Jul 2021 10:33:11 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Git revision in tarballs"
},
{
"msg_contents": "čt 15. 7. 2021 v 10:33 odesílatel Magnus Hagander <magnus@hagander.net> napsal:\n>\n> I think it'd be useful to be able to identify exactly which git commit\n> was used to produce a tarball. This would be especially useful when\n> downloading snapshot tarballs where that's not entirely clear, but can\n> also be used to verify that the release tarballs matches what's\n> expected (in the extremely rare case that a tarball is rewrapped for\n> example).\n>\n> What do people think of the attached?\n\nThe only problem I do see is adding \"git\" as a new dependency. That\ncan potentially cause troubles.\n\nFor the file name, I have seen GIT_VERSION or REVISION file names used\nbefore in another projects. Using \".gitrevision\" doesn't make sense to\nme since it will be hidden on Unix by default and I'm not sure that is\nintended.\n\n> --\n> Magnus Hagander\n> Me: https://www.hagander.net/\n> Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Thu, 15 Jul 2021 13:40:31 +0200",
"msg_from": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <josef.simanek@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Git revision in tarballs"
},
{
"msg_contents": "On Thu, Jul 15, 2021 at 1:40 PM Josef Šimánek <josef.simanek@gmail.com> wrote:\n>\n> čt 15. 7. 2021 v 10:33 odesílatel Magnus Hagander <magnus@hagander.net> napsal:\n> >\n> > I think it'd be useful to be able to identify exactly which git commit\n> > was used to produce a tarball. This would be especially useful when\n> > downloading snapshot tarballs where that's not entirely clear, but can\n> > also be used to verify that the release tarballs matches what's\n> > expected (in the extremely rare case that a tarball is rewrapped for\n> > example).\n> >\n> > What do people think of the attached?\n>\n> The only problem I do see is adding \"git\" as a new dependency. That\n> can potentially cause troubles.\n\nBut only for *creating* the tarballs, and not for using them. I'm not\nsure what the usecase would be to create a tarball from an environment\nthat doesn't have git?\n\n\n> For the file name, I have seen GIT_VERSION or REVISION file names used\n> before in another projects. Using \".gitrevision\" doesn't make sense to\n> me since it will be hidden on Unix by default and I'm not sure that is\n> intended.\n\nIt was definitely intended, as I'd assume it's normally a file that\nmost people don't care about, but more something that scripts that\nverify things would. But I'm more than happy to change it to a\ndifferent name if that's preferred. I looked around a bit and couldn't\nfind any general consensus for a name for such a file, but I may not\nhave looked carefully enough.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Thu, 15 Jul 2021 13:44:45 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: Git revision in tarballs"
},
{
"msg_contents": "On Thu, Jul 15, 2021 at 01:44:45PM +0200, Magnus Hagander wrote:\n> But only for *creating* the tarballs, and not for using them. I'm not\n> sure what the usecase would be to create a tarball from an environment\n> that doesn't have git?\n\nWhich would likely mean somebody creating a release tarball in an\nenvironment doing a build with what is already a release tarball.\nAdding a dependency to git in this code path does not sound that bad\nto me, FWIW.\n--\nMichael",
"msg_date": "Thu, 15 Jul 2021 21:07:22 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Git revision in tarballs"
},
{
"msg_contents": "Magnus Hagander <magnus@hagander.net> writes:\n> On Thu, Jul 15, 2021 at 1:40 PM Josef Šimánek <josef.simanek@gmail.com> wrote:\n>> The only problem I do see is adding \"git\" as a new dependency. That\n>> can potentially cause troubles.\n\n> But only for *creating* the tarballs, and not for using them. I'm not\n> sure what the usecase would be to create a tarball from an environment\n> that doesn't have git?\n\nI agree, this objection seems silly. If we ever move off of git, the\nprocess could be adapted at that time. However, there *is* a reasonable\nquestion whether this ought to be handled by \"make dist\" versus the\ntarball-wrapping script.\n\n>> For the file name, I have seen GIT_VERSION or REVISION file names used\n>> before in another projects. Using \".gitrevision\" doesn't make sense to\n>> me since it will be hidden on Unix by default and I'm not sure that is\n>> intended.\n\n> It was definitely intended, as I'd assume it's normally a file that\n> most people don't care about, but more something that scripts that\n> verify things would. But I'm more than happy to change it to a\n> different name if that's preferred. I looked around a bit and couldn't\n> find any general consensus for a name for such a file, but I may not\n> have looked carefully enough.\n\nWe already have that convention in place:\n\n$ ls -a\n./ .gitignore README.git contrib/\n../ COPYRIGHT aclocal.m4 doc/\n.dir-locals.el GNUmakefile config/ src/\n.editorconfig GNUmakefile.in config.log tmp_install/\n.git/ HISTORY config.status*\n.git-blame-ignore-revs Makefile configure*\n.gitattributes README configure.ac\n\nSo \".gitrevision\" or the like seems fine to me.\n\nMy thoughts about the proposed patch are (1) you'd better have a\n.gitignore entry too, and (2) what is the mechanism that removes\nthis file? It seems weird to have a make rule that makes a \ngenerated file but none to remove it. 
Perhaps maintainer-clean\nshould remove it?\n\nBoth of those issues vanish if this is delegated to the tarball\nmaking script; as does the need to cope with a starting point\nthat isn't a specific commit. So on the whole I'm leaning to\nthe idea that it would be better done over there.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 15 Jul 2021 09:53:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Git revision in tarballs"
},
{
"msg_contents": "On Thu, Jul 15, 2021 at 3:53 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Magnus Hagander <magnus@hagander.net> writes:\n> > On Thu, Jul 15, 2021 at 1:40 PM Josef Šimánek <josef.simanek@gmail.com> wrote:\n> >> The only problem I do see is adding \"git\" as a new dependency. That\n> >> can potentially cause troubles.\n>\n> > But only for *creating* the tarballs, and not for using them. I'm not\n> > sure what the usecase would be to create a tarball from an environment\n> > that doesn't have git?\n>\n> I agree, this objection seems silly. If we ever move off of git, the\n> process could be adapted at that time. However, there *is* a reasonable\n> question whether this ought to be handled by \"make dist\" versus the\n> tarball-wrapping script.\n>\n> >> For the file name, I have seen GIT_VERSION or REVISION file names used\n> >> before in another projects. Using \".gitrevision\" doesn't make sense to\n> >> me since it will be hidden on Unix by default and I'm not sure that is\n> >> intended.\n>\n> > It was definitely intended, as I'd assume it's normally a file that\n> > most people don't care about, but more something that scripts that\n> > verify things would. But I'm more than happy to change it to a\n> > different name if that's preferred. I looked around a bit and couldn't\n> > find any general consensus for a name for such a file, but I may not\n> > have looked carefully enough.\n>\n> We already have that convention in place:\n>\n> $ ls -a\n> ./ .gitignore README.git contrib/\n> ../ COPYRIGHT aclocal.m4 doc/\n> .dir-locals.el GNUmakefile config/ src/\n> .editorconfig GNUmakefile.in config.log tmp_install/\n> .git/ HISTORY config.status*\n> .git-blame-ignore-revs Makefile configure*\n> .gitattributes README configure.ac\n>\n> So \".gitrevision\" or the like seems fine to me.\n>\n> My thoughts about the proposed patch are (1) you'd better have a\n> .gitignore entry too, and (2) what is the mechanism that removes\n> this file? 
It seems weird to have a make rule that makes a\n> generated file but none to remove it. Perhaps maintainer-clean\n> should remove it?\n\nmaintainer-clean sounds reasonable for that, yes.\n\n\n> Both of those issues vanish if this is delegated to the tarball\n> making script; as does the need to cope with a starting point\n> that isn't a specific commit. So on the whole I'm leaning to\n> the idea that it would be better done over there.\n\nI'd be fine with either. The argument for putting it in the makefile\nwould be, uh, maybe it makes it a tad bit easier to verify builds\nbecause you get it in your local build as well. But it's not like it's\nvery *hard* to do it...\n\nPutting it in the tarball making script certainly works for me,\nthough, if that's what people prefer. And that does away with the\n\"clean\" part as that one blows away the whole directory between each\nrun.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Thu, 15 Jul 2021 16:04:57 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: Git revision in tarballs"
},
{
"msg_contents": "Magnus Hagander <magnus@hagander.net> writes:\n> Putting it in the tarball making script certainly works for me,\n> though, if that's what people prefer. And that does away with the\n> \"clean\" part as that one blows away the whole directory between each\n> run.\n\nActually, we *have* to do it over there, because what that script\ndoes is\n\n # Export the selected git ref\n git archive ${i} | tar xf - -C pgsql\n\n cd pgsql\n ./configure\n # some irrelevant stuff\n make dist\n\nSo there's no .git subdirectory in the directory it runs \"make dist\"\nin. Now maybe it'd work anyway because of the GIT_DIR environment\nvariable, but what I think is more likely is that the file would\nend up containing the current master-branch HEAD commit, whereas\nthe thing we actually want to record here is ${i}.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 15 Jul 2021 10:35:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Git revision in tarballs"
},
{
"msg_contents": "On Thu, Jul 15, 2021 at 4:35 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Magnus Hagander <magnus@hagander.net> writes:\n> > Putting it in the tarball making script certainly works for me,\n> > though, if that's what people prefer. And that does away with the\n> > \"clean\" part as that one blows away the whole directory between each\n> > run.\n>\n> Actually, we *have* to do it over there, because what that script\n> does is\n>\n> # Export the selected git ref\n> git archive ${i} | tar xf - -C pgsql\n>\n> cd pgsql\n> ./configure\n> # some irrelevant stuff\n> make dist\n>\n> So there's no .git subdirectory in the directory it runs \"make dist\"\n> in. Now maybe it'd work anyway because of the GIT_DIR environment\n> variable, but what I think is more likely is that the file would\n> end up containing the current master-branch HEAD commit, whereas\n> the thing we actually want to record here is ${i}.\n\nHah, yeah, that certainly decides it. And I was even poking around\nthat script as well today, just nt at the same time :)\n\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Thu, 15 Jul 2021 20:55:41 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: Git revision in tarballs"
},
{
"msg_contents": "On 15.07.21 10:33, Magnus Hagander wrote:\n> I think it'd be useful to be able to identify exactly which git commit\n> was used to produce a tarball. This would be especially useful when\n> downloading snapshot tarballs where that's not entirely clear, but can\n> also be used to verify that the release tarballs matches what's\n> expected (in the extremely rare case that a tarball is rewrapped for\n> example).\n\nOr we could do what git-archive does:\n\n Additionally the commit ID is stored in a global extended\n pax header if the tar format is used; it can be extracted using git\n get-tar-commit-id. In ZIP files it is stored as\n a file comment.\n\n\n",
"msg_date": "Wed, 21 Jul 2021 20:25:14 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Git revision in tarballs"
},
{
"msg_contents": "> On 21 Jul 2021, at 20:25, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n> On 15.07.21 10:33, Magnus Hagander wrote:\n>> I think it'd be useful to be able to identify exactly which git commit\n>> was used to produce a tarball. This would be especially useful when\n>> downloading snapshot tarballs where that's not entirely clear, but can\n>> also be used to verify that the release tarballs matches what's\n>> expected (in the extremely rare case that a tarball is rewrapped for\n>> example).\n> \n> Or we could do what git-archive does:\n> \n> Additionally the commit ID is stored in a global extended\n> pax header if the tar format is used; it can be extracted using git\n> get-tar-commit-id. In ZIP files it is stored as\n> a file comment.\n\nThat does add Git as a dependency for consuming the tarball though, which\nmight not be a problem but it's a change from what we require today.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Wed, 21 Jul 2021 21:12:07 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Git revision in tarballs"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> On 21 Jul 2021, at 20:25, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n>> On 15.07.21 10:33, Magnus Hagander wrote:\n>>> I think it'd be useful to be able to identify exactly which git commit\n>>> was used to produce a tarball.\n\n>> Or we could do what git-archive does:\n>> Additionally the commit ID is stored in a global extended\n>> pax header if the tar format is used; it can be extracted using git\n>> get-tar-commit-id. In ZIP files it is stored as\n>> a file comment.\n\n> That does adds Git as a dependency for consuming the tarball though, which\n> might not be a problem but it's a change from what we require today.\n\nIt also requires keeping the tarball itself around, which you might not\nhave done, or you might not remember which directory you extracted which\ntarball into. So on the whole that solution seems strictly worse.\n\nFYI, the \"put it into .gitrevision\" solution is already implemented\nin the new tarball-building script that Magnus and I have been\nworking on off-list.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 21 Jul 2021 15:23:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Git revision in tarballs"
},
{
"msg_contents": "> On 21 Jul 2021, at 21:23, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> FYI, the \"put it into .gitrevision\" solution is already implemented\n> in the new tarball-building script that Magnus and I have been\n> working on off-list.\n\n+1, I think that's the preferred option.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Wed, 21 Jul 2021 21:26:08 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Git revision in tarballs"
},
{
"msg_contents": "On 21.07.21 21:12, Daniel Gustafsson wrote:\n>> Or we could do what git-archive does:\n>>\n>> Additionally the commit ID is stored in a global extended\n>> pax header if the tar format is used; it can be extracted using git\n>> get-tar-commit-id. In ZIP files it is stored as\n>> a file comment.\n> \n> That does adds Git as a dependency for consuming the tarball though, which\n> might not be a problem but it's a change from what we require today.\n\nHow so?\n\n\n",
"msg_date": "Wed, 21 Jul 2021 21:49:19 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Git revision in tarballs"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 21.07.21 21:12, Daniel Gustafsson wrote:\n>>> Additionally the commit ID is stored in a global extended\n>>> pax header if the tar format is used; it can be extracted using git\n>>> get-tar-commit-id. In ZIP files it is stored as\n>>> a file comment.\n\n>> That does adds Git as a dependency for consuming the tarball though, which\n>> might not be a problem but it's a change from what we require today.\n\n> How so?\n\nIt's only a dependency if you want to know the commit ID, which\nperhaps isn't something you would have use for if you don't have\ngit installed ... but I don't think that's totally obvious.\n\nPersonally I'd be more worried about rendering the tarballs\ntotally corrupt from the perspective of somebody using an old\n\"tar\" that hasn't heard of extended pax headers. Maybe there\nare no such versions in the wild anymore; but I do not see any\nadvantages to this approach that would justify taking any risk for.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 21 Jul 2021 15:56:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Git revision in tarballs"
}
] |
[
{
"msg_contents": "Hi,\n\n\nI noticed that the COMMIT PREPARED command is slow in the discussion [1].\n\n\nFirst, I made the following simple script for pgbench.\n\n``` prepare.pgbench\n\\set id random(1, 1000000)\n\nBEGIN;\nUPDATE test_table SET md5 = md5(clock_timestamp()::text) WHERE id = :id;\nPREPARE TRANSACTION 'prep_:client_id';\nCOMMIT PREPARED 'prep_:client_id';\n```\n\nI ran pgbench as follows.\n\n```\npgbench -f prepare.pgbench -c 1 -j 1 -T 60 -d postgres -r\n```\n\nThe result is as follows.\n\n\n<Result in master branch>\ntps:\t287.259\nLatency:\n UPDATE\t\t\t0.207ms\n PREPARE TRANSACTION\t0.212ms\n COMMIT PREPARED\t\t2.982ms\n\n\nNext, I analyzed the bottleneck using pstack and strace.\nI noticed that the open() during COMMIT PREPARED takes 2.7ms.\nFurthermore, I noticed that the backend process almost always opens the same wal segment file.\n\n\nWhen running the COMMIT PREPARED command, there are two ways to find the 2PC state data.\n - If it is stored in a wal segment file, open and read the wal segment file.\n - If not, read the 2PC state file\n\nThe above script runs the COMMIT PREPARED command just after the PREPARE TRANSACTION command.\nI think it also won't take a long time for an XA transaction to run the COMMIT PREPARED command after running the PREPARE TRANSACTION command.\nTherefore, I think that the wal segment file which is opened during COMMIT PREPARED is probably the current wal segment file.\n\n\nTo speed up the COMMIT PREPARED command, I made two patches for testing.\n\n\n(1) Hold_xlogreader.patch\nSkip closing the wal segment file after the COMMIT PREPARED command completes.\nIf the next COMMIT PREPARED command uses the same wal segment file, it is fast since the process need not open the wal segment file.\nHowever, I don't know when we should close the wal segment file.\nMoreover, it might not be useful when the COMMIT PREPARED command is run infrequently and uses a different wal segment file each time.\n\n<Result in Hold_xlogreader.patch>\ntps:\t1750.81\nLatency:\n UPDATE\t\t\t0.156ms\n PREPARE TRANSACTION\t0.184ms\n COMMIT PREPARED\t\t0.179ms\n \n\n(2) Read_from_walbuffer.patch\nRead the data from the wal buffer if it is still there.\nIf the COMMIT PREPARED command is run just after the PREPARE TRANSACTION command, the wal may still be in the wal buffer.\nHowever, the period during which the wal is in the wal buffer is not so long, since the wal writer recycles the wal buffer soon.\nMoreover, it may affect other performance, such as UPDATE, since it needs to take a lock on the wal buffer.\n\n<Result in Read_from_walbuffer.patch>\ntps:\t446.371\nLatency:\n UPDATE\t\t\t0.187ms\n PREPARE TRANSACTION\t0.196ms\n COMMIT PREPARED\t\t1.974ms\n\n\nWhich approach do you think is better?\n\n\n[1] https://www.postgresql.org/message-id/20191206.173215.1818665441859410805.horikyota.ntt%40gmail.com\n\n\nRegards,\nRyohei Takahashi",
"msg_date": "Thu, 15 Jul 2021 09:17:46 +0000",
"msg_from": "\"r.takahashi_2@fujitsu.com\" <r.takahashi_2@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Speed up COMMIT PREPARED"
},
{
"msg_contents": "I noticed that the previous Read_from_walbuffer.patch has a mistake in xlogreader.c.\nCould you please use the attached v2 patch?\n\nThe performance result of the previous mail is the result of v2 patch.\n\n\nRegards,\nRyohei Takahashi",
"msg_date": "Thu, 15 Jul 2021 23:34:36 +0000",
"msg_from": "\"r.takahashi_2@fujitsu.com\" <r.takahashi_2@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Speed up COMMIT PREPARED"
},
{
"msg_contents": "Hi,\n\n\nI noticed that anti-virus software slows down the open().\nI stopped the anti-virus software and re-ran the test.\n(Average of 10 times)\n\nmaster: 1924tps\nHold_xlogreader.patch: 1993tps (+3.5%)\nRead_from_walbuffer.patch: 1954tps (+1.5%)\n\nTherefore, the effect of my patch is limited.\n\nI'm sorry for the confusion.\n\nRegards,\nRyohei Takahashi\n\n\n",
"msg_date": "Fri, 16 Jul 2021 04:04:33 +0000",
"msg_from": "\"r.takahashi_2@fujitsu.com\" <r.takahashi_2@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Speed up COMMIT PREPARED"
}
] |
[
{
"msg_contents": "Hi\n\nAttached is a patch to improve the tab completion for backslash commands.\nI think it’s common for some people (I'm one of them) to use full-name commands rather than abbreviations.\nSo it's more convenient if we can add the full-name backslash commands in tab-complete.c.\n\nWhen modifying tab-complete.c, I found \\dS was added in the backslash_commands[], but I think maybe it should be removed just like the other \\x[S] forms.\nSo I removed it.\nBesides, I also added a little change in help.c.\n- exchange the position of \\des and \\det according to alphabetical order\n- rename PATRN1/PATRN2 to ROLEPTRN/DBPTRN to make the expressions more comprehensible\n\nAny comments are welcome.\n\nRegards,\nTang",
"msg_date": "Thu, 15 Jul 2021 09:46:18 +0000",
"msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Add tab-complete for backslash commands"
},
{
"msg_contents": "On Thursday, July 15, 2021 6:46 PM, tanghy.fnst@fujitsu.com <tanghy.fnst@fujitsu.com> wrote:\n>Attached a patch to improve the tab completion for backslash commands.\n>I think it's common for some people(I'm one of them) to use full-name commands than abbreviation.\n>So it's more convenient if we can add the full-name backslash commands in the tab-complete.c.\n\nAdded the above patch to the commit fest as follows:\n\nhttps://commitfest.postgresql.org/34/3268/\n\nRegards,\nTang\n\n\n\n",
"msg_date": "Fri, 23 Jul 2021 05:46:26 +0000",
"msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: [PATCH] Add tab-complete for backslash commands"
},
{
"msg_contents": "Hi Tang,\n\n\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com> writes:\n\n> Hi\n>\n> Attached a patch to improve the tab completion for backslash commands.\n> I think it’s common for some people(I'm one of them) to use full-name\n> commands than abbreviation. So it's more convenient if we can add the\n> full-name backslash commands in the tab-complete.c.\n\nEven though I usually use the short versions, I agree that having the\nfull names in the tab completion as well is a good idea.\n\n> When modify tab-complete.c, I found \\dS was added in the\n> backslash_commands[], but I think maybe it should be removed just like\n> other \\x[S].\n> So I removed it.\n> Besides, I also added a little change in help.c.\n> - exchange the positon of \\des and \\det according to alphabetical order\n> - rename PATRN1/PATRN2 to ROLEPTRN/DBPTRN to make expression more comprehensible\n\nThese are also good changes.\n\n> Any comment is welcome.\n>\n> Regards,\n> Tang\n\n> +\t\t\"\\\\r\", \"\\\\rset\",\n\nThere's a typo here, that should be \"\\\\reset\". Also, I noticed that for\n\\connect, the situation is the opposite: it has the full form but not\nthe short form (\\c).\n\nI've addressed both in the attached v2 patch.\n\n- ilmari",
"msg_date": "Sun, 08 Aug 2021 00:13:14 +0100",
"msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add tab-complete for backslash commands"
},
{
"msg_contents": "On Sunday, August 8, 2021 8:13 AM, Dagfinn Ilmari Mannsåker <ilmari@ilmari.org> wrote:\r\n>> +\t\t\"\\\\r\", \"\\\\rset\",\r\n>\r\n>There's a typo here, that should be \"\\\\reset\". Also, I noticed that for\r\n>\\connect, the situation is the opposite: it has the full form but not\r\n>the short form (\\c).\r\n>\r\n>I've addressed both in the attached v2 patch.\r\n\r\nThanks for your comments and fix. Your modified patch LGTM.\r\n\r\nRegards,\r\nTang\r\n",
"msg_date": "Sun, 8 Aug 2021 10:36:49 +0000",
"msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: [PATCH] Add tab-complete for backslash commands"
},
{
"msg_contents": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com> writes:\n\n> On Sunday, August 8, 2021 8:13 AM, Dagfinn Ilmari Mannsåker <ilmari@ilmari.org> wrote:\n>>> +\t\t\"\\\\r\", \"\\\\rset\",\n>>\n>>There's a typo here, that should be \"\\\\reset\". Also, I noticed that for\n>>\\connect, the situation is the opposite: it has the full form but not\n>>the short form (\\c).\n>>\n>>I've addressed both in the attached v2 patch.\n>\n> Thanks for you comments and fix. Your modified patch LGTM.\n\nI've updated the commitfest entry to Ready for Committer.\n\n> Regards,\n> Tang\n\n- ilmari\n\n\n",
"msg_date": "Sun, 08 Aug 2021 19:31:40 +0100",
"msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add tab-complete for backslash commands"
},
{
"msg_contents": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=) writes:\n> I've updated the commitfest entry to Ready for Committer.\n\nI was about to push this but started to have second thoughts about it.\nI'm not convinced that offering multiple variant spellings of the same\ncommand is really such a great thing: I'm afraid that it will be more\nconfusing than helpful. I particularly question why we'd offer both\nsingle- and multiple-character versions, as the single-character\nversion seems entirely useless from a completion standpoint.\n\nFor example, up to now \"\\o<TAB>\" got you \"\\o \", which isn't amazingly\nuseful but maybe it serves to confirm that you typed a valid command.\nThis patch now forces you to choose between alternative spellings\nof the exact same command, which is a waste of effort plus it will\nmake you stop to wonder whether they really are the same command.\nIt would be much better to either keep the old behavior, or just\nimmediately complete to \"\\out \" and stay out of the user's way.\n\nSo I'd be inclined to take out the single-character versions of any\ncommands that we offer a longer spelling of. I'm not dead set on that,\nbut I think the possibility ought to be discussed.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 04 Sep 2021 12:42:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add tab-complete for backslash commands"
},
{
"msg_contents": "... BTW, I went ahead and pushed the changes in help.c,\nsince that part seemed uncontroversial.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 04 Sep 2021 13:29:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add tab-complete for backslash commands"
},
{
"msg_contents": "On Sunday, September 5, 2021 1:42 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\r\n>I particularly question why we'd offer both\r\n>single- and multiple-character versions, as the single-character\r\n>version seems entirely useless from a completion standpoint.\r\n\r\nI generally agree with your opinion. But I'm not sure if there's someone\r\nwho'd like to see the list of backslash commands and choose one to use.\r\nI mean, someone may type '\\[tab][tab]' to check all supported backslash commands.\r\npostgres=# \\\r\nDisplay all 105 possibilities? (y or n)\r\n\\! \\dc \\dL \\dx \\h \\r\r\n...\r\n\r\nIn the above scenario, both single- and multiple-character versions could be helpful, though?\r\n\r\nRegards,\r\nTang\r\n",
"msg_date": "Mon, 6 Sep 2021 04:31:42 +0000",
"msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: [PATCH] Add tab-complete for backslash commands"
},
{
"msg_contents": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com> writes:\n> On Sunday, September 5, 2021 1:42 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I particularly question why we'd offer both\n>> single- and multiple-character versions, as the single-character\n>> version seems entirely useless from a completion standpoint.\n\n> I generally agreed with your opinion. But I'm not sure if there's someone\n> who'd like to see the list of backslash commands and choose one to use.\n> I mean, someone may type '\\[tab][tab]' to check all supported backslash commands.\n\nSure, but he'd still get all the commands, just not all the possible\nspellings of each one. And a person who's not sure what's available\nis unlikely to be helped by an entry for \"\\c\", because it's entirely\nnot clear which command that's an abbreviation for.\n\nIn any case, my main point is that the primary usage of tab-completion\nis as a typing aid, not documentation. I do not think we should make\nthe behavior less useful for typing in order to be exhaustive on the\ndocumentation side.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 07 Sep 2021 16:05:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add tab-complete for backslash commands"
},
{
"msg_contents": "On Wednesday, September 8, 2021 5:05 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\r\n>Sure, but he'd still get all the commands, just not all the possible\r\n>spellings of each one. And a person who's not sure what's available\r\n>is unlikely to be helped by an entry for \"\\c\", because it's entirely\r\n>not clear which command that's an abbreviation for.\r\n>\r\n>In any case, my main point is that the primary usage of tab-completion\r\n>is as a typing aid, not documentation. I do not think we should make\r\n>the behavior less useful for typing in order to be exhaustive on the\r\n>documentation side.\r\n\r\nYou are right. I think I've got your point.\r\nHere is the updated patch in which I added the multiple-character versions for backslash commands \r\nand removed their corresponding single-character versions.\r\nOf course, for backslash commands with only a single-character version, no change was made.\r\n\r\nBTW, I've run the existing TAP tests for tab-completion with this patch, and all tests passed.\r\n\r\nRegards,\r\nTang",
"msg_date": "Wed, 8 Sep 2021 10:07:38 +0000",
"msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: [PATCH] Add tab-complete for backslash commands"
},
{
"msg_contents": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com> writes:\n> Here is the updated patch in which I added the multiple-character versions for backslash commands \n> and remove their corresponding single-character version.\n> Of course, for backslash commands with only single-character version, no change added.\n\nPushed. I tweaked your list to the extent of adding back \"\\ir\",\nbecause since it's two letters not one, the argument that it's\nentirely useless for tab completion doesn't quite apply. But\nif we wanted to make a hard-and-fast policy of offering only\nthe long form when there are two forms, maybe we should remove\nthat one too.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 08 Sep 2021 13:25:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add tab-complete for backslash commands"
}
] |
[
{
"msg_contents": "Hi,\n\nI've been investigating hash indexes and have what I think is a clear\npicture in my head, so time for discussion.\n\nIt would be very desirable to allow Hash Indexes to become Primary Key\nIndexes, which requires both\n amroutine->amcanunique = true;\n amroutine->amcanmulticol = true;\n\nEvery other hash index TODO seems like performance tuning, so can wait\nawhile, even if it is tempting to do that first.\n\n1. Multi-column Indexes\nseems to have floundered because of this thread \"Multicolumn Hash Indexes\",\nhttps://www.postgresql.org/message-id/29263.1506483172%40sss.pgh.pa.us,\nbut those issues don't apply in most common cases and so they seem\nacceptable restrictions, especially since some already apply to btrees\netc.\n(And noting that Hash indexes already assume strict hash operators, so\nthat is not an issue).\nFor me, this separates into two sub-concerns:\n1.1 Allow multi-columns to be defined for hash indexes\nEnabling this is a simple one-line patch\n amroutine->amcanmulticol = true;\nwhich works just fine on current HEAD without further changes (manual\ntesting, as yet).\nIf we do this first, then any work on uniqueness checking can take\ninto account multiple columns.\n\n1.2 Combining multi-column hashes into one hash value\nTrivially, this is already how it works, in the sense that we just use\nthe first column, however many columns there are in the index! Doing\nmore is an already solved problem in Postgres,\n[TupleHashTableHash_internal() in src/backend/executor/execGrouping.c]\nas pointed out here: \"Combining hash values\"\nhttps://www.postgresql.org/message-id/CAEepm%3D3rdgjfxW4cKvJ0OEmya2-34B0qHNG1xV0vK7TGPJGMUQ%40mail.gmail.com\nthough noting there was no discussion on that point [1]. This just\nneeds a little refactoring to improve things, but it seems more like a\nnice to have than an essential aspect of hash indexes that need not\nblock us from enabling multi-column hash indexes.\n\n2. 
Unique Hash Indexes have been summarized here:\nhttps://www.postgresql.org/message-id/CAA4eK1KATC1TA5bR5eobYQVO3RWsnH6djNpk3P376em4V8MuUA%40mail.gmail.com\nwhich also seems to have two parts to it.\n\n2.1 Uniqueness Check\nAmit: \"to ensure that there is no duplicate entry we need to traverse\nthe whole bucket chain\"\nAgreed. That seems straightforward and can also be improved later.\n\n2.2 Locking\nAmit's idea of holding ExclusiveLock on the bucket page works for me,\nbut there was some doubt about splitting.\n\nSplitting seems to be an awful behavior that users would wish to avoid\nif they knew about the impact and duration. In my understanding,\nsplitting involves moving 50% of rows and likely touches all blocks\neventually. If the existing index is the wrong shape then just run\nREINDEX. If we tune the index build, it looks like REINDEX would be a\nquicker and easier way of growing an index than trying to split an\nexisting index. i.e. rely on ecdysis not internal growth. This is much\nmore viable now because of the CIC changes in PG14.\n\n(I would argue that removing splitting completely is a good idea,\nsimilar to the way we have removed the original VACUUM FULL algorithm,\nbut that will take a while to swallow that thought). Instead, I\nsuggest we introduce a new indexam option for hash indexes of\nautosplit=on (default) | off, so that users can turn splitting off.\nWhich means we would probably also need another option for\ninitial_buckets=0 (default) means use number of existing rows to size,\nor N=use that specific size. Note that turning off splitting does not\nlimit the size of the index, it just stops the index from refreshing\nits number of buckets. 
B-trees are the default for PKs, so Hash\nindexes are an option for larger tables only, so there is less need to\nhave hash indexes cope with tables of unknown size - we wouldn't even\nbe using hash unless we already know it is a big table.\n\nIf splitting causes any difficulty at all, then we should simply say\nthat Unique Hash Index indexes should initially force autosplit=off,\nso we don't have to worry about the correctness of the locking. I\nsuggest we implement that first and then decide if we really care\nabout splitting, cos I'd bet we don't. Yes, I consider uniqueness much\nmore desirable than splitting.\n\nI've written a patch that refactors index build so we *never* need to\nperform a split during index build, allowing us to more credibly skip\nindex splitting completely. (Incidentally, it avoids the need to\nupdate the metapage for each row during the build, allowing us to\nconsider writing in batches to the index as a next step). So there\nneed be no *requirement* for splitting to be supported with\nuniqueness, while build/reindex looks like it can be much faster. I\ncan post it if anyone wants to see it, but I don't want to distract us\nfrom discussion of the main requirements.\n\nI have other performance tuning ideas, but they can wait.\n\nAnyway, that's what I think at present. Thoughts?\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 15 Jul 2021 17:41:10 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Next Steps with Hash Indexes"
},
{
"msg_contents": "On Thu, Jul 15, 2021 at 10:11 PM Simon Riggs\n<simon.riggs@enterprisedb.com> wrote:\n>\n> 2. Unique Hash Indexes have been summarized here:\n> https://www.postgresql.org/message-id/CAA4eK1KATC1TA5bR5eobYQVO3RWsnH6djNpk3P376em4V8MuUA%40mail.gmail.com\n> which also seems to have two parts to it.\n>\n> 2.1 Uniqueness Check\n> Amit: \"to ensure that there is no duplicate entry we need to traverse\n> the whole bucket chain\"\n> Agreed. That seems straightforward and can also be improved later.\n>\n> 2.2 Locking\n> Amit's idea of holding ExclusiveLock on the bucket page works for me,\n> but there was some doubt about splitting.\n>\n\nI think the main thing to think about for uniqueness check during\nsplit (where we scan both the old and new buckets) was whether we need\nto lock both the old (bucket_being_split) and new\n(bucket_being_populated) buckets or just holding locks on one of them\n(the current bucket in which we are inserting) is sufficient? During a\nscan of the new bucket, we just retain pins on both the buckets (see\ncomments in _hash_first()) but if we need to retain locks on both\nbuckets then we need to do something different than we do for\nscans. But, I think it is sufficient to just hold an exclusive lock on\nthe primary bucket page in the bucket we are trying to insert and pin\non the other bucket (old bucket as we do for scans). Because no\nconcurrent inserter should try to insert into the old bucket and new\nbucket the same tuple as before starting the split we always update\nthe metapage for hashm_lowmask and hashm_highmask which decides the\nrouting of the tuples.\n\nNow, I think here the other problem we need to think about is that for\nthe hash index after finding the tuple in the index, we need to always\nrecheck in the heap as we don't store the actual value in the hash\nindex. For that in the scan, we get the tuple(s) from the index\n(release locks) and then match qual after fetching tuple from the\nheap. 
But we can't do that for uniqueness check because if we release\nthe locks on the index bucket page then another inserter could come\nbefore we match it in heap. I think we need some mechanism that after\nfetching TID from the index, we recheck the actual value in heap\nbefore releasing the lock on the index bucket page.\n\nThe other thing could be that if we have unique support for hash index\nthen probably we can allow Insert ... ON Conflict if the user\nspecifies unique index column as conflict_target.\n\nI am not sure if multicol index support is mandatory to allow\nuniqueness for hash indexes, sure it would be good but I feel that can\nbe done as a separate patch as well.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 20 Jul 2021 17:30:29 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Next Steps with Hash Indexes"
},
{
"msg_contents": "On Tue, Jul 20, 2021 at 5:30 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Jul 15, 2021 at 10:11 PM Simon Riggs\n> <simon.riggs@enterprisedb.com> wrote:\n> >\n> > 2. Unique Hash Indexes have been summarized here:\n> > https://www.postgresql.org/message-id/CAA4eK1KATC1TA5bR5eobYQVO3RWsnH6djNpk3P376em4V8MuUA%40mail.gmail.com\n> > which also seems to have two parts to it.\n> >\n> > 2.1 Uniqueness Check\n> > Amit: \"to ensure that there is no duplicate entry we need to traverse\n> > the whole bucket chain\"\n> > Agreed. That seems straightforward and can also be improved later.\n> >\n> > 2.2 Locking\n> > Amit's idea of holding ExclusiveLock on the bucket page works for me,\n> > but there was some doubt about splitting.\n> >\n>\n> I think the main thing to think about for uniqueness check during\n> split (where we scan both the old and new buckets) was whether we need\n> to lock both the old (bucket_being_split) and new\n> (bucket_being_populated) buckets or just holding locks on one of them\n> (the current bucket in which we are inserting) is sufficient? During a\n> scan of the new bucket, we just retain pins on both the buckets (see\n> comments in _hash_first()) but if we need to retain locks on both\n> buckets then we need to do something different then we do it for\n> scans. But, I think it is sufficient to just hold an exclusive lock on\n> the primary bucket page in the bucket we are trying to insert and pin\n> on the other bucket (old bucket as we do for scans). 
Because no\n> concurrent inserter should try to insert into the old bucket and new\n> bucket the same tuple as before starting the split we always update\n> the metapage for hashm_lowmask and hashm_highmask which decides the\n> routing of the tuples.\n>\n> Now, I think here the other problem we need to think about is that for\n> the hash index after finding the tuple in the index, we need to always\n> recheck in the heap as we don't store the actual value in the hash\n> index. For that in the scan, we get the tuple(s) from the index\n> (release locks) and then match qual after fetching tuple from the\n> heap. But we can't do that for uniqueness check because if we release\n> the locks on the index bucket page then another inserter could come\n> before we match it in heap. I think we need some mechanism that after\n> fetching TID from the index, we recheck the actual value in heap\n> before releasing the lock on the index bucket page.\n>\n\nOne more thing we need to think about here is when to find the right\nbucket page in the chain where we can insert the new tuple. Do we\nfirst try to complete the uniqueness check (which needs to scan\nthrough the entire bucket chain) and then again scan the bucket with\nspace to insert or do we want to do it along with uniqueness check\nscan and remember it?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 20 Jul 2021 17:56:36 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Next Steps with Hash Indexes"
},
{
"msg_contents": "On Tue, Jul 20, 2021 at 1:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Jul 15, 2021 at 10:11 PM Simon Riggs\n> <simon.riggs@enterprisedb.com> wrote:\n> >\n> > 2. Unique Hash Indexes have been summarized here:\n> > https://www.postgresql.org/message-id/CAA4eK1KATC1TA5bR5eobYQVO3RWsnH6djNpk3P376em4V8MuUA%40mail.gmail.com\n> > which also seems to have two parts to it.\n> >\n> > 2.1 Uniqueness Check\n> > Amit: \"to ensure that there is no duplicate entry we need to traverse\n> > the whole bucket chain\"\n> > Agreed. That seems straightforward and can also be improved later.\n> >\n> > 2.2 Locking\n> > Amit's idea of holding ExclusiveLock on the bucket page works for me,\n> > but there was some doubt about splitting.\n> >\n>\n> I think the main thing to think about for uniqueness check during\n> split (where we scan both the old and new buckets) was whether we need\n> to lock both the old (bucket_being_split) and new\n> (bucket_being_populated) buckets or just holding locks on one of them\n> (the current bucket in which we are inserting) is sufficient? During a\n> scan of the new bucket, we just retain pins on both the buckets (see\n> comments in _hash_first()) but if we need to retain locks on both\n> buckets then we need to do something different then we do it for\n> scans. But, I think it is sufficient to just hold an exclusive lock on\n> the primary bucket page in the bucket we are trying to insert and pin\n> on the other bucket (old bucket as we do for scans). Because no\n> concurrent inserter should try to insert into the old bucket and new\n> bucket the same tuple as before starting the split we always update\n> the metapage for hashm_lowmask and hashm_highmask which decides the\n> routing of the tuples.\n\nDuring an incomplete split, we need to scan both old and new. So\nduring insert, we need to scan both old and new, while holding\nexclusive locks on both bucket pages. 
I've spent a few days looking at\nthe split behavior and this seems a complete approach. I'm working on\na patch now, still at hacking stage.\n\n(BTW, my opinion of the split mechanism has now changed from bad to\nvery good. It works really well for unique data, but can be completely\nineffective for badly skewed data).\n\n> Now, I think here the other problem we need to think about is that for\n> the hash index after finding the tuple in the index, we need to always\n> recheck in the heap as we don't store the actual value in the hash\n> index. For that in the scan, we get the tuple(s) from the index\n> (release locks) and then match qual after fetching tuple from the\n> heap. But we can't do that for uniqueness check because if we release\n> the locks on the index bucket page then another inserter could come\n> before we match it in heap. I think we need some mechanism that after\n> fetching TID from the index, we recheck the actual value in heap\n> before releasing the lock on the index bucket page.\n\nI don't think btree does that, so I'm not sure we do need that for\nhash. Yes, there is a race condition, but someone will win. Do we care\nwho? Do we care enough to take the concurrency hit? Duplicate inserts\nwould be very rare in a declared unique index, so it would be a poor\ntrade-off.\n\n> The other thing could be that if we have unique support for hash index\n> then probably we can allow Insert ... ON Conflict if the user\n> specifies unique index column as conflict_target.\n\nYes, that looks doable.\n\n> I am not sure if multicol index support is mandatory to allow\n> uniqueness for hash indexes, sure it would be good but I feel that can\n> be done as a separate patch as well.\n\nI have a patch for multicol support, attached.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Tue, 20 Jul 2021 14:02:42 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Next Steps with Hash Indexes"
},
{
"msg_contents": "On Tue, Jul 20, 2021 at 1:26 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> One more thing we need to think about here is when to find the right\n> bucket page in the chain where we can insert the new tuple. Do we\n> first try to complete the uniqueness check (which needs to scan\n> through the entire bucket chain) and then again scan the bucket with\n> space to insert or do we want to do it along with uniqueness check\n> scan and remember it?\n\nThe latter approach, but that is just a performance tweak for later.\n\nOn a unique hash index, regular splitting means there are almost no\nbucket chains more than 2 long (bucket plus overflow), so it seems\nlike mostly wasted effort at this point.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 20 Jul 2021 14:02:51 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Next Steps with Hash Indexes"
},
{
"msg_contents": "On Tue, Jul 20, 2021 at 6:32 PM Simon Riggs\n<simon.riggs@enterprisedb.com> wrote:\n>\n> On Tue, Jul 20, 2021 at 1:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Jul 15, 2021 at 10:11 PM Simon Riggs\n> > <simon.riggs@enterprisedb.com> wrote:\n> >\n> > I think the main thing to think about for uniqueness check during\n> > split (where we scan both the old and new buckets) was whether we need\n> > to lock both the old (bucket_being_split) and new\n> > (bucket_being_populated) buckets or just holding locks on one of them\n> > (the current bucket in which we are inserting) is sufficient? During a\n> > scan of the new bucket, we just retain pins on both the buckets (see\n> > comments in _hash_first()) but if we need to retain locks on both\n> > buckets then we need to do something different then we do it for\n> > scans. But, I think it is sufficient to just hold an exclusive lock on\n> > the primary bucket page in the bucket we are trying to insert and pin\n> > on the other bucket (old bucket as we do for scans). Because no\n> > concurrent inserter should try to insert into the old bucket and new\n> > bucket the same tuple as before starting the split we always update\n> > the metapage for hashm_lowmask and hashm_highmask which decides the\n> > routing of the tuples.\n>\n> During an incomplete split, we need to scan both old and new. So\n> during insert, we need to scan both old and new, while holding\n> exclusive locks on both bucket pages.\n>\n\nIt will surely work if we have an exclusive lock on both the buckets\n(old and new) in this case but I think it is better if we can avoid\nexclusive locking the old bucket (bucket_being_split) unless it is\nreally required. 
We need an exclusive lock on the primary bucket where\nwe are trying to insert to avoid any other inserter with the same key\nbut I think we don't need it for the old bucket because no inserter\nwith the same key can try to insert the key in an old bucket which\nwould belong to the new bucket.\n\n> I've spent a few days looking at\n> the split behavior and this seems a complete approach. I'm working on\n> a patch now, still at hacking stage.\n>\n> (BTW, my opinion of the split mechanism has now changed from bad to\n> very good. It works really well for unique data, but can be completely\n> ineffective for badly skewed data).\n>\n> > Now, I think here the other problem we need to think about is that for\n> > the hash index after finding the tuple in the index, we need to always\n> > recheck in the heap as we don't store the actual value in the hash\n> > index. For that in the scan, we get the tuple(s) from the index\n> > (release locks) and then match qual after fetching tuple from the\n> > heap. But we can't do that for uniqueness check because if we release\n> > the locks on the index bucket page then another inserter could come\n> > before we match it in heap. I think we need some mechanism that after\n> > fetching TID from the index, we recheck the actual value in heap\n> > before releasing the lock on the index bucket page.\n>\n> I don't think btree does that, so I'm not sure we do need that for\n> hash. Yes, there is a race condition, but someone will win. Do we care\n> who? Do we care enough to take the concurrency hit?\n>\n\nI think if we don't care we might end up with duplicates. I might be\nmissing something here but let me explain the problem I see. Say,\nwhile doing a unique check we found the same hash value in the bucket\nwe are trying to insert, we can't say unique key violation at this\nstage and error out without checking the actual value in heap. 
This is\nbecause there is always a chance that two different key values can map\nto the same hash value. Now, after checking in the heap if we found\nthat the actual value doesn't match so we decide to insert the value\nin the hash index, and in the meantime, another insert of the same key\nvalue already performed these checks and ends up inserting the value\nin hash index and that would lead to a duplicate value in the hash\nindex. I think btree doesn't have a similar risk so we don't need such\na mechanism for btree.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 22 Jul 2021 10:40:44 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Next Steps with Hash Indexes"
},
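The recheck-before-unlock flow Amit describes above can be illustrated with a toy model: while a (simulated) exclusive lock on the bucket is held, every index entry with a matching hash is rechecked against the heap before deciding the insert is safe. This is purely illustrative, not PostgreSQL code; all names and the deliberately collision-prone hash are invented.

```c
#include <assert.h>
#include <string.h>

#define MAX_ENTRIES 16

static const char *heap[MAX_ENTRIES];   /* tid -> actual key value */
static int heap_n;

struct bucket_entry { unsigned hash; int tid; };
static struct bucket_entry bucket[MAX_ENTRIES];
static int bucket_n;

/* Deliberately collision-prone hash so the recheck path is exercised. */
static unsigned toy_hash(const char *key) { return (unsigned) strlen(key); }

/* Returns 1 on successful insert, 0 on unique violation. */
static int unique_insert(const char *key)
{
    unsigned h = toy_hash(key);
    /* ...exclusive lock on the primary bucket page would be taken here... */
    for (int i = 0; i < bucket_n; i++)
    {
        if (bucket[i].hash != h)
            continue;
        /* A matching hash is not enough: recheck the real value in the
         * heap BEFORE releasing the lock, else a concurrent inserter of
         * the same key could slip in (the race described above). */
        if (strcmp(heap[bucket[i].tid], key) == 0)
            return 0;           /* genuine duplicate key */
    }
    heap[heap_n] = key;
    bucket[bucket_n].hash = h;
    bucket[bucket_n].tid = heap_n;
    heap_n++;
    bucket_n++;
    /* ...lock released here... */
    return 1;
}
```

A second key with the same hash value is accepted after the heap recheck fails to match, while a true duplicate is rejected.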
{
"msg_contents": "On Thu, 22 Jul 2021 at 06:10, Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> It will surely work if we have an exclusive lock on both the buckets\n> (old and new) in this case but I think it is better if we can avoid\n> exclusive locking the old bucket (bucket_being_split) unless it is\n> really required. We need an exclusive lock on the primary bucket where\n> we are trying to insert to avoid any other inserter with the same key\n> but I think we don't need it for the old bucket because no inserter\n> with the same key can try to insert the key in an old bucket which\n> would belong to the new bucket.\n\nAgreed.\n\n> > I don't think btree does that, so I'm not sure we do need that for\n> > hash. Yes, there is a race condition, but someone will win. Do we care\n> > who? Do we care enough to take the concurrency hit?\n> >\n>\n> I think if we don't care we might end up with duplicates. I might be\n> missing something here but let me explain the problem I see. Say,\n> while doing a unique check we found the same hash value in the bucket\n> we are trying to insert, we can't say unique key violation at this\n> stage and error out without checking the actual value in heap. This is\n> because there is always a chance that two different key values can map\n> to the same hash value. Now, after checking in the heap if we found\n> that the actual value doesn't match so we decide to insert the value\n> in the hash index, and in the meantime, another insert of the same key\n> value already performed these checks and ends up inserting the value\n> in hash index and that would lead to a duplicate value in the hash\n> index. 
I think btree doesn't have a similar risk so we don't need such\n> a mechanism for btree.\n\nAgreed, after thinking about it more while coding.\n\nAll of the above implemented in the patches below:\n\nComplete patch for hash_multicol.v3.patch attached, slightly updated\nfrom earlier patch.\nDocs, tests, passes make check.\n\nWIP for hash_unique.v4.patch attached, patch-on-patch, to allow us to\ndiscuss flow of logic and shape of code.\nThe actual comparison is not implemented yet. Not trivial, but can\nwait until we decide main logic.\nPasses make check and executes attached tests.\n\nTests in separate file also attached, will eventually be merged into\nsrc/test/regress/sql/hash_index.sql\n\nNo tests yet for splitting or deferred uniqueness checks. The latter\nis because there are parse analysis changes needed to undo the\nassumption that only btrees support uniqueness, but nothing there\nlooks like an issue.\n\nThanks for your input, have a good weekend.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Fri, 23 Jul 2021 13:45:49 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Next Steps with Hash Indexes"
},
{
"msg_contents": "On Fri, Jul 23, 2021 at 6:16 PM Simon Riggs\n<simon.riggs@enterprisedb.com> wrote:\n>\n> On Thu, 22 Jul 2021 at 06:10, Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> Complete patch for hash_multicol.v3.patch attached, slightly updated\n> from earlier patch.\n> Docs, tests, passes make check.\n\nI was looking into the hash_multicoul.v3.patch, I have a question\n\n <para>\n- Hash indexes support only single-column indexes and do not allow\n- uniqueness checking.\n+ Hash indexes support uniqueness checking.\n+ Hash indexes support multi-column indexes, but only store the hash value\n+ for the first column, so multiple columns are useful only for uniquness\n+ checking.\n </para>\n\nThe above comments say that we store hash value only for the first\ncolumn, my question is why don't we store for other columns as well?\nI mean we can search the bucket based on the first column hash but the\nhashes for the other column could be payload data and we can use that\nto match the hash value for other key columns before accessing the\nheap, as discussed here[1]. IMHO, this will further reduce the heap\naccess.\n\n[1] https://www.postgresql.org/message-id/7192.1506527843%40sss.pgh.pa.us\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 10 Aug 2021 18:14:21 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Next Steps with Hash Indexes"
},
{
"msg_contents": "On Thu, Jul 15, 2021 at 9:41 AM Simon Riggs\n<simon.riggs@enterprisedb.com> wrote:\n> It would be very desirable to allow Hash Indexes to become Primary Key\n> Indexes, which requires both\n> amroutine->amcanunique = true;\n> amroutine->amcanmulticol = true;\n\nWhy do you say that? I don't think it's self-evident that it's desirable.\n\nIn general I don't think that hash indexes are all that compelling\ncompared to B-Trees. In practice the flexibility of B-Trees tends to\nwin out, even if B-Trees are slightly slower than hash indexes with\ncertain kinds of benchmarks that are heavy on point lookups and have\nno locality.\n\nI have no reason to object to any of this, and I don't object. I'm just asking.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 10 Aug 2021 17:37:28 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Next Steps with Hash Indexes"
},
{
"msg_contents": "On Tue, Aug 10, 2021 at 6:14 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Fri, Jul 23, 2021 at 6:16 PM Simon Riggs\n> <simon.riggs@enterprisedb.com> wrote:\n> >\n> > On Thu, 22 Jul 2021 at 06:10, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > Complete patch for hash_multicol.v3.patch attached, slightly updated\n> > from earlier patch.\n> > Docs, tests, passes make check.\n>\n> I was looking into the hash_multicoul.v3.patch, I have a question\n>\n> <para>\n> - Hash indexes support only single-column indexes and do not allow\n> - uniqueness checking.\n> + Hash indexes support uniqueness checking.\n> + Hash indexes support multi-column indexes, but only store the hash value\n> + for the first column, so multiple columns are useful only for uniquness\n> + checking.\n> </para>\n>\n> The above comments say that we store hash value only for the first\n> column, my question is why don't we store for other columns as well?\n> I mean we can search the bucket based on the first column hash but the\n> hashes for the other column could be payload data and we can use that\n> to match the hash value for other key columns before accessing the\n> heap, as discussed here[1]. IMHO, this will further reduce the heap\n> access.\n>\n\nTrue, the other idea could be that in the payload we store the value\nafter 'combining multi-column hashes into one hash value'. This will\nallow us to satisfy queries where the search is on all columns of the\nindex efficiently provided the planner doesn't remove some of them in\nwhich case we need to do more work.\n\nOne more thing which we need to consider is 'hashm_procid' stored in\nmeta page, currently, it works for the single-column index but for the\nmulti-column index, we might want to set it as InvalidOid.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 11 Aug 2021 16:34:42 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Next Steps with Hash Indexes"
},
{
"msg_contents": "On Tue, Aug 10, 2021 at 8:44 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> I was looking into the hash_multicoul.v3.patch, I have a question\n>\n> <para>\n> - Hash indexes support only single-column indexes and do not allow\n> - uniqueness checking.\n> + Hash indexes support uniqueness checking.\n> + Hash indexes support multi-column indexes, but only store the hash value\n> + for the first column, so multiple columns are useful only for uniquness\n> + checking.\n> </para>\n>\n> The above comments say that we store hash value only for the first\n> column, my question is why don't we store for other columns as well?\n\nI suspect it would be hard to store multiple hash values, one per\ncolumn. It seems to me that what we ought to do is combine the hash\nvalues for the individual columns using hash_combine(64) and store the\ncombined value. I can't really imagine why we would NOT do that. It\nseems like it would be easy to do and make the behavior of the feature\nway less surprising.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 11 Aug 2021 10:22:48 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Next Steps with Hash Indexes"
},
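Robert's suggestion of folding the per-column hashes into one stored value can be sketched with the Boost-style mixer that PostgreSQL's `hash_combine()` in src/include/common/hashfn.h uses; the standalone functions below are illustrative, not the index code itself.

```c
#include <assert.h>
#include <stdint.h>

/* Mix b into a, in the style of hash_combine() in hashfn.h.
 * The magic constant is 2^32 divided by the golden ratio. */
static uint32_t combine32(uint32_t a, uint32_t b)
{
    a ^= b + 0x9e3779b9 + (a << 6) + (a >> 2);
    return a;
}

/* Fold the hashes of all key columns into the single value the index
 * would store; column order matters, so (h1,h2) != (h2,h1). */
static uint32_t combine_columns(const uint32_t *colhash, int ncols)
{
    uint32_t h = colhash[0];
    for (int i = 1; i < ncols; i++)
        h = combine32(h, colhash[i]);
    return h;
}
```

Because the combined value depends on every column, this variant supports bucket selection and uniqueness checks only when all key columns are supplied, which is exactly the flexibility concern Tom raises in the next message.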
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I suspect it would be hard to store multiple hash values, one per\n> column. It seems to me that what we ought to do is combine the hash\n> values for the individual columns using hash_combine(64) and store the\n> combined value. I can't really imagine why we would NOT do that.\n\nThat would make it impossible to use the index except with queries\nthat provide equality conditions on all the index columns. Maybe\nthat's fine, but it seems less flexible than other possible definitions.\nIt really makes me wonder why anyone would bother with a multicol\nhash index.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 11 Aug 2021 10:30:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Next Steps with Hash Indexes"
},
{
"msg_contents": "On Wed, Aug 11, 2021 at 10:30 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > I suspect it would be hard to store multiple hash values, one per\n> > column. It seems to me that what we ought to do is combine the hash\n> > values for the individual columns using hash_combine(64) and store the\n> > combined value. I can't really imagine why we would NOT do that.\n>\n> That would make it impossible to use the index except with queries\n> that provide equality conditions on all the index columns. Maybe\n> that's fine, but it seems less flexible than other possible definitions.\n> It really makes me wonder why anyone would bother with a multicol\n> hash index.\n\nHmm. That is a point I hadn't considered.\n\nI have to admit that after working with Amit on all the work to make\nhash indexes WAL-logged a few years ago, I was somewhat disillusioned\nwith the whole AM. It seems like a cool idea to me but it's just not\nthat well-implemented. For example, the strategy of just doubling the\nnumber of buckets in one shot seems pretty terrible for large indexes,\nand ea69a0dead5128c421140dc53fac165ba4af8520 will buy only a limited\namount of relief. Likewise, the fact that keys are stored in hash\nvalue order within pages but that the bucket as a whole is not kept in\norder seems like it's bad for search performance and really bad for\nimplementing unique indexes with reasonable amounts of locking. (I\ndon't know how the present patch tries to solve that problem.) It's\ntempting to think that we should think about creating something\naltogether new instead of hacking on the existing implementation, but\nthat's a lot of work and I'm not sure what specific design would be\nbest.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 11 Aug 2021 10:54:09 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Next Steps with Hash Indexes"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I have to admit that after working with Amit on all the work to make\n> hash indexes WAL-logged a few years ago, I was somewhat disillusioned\n> with the whole AM. It seems like a cool idea to me but it's just not\n> that well-implemented.\n\nYeah, agreed. The whole buckets-are-integral-numbers-of-pages scheme\nis pretty well designed to ensure bloat, but trying to ameliorate that\nby reducing the number of buckets creates its own problems (since, as\nyou mention, we have no scheme whatever for searching within a bucket).\nI'm quite unimpressed with Simon's upthread proposal to turn off bucket\nsplitting without doing anything about the latter issue.\n\nI feel like we'd be best off to burn the AM to the ground and start\nover. I do not know what a better design would look like exactly,\nbut I feel like it's got to decouple buckets from pages somehow.\nAlong the way, I'd want to store 64-bit hash values (we still haven't\ndone that have we?).\n\nAs far as the specific point at hand is concerned, I think storing\na hash value per index column, while using only the first column's\nhash for bucket selection, is what to do for multicol indexes.\nWe still couldn't set amoptionalkey=true for hash indexes, because\nwithout a hash for the first column we don't know which bucket to\nlook in. But storing hashes for the additional columns would allow\nus to check additional conditions in the index, and usually save\ntrips to the heap on queries that provide additional column\nconditions. You could also imagine sorting the contents of a bucket\non all the hashes, which would ease uniqueness checks.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 11 Aug 2021 11:17:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Next Steps with Hash Indexes"
},
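Tom's alternative layout, one stored hash per column with only the first column's hash selecting the bucket, can be sketched as follows. The struct and function names are invented for illustration; the power-of-two masking mirrors how the hash AM routes tuples via hashm_highmask.

```c
#include <assert.h>
#include <stdint.h>

#define NBUCKETS 8   /* power of two, as in the real bucket masking */
#define NCOLS    2

typedef struct IndexTupleSketch
{
    uint32_t colhash[NCOLS];    /* one hash value stored per column */
} IndexTupleSketch;

/* Bucket routing depends only on the first column's hash, so a probe
 * must always supply the first column (amoptionalkey stays false). */
static int bucket_for(const IndexTupleSketch *t)
{
    return (int) (t->colhash[0] & (NBUCKETS - 1));
}

/* In-index filter: returns 1 if the heap must still be visited (every
 * stored hash matches the probe), 0 if the tuple can be skipped with
 * no heap access at all. */
static int may_match(const IndexTupleSketch *t, const uint32_t *probe,
                     int ncols)
{
    for (int i = 0; i < ncols; i++)
        if (t->colhash[i] != probe[i])
            return 0;
    return 1;
}
```

The payoff is in `may_match()`: extra-column conditions are checked inside the index, saving heap trips for queries that supply them, while queries on the first column alone still work.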
{
"msg_contents": "On Wed, Aug 11, 2021 at 10:54 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> don't know how the present patch tries to solve that problem.) It's\n> tempting to think that we should think about creating something\n> altogether new instead of hacking on the existing implementation, but\n> that's a lot of work and I'm not sure what specific design would be\n> best.\n\n(Standard disclaimer that I'm not qualified to design index AMs) I've seen\none mention in the literature about the possibility of simply having a\nbtree index over the hash values. That would require faster search within\npages, in particular using abbreviated keys in the ItemId array of internal\npages [1] and interpolated search rather than pure binary search (which\nshould work reliably with high-entropy keys like hash values), but doing\nthat would speed up all btree indexes, so that much is worth doing\nregardless of how hash indexes are implemented. In that scheme, the hash\nindex AM would just be around for backward compatibility.\n\n[1]\nhttps://wiki.postgresql.org/wiki/Key_normalization#Optimizations_enabled_by_key_normalization\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 11 Aug 2021 11:51:00 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Next Steps with Hash Indexes"
},
{
"msg_contents": "John Naylor <john.naylor@enterprisedb.com> writes:\n> (Standard disclaimer that I'm not qualified to design index AMs) I've seen\n> one mention in the literature about the possibility of simply having a\n> btree index over the hash values.\n\nYeah, that's been talked about in the past --- we considered it\nmoderately seriously back when the hash AM was really only a toy\nfor lack of WAL support. The main knock on it is that searching\na btree is necessarily O(log N), while in principle a hash probe\ncould be O(1). Of course, a badly-implemented hash AM could be\nworse than O(1), but we'd basically be giving up on ever getting\nto O(1).\n\nThere's a separate discussion to be had about whether there should\nbe an easier way for users to build indexes that are btrees of\nhashes. You can do it today but the indexes aren't pleasant to\nuse, requiring query adjustment.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 11 Aug 2021 12:04:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Next Steps with Hash Indexes"
},
{
"msg_contents": "On Wed, Aug 11, 2021 at 11:17 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Yeah, agreed. The whole buckets-are-integral-numbers-of-pages scheme\n> is pretty well designed to ensure bloat, but trying to ameliorate that\n> by reducing the number of buckets creates its own problems (since, as\n> you mention, we have no scheme whatever for searching within a bucket).\n> I'm quite unimpressed with Simon's upthread proposal to turn off bucket\n> splitting without doing anything about the latter issue.\n\nMaybe. I don't think that it should be a huge problem to decide that\nan occupied bucket has to consume an entire page; if that's a big\nissue, you should just have fewer buckets. I do think it's a problem\nthat a bucket containing no tuples at all still consumes an entire\npage, because the data is often going to be skewed so that many\nbuckets are entirely empty. I also think it's a problem that expanding\nthe directory from 2^{N} buckets to 2^{N+1} buckets requires 4\nallocations of 2^{N-2} *consecutive* pages. That's a lot of\nconsecutive pages for even fairly modest values of N.\n\nImagine a design where we have a single-page directory with 1024\nslots, each corresponding to one bucket. Each slot stores a block\nnumber, which might be InvalidBlockNumber if there are no tuples in\nthat bucket. Given a tuple with hash value H, check slot H%1024 and\nthen go to that page to look further. If there are more tuples in that\nbucket than can fit on the page, then it can link to another page. If\nwe assume for the sake of argument that 1024 is the right number of\nbuckets, this is going to use about as much space as the current\nsystem when the data distribution is uniform, but considerably less\nwhen it's skewed. 
The larger you make the number of buckets, the\nbetter this kind of thing looks on skewed data.\n\nNow, you can't just always have 1024 buckets, so we'd actually have to\ndo something a bit more clever, probably involving multiple levels of\ndirectories. For example, suppose a directory page contains only 32\nslots. That will leave a lot of empty space in the page, which can be\nused to store tuples. An index search has to scan all tuples that are\nstored directly in the page, and also use the first 5 bits of the hash\nkey to search the appropriate bucket. But the bucket is itself a\ndirectory: it can have some tuples stored directly in the page, and\nthen it has 32 more slots and you use the next 5 bits of the hash key\nto decide which one of those to search. Then it becomes possible to\nincrementally expand the hash index: when the space available in a\ndirectory page fills up, you can either create a sub-directory and\nmove as many tuples as you can into that page, or add an overflow page\nthat contains only tuples.\n\nIt's important to be able to do either one, because sometimes a bucket\nfills up with tuples that have identical hash values, and sometimes a\nbucket fills up with tuples that have a variety of hash values. The\ncurrent implementation tends to massively increase the number of\nbuckets even when it does very little to spread the index entries out.\n(\"Hey, I doubled the number of buckets and the keys are still almost\nall in one bucket ... let me double the number of buckets again and\nsee if it works better this time!\") If we were going to create a\nreplacement, we'd want the index to respond differently to a bunch of\ndups vs. 
a bunch of non-dups.\n\n> As far as the specific point at hand is concerned, I think storing\n> a hash value per index column, while using only the first column's\n> hash for bucket selection, is what to do for multicol indexes.\n> We still couldn't set amoptionalkey=true for hash indexes, because\n> without a hash for the first column we don't know which bucket to\n> look in. But storing hashes for the additional columns would allow\n> us to check additional conditions in the index, and usually save\n> trips to the heap on queries that provide additional column\n> conditions. You could also imagine sorting the contents of a bucket\n> on all the hashes, which would ease uniqueness checks.\n\nThat sounds reasonable I guess, but I don't know how hard it is to implement.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 11 Aug 2021 12:39:51 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Next Steps with Hash Indexes"
},
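The multi-level directory Robert sketches, 32 slots per directory page with each level consuming 5 more bits of the hash, boils down to a slot-selection rule like the one below. No such structure exists in the hash AM today; this sketch uses the low-order bits first purely for simplicity.

```c
#include <assert.h>
#include <stdint.h>

#define SLOT_BITS 5
#define SLOTS     (1 << SLOT_BITS)   /* 32 slots per directory page */

/* Which slot a tuple's hash follows at a given directory depth
 * (level 0 = root page). The path through the levels is fixed by
 * the hash value alone, so lookups never need a global directory. */
static int dir_slot(uint32_t hash, int level)
{
    return (int) ((hash >> (level * SLOT_BITS)) & (SLOTS - 1));
}
```

When a directory page fills, either slot `dir_slot(h, level+1)` of a new sub-directory receives the moved tuples (spreading distinct hashes) or an overflow page of bare tuples is chained (absorbing duplicates), which is the dups-vs-non-dups distinction made above.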
{
"msg_contents": "On Wed, Aug 11, 2021 at 8:47 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> As far as the specific point at hand is concerned, I think storing\n> a hash value per index column, while using only the first column's\n> hash for bucket selection, is what to do for multicol indexes.\n> We still couldn't set amoptionalkey=true for hash indexes, because\n> without a hash for the first column we don't know which bucket to\n> look in. But storing hashes for the additional columns would allow\n> us to check additional conditions in the index, and usually save\n> trips to the heap on queries that provide additional column\n> conditions. You could also imagine sorting the contents of a bucket\n> on all the hashes, which would ease uniqueness checks.\n\nEarlier, I was thinking that we have two hashes, one for the first key\ncolumn that is for identifying the bucket, and one for the remaining\nkey columns which will further help with heap lookup and ordering for\nuniqueness checking. But yeah if we have a hash value for each column\nthen it will make it really flexible.\n\nI was also looking into other databases that how they support hash\nindexes, then I see at least in MySQL[1] the multiple column index has\na limitation that you have to give all key columns in search for\nselecting the index scan. IMHO, that limitation might be there\nbecause they are storing just one hash value based on all key columns\nand also selecting the bucket based on the same hash value.\n\n[1] https://dev.mysql.com/doc/refman/8.0/en/index-btree-hash.html\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 12 Aug 2021 09:09:31 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Next Steps with Hash Indexes"
},
{
"msg_contents": "On Thu, Aug 12, 2021 at 9:09 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Wed, Aug 11, 2021 at 8:47 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> > As far as the specific point at hand is concerned, I think storing\n> > a hash value per index column, while using only the first column's\n> > hash for bucket selection, is what to do for multicol indexes.\n> > We still couldn't set amoptionalkey=true for hash indexes, because\n> > without a hash for the first column we don't know which bucket to\n> > look in. But storing hashes for the additional columns would allow\n> > us to check additional conditions in the index, and usually save\n> > trips to the heap on queries that provide additional column\n> > conditions.\n\nYeah, this sounds reasonable but I think the alternative proposal by\nDilip (see below) and me [1] also has merits.\n\n> > You could also imagine sorting the contents of a bucket\n> > on all the hashes, which would ease uniqueness checks.\n>\n\nYeah, we can do that but the current design also seems to have merits\nfor uniqueness checks. For sorting all the hashes in the bucket, we\nneed to read all the overflow pages and then do sort, which could lead\nto additional I/O in some cases. The other possibility is to keep all\nthe bucket pages sorted during insertion but that would also require\nadditional I/O. OTOH, in the current design, if the value is not found\nin the current bucket page (which has hash values in sorted order),\nonly then we move to the next page.\n\n> Earlier, I was thinking that we have two hashes, one for the first key\n> column that is for identifying the bucket, and one for the remaining\n> key columns which will further help with heap lookup and ordering for\n> uniqueness checking.\n>\n\nI have also mentioned an almost similar idea yesterday [1]. 
If we go\nwith a specification similar to MySQL and SQLServer then probably it\nwould be better than storing the hashes for all the columns.\n\n\n But yeah if we have a hash value for each column\n> then it will make it really flexible.\n>\n> I was also looking into other databases that how they support hash\n> indexes, then I see at least in MySQL[1] the multiple column index has\n> a limitation that you have to give all key columns in search for\n> selecting the index scan.\n\nI see that SQLServer also has the same specification for multi-column\nhash index [2]. See the \"Multi-column index\" section. So it might not\nbe a bad idea to have a similar specification.\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1JD1%3DnPDi0kDPGLC%2BJDGEYP8DgTanobvgve%2B%2BKniQ68TA%40mail.gmail.com\n[2] - https://docs.microsoft.com/en-us/sql/relational-databases/in-memory-oltp/indexes-for-memory-optimized-tables?view=sql-server-ver15\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 12 Aug 2021 09:49:18 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Next Steps with Hash Indexes"
},
{
"msg_contents": "On Wed, Aug 11, 2021 at 8:47 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > I have to admit that after working with Amit on all the work to make\n> > hash indexes WAL-logged a few years ago, I was somewhat disillusioned\n> > with the whole AM. It seems like a cool idea to me but it's just not\n> > that well-implemented.\n>\n> Yeah, agreed. The whole buckets-are-integral-numbers-of-pages scheme\n> is pretty well designed to ensure bloat, but trying to ameliorate that\n> by reducing the number of buckets creates its own problems (since, as\n> you mention, we have no scheme whatever for searching within a bucket).\n> I'm quite unimpressed with Simon's upthread proposal to turn off bucket\n> splitting without doing anything about the latter issue.\n>\n\nThe design of the patch has changed since the initial proposal. It\ntries to perform unique inserts by holding a write lock on the bucket\npage to avoid duplicate inserts.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 12 Aug 2021 09:52:18 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Next Steps with Hash Indexes"
},
{
"msg_contents": "On Thu, Aug 12, 2021 at 12:22 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> The design of the patch has changed since the initial proposal. It\n> tries to perform unique inserts by holding a write lock on the bucket\n> page to avoid duplicate inserts.\n\nDo you mean that you're holding a buffer lwlock while you search the\nwhole bucket? If so, that's surely not OK.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 12 Aug 2021 11:00:23 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Next Steps with Hash Indexes"
},
{
"msg_contents": "On Wed, Aug 11, 2021 at 8:51 AM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n> (Standard disclaimer that I'm not qualified to design index AMs) I've seen one mention in the literature about the possibility of simply having a btree index over the hash values. That would require faster search within pages, in particular using abbreviated keys in the ItemId array of internal pages [1] and interpolated search rather than pure binary search (which should work reliably with high-entropy keys like hash values), but doing that would speed up all btree indexes, so that much is worth doing regardless of how hash indexes are implemented. In that scheme, the hash index AM would just be around for backward compatibility.\n\nI think that it's possible (though hard) to do that without involving\nhashing, even for datatypes like text. Having some kind of prefix\ncompression that makes the final abbreviated keys have high entropy\nwould be essential, though. I agree that it would probably be\nsignificantly easier when you knew you were dealing with hash values,\nbut even there you need some kind of prefix compression.\n\nIn any case I suspect that it would make sense to reimplement hash\nindexes as a translation layer between hash index opclasses and\nnbtree. Robert said \"Likewise, the fact that keys are stored in hash\nvalue order within pages but that the bucket as a whole is not kept in\norder seems like it's bad for search performance\". Obviously we've\nalready done a lot of work on an index AM that deals with a fully\nordered keyspace very well. This includes dealing with large groups of\nduplicates gracefully, since in a certain sense there are no duplicate\nB-Tree index tuples -- the heap TID tiebreaker ensures that. 
And it\nensures that you have heap-wise locality within these large groups,\nwhich is a key enabler of things like opportunistic index deletion.\n\nWhen hash indexes have been used in database systems, it tends to be\nin-memory database systems where the recovery path doesn't recover\nindexes -- they're just rebuilt from scratch instead. If that's\nalready a baked-in assumption then hash indexes make more sense. To me\nit seems like the problem with true hash indexes is that they're\nconstructed in a top-down fashion, which is approximately the opposite\nof the bottom-up, incremental approach used by B-Tree indexing. This\nseems to be where all the skew problems arise from. This approach\ncannot be robust to changes in the keyspace over time, really.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 12 Aug 2021 08:57:58 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Next Steps with Hash Indexes"
},
{
"msg_contents": "On Thu, Aug 12, 2021 at 8:30 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Aug 12, 2021 at 12:22 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > The design of the patch has changed since the initial proposal. It\n> > tries to perform unique inserts by holding a write lock on the bucket\n> > page to avoid duplicate inserts.\n>\n> Do you mean that you're holding a buffer lwlock while you search the\n> whole bucket? If so, that's surely not OK.\n>\n\nI think here you are worried that after holding lwlock we might\nperform reads of overflow pages which is not a good idea. I think\nthere are fewer chances of having overflow pages for unique indexes so\nwe don't expect such cases in common and as far as I can understand\nthis can happen in btree as well during uniqueness check. Now, I think\nthe other possibility could be that we do some sort of lock chaining\nwhere we grab the lock of the next bucket before releasing the lock of\nthe current bucket as we do during bucket clean up but not sure about\nthe same.\n\nI haven't studied the patch in detail so it is better for Simon to\npitch in here to avoid any incorrect information or if he has a\ndifferent understanding/idea.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 13 Aug 2021 09:31:35 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Next Steps with Hash Indexes"
},
{
"msg_contents": "On Fri, Aug 13, 2021 at 9:31 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Aug 12, 2021 at 8:30 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Thu, Aug 12, 2021 at 12:22 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > The design of the patch has changed since the initial proposal. It\n> > > tries to perform unique inserts by holding a write lock on the bucket\n> > > page to avoid duplicate inserts.\n> >\n> > Do you mean that you're holding a buffer lwlock while you search the\n> > whole bucket? If so, that's surely not OK.\n> >\n>\n> I think here you are worried that after holding lwlock we might\n> perform reads of overflow pages which is not a good idea. I think\n> there are fewer chances of having overflow pages for unique indexes so\n> we don't expect such cases in common\n\nI think if we identify the bucket based on the hash value of all the\ncolumns then there should be a fewer overflow bucket, but IIUC, in\nthis patch bucket, is identified based on the hash value of the first\ncolumn only so there could be a lot of duplicates on the first column.\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 13 Aug 2021 11:39:49 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Next Steps with Hash Indexes"
},
{
"msg_contents": "On Fri, Aug 13, 2021 at 11:40 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Fri, Aug 13, 2021 at 9:31 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Aug 12, 2021 at 8:30 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > >\n> > > On Thu, Aug 12, 2021 at 12:22 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > The design of the patch has changed since the initial proposal. It\n> > > > tries to perform unique inserts by holding a write lock on the bucket\n> > > > page to avoid duplicate inserts.\n> > >\n> > > Do you mean that you're holding a buffer lwlock while you search the\n> > > whole bucket? If so, that's surely not OK.\n> > >\n> >\n> > I think here you are worried that after holding lwlock we might\n> > perform reads of overflow pages which is not a good idea. I think\n> > there are fewer chances of having overflow pages for unique indexes so\n> > we don't expect such cases in common\n>\n> I think if we identify the bucket based on the hash value of all the\n> columns then there should be a fewer overflow bucket, but IIUC, in\n> this patch bucket, is identified based on the hash value of the first\n> column only so there could be a lot of duplicates on the first column.\n\n\nIMHO, as discussed above, since other databases also have the\nlimitation that if you create a multi-column hash index then the hash\nindex can not be used until all the key columns are used in the search\ncondition. 
So my point is that users might be using the hash index\nwith this limitation and their use case might be that they want to\ngain the best performance when they use this particular case and they\nmight not be looking for much flexibility like we provide in BTREE.\n\nFor reference:\nhttps://dev.mysql.com/doc/refman/8.0/en/index-btree-hash.html\nhttps://docs.microsoft.com/en-us/sql/relational-databases/in-memory-oltp/indexes-for-memory-optimized-tables?view=sql-server-ver15\n\nWe already know that performance will be better with a single hash for\nmultiple columns, but still I just wanted to check the performance\ndifference in PG. This might help us to decide the approach we need to\ngo for. With a quick POC of both the ideas, I have observed there is a\nmajor performance advantage with single combined hash for multi-key\ncolumns.\n\nPerformance Test Details: (Used PGBENCH Tool)\n Initialize cmd: “./pgbench -i -s 100 -d postgres\"\n\npostgres=# \\d+ pgbench_accounts\n\n Table \"public.pgbench_accounts\"\n\n Column | Type | Collation | Nullable | Default | Storage\n| Compression | Stats target | Description\n\n----------+---------------+-----------+----------+---------+----------+-------------+--------------+-------------\n\n aid | integer | | not null | | plain\n| | |\n bid | integer | | | | plain\n| | |\n abalance | integer | | | | plain\n| | |\n filler | character(84) | | | | extended\n| | |\n\nIndexes:\n \"pgbench_accounts_idx\" hash (aid, bid)\nAccess method: heap\nOptions: fillfactor=100\n\nTest Command: “./pgbench -j 1 postgres -C -M prepared -S -T 300”\n\nPerformance Test Results:\nIdea-1: Single Hash value for multiple key columns\n TPS = ~292\n\nIdea-2: Separate Hash values for each key column. But use only the\nfirst one to search the bucket. 
Other hash values are used as payload\nto get to the matching tuple before going to the heap.\n TPS = ~212\n\nNote: Here we got near to 25% better performance in a single combine\nhash approach with only TWO key columns. If we go for separate Hash\nvalues for all key columns mentioned then there will be a performance\ndip and storage also will be relatively higher when we have more key\ncolumns.\n\nI have just done separate POC patches to get the performance results\nas mentioned above, there are many other scenarios like split case, to\nbe taken care further.\nAttaching the POC patches here just for reference…\n\n\nThanks & Regards\nSadhuPrasad\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Fri, 27 Aug 2021 16:27:46 +0530",
"msg_from": "Sadhuprasad Patro <b.sadhu@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Next Steps with Hash Indexes"
},
{
"msg_contents": "On Fri, Aug 27, 2021 at 4:27 PM Sadhuprasad Patro <b.sadhu@gmail.com> wrote:\n>\n> IMHO, as discussed above, since other databases also have the\n> limitation that if you create a multi-column hash index then the hash\n> index can not be used until all the key columns are used in the search\n> condition. So my point is that users might be using the hash index\n> with this limitation and their use case might be that they want to\n> gain the best performance when they use this particular case and they\n> might not be looking for much flexibility like we provide in BTREE.\n>\n> For reference:\n> https://dev.mysql.com/doc/refman/8.0/en/index-btree-hash.html\n> https://docs.microsoft.com/en-us/sql/relational-databases/in-memory-oltp/indexes-for-memory-optimized-tables?view=sql-server-ver15\n>\n> We already know that performance will be better with a single hash for\n> multiple columns, but still I just wanted to check the performance\n> difference in PG. This might help us to decide the approach we need to\n> go for. 
With a quick POC of both the ideas, I have observed there is a\n> major performance advantage with single combined hash for multi-key\n> columns.\n>\n> Performance Test Details: (Used PGBENCH Tool)\n> Initialize cmd: “./pgbench -i -s 100 -d postgres\"\n>\n> postgres=# \\d+ pgbench_accounts\n>\n> Table \"public.pgbench_accounts\"\n>\n> Column | Type | Collation | Nullable | Default | Storage\n> | Compression | Stats target | Description\n>\n> ----------+---------------+-----------+----------+---------+----------+-------------+--------------+-------------\n>\n> aid | integer | | not null | | plain\n> | | |\n> bid | integer | | | | plain\n> | | |\n> abalance | integer | | | | plain\n> | | |\n> filler | character(84) | | | | extended\n> | | |\n>\n> Indexes:\n> \"pgbench_accounts_idx\" hash (aid, bid)\n> Access method: heap\n> Options: fillfactor=100\n>\n> Test Command: “./pgbench -j 1 postgres -C -M prepared -S -T 300”\n>\n> Performance Test Results:\n> Idea-1: Single Hash value for multiple key columns\n> TPS = ~292\n>\n> Idea-2: Separate Hash values for each key column. But use only the\n> first one to search the bucket. Other hash values are used as payload\n> to get to the matching tuple before going to the heap.\n> TPS = ~212\n>\n> Note: Here we got near to 25% better performance in a single combine\n> hash approach with only TWO key columns.\n>\n\nThat's a significant difference. Have you checked via perf or some\nother way what causes this difference? I have seen that sometimes\nsingle client performance with pgbench is not stable, so can you\nplease once check with 4 clients or so and possibly with a larger\ndataset as well.\n\nOne more thing to consider is that it seems that the planner requires\na condition for the first column of an index before considering an\nindexscan plan. See Tom's email [1] in this regard. 
I think it would\nbe better to see what kind of work is involved there if you want to\nexplore a single hash value for all columns idea.\n\n[1] - https://www.postgresql.org/message-id/29263.1506483172%40sss.pgh.pa.us\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 28 Aug 2021 16:30:48 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Next Steps with Hash Indexes"
},
{
"msg_contents": ">\n> That's a significant difference. Have you checked via perf or some\n> other way what causes this difference? I have seen that sometimes\n> single client performance with pgbench is not stable, so can you\n> please once check with 4 clients or so and possibly with a larger\n> dataset as well.\n\nI have verified manually, without the PGBENCH tool also. I can see a\nsignificant difference for each query fired in both the versions of\npatch implemented. We can see as mentioned below, I have run the SAME\nquery on the SAME dataset on both patches. We have a significant\nperformance impact with Separate Hash values for multiple key columns.\n\nSingleHash_MultiColumn:\npostgres=# create table perftest(a int, b int, c int, d int, e int, f int);\nCREATE TABLE\n\npostgres=# insert into perftest values (generate_series(1, 10000000),\ngenerate_series(1, 10000000), generate_series(1, 10000000), 9, 7);\nINSERT 0 10000000\n\npostgres=# create index idx on perftest using hash(a, b, c);\nCREATE INDEX\n\npostgres=# select * from perftest where a=5999 and b=5999 and c=5999;\n a | b | c | d | e | f\n------+------+------+---+---+---\n 5999 | 5999 | 5999 | 9 | 7 |\n(1 row)\nTime: 2.022 ms\n\npostgres=# select * from perftest where a=597989 and b=597989 and c=597989;\n a | b | c | d | e | f\n--------+--------+--------+---+---+---\n 597989 | 597989 | 597989 | 9 | 7 |\n(1 row)\nTime: 0.867 ms\n\npostgres=# select * from perftest where a=6297989 and b=6297989 and c=6297989;\n a | b | c | d | e | f\n---------+---------+---------+---+---+---\n 6297989 | 6297989 | 6297989 | 9 | 7 |\n(1 row)\nTime: 1.439 ms\n\npostgres=# select * from perftest where a=6290798 and b=6290798 and c=6290798;\n a | b | c | d | e | f\n---------+---------+---------+---+---+---\n 6290798 | 6290798 | 6290798 | 9 | 7 |\n(1 row)\nTime: 1.013 ms\n\npostgres=# select * from perftest where a=6290791 and b=6290791 and c=6290791;\n a | b | c | d | e | f\n---------+---------+---------+---+---+---\n 6290791 
| 6290791 | 6290791 | 9 | 7 |\n(1 row)\nTime: 0.903 ms\n\npostgres=# select * from perftest where a=62907 and b=62907 and c=62907;\n a | b | c | d | e | f\n-------+-------+-------+---+---+---\n 62907 | 62907 | 62907 | 9 | 7 |\n(1 row)\nTime: 0.894 ms\n\nSeparateHash_MultiColumn:\npostgres=# create table perftest(a int, b int, c int, d int, e int, f int);\nCREATE TABLE\n\npostgres=# insert into perftest values (generate_series(1, 10000000),\ngenerate_series(1, 10000000), generate_series(1, 10000000), 9, 7);\nINSERT 0 10000000\n\npostgres=# create index idx on perftest using hash(a, b, c);\nCREATE INDEX\n\npostgres=# select * from perftest where a=5999 and b=5999 and c=5999;\n a | b | c | d | e | f\n------+------+------+---+---+---\n 5999 | 5999 | 5999 | 9 | 7 |\n(1 row)\nTime: 2.915 ms\n\npostgres=# select * from perftest where a=597989 and b=597989 and c=597989;\n a | b | c | d | e | f\n--------+--------+--------+---+---+---\n 597989 | 597989 | 597989 | 9 | 7 |\n(1 row)\nTime: 1.129 ms\n\npostgres=# select * from perftest where a=6297989 and b=6297989 and c=6297989;\n a | b | c | d | e | f\n---------+---------+---------+---+---+---\n 6297989 | 6297989 | 6297989 | 9 | 7 |\n(1 row)\nTime: 2.454 ms\n\npostgres=# select * from perftest where a=6290798 and b=6290798 and c=6290798;\n a | b | c | d | e | f\n---------+---------+---------+---+---+---\n 6290798 | 6290798 | 6290798 | 9 | 7 |\n(1 row)\nTime: 2.327 ms\n\npostgres=# select * from perftest where a=6290791 and b=6290791 and c=6290791;\n a | b | c | d | e | f\n---------+---------+---------+---+---+---\n 6290791 | 6290791 | 6290791 | 9 | 7 |\n(1 row)\nTime: 1.676 ms\n\npostgres=# select * from perftest where a=62907 and b=62907 and c=62907;\n a | b | c | d | e | f\n-------+-------+-------+---+---+---\n 62907 | 62907 | 62907 | 9 | 7 |\n(1 row)\nTime: 2.614 ms\n\nIf I do a test with 4 clients, then there is not much visible\ndifference. I think this is because of contentions. 
And here our focus\nis single thread & single operation performance.\n\n>\n> One more thing to consider is that it seems that the planner requires\n> a condition for the first column of an index before considering an\n> indexscan plan. See Tom's email [1] in this regard. I think it would\n> be better to see what kind of work is involved there if you want to\n> explore a single hash value for all columns idea.\n>\n> [1] - https://www.postgresql.org/message-id/29263.1506483172%40sss.pgh.pa.us\n\nAbout this point, I will analyze further and update.\n\nThanks & Regards\nSadhuPrasad\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 1 Sep 2021 16:10:20 +0530",
"msg_from": "Sadhuprasad Patro <b.sadhu@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Next Steps with Hash Indexes"
},
{
"msg_contents": "> > One more thing to consider is that it seems that the planner requires\n> > a condition for the first column of an index before considering an\n> > indexscan plan. See Tom's email [1] in this regard. I think it would\n> > be better to see what kind of work is involved there if you want to\n> > explore a single hash value for all columns idea.\n> >\n> > [1] - https://www.postgresql.org/message-id/29263.1506483172%40sss.pgh.pa.us\n>\n> About this point, I will analyze further and update.\n>\n\nI have checked the planner code, there does not seem to be any\ncomplicated changes needed to cover if we take up a single hash value\nfor all columns... Below are the major part of changes needed:\n\nIn build_index_paths(), there is a check like, \"if (index_clauses ==\nNIL && !index->amoptionalkey)\", which helps to figure out if the\nleading column has any clause or not. This needs to be moved out of\nthe loop and check for clauses on all key columns.\nWith this we need to add a \"amallcolumncluse\" field to Index\nstructure, which will be set to TRUE for HASH index and FALSE in other\ncases.\n\nAnd to get the multi-column hash index selected, we may set\nenable_hashjoin =off, to avoid any condition become join condition,\nsaw similar behaviors in other DBs as well...\n\nThanks & Regards\nSadhuPrasad\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 23 Sep 2021 10:03:56 +0530",
"msg_from": "Sadhuprasad Patro <b.sadhu@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Next Steps with Hash Indexes"
},
{
"msg_contents": "On Thu, Sep 23, 2021 at 10:04 AM Sadhuprasad Patro <b.sadhu@gmail.com> wrote:\n>\n> > > One more thing to consider is that it seems that the planner requires\n> > > a condition for the first column of an index before considering an\n> > > indexscan plan. See Tom's email [1] in this regard. I think it would\n> > > be better to see what kind of work is involved there if you want to\n> > > explore a single hash value for all columns idea.\n> > >\n> > > [1] - https://www.postgresql.org/message-id/29263.1506483172%40sss.pgh.pa.us\n> >\n> > About this point, I will analyze further and update.\n> >\n>\n> I have checked the planner code, there does not seem to be any\n> complicated changes needed to cover if we take up a single hash value\n> for all columns... Below are the major part of changes needed:\n>\n> In build_index_paths(), there is a check like, \"if (index_clauses ==\n> NIL && !index->amoptionalkey)\", which helps to figure out if the\n> leading column has any clause or not. This needs to be moved out of\n> the loop and check for clauses on all key columns.\n> With this we need to add a \"amallcolumncluse\" field to Index\n> structure, which will be set to TRUE for HASH index and FALSE in other\n> cases.\n\nRight we can add an AM level option and based on that we can decide\nwhether to select the index scan if conditions are not given for all\nthe key columns. 
And changes don't look that complicated.\n\n>\n> And to get the multi-column hash index selected, we may set\n> enable_hashjoin =off, to avoid any condition become join condition,\n> saw similar behaviors in other DBs as well...\n\nThis may be related to Tom's point that, if some of the quals are\nremoved due to optimization or converted to join quals, then now, even\nif the user has given qual on all the key columns the index scan will\nnot be selected because we will be forcing that the hash index can\nonly be selected if it has quals on all the key attributes?\n\nI don't think suggesting enable_hashjoin =off is a solution, this can\nhappen with merge join or the nested loop join with materialized node,\nin all such cases join filter can not be pushed down to the inner node\nbecause the outer node will not start to scan until we\nmaterialize/sort/hash the inner node. But yeah if we test this\nbehavior in other databases also and if it appeared that this is how\nthe hash index is being used then maybe this behavior can be\ndocumented.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 23 Sep 2021 11:11:23 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Next Steps with Hash Indexes"
},
{
"msg_contents": "On Thu, Sep 23, 2021 at 11:11 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Thu, Sep 23, 2021 at 10:04 AM Sadhuprasad Patro <b.sadhu@gmail.com> wrote:\n> >\n> > And to get the multi-column hash index selected, we may set\n> > enable_hashjoin =off, to avoid any condition become join condition,\n> > saw similar behaviors in other DBs as well...\n>\n> This may be related to Tom's point that, if some of the quals are\n> removed due to optimization or converted to join quals, then now, even\n> if the user has given qual on all the key columns the index scan will\n> not be selected because we will be forcing that the hash index can\n> only be selected if it has quals on all the key attributes?\n>\n> I don't think suggesting enable_hashjoin =off is a solution,\n>\n\nYeah, this doesn't sound like a good idea. How about instead try to\nexplore the idea where the hash (bucket assignment and search) will be\nbased on the first index key and the other columns will be stored as\npayload? I think this might pose some difficulty in the consecutive\npatch to enable a unique index because it will increase the chance of\ntraversing more buckets for uniqueness checks. If we see such\nproblems, then I have another idea to minimize the number of buckets\nthat we need to lock during uniqueness check which is by lock chaining\nas is used during hash bucket clean up where at a time we don't need\nto lock more than two buckets at a time.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 27 Sep 2021 11:22:34 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Next Steps with Hash Indexes"
},
{
"msg_contents": "On Mon, 27 Sept 2021 at 06:52, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Sep 23, 2021 at 11:11 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Thu, Sep 23, 2021 at 10:04 AM Sadhuprasad Patro <b.sadhu@gmail.com> wrote:\n> > >\n> > > And to get the multi-column hash index selected, we may set\n> > > enable_hashjoin =off, to avoid any condition become join condition,\n> > > saw similar behaviors in other DBs as well...\n> >\n> > This may be related to Tom's point that, if some of the quals are\n> > removed due to optimization or converted to join quals, then now, even\n> > if the user has given qual on all the key columns the index scan will\n> > not be selected because we will be forcing that the hash index can\n> > only be selected if it has quals on all the key attributes?\n> >\n> > I don't think suggesting enable_hashjoin =off is a solution,\n> >\n>\n> Yeah, this doesn't sound like a good idea. How about instead try to\n> explore the idea where the hash (bucket assignment and search) will be\n> based on the first index key and the other columns will be stored as\n> payload? I think this might pose some difficulty in the consecutive\n> patch to enable a unique index because it will increase the chance of\n> traversing more buckets for uniqueness checks. If we see such\n> problems, then I have another idea to minimize the number of buckets\n> that we need to lock during uniqueness check which is by lock chaining\n> as is used during hash bucket clean up where at a time we don't need\n> to lock more than two buckets at a time.\n\nI have presented a simple, almost trivial, patch to allow multi-col\nhash indexes. It hashes the first column only, which can be a downside\nin *some* cases. If that is clearly documented, it would not cause\nmany issues, IMHO. 
However, it does not have any optimization issues\nor complexities, which is surely a very good thing.\n\nTrying to involve *all* columns in the hash index is a secondary\noptimization. It requires subtle changes in optimizer code, as Tom\npoints out. It also needs fine tuning to make the all-column approach\nbeneficial for the additional cases without losing against what the\n\"first column\" approach gives.\n\nI did consider both approaches and after this discussion I am still in\nfavour of committing the very simple \"first column\" approach to\nmulti-col hash indexes now.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 5 Oct 2021 11:38:02 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Next Steps with Hash Indexes"
},
{
"msg_contents": "On Fri, 13 Aug 2021 at 05:01, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Aug 12, 2021 at 8:30 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Thu, Aug 12, 2021 at 12:22 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > The design of the patch has changed since the initial proposal. It\n> > > tries to perform unique inserts by holding a write lock on the bucket\n> > > page to avoid duplicate inserts.\n> >\n> > Do you mean that you're holding a buffer lwlock while you search the\n> > whole bucket? If so, that's surely not OK.\n> >\n>\n> I think here you are worried that after holding lwlock we might\n> perform reads of overflow pages which is not a good idea. I think\n> there are fewer chances of having overflow pages for unique indexes so\n> we don't expect such cases in common and as far as I can understand\n> this can happen in btree as well during uniqueness check. Now, I think\n> the other possibility could be that we do some sort of lock chaining\n> where we grab the lock of the next bucket before releasing the lock of\n> the current bucket as we do during bucket clean up but not sure about\n> the same.\n>\n> I haven't studied the patch in detail so it is better for Simon to\n> pitch in here to avoid any incorrect information or if he has a\n> different understanding/idea.\n\nThat is correct. After analysis of their behavior, I think further\nsimple work on hash indexes is worthwhile.\n\nWith unique data, starting at 1 and monotonically ascending, hash\nindexes will grow very nicely from 0 to 10E7 rows without causing >1\noverflow block to be allocated for any bucket. This keeps the search\ntime for such data to just 2 blocks (bucket plus, if present, 1\noverflow block). The small number of overflow blocks is because of the\nregular and smooth way that splits occur, which works very nicely\nwithout significant extra latency.\n\nThe probability of bucket collision while we hold the lock is fairly\nlow. 
This is because even with adjacent data values the hash values\nwould be spread across multiple buckets, so we would expect the\ncontention to be less than we would get on a monotonically increasing\nbtree.\n\nSo I don't now see any problem from holding the buffer lwlock on the\nbucket while we do multi-buffer operations.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 5 Oct 2021 11:50:12 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Next Steps with Hash Indexes"
},
{
"msg_contents": "On Tue, Oct 5, 2021 at 4:08 PM Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>\n> On Mon, 27 Sept 2021 at 06:52, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Sep 23, 2021 at 11:11 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > On Thu, Sep 23, 2021 at 10:04 AM Sadhuprasad Patro <b.sadhu@gmail.com> wrote:\n> > > >\n> > > > And to get the multi-column hash index selected, we may set\n> > > > enable_hashjoin =off, to avoid any condition become join condition,\n> > > > saw similar behaviors in other DBs as well...\n> > >\n> > > This may be related to Tom's point that, if some of the quals are\n> > > removed due to optimization or converted to join quals, then now, even\n> > > if the user has given qual on all the key columns the index scan will\n> > > not be selected because we will be forcing that the hash index can\n> > > only be selected if it has quals on all the key attributes?\n> > >\n> > > I don't think suggesting enable_hashjoin =off is a solution,\n> > >\n> >\n> > Yeah, this doesn't sound like a good idea. How about instead try to\n> > explore the idea where the hash (bucket assignment and search) will be\n> > based on the first index key and the other columns will be stored as\n> > payload? I think this might pose some difficulty in the consecutive\n> > patch to enable a unique index because it will increase the chance of\n> > traversing more buckets for uniqueness checks. If we see such\n> > problems, then I have another idea to minimize the number of buckets\n> > that we need to lock during uniqueness check which is by lock chaining\n> > as is used during hash bucket clean up where at a time we don't need\n> > to lock more than two buckets at a time.\n>\n> I have presented a simple, almost trivial, patch to allow multi-col\n> hash indexes. It hashes the first column only, which can be a downside\n> in *some* cases. If that is clearly documented, it would not cause\n> many issues, IMHO. 
However, it does not have any optimization issues\n> or complexities, which is surely a very good thing.\n>\n> Trying to involve *all* columns in the hash index is a secondary\n> optimization. It requires subtle changes in optimizer code, as Tom\n> points out. It also needs fine tuning to make the all-column approach\n> beneficial for the additional cases without losing against what the\n> \"first column\" approach gives.\n>\n> I did consider both approaches and after this discussion I am still in\n> favour of committing the very simple \"first column\" approach to\n> multi-col hash indexes now.\n\nBut what about the other approach suggested by Tom, basically we hash\nonly based on the first column for identifying the bucket, but we also\nstore the hash value for other columns? With that, we don't need\nchanges in the optimizer and we can also avoid a lot of disk fetches\nbecause after finding the bucket we can match the secondary columns\nbefore fetching the disk tuple. I agree, one downside with this\napproach is we will increase the index size.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 5 Oct 2021 16:54:01 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Next Steps with Hash Indexes"
},
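The layout Dilip describes — bucket chosen by the first column's hash, hashes of the trailing columns kept as payload so non-matching entries can be filtered before any heap fetch — can be sketched as a toy model. This is purely illustrative (the `h32` helper and `MultiColHashIndex` class are invented stand-ins, not PostgreSQL hash-AM code):

```python
import hashlib

def h32(value):
    """Stand-in for PostgreSQL's per-type hash functions: a 32-bit hash."""
    data = repr(value).encode()
    return int.from_bytes(hashlib.blake2b(data, digest_size=4).digest(), "big")

class MultiColHashIndex:
    """Toy model: the FIRST key column alone picks the bucket; hashes of
    the remaining columns are stored as payload, so entries that cannot
    match are skipped without fetching the heap tuple."""

    def __init__(self, nbuckets=16):
        self.buckets = [[] for _ in range(nbuckets)]

    def insert(self, keys, heap_tid):
        bucket = h32(keys[0]) % len(self.buckets)
        payload = tuple(h32(k) for k in keys[1:])
        self.buckets[bucket].append((h32(keys[0]), payload, heap_tid))

    def search(self, keys):
        bucket = h32(keys[0]) % len(self.buckets)
        payload = tuple(h32(k) for k in keys[1:])
        # only TIDs surviving the payload filter need a heap fetch + recheck
        return [tid for h1, p, tid in self.buckets[bucket]
                if h1 == h32(keys[0]) and p == payload]
```

In this model (1,2) and (2,1) land independently: they may sit in different buckets, and even on a bucket collision the payload hashes distinguish them — at the cost of the extra per-entry space the thread goes on to debate.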
{
"msg_contents": "On Tue, 5 Oct 2021 at 12:24, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Oct 5, 2021 at 4:08 PM Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n> >\n> > On Mon, 27 Sept 2021 at 06:52, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Thu, Sep 23, 2021 at 11:11 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > >\n> > > > On Thu, Sep 23, 2021 at 10:04 AM Sadhuprasad Patro <b.sadhu@gmail.com> wrote:\n> > > > >\n> > > > > And to get the multi-column hash index selected, we may set\n> > > > > enable_hashjoin =off, to avoid any condition become join condition,\n> > > > > saw similar behaviors in other DBs as well...\n> > > >\n> > > > This may be related to Tom's point that, if some of the quals are\n> > > > removed due to optimization or converted to join quals, then now, even\n> > > > if the user has given qual on all the key columns the index scan will\n> > > > not be selected because we will be forcing that the hash index can\n> > > > only be selected if it has quals on all the key attributes?\n> > > >\n> > > > I don't think suggesting enable_hashjoin =off is a solution,\n> > > >\n> > >\n> > > Yeah, this doesn't sound like a good idea. How about instead try to\n> > > explore the idea where the hash (bucket assignment and search) will be\n> > > based on the first index key and the other columns will be stored as\n> > > payload? I think this might pose some difficulty in the consecutive\n> > > patch to enable a unique index because it will increase the chance of\n> > > traversing more buckets for uniqueness checks. If we see such\n> > > problems, then I have another idea to minimize the number of buckets\n> > > that we need to lock during uniqueness check which is by lock chaining\n> > > as is used during hash bucket clean up where at a time we don't need\n> > > to lock more than two buckets at a time.\n> >\n> > I have presented a simple, almost trivial, patch to allow multi-col\n> > hash indexes. 
It hashes the first column only, which can be a downside\n> > in *some* cases. If that is clearly documented, it would not cause\n> > many issues, IMHO. However, it does not have any optimization issues\n> > or complexities, which is surely a very good thing.\n> >\n> > Trying to involve *all* columns in the hash index is a secondary\n> > optimization. It requires subtle changes in optimizer code, as Tom\n> > points out. It also needs fine tuning to make the all-column approach\n> > beneficial for the additional cases without losing against what the\n> > \"first column\" approach gives.\n> >\n> > I did consider both approaches and after this discussion I am still in\n> > favour of committing the very simple \"first column\" approach to\n> > multi-col hash indexes now.\n>\n> But what about the other approach suggested by Tom, basically we hash\n> only based on the first column for identifying the bucket, but we also\n> store the hash value for other columns? With that, we don't need\n> changes in the optimizer and we can also avoid a lot of disk fetches\n> because after finding the bucket we can match the secondary columns\n> before fetching the disk tuple. I agree, one downside with this\n> approach is we will increase the index size.\n\nIdentifying the bucket is the main part of a hash index's work, so\nthat part would be identical.\n\nOnce we have identified the bucket, we sort the bucket page by hash,\nso having an all-col hash would help de-duplicate multi-col hash\ncollisions, but not enough to be worth it, IMHO, given that storing an\nextra 4 bytes per index tuple is a significant size increase which\nwould cause extra overflow pages etc.. The same thought applies to\n8-byte hashes.\n\nIMHO, multi-column hash collisions are a secondary issue, given that\nwe can already select the column order for an index and hash indexes\nwould only be used by explicit user choice. 
If there are some minor\nsub-optimal aspects of using hash indexes, then btree was already the\ndefault and a great choice for many cases.\n\nIf btree didn't already exist I would care more about making hash\nindexes perfect. I just want to make them usable.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 5 Oct 2021 17:28:33 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Next Steps with Hash Indexes"
},
{
"msg_contents": "\n\nOn 10/5/21 18:28, Simon Riggs wrote:\n> On Tue, 5 Oct 2021 at 12:24, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>>\n>> On Tue, Oct 5, 2021 at 4:08 PM Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>>>\n>>> On Mon, 27 Sept 2021 at 06:52, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>>>\n>>>> On Thu, Sep 23, 2021 at 11:11 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>>>>>\n>>>>> On Thu, Sep 23, 2021 at 10:04 AM Sadhuprasad Patro <b.sadhu@gmail.com> wrote:\n>>>>>>\n>>>>>> And to get the multi-column hash index selected, we may set\n>>>>>> enable_hashjoin =off, to avoid any condition become join condition,\n>>>>>> saw similar behaviors in other DBs as well...\n>>>>>\n>>>>> This may be related to Tom's point that, if some of the quals are\n>>>>> removed due to optimization or converted to join quals, then now, even\n>>>>> if the user has given qual on all the key columns the index scan will\n>>>>> not be selected because we will be forcing that the hash index can\n>>>>> only be selected if it has quals on all the key attributes?\n>>>>>\n>>>>> I don't think suggesting enable_hashjoin =off is a solution,\n>>>>>\n>>>>\n>>>> Yeah, this doesn't sound like a good idea. How about instead try to\n>>>> explore the idea where the hash (bucket assignment and search) will be\n>>>> based on the first index key and the other columns will be stored as\n>>>> payload? I think this might pose some difficulty in the consecutive\n>>>> patch to enable a unique index because it will increase the chance of\n>>>> traversing more buckets for uniqueness checks. If we see such\n>>>> problems, then I have another idea to minimize the number of buckets\n>>>> that we need to lock during uniqueness check which is by lock chaining\n>>>> as is used during hash bucket clean up where at a time we don't need\n>>>> to lock more than two buckets at a time.\n>>>\n>>> I have presented a simple, almost trivial, patch to allow multi-col\n>>> hash indexes. 
It hashes the first column only, which can be a downside\n>>> in *some* cases. If that is clearly documented, it would not cause\n>>> many issues, IMHO. However, it does not have any optimization issues\n>>> or complexities, which is surely a very good thing.\n>>>\n>>> Trying to involve *all* columns in the hash index is a secondary\n>>> optimization. It requires subtle changes in optimizer code, as Tom\n>>> points out. It also needs fine tuning to make the all-column approach\n>>> beneficial for the additional cases without losing against what the\n>>> \"first column\" approach gives.\n>>>\n>>> I did consider both approaches and after this discussion I am still in\n>>> favour of committing the very simple \"first column\" approach to\n>>> multi-col hash indexes now.\n>>\n>> But what about the other approach suggested by Tom, basically we hash\n>> only based on the first column for identifying the bucket, but we also\n>> store the hash value for other columns? With that, we don't need\n>> changes in the optimizer and we can also avoid a lot of disk fetches\n>> because after finding the bucket we can match the secondary columns\n>> before fetching the disk tuple. I agree, one downside with this\n>> approach is we will increase the index size.\n> \n> Identifying the bucket is the main part of a hash index's work, so\n> that part would be identical.\n> \n> Once we have identified the bucket, we sort the bucket page by hash,\n> so having an all-col hash would help de-duplicate multi-col hash\n> collisions, but not enough to be worth it, IMHO, given that storing an\n> extra 4 bytes per index tuple is a significant size increase which\n> would cause extra overflow pages etc.. 
The same thought applies to\n> 8-byte hashes.\n> \n\nIMO it'd be nice to show some numbers to support the claims that storing \nthe extra hashes and/or 8B hashes is not worth it ...\n\nI'm sure there are cases where it'd be a net loss, but imagine for \nexample a case when the first column has a lot of duplicate values. \nWhich is not all that unlikely - duplicates seem like one of the natural \nreasons why people want multi-column hash indexes. And those duplicates \nare quite expensive, due to having to access the heap. Being able to \neliminate those extra accesses cheaply might be a clear win, even if it \nmakes the index a bit larger (shorter hashes might be enough).\n\n\n> IMHO, multi-column hash collisions are a secondary issue, given that\n> we can already select the column order for an index and hash indexes\n> would only be used by explicit user choice. If there are some minor\n> sub-optimal aspects of using hash indexes, then btree was already the\n> default and a great choice for many cases.\n> \n\nBut we can't select arbitrary column order, because the first column is \nused to select the bucket. Which means it's also about what columns are \nused by the queries. If the query is not using the first column, it \ncan't use the index.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 5 Oct 2021 21:06:07 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Next Steps with Hash Indexes"
},
{
"msg_contents": "On Tue, 5 Oct 2021 at 20:06, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n\n> >>> I have presented a simple, almost trivial, patch to allow multi-col\n> >>> hash indexes. It hashes the first column only, which can be a downside\n> >>> in *some* cases. If that is clearly documented, it would not cause\n> >>> many issues, IMHO. However, it does not have any optimization issues\n> >>> or complexities, which is surely a very good thing.\n> >>>\n> >>> Trying to involve *all* columns in the hash index is a secondary\n> >>> optimization. It requires subtle changes in optimizer code, as Tom\n> >>> points out. It also needs fine tuning to make the all-column approach\n> >>> beneficial for the additional cases without losing against what the\n> >>> \"first column\" approach gives.\n> >>>\n> >>> I did consider both approaches and after this discussion I am still in\n> >>> favour of committing the very simple \"first column\" approach to\n> >>> multi-col hash indexes now.\n> >>\n> >> But what about the other approach suggested by Tom, basically we hash\n> >> only based on the first column for identifying the bucket, but we also\n> >> store the hash value for other columns? With that, we don't need\n> >> changes in the optimizer and we can also avoid a lot of disk fetches\n> >> because after finding the bucket we can match the secondary columns\n> >> before fetching the disk tuple. I agree, one downside with this\n> >> approach is we will increase the index size.\n> >\n> > Identifying the bucket is the main part of a hash index's work, so\n> > that part would be identical.\n> >\n> > Once we have identified the bucket, we sort the bucket page by hash,\n> > so having an all-col hash would help de-duplicate multi-col hash\n> > collisions, but not enough to be worth it, IMHO, given that storing an\n> > extra 4 bytes per index tuple is a significant size increase which\n> > would cause extra overflow pages etc.. 
The same thought applies to\n> > 8-byte hashes.\n> >\n>\n> IMO it'd be nice to show some numbers to support the claims that storing\n> the extra hashes and/or 8B hashes is not worth it ...\n\nUsing an 8-byte hash is possible, but only becomes effective when\n4-byte hash collisions get hard to manage. 8-byte hash also makes the\nindex 20% bigger, so it is not a good default.\n\nLet's look at the distribution of values:\n\nIn a table with 100 million rows, with consecutive monotonic values,\nstarting at 1\nNo Collisions - 98.8%\n1 Collision - 1.15%\n2+ Collisions - 0.009% (8979 values colliding)\nMax=4\n\nIn a table with 1 billion rows (2^30), with consecutive monotonic\nvalues, starting at 1\nNo Collisions - 89.3%\n1 Collision - 9.8%\n2 Collisions - 0.837%\n3+ Collisions - 0.0573% (615523 values colliding)\nMax=9\n\nAt 100 million rows, the collisions from a 4-byte hash are not\npainful, but by a billion rows they are starting to become a problem,\nand by 2 billion rows we have a noticeable issue (>18% collisions).\n\nClearly, 8-byte hash values would be appropriate for tables larger\nthan this. However, we expect users to know about and to use\npartitioning, with reasonable limits somewhere in the 100 million row\n(with 100 byte rows, 10GB) to 1 billion row (with 100 byte rows,\n100GB) range.\n\nThe change from 4 to 8 byte hashes seems simple, so I am not against\nit for that reason. IMHO there is no use case for 8-byte hashes since\nreasonable users would not have tables big enough to care.\n\nThat is my reasoning, YMMV.\n\n> I'm sure there are cases where it'd be a net loss, but imagine for\n> example a case when the first column has a lot of duplicate values.\n> Which is not all that unlikely - duplicates seem like one of the natural\n> reasons why people want multi-column hash indexes. And those duplicates\n> are quite expensive, due to having to access the heap. 
Being able to\n> eliminate those extra accesses cheaply might be a clear win, even if it\n> makes the index a bit larger (shorter hashes might be enough).\n\nI agree, eliminating duplicates would be a good thing, if that is possible.\n\nHowever, hashing on multiple columns doesn't eliminate duplicates, we\ncan still get them from different combinations of rows.\n\nWith a first-column hash then (1,1) and (1,2) collide.\nBut with an all-column hash then (1,2) and (2,1) collide.\nSo we can still end up with collisions and this depends on the data\nvalues and types.\nWe can all come up with pessimal use cases.\n\nPerhaps it would be possible to specify a parameter that says how many\ncolumns in the index are part of the hash? Not sure how easy that is.\n\nIf you have a situation with lots of duplicates, then use btrees\ninstead. We shouldn't have to make hash indexes work well for *all*\ncases before we allow multiple columns for some cases. The user will\nalready get to compare btree-vs-hash before they use them in a\nparticular use case. The perfect should not be the enemy of the good.\n\nStoring multiple hashes uses more space and is more complex. It\ndoesn't feel like a good solution to me, given the purpose of an index\nis not completeness but optimality.\nStoring 2 4-byte hashes uses 20% more space than one 4-byte hash.\nStoring 2 8-byte hashes uses 40% more space than one 4-byte hash.\n\n> > IMHO, multi-column hash collisions are a secondary issue, given that\n> > we can already select the column order for an index and hash indexes\n> > would only be used by explicit user choice. If there are some minor\n> > sub-optimal aspects of using hash indexes, then btree was already the\n> > default and a great choice for many cases.\n> >\n>\n> But we can't select arbitrary column order, because the first column is\n> used to select the bucket. Which means it's also about what columns are\n> used by the queries. 
If the query is not using the first column, it\n> can't use the index.\n\nNeither approach works in that case, so it is moot. i.e. you cannot\nuse a first-column hash, nor an all-column hash.\n\n\n--\nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 13 Oct 2021 11:43:44 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Next Steps with Hash Indexes"
},
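Simon's measured collision percentages line up closely with a Poisson model of uniform 32-bit hashing. A small sketch (the `collision_profile` helper is hypothetical; real figures depend on the actual hash function, which is why the 2^30 case he measured comes out slightly higher than the model's ~88%):

```python
import math

def collision_profile(n, bits=32):
    """Poisson estimate: among the distinct hash codes actually produced,
    what fraction is shared by exactly k of the n keys?  Assumes a uniform
    hash into 2**bits codes."""
    m = 2.0 ** bits
    lam = n / m
    occupied = m * (1.0 - math.exp(-lam))       # expected distinct codes used

    def codes_with(k):                          # codes hit by exactly k keys
        return m * math.exp(-lam) * lam ** k / math.factorial(k)

    return {k: codes_with(k) / occupied for k in range(1, 5)}

# 100M keys: ~98.8% of codes unshared, ~1.15% shared by exactly two keys,
# matching the measurements quoted above for the 100-million-row table
profile = collision_profile(100_000_000)
```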
{
"msg_contents": "On Wed, Oct 13, 2021 at 3:44 AM Simon Riggs\n<simon.riggs@enterprisedb.com> wrote:\n> > IMO it'd be nice to show some numbers to support the claims that storing\n> > the extra hashes and/or 8B hashes is not worth it ...\n>\n> Using an 8-byte hash is possible, but only becomes effective when\n> 4-byte hash collisions get hard to manage. 8-byte hash also makes the\n> index 20% bigger, so it is not a good default.\n\nAre you sure? I know that nbtree index tuples for a single-column int8\nindex are exactly the same size as those from a single column int4\nindex, due to alignment overhead at the tuple level. So my guess is\nthat hash index tuples (which use the same basic IndexTuple\nrepresentation) work in the same way.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 13 Oct 2021 12:15:47 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Next Steps with Hash Indexes"
},
{
"msg_contents": "On Wed, Oct 13, 2021 at 12:15 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Are you sure? I know that nbtree index tuples for a single-column int8\n> index are exactly the same size as those from a single column int4\n> index, due to alignment overhead at the tuple level. So my guess is\n> that hash index tuples (which use the same basic IndexTuple\n> representation) work in the same way.\n\nI'm assuming a 64-bit architecture here, by the way. That assumption\nshouldn't matter, since of course approximately 100% of all computers\nthat run Postgres are 64-bit these days.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 13 Oct 2021 12:23:23 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Next Steps with Hash Indexes"
},
{
"msg_contents": "On Wed, 13 Oct 2021 at 20:16, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Wed, Oct 13, 2021 at 3:44 AM Simon Riggs\n> <simon.riggs@enterprisedb.com> wrote:\n> > > IMO it'd be nice to show some numbers to support the claims that storing\n> > > the extra hashes and/or 8B hashes is not worth it ...\n> >\n> > Using an 8-byte hash is possible, but only becomes effective when\n> > 4-byte hash collisions get hard to manage. 8-byte hash also makes the\n> > index 20% bigger, so it is not a good default.\n>\n> Are you sure? I know that nbtree index tuples for a single-column int8\n> index are exactly the same size as those from a single column int4\n> index, due to alignment overhead at the tuple level. So my guess is\n> that hash index tuples (which use the same basic IndexTuple\n> representation) work in the same way.\n\nThe hash index tuples are 20-bytes each. If that were rounded up to\n8-byte alignment, then that would be 24 bytes.\n\nUsing pageinspect, the max(live_items) on any data page (bucket or\noverflow) is 407 items, so they can't be 24 bytes long.\n\n\nOther stats of interest would be that the current bucket design/page\nsplitting is very effective at maintaining distribution. On a hash\nindex for a table with 2 billion rows in it, with integer values from\n1 to 2billion, there are 3670016 bucket pages and 524286 overflow\npages, distributed so that 87.5% of buckets have no overflow pages,\nand 12.5% of buckets have only one overflow page; there are no buckets\nwith >1 overflow page. The most heavily populated overflow page has\n209 items.\n\nThe CREATE INDEX time is fairly poor at present, but that can be\noptimized easily enough, but I expect to do that after uniqueness is\nadded, since it would complicate the code to do that work in a\ndifferent order.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 14 Oct 2021 08:48:12 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Next Steps with Hash Indexes"
},
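The 407-items figure Simon quotes is consistent with 20-byte entries on a standard 8 KB page, and rules out 24-byte tuples, as this back-of-envelope check shows (header and special-space sizes are assumed from PostgreSQL's page layout, so treat the constants as approximations):

```python
BLCKSZ = 8192        # default PostgreSQL block size
PAGE_HEADER = 24     # sizeof(PageHeaderData)
HASH_SPECIAL = 16    # hash page opaque area (prev/next block, bucket, flags)
usable = BLCKSZ - PAGE_HEADER - HASH_SPECIAL    # 8152 bytes for entries

LINE_POINTER = 4
max_items_16 = usable // (LINE_POINTER + 16)    # 16-byte tuples -> 407 items
max_items_24 = usable // (LINE_POINTER + 24)    # 24-byte tuples -> 291 items
```

A page could hold only 291 entries if the tuples were padded to 24 bytes, so the observed 407 confirms they are 16 bytes plus a 4-byte line pointer.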
{
"msg_contents": "On Thu, Oct 14, 2021 at 12:48 AM Simon Riggs\n<simon.riggs@enterprisedb.com> wrote:\n> The hash index tuples are 20-bytes each. If that were rounded up to\n> 8-byte alignment, then that would be 24 bytes.\n>\n> Using pageinspect, the max(live_items) on any data page (bucket or\n> overflow) is 407 items, so they can't be 24 bytes long.\n\nThat's the same as an nbtree page, which confirms my suspicion. The 20\nbytes consists of a 16 byte tuple, plus a 4 byte line pointer. The\ntuple-level alignment overhead gets you from 12 bytes to 16 bytes with\na single int4 column. So the padding is there for the taking.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 14 Oct 2021 08:08:55 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Next Steps with Hash Indexes"
},
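Peter's point — that the int4 tuple's alignment padding makes an int8 hash free at the tuple level — checks out arithmetically. A sketch, with the 8-byte index-tuple header and 8-byte MAXALIGN assumed for typical 64-bit builds:

```python
def maxalign(length, align=8):
    """Round up to a multiple of ALIGN, like PostgreSQL's MAXALIGN macro
    on 64-bit platforms."""
    return (length + align - 1) & ~(align - 1)

INDEX_TUPLE_HEADER = 8   # 6-byte ItemPointerData + 2-byte t_info
LINE_POINTER = 4

# an int4 key pads 12 -> 16 bytes; an int8 key fills the padding exactly
int4_entry = maxalign(INDEX_TUPLE_HEADER + 4) + LINE_POINTER   # -> 20
int8_entry = maxalign(INDEX_TUPLE_HEADER + 8) + LINE_POINTER   # -> 20
```

Both entry kinds cost 20 bytes on-page, so widening a single 4-byte hash to 8 bytes would not, by itself, grow the index.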
{
"msg_contents": "On Thu, 14 Oct 2021 at 16:09, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Thu, Oct 14, 2021 at 12:48 AM Simon Riggs\n> <simon.riggs@enterprisedb.com> wrote:\n> > The hash index tuples are 20-bytes each. If that were rounded up to\n> > 8-byte alignment, then that would be 24 bytes.\n> >\n> > Using pageinspect, the max(live_items) on any data page (bucket or\n> > overflow) is 407 items, so they can't be 24 bytes long.\n>\n> That's the same as an nbtree page, which confirms my suspicion. The 20\n> bytes consists of a 16 byte tuple, plus a 4 byte line pointer. The\n> tuple-level alignment overhead gets you from 12 bytes to 16 bytes with\n> a single int4 column. So the padding is there for the taking.\n\nThank you for nudging me to review the tuple length.\n\nSince hash indexes never store Nulls, and the hash is always fixed\nlength, ISTM that we can compress the hash index entries down to\nItemPointerData (6 bytes) plus any hashes.\n\nThat doesn't change any arguments about size differences between\napproaches, but we can significantly reduce index size (by up to 50%).\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Sun, 17 Oct 2021 12:00:24 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Next Steps with Hash Indexes"
},
{
"msg_contents": "On Wed, Oct 13, 2021 at 4:13 PM Simon Riggs\n<simon.riggs@enterprisedb.com> wrote:\n>\n> On Tue, 5 Oct 2021 at 20:06, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n>\n> > >>> I have presented a simple, almost trivial, patch to allow multi-col\n> > >>> hash indexes. It hashes the first column only, which can be a downside\n> > >>> in *some* cases. If that is clearly documented, it would not cause\n> > >>> many issues, IMHO. However, it does not have any optimization issues\n> > >>> or complexities, which is surely a very good thing.\n> > >>>\n> > >>> Trying to involve *all* columns in the hash index is a secondary\n> > >>> optimization. It requires subtle changes in optimizer code, as Tom\n> > >>> points out. It also needs fine tuning to make the all-column approach\n> > >>> beneficial for the additional cases without losing against what the\n> > >>> \"first column\" approach gives.\n> > >>>\n> > >>> I did consider both approaches and after this discussion I am still in\n> > >>> favour of committing the very simple \"first column\" approach to\n> > >>> multi-col hash indexes now.\n> > >>\n> > >> But what about the other approach suggested by Tom, basically we hash\n> > >> only based on the first column for identifying the bucket, but we also\n> > >> store the hash value for other columns? With that, we don't need\n> > >> changes in the optimizer and we can also avoid a lot of disk fetches\n> > >> because after finding the bucket we can match the secondary columns\n> > >> before fetching the disk tuple. 
I agree, one downside with this\n> > >> approach is we will increase the index size.\n> > >\n> > > Identifying the bucket is the main part of a hash index's work, so\n> > > that part would be identical.\n> > >\n> > > Once we have identified the bucket, we sort the bucket page by hash,\n> > > so having an all-col hash would help de-duplicate multi-col hash\n> > > collisions, but not enough to be worth it, IMHO, given that storing an\n> > > extra 4 bytes per index tuple is a significant size increase which\n> > > would cause extra overflow pages etc.. The same thought applies to\n> > > 8-byte hashes.\n> > >\n> >\n> > IMO it'd be nice to show some numbers to support the claims that storing\n> > the extra hashes and/or 8B hashes is not worth it ...\n>\n> Using an 8-byte hash is possible, but only becomes effective when\n> 4-byte hash collisions get hard to manage. 8-byte hash also makes the\n> index 20% bigger, so it is not a good default.\n>\n> Let's look at the distribution of values:\n>\n> In a table with 100 million rows, with consecutive monotonic values,\n> starting at 1\n> No Collisions - 98.8%\n> 1 Collision - 1.15%\n> 2+ Collisions - 0.009% (8979 values colliding)\n> Max=4\n>\n> In a table with 1 billion rows (2^30), with consecutive monotonic\n> values, starting at 1\n> No Collisions - 89.3%\n> 1 Collision - 9.8%\n> 2 Collisions - 0.837%\n> 3+ Collisions - 0.0573% (615523 values colliding)\n> Max=9\n>\n> At 100 million rows, the collisions from a 4-byte hash are not\n> painful, but by a billion rows they are starting to become a problem,\n> and by 2 billion rows we have a noticeable issue (>18% collisions).\n>\n> Clearly, 8-byte hash values would be appropriate for tables larger\n> than this. 
However, we expect users to know about and to use\n> partitioning, with reasonable limits somewhere in the 100 million row\n> (with 100 byte rows, 10GB) to 1 billion row (with 100 byte rows,\n> 100GB) range.\n>\n> The change from 4 to 8 byte hashes seems simple, so I am not against\n> it for that reason. IMHO there is no use case for 8-byte hashes since\n> reasonable users would not have tables big enough to care.\n>\n> That is my reasoning, YMMV.\n>\n> > I'm sure there are cases where it'd be a net loss, but imagine for\n> > example a case when the first column has a lot of duplicate values.\n> > Which is not all that unlikely - duplicates seem like one of the natural\n> > reasons why people want multi-column hash indexes. And those duplicates\n> > are quite expensive, due to having to access the heap. Being able to\n> > eliminate those extra accesses cheaply might be a clear win, even if it\n> > makes the index a bit larger (shorter hashes might be enough).\n>\n> I agree, eliminating duplicates would be a good thing, if that is possible.\n>\n> However, hashing on multiple columns doesn't eliminate duplicates, we\n> can still get them from different combinations of rows.\n>\n> With a first-column hash then (1,1) and (1,2) collide.\n> But with an all-column hash then (1,2) and (2,1) collide.\n> So we can still end up with collisions and this depends on the data\n> values and types.\n>\n\nI don't think this will happen if we store the first column as bucket\nidentifier and other cols as payload.\n\n> We can all come up with pessimal use cases.\n>\n> Perhaps it would be possible to specify a parameter that says how many\n> columns in the index are part of the hash? Not sure how easy that is.\n>\n> If you have a situation with lots of duplicates, then use btrees\n> instead. We shouldn't have to make hash indexes work well for *all*\n> cases before we allow multiple columns for some cases. 
The user will\n> already get to compare btree-vs-hash before they use them in a\n> particular use case. The perfect should not be the enemy of the good.\n>\n> Storing multiple hashes uses more space and is more complex.\n>\n\nI agree that storing trailing columns (except the first one) as\npayload uses more space but it will save heap fetches in many cases.\nApart from search, even for unique key insertion, we need to do heap\nfetches as we can only verify the other values after fetching the row\nfrom the heap.\n\nNow, here I feel the question is do we want to save random heap I/O or\nsave extra space in a hash? I think both approaches have pros and cons\nbut probably saving heap I/O appears to be important especially for\nunique index checks where we need to hold bucket lock as well.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 25 Oct 2021 17:20:35 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Next Steps with Hash Indexes"
},
{
"msg_contents": "On Sun, Oct 17, 2021 at 4:30 PM Simon Riggs\n<simon.riggs@enterprisedb.com> wrote:\n>\n> On Thu, 14 Oct 2021 at 16:09, Peter Geoghegan <pg@bowt.ie> wrote:\n> >\n> > On Thu, Oct 14, 2021 at 12:48 AM Simon Riggs\n> > <simon.riggs@enterprisedb.com> wrote:\n> > > The hash index tuples are 20-bytes each. If that were rounded up to\n> > > 8-byte alignment, then that would be 24 bytes.\n> > >\n> > > Using pageinspect, the max(live_items) on any data page (bucket or\n> > > overflow) is 407 items, so they can't be 24 bytes long.\n> >\n> > That's the same as an nbtree page, which confirms my suspicion. The 20\n> > bytes consists of a 16 byte tuple, plus a 4 byte line pointer. The\n> > tuple-level alignment overhead gets you from 12 bytes to 16 bytes with\n> > a single int4 column. So the padding is there for the taking.\n>\n> Thank you for nudging me to review the tuple length.\n>\n> Since hash indexes never store Nulls, and the hash is always fixed\n> length, ISTM that we can compress the hash index entries down to\n> ItemPointerData (6 bytes) plus any hashes.\n>\n\nNice observation but we use INDEX_AM_RESERVED_BIT (as\nINDEX_MOVED_BY_SPLIT_MASK) for hash indexes, so we need to take care\nof that in some way.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 25 Oct 2021 17:22:37 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Next Steps with Hash Indexes"
},
{
"msg_contents": "On Tue, Oct 5, 2021 at 6:50 AM Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n> With unique data, starting at 1 and monotonically ascending, hash\n> indexes will grow very nicely from 0 to 10E7 rows without causing >1\n> overflow block to be allocated for any bucket. This keeps the search\n> time for such data to just 2 blocks (bucket plus, if present, 1\n> overflow block). The small number of overflow blocks is because of the\n> regular and smooth way that splits occur, which works very nicely\n> without significant extra latency.\n\nIt is my impression that with non-unique data things degrade rather\nbadly. There's no way to split the buckets that are overflowing\nwithout also splitting the buckets that are completely empty or, in\nany event, not full enough to need any overflow pages. I think that's\nrather awful.\n\n> The probability of bucket collision while we hold the lock is fairly\n> low. This is because even with adjacent data values the hash values\n> would be spread across multiple buckets, so we would expect the\n> contention to be less than we would get on a monotonically increasing\n> btree.\n>\n> So I don't now see any problem from holding the buffer lwlock on the\n> bucket while we do multi-buffer operations.\n\nI don't think that contention is the primary concern here. I think one\nmajor concern is interruptibility: a process must be careful not to\nhold lwlocks across long stretches of code, because it cannot be\ncancelled while it does. Even if the code is bug-free and the database\nhas no corruption, long pauses before cancels take effect can be\ninconvenient. But as soon as you add in those considerations things\nget much worse. Imagine a hash index that is corrupted so that there\nis a loop in the list of overflow pages. No matter what, we're going\nto go into an infinite loop scanning that bucket, but if we're holding\na buffer lock while we do it, there's no way to escape short of\nbouncing the entire server. 
That's pretty bad.\n\nUndetected deadlock is something to think about, too.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 26 Oct 2021 17:02:18 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Next Steps with Hash Indexes"
},
{
"msg_contents": "On Wed, Oct 27, 2021 at 2:32 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Oct 5, 2021 at 6:50 AM Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n> > With unique data, starting at 1 and monotonically ascending, hash\n> > indexes will grow very nicely from 0 to 10E7 rows without causing >1\n> > overflow block to be allocated for any bucket. This keeps the search\n> > time for such data to just 2 blocks (bucket plus, if present, 1\n> > overflow block). The small number of overflow blocks is because of the\n> > regular and smooth way that splits occur, which works very nicely\n> > without significant extra latency.\n>\n> It is my impression that with non-unique data things degrade rather\n> badly.\n>\n\nBut we will hold the bucket lock only for unique-index in which case\nthere shouldn't be non-unique data in the index. The non-unique case\nshould work as it works today. I guess this is the reason Simon took\nan example of unique data.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 27 Oct 2021 16:27:41 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Next Steps with Hash Indexes"
},
{
"msg_contents": "On Wed, 27 Oct 2021 at 12:58, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Oct 27, 2021 at 2:32 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Tue, Oct 5, 2021 at 6:50 AM Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n> > > With unique data, starting at 1 and monotonically ascending, hash\n> > > indexes will grow very nicely from 0 to 10E7 rows without causing >1\n> > > overflow block to be allocated for any bucket. This keeps the search\n> > > time for such data to just 2 blocks (bucket plus, if present, 1\n> > > overflow block). The small number of overflow blocks is because of the\n> > > regular and smooth way that splits occur, which works very nicely\n> > > without significant extra latency.\n> >\n> > It is my impression that with non-unique data things degrade rather\n> > badly.\n> >\n>\n> But we will hold the bucket lock only for unique-index in which case\n> there shouldn't be non-unique data in the index.\n\nEven in unique indexes there might be many duplicate index entries: A\nfrequently updated row, to which HOT cannot apply, whose row versions\nare waiting for vacuum (which is waiting for that one long-running\ntransaction to commit) will have many entries in each index.\n\nSure, it generally won't hit 10E7 duplicates, but we can hit large\nnumbers of duplicates fast on a frequently updated row. Updating one\nrow 1000 times between two runs of VACUUM is not at all impossible,\nand although I don't think it happens all the time, I do think it can\nhappen often enough on e.g. an HTAP system to make it a noteworthy\ntest case.\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Wed, 27 Oct 2021 13:25:15 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Next Steps with Hash Indexes"
},
{
"msg_contents": "On Wed, Oct 27, 2021 at 4:55 PM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n>\n> On Wed, 27 Oct 2021 at 12:58, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Oct 27, 2021 at 2:32 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > >\n> > > On Tue, Oct 5, 2021 at 6:50 AM Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n> > > > With unique data, starting at 1 and monotonically ascending, hash\n> > > > indexes will grow very nicely from 0 to 10E7 rows without causing >1\n> > > > overflow block to be allocated for any bucket. This keeps the search\n> > > > time for such data to just 2 blocks (bucket plus, if present, 1\n> > > > overflow block). The small number of overflow blocks is because of the\n> > > > regular and smooth way that splits occur, which works very nicely\n> > > > without significant extra latency.\n> > >\n> > > It is my impression that with non-unique data things degrade rather\n> > > badly.\n> > >\n> >\n> > But we will hold the bucket lock only for unique-index in which case\n> > there shouldn't be non-unique data in the index.\n>\n> Even in unique indexes there might be many duplicate index entries: A\n> frequently updated row, to which HOT cannot apply, whose row versions\n> are waiting for vacuum (which is waiting for that one long-running\n> transaction to commit) will have many entries in each index.\n>\n> Sure, it generally won't hit 10E7 duplicates, but we can hit large\n> numbers of duplicates fast on a frequently updated row. Updating one\n> row 1000 times between two runs of VACUUM is not at all impossible,\n> and although I don't think it happens all the time, I do think it can\n> happen often enough on e.g. an HTAP system to make it a noteworthy\n> test case.\n>\n\nI think it makes to test such cases and see the behavior w.r.t overflow buckets.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 27 Oct 2021 17:46:54 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Next Steps with Hash Indexes"
}
] |
[
{
"msg_contents": "Hackers,\n\nI believe there is a hazard in reorderbuffer.c if a call to write() buffers data rather than flushing it to disk, only to fail when flushing the data during close(). The relevant code is in ReorderBufferSerializeTXN(), which iterates over changes for a transaction, opening transient files as needed for the changes to be written, writing the transaction changes to the transient files using ReorderBufferSerializeChange(), and closing the files when finished using CloseTransientFile(), the return code from which is ignored. If ReorderBufferSerializeChange() were fsync()ing the writes, then ignoring the failures on close() would likely be ok, or at least in line with what we do elsewhere. But as far as I can see, no call to sync the file is performed.\n\nI expect a failure here could result in a partially written change in the transient file, perhaps preceded or followed by additional complete or partial changes. Perhaps even worse, a failure could result in a change not being written at all (rather than partially), potentially with preceding and following changes written intact, with no indication that one change is missing.\n\nUpon testing, both of these expectations appear to be true. Skipping some writes while not others easily creates a variety of failures, and for brevity I won't post a patch to demonstrate that here. The following code change causes whole rather than partial changes to be written or skipped. 
After applying this change and running the tests in contrib/test_decoding/, \"test toast\" shows that an UPDATE command has been silently skipped.\n\ndiff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c\nindex 7378beb684..a6c60b81c9 100644\n--- a/src/backend/replication/logical/reorderbuffer.c\n+++ b/src/backend/replication/logical/reorderbuffer.c\n@@ -108,6 +108,7 @@\n #include \"utils/rel.h\"\n #include \"utils/relfilenodemap.h\"\n \n+static bool lose_writes_until_close = false;\n \n /* entry for a hash table we use to map from xid to our transaction state */\n typedef struct ReorderBufferTXNByIdEnt\n@@ -3523,6 +3524,8 @@ ReorderBufferSerializeTXN(ReorderBuffer *rb, ReorderBufferTXN *txn)\n Size spilled = 0;\n Size size = txn->size;\n \n+ lose_writes_until_close = false;\n+\n elog(DEBUG2, \"spill %u changes in XID %u to disk\",\n (uint32) txn->nentries_mem, txn->xid);\n \n@@ -3552,7 +3555,10 @@ ReorderBufferSerializeTXN(ReorderBuffer *rb, ReorderBufferTXN *txn)\n char path[MAXPGPATH];\n \n if (fd != -1)\n+ {\n CloseTransientFile(fd);\n+ lose_writes_until_close = !lose_writes_until_close;\n+ }\n \n XLByteToSeg(change->lsn, curOpenSegNo, wal_segment_size);\n \n@@ -3600,6 +3606,8 @@ ReorderBufferSerializeTXN(ReorderBuffer *rb, ReorderBufferTXN *txn)\n \n if (fd != -1)\n CloseTransientFile(fd);\n+\n+ lose_writes_until_close = false;\n }\n \n /*\n@@ -3790,7 +3798,10 @@ ReorderBufferSerializeChange(ReorderBuffer *rb, ReorderBufferTXN *txn,\n \n errno = 0;\n pgstat_report_wait_start(WAIT_EVENT_REORDER_BUFFER_WRITE);\n- if (write(fd, rb->outbuf, ondisk->size) != ondisk->size)\n+\n+ if (lose_writes_until_close)\n+ ; /* silently do nothing with buffer contents */\n+ else if (write(fd, rb->outbuf, ondisk->size) != ondisk->size)\n {\n int save_errno = errno;\n\n\nThe attached patch catches the failures on close(), but to really fix this properly, we should call pg_fsync() before close().\n\nAny thoughts on this? 
It seems to add a fair amount of filesystem burden to add all the extra fsync activity, though I admit to having not benchmarked that yet. Perhaps doing something like Thomas's work for commit dee663f7843 where closing files is delayed so that fewer syncs are needed? I'm not sure how much that would help here, and would like feedback before pursuing anything of that sort.\n\nThe relevant code exists back as far as the 9_4_STABLE branch, where commit b89e151054a from 2014 first appears.\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 15 Jul 2021 13:03:05 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "data corruption hazard in reorderbuffer.c"
},
{
"msg_contents": "\n\n> On Jul 15, 2021, at 1:03 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> Skipping some writes while not others easily creates a variety of failures, and for brevity I won't post a patch to demonstrate that here.\n\nIf anybody is curious, one common error I see when simulating a close() skipping partial changes rather than whole ones looks like:\n\n\tERROR: got sequence entry 31 for toast chunk 16719 instead of seq 21\n\n(where the exact numbers differ, of course). This symptom has shown up in at least two ([1], [2] below) unsolved user bug reports specifically mentioning replication. That doesn't prove a connection between the those reports and this issue, but it makes me wonder.\n\n\n[1] https://www.postgresql.org/message-id/CA+_m4OBs2aPkjqd1gxnx2ykuTJogYCfq8TZgr1uPP3ZtBTyvew@mail.gmail.com\n[2] https://www.postgresql.org/message-id/flat/3bd6953d-3da6-040c-62bf-9522808d5c2f%402ndquadrant.com#f6f165ebea024f851d47a17723de5d29\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Thu, 15 Jul 2021 13:51:41 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: data corruption hazard in reorderbuffer.c"
},
{
"msg_contents": "\n\n> On Jul 15, 2021, at 1:51 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> one common error I see\n\nAnother common error is of the form\n\n\tERROR: could not map filenode \"base/16384/16413\" to relation OID\n\nresulting from a ddl statement having not been written correctly, I think. This, too, has shown up in [1] fairly recently against a 12.3 database. A similar looking bug was reported a couple years earlier in [2], for which Andres pushed the fix e9edc1ba0be21278de8f04a068c2fb3504dc03fc and backpatched as far back as 9.2. Postgres 12.3 was released quite a bit more recently than that, so it appears the bug report from [1] occurred despite having the fix from [2].\n\n[1] https://www.postgresql-archive.org/BUG-16812-Logical-decoding-error-td6170022.html\n[2] https://www.postgresql.org/message-id/flat/20180914021046.oi7dm4ra3ot2g2kt%40alap3.anarazel.de\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Thu, 15 Jul 2021 14:17:32 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: data corruption hazard in reorderbuffer.c"
},
{
"msg_contents": "Hi,\n\nI think it's mostly futile to list all the possible issues this might \nhave caused - if you skip arbitrary decoded changes, that can trigger \npretty much any bug in reorder buffer. But those bugs can be triggered \nby various other issues, of course.\n\nIt's hard to say what was the cause, but the \"logic\" bugs are probably \npermanent, while the issues triggered by I/O probably disappear after a \nrestart?\n\nThat being said, I agree this seems like an issue and we should not \nignore I/O errors. I'd bet other places using transient files (like \nsorting or hashagg spilling to disk) has the same issue, although in \nthat case the impact is likely limited to a single query.\n\nI wonder if sync before the close is an appropriate solution, though. It \nseems rather expensive, and those files are meant to be \"temporary\" \n(i.e. we don't keep them over restart). So maybe we could ensure the \nconsistency is a cheaper way - perhaps tracking some sort of checksum \nfor each file, or something like that?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 16 Jul 2021 00:32:07 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: data corruption hazard in reorderbuffer.c"
},
{
"msg_contents": "\n\n> On Jul 15, 2021, at 3:32 PM, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n> \n> I think it's mostly futile to list all the possible issues this might have caused - if you skip arbitrary decoded changes, that can trigger pretty much any bug in reorder buffer. But those bugs can be triggered by various other issues, of course.\n\nI thought that at first, too, but when I searched for bug reports with the given error messages, they all had to do with logical replication or logical decoding. That seems a bit fishy to me. If these bugs were coming from all over the system, why would that be so?\n\n> It's hard to say what was the cause, but the \"logic\" bugs are probably permanent, while the issues triggered by I/O probably disappear after a restart?\n\nIf you mean \"logic\" bugs like passing an invalid file descriptor to close(), then yes, those would be permanent, but I don't believe we have any bugs of that kind.\n\nThe consequences of I/O errors are not going to go away after restart. On the subscriber side, if logical replication has replayed less than a full transaction worth of changes without raising an error, the corruption will be permanent.\n\n> That being said, I agree this seems like an issue and we should not ignore I/O errors.\n\nI agree.\n\n> I'd bet other places using transient files (like sorting or hashagg spilling to disk) has the same issue, although in that case the impact is likely limited to a single query.\n\nI'm not so convinced. In most places, the result is ignored for close()-type operations only when the prior operation failed and we're closing the handle in preparation for raising an error. 
There are a small number of other places, such as pg_import_system_collations and perform_base_backup, which ignore the result from closing a handle, but those are reading from the handle, not writing to it, so the situation is not comparable.\n\nI believe the oversight in reorderbuffer.c really is a special case.\n\n> I wonder if sync before the close is an appropriate solution, though. It seems rather expensive, and those files are meant to be \"temporary\" (i.e. we don't keep them over restart). So maybe we could ensure the consistency is a cheaper way - perhaps tracking some sort of checksum for each file, or something like that?\n\nI'm open to suggestions of what to do, but thinking of these files as temporary may be what misleads developers to believe they don't have to be treated carefully. The code was first committed in 2014 and as far as I am aware nobody else has complained about this before. In some part that might be because CloseTemporaryFile() is less familiar than close() to most developers, and they may be assuming that it contains its own error handling and just don't realize that it returns an error code just like regular close() does.\n\nThe point of the reorder buffer is to sort the changes from a single transaction so that they can be replayed in order. The files used for the sorting are temporary, but there is nothing temporary about the failure to include all the changes in the files, as that will have permanent consequences for whoever replays them. If lucky, the attempt to replay the changes will abort because they don't make sense, but I've demonstrated to my own satisfaction that nothing prevents silent data loss if the failure to write changes happens to destroy a complete rather than partial change.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Thu, 15 Jul 2021 16:07:46 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: data corruption hazard in reorderbuffer.c"
},
{
"msg_contents": "\n\nOn 7/16/21 1:07 AM, Mark Dilger wrote:\n> \n> \n>> On Jul 15, 2021, at 3:32 PM, Tomas Vondra\n>> <tomas.vondra@enterprisedb.com> wrote:\n>> \n>> I think it's mostly futile to list all the possible issues this\n>> might have caused - if you skip arbitrary decoded changes, that can\n>> trigger pretty much any bug in reorder buffer. But those bugs can\n>> be triggered by various other issues, of course.\n> \n> I thought that at first, too, but when I searched for bug reports\n> with the given error messages, they all had to do with logical\n> replication or logical decoding. That seems a bit fishy to me. If\n> these bugs were coming from all over the system, why would that be\n> so?\n> \n\nNo, I'm not suggesting those bugs are coming from all over the system. \nThe theory is that there's a bug in logical decoding / reorder buffer, \nwith the same symptoms. So it'd affect systems with logical replication.\n\n>> It's hard to say what was the cause, but the \"logic\" bugs are\n>> probably permanent, while the issues triggered by I/O probably\n>> disappear after a restart?\n> \n> If you mean \"logic\" bugs like passing an invalid file descriptor to\n> close(), then yes, those would be permanent, but I don't believe we\n> have any bugs of that kind.\n> \n\nNo, by logic bugs I mean cases like failing to update the snapshot, \nlosing track of relfilenode changes, etc. due to failing to consider \nsome cases etc. As opposed to \"I/O errors\" where this is caused by \nexternal events.\n\n> The consequences of I/O errors are not going to go away after\n> restart. On the subscriber side, if logical replication has replayed\n> less than a full transaction worth of changes without raising an\n> error, the corruption will be permanent.\n> \n\nTrue, good point. I was thinking about the \"could not map relfilenode\" \nerror, which forces the decoding to restart, and on the retry it's \nunlikely to hit the same I/O error, so it'll succeed. 
But you're right \nthat if it reaches the subscriber and gets committed (e.g. missing a \nrow), it's permanent.\n\n>> That being said, I agree this seems like an issue and we should not\n>> ignore I/O errors.\n> \n> I agree.\n> \n>> I'd bet other places using transient files (like sorting or hashagg\n>> spilling to disk) has the same issue, although in that case the\n>> impact is likely limited to a single query.\n> \n> I'm not so convinced. In most places, the result is ignored for\n> close()-type operations only when the prior operation failed and\n> we're closing the handle in preparation for raising an error. There\n> are a small number of other places, such as\n> pg_import_system_collations and perform_base_backup, which ignore the\n> result from closing a handle, but those are reading from the handle,\n> not writing to it, so the situation is not comparable.\n> \n> I believe the oversight in reorderbuffer.c really is a special case.\n> \n\nHmm, you might be right. I have only briefly looked at buffile.c and \ntuplestore.c yesterday, and I wasn't sure about the error checking. But \nmaybe that works fine, now that I look at it, because we don't really \nuse the files after close().\n\n>> I wonder if sync before the close is an appropriate solution,\n>> though. It seems rather expensive, and those files are meant to be\n>> \"temporary\" (i.e. we don't keep them over restart). So maybe we\n>> could ensure the consistency is a cheaper way - perhaps tracking\n>> some sort of checksum for each file, or something like that?\n> \n> I'm open to suggestions of what to do, but thinking of these files as\n> temporary may be what misleads developers to believe they don't have\n> to be treated carefully. The code was first committed in 2014 and as\n> far as I am aware nobody else has complained about this before. 
In\n> some part that might be because CloseTemporaryFile() is less familiar\n> than close() to most developers, and they may be assuming that it\n> contains its own error handling and just don't realize that it\n> returns an error code just like regular close() does.\n> \n\nI don't think anyone thinks corruption of temporary files would not be \nan issue, but making them fully persistent (which is what fsync would \ndo) just to eliminate a piece is not lost seems like an overkill to me. \nAnd the file may get corrupted even with fsync(), so I'm wondering if \nthere's a way to ensure integrity without the fsync (so cheaper).\n\nThe scheme I was thinking about is a simple checksum for each BufFile. \nWhen writing a buffer to disk, update the crc32c checksum, and then \ncheck it after reading the file (and error-out if it disagrees).\n\n> The point of the reorder buffer is to sort the changes from a single\n> transaction so that they can be replayed in order. The files used\n> for the sorting are temporary, but there is nothing temporary about\n> the failure to include all the changes in the files, as that will\n> have permanent consequences for whoever replays them. If lucky, the\n> attempt to replay the changes will abort because they don't make\n> sense, but I've demonstrated to my own satisfaction that nothing\n> prevents silent data loss if the failure to write changes happens to\n> destroy a complete rather than partial change.\n> \n\nSure, I agree with all of that.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 16 Jul 2021 10:21:46 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: data corruption hazard in reorderbuffer.c"
}
] |
[
{
"msg_contents": "Hello,\n\nI found that using \"BEGIN ISOLATINO LEVEL SERIALIZABLE\" in a pipline with\nprepared statement makes pgbench abort.\n\n $ cat pipeline.sql \n \\startpipeline\n begin isolation level repeatable read;\n select 1;\n end;\n \\endpipeline\n\n $ pgbench -f pipeline.sql -M prepared -t 1\n pgbench (15devel)\n starting vacuum...end.\n pgbench: error: client 0 script 0 aborted in command 4 query 0: \n transaction type: pipeline.sql\n scaling factor: 1\n query mode: prepared\n number of clients: 1\n number of threads: 1\n number of transactions per client: 1\n number of transactions actually processed: 0/1\n pgbench: fatal: Run was aborted; the above results are incomplete.\n\nThe error that occured in the backend was\n\"ERROR: SET TRANSACTION ISOLATION LEVEL must be called before any query\".\n\nAfter investigating this, now I've got the cause as below. \n\n1. The commands in the script are executed in the order. First, pipeline\n mode starts at \\startpipeline.\n2. Parse messages for all SQL commands in the script are sent to the backend\n because it is first time to execute them.\n3. An implicit transaction starts, and this is not committed yet because Sync\n message is not sent at that time in pipeline mode.\n4. All prepared statements are sent to the backend.\n5. After processing \\endpipeline, Sync is issued and all sent commands are\n executed.\n6. However, the BEGIN doesn't start new transaction because the implicit\n transaction has already started. The error above occurs because the snapshot\n was already created before the BEGIN command.\n\nWe can also see the similar error when using \"BEGIN DEFERRABLE\". \n\nOne way to avoid these errors is to send Parse messages before pipeline mode\nstarts. I attached a patch to fix to prepare commands at starting of a script\ninstead of at the first execution of the command. 
\n\nOr, we can also avoid these errors by placing \\startpipeline after the BEGIN, \nso it might be enogh just to note in the documentation. \n\nActually, we also get an error just when there is another SQL command before the\nBEGIN in a pipelne, as below, regardless to using prepared statement or not,\nbecause this command cause an implicit transaction.\n\n \\startpipeline\n select 0;\n begin isolation level repeatable read;\n select 1;\n end;\n \\endpipeline\n\nI think it is hard to prevent this error from pgbench without analysing command\nstrings. Therefore, noting in the documentation that the first command in a pipeline\nstarts an implicit transaction might be useful for users.\n\n\nWhat do you think?\n\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>",
"msg_date": "Fri, 16 Jul 2021 15:30:13 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "pgbench: using prepared BEGIN statement in a pipeline could cause\n an error"
},
{
"msg_contents": "\nHello Yugo-san,\n\n> [...] One way to avoid these errors is to send Parse messages before \n> pipeline mode starts. I attached a patch to fix to prepare commands at \n> starting of a script instead of at the first execution of the command.\n\n> What do you think?\n\nISTM that moving prepare out of command execution is a good idea, so I'm \nin favor of this approach: the code is simpler and cleaner.\n\nISTM that a minor impact is that the preparation is not counted in the \ncommand performance statistics. I do not think that it is a problem, even \nif it would change detailed results under -C -r -M prepared.\n\nPatch applies & compiles cleanly, global & local make check ok. However \nthe issue is not tested. I think that the patch should add a tap test case \nfor the problem being addressed.\n\nI'd suggest to move the statement preparation call in the \nCSTATE_CHOOSE_SCRIPT case.\n\nIn comments: not yet -> needed.\n\n-- \nFabien.\n\n\n",
"msg_date": "Sat, 17 Jul 2021 07:03:01 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: pgbench: using prepared BEGIN statement in a pipeline could\n cause an error"
},
{
"msg_contents": "Hello Fabien,\n\nOn Sat, 17 Jul 2021 07:03:01 +0200 (CEST)\nFabien COELHO <coelho@cri.ensmp.fr> wrote:\n\n> \n> Hello Yugo-san,\n> \n> > [...] One way to avoid these errors is to send Parse messages before \n> > pipeline mode starts. I attached a patch to fix to prepare commands at \n> > starting of a script instead of at the first execution of the command.\n> \n> > What do you think?\n> \n> ISTM that moving prepare out of command execution is a good idea, so I'm \n> in favor of this approach: the code is simpler and cleaner.\n> \n> ISTM that a minor impact is that the preparation is not counted in the \n> command performance statistics. I do not think that it is a problem, even \n> if it would change detailed results under -C -r -M prepared.\n\nI agree with you. Currently, whether prepares are sent in pipeline mode or\nnot depends on whether the first SQL command is placed between \\startpipeline\nand \\endpipeline regardless whether other commands are executed in pipeline\nor not. ISTM, this behavior would be not intuitive for users. Therefore, \nI think preparing all statements not using pipeline mode is not problem for now.\n\nIf some users would like to send prepares in pipeline, I think it would be\nbetter to provide more simple and direct way. For example, we prepare statements\nin pipeline if the user use an option, or if the script has at least one\n\\startpipeline in their pipeline. Maybe, --pipeline option is useful for users\nwho want to use pipeline mode for all queries in scirpts including built-in ones.\nHowever, these features seems to be out of the patch proposed in this thread.\n\n> Patch applies & compiles cleanly, global & local make check ok. However \n> the issue is not tested. I think that the patch should add a tap test case \n> for the problem being addressed.\n\nOk. 
I'll add a tap test to confirm the error I found is avoidable.\n\n> I'd suggest to move the statement preparation call in the \n> CSTATE_CHOOSE_SCRIPT case.\n\nI thought so at first, but I noticed we cannot do it at least if we are\nusing -C because the connection may not be established in the\nCSTATE_CHOOSE_SCRIPT state. \n\n> In comments: not yet -> needed.\n\nThanks. I'll fix it.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Mon, 19 Jul 2021 10:51:36 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: pgbench: using prepared BEGIN statement in a pipeline could\n cause an error"
},
{
"msg_contents": "On Mon, 19 Jul 2021 10:51:36 +0900\nYugo NAGATA <nagata@sraoss.co.jp> wrote:\n\n> Hello Fabien,\n> \n> On Sat, 17 Jul 2021 07:03:01 +0200 (CEST)\n> Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> \n> > \n> > Hello Yugo-san,\n> > \n> > > [...] One way to avoid these errors is to send Parse messages before \n> > > pipeline mode starts. I attached a patch to fix to prepare commands at \n> > > starting of a script instead of at the first execution of the command.\n> > \n> > > What do you think?\n> > \n> > ISTM that moving prepare out of command execution is a good idea, so I'm \n> > in favor of this approach: the code is simpler and cleaner.\n> > \n> > ISTM that a minor impact is that the preparation is not counted in the \n> > command performance statistics. I do not think that it is a problem, even \n> > if it would change detailed results under -C -r -M prepared.\n> \n> I agree with you. Currently, whether prepares are sent in pipeline mode or\n> not depends on whether the first SQL command is placed between \\startpipeline\n> and \\endpipeline regardless whether other commands are executed in pipeline\n> or not. ISTM, this behavior would be not intuitive for users. Therefore, \n> I think preparing all statements not using pipeline mode is not problem for now.\n> \n> If some users would like to send prepares in pipeline, I think it would be\n> better to provide more simple and direct way. For example, we prepare statements\n> in pipeline if the user use an option, or if the script has at least one\n> \\startpipeline in their pipeline. Maybe, --pipeline option is useful for users\n> who want to use pipeline mode for all queries in scirpts including built-in ones.\n> However, these features seems to be out of the patch proposed in this thread.\n> \n> > Patch applies & compiles cleanly, global & local make check ok. However \n> > the issue is not tested. I think that the patch should add a tap test case \n> > for the problem being addressed.\n> \n> Ok. 
I'll add a tap test to confirm the error I found is avoidable.\n> \n> > I'd suggest to move the statement preparation call in the \n> > CSTATE_CHOOSE_SCRIPT case.\n> \n> I thought so at first, but I noticed we cannot do it at least if we are\n> using -C because the connection may not be established in the\n> CSTATE_CHOOSE_SCRIPT state. \n> \n> > In comments: not yet -> needed.\n> \n> Thanks. I'll fix it.\n\nI attached the updated patch v2, which includes a comment fix and a TAP test.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>",
"msg_date": "Wed, 21 Jul 2021 10:49:09 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: pgbench: using prepared BEGIN statement in a pipeline could\n cause an error"
},
{
"msg_contents": "> On 21 Jul 2021, at 03:49, Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n\n> I attached the updated patch v2, which includes a comment fix and a TAP test.\n\nThis patch fails the TAP test for pgbench:\n\n # Tests were run but no plan was declared and done_testing() was not seen.\n # Looks like your test exited with 25 just after 224.\n t/001_pgbench_with_server.pl ..\n Dubious, test returned 25 (wstat 6400, 0x1900)\n All 224 subtests passed\n t/002_pgbench_no_server.pl .... ok\n Test Summary Report\n -------------------\n t/001_pgbench_with_server.pl (Wstat: 6400 Tests: 224 Failed: 0)\n Non-zero exit status: 25\n Parse errors: No plan found in TAP output\n Files=2, Tests=426, 3 wallclock secs ( 0.04 usr 0.00 sys + 1.20 cusr 0.36 csys = 1.60 CPU)\n Result: FAIL\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Mon, 15 Nov 2021 14:13:32 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pgbench: using prepared BEGIN statement in a pipeline could cause\n an error"
},
{
"msg_contents": "Hello Daniel Gustafsson,\n\nOn Mon, 15 Nov 2021 14:13:32 +0100\nDaniel Gustafsson <daniel@yesql.se> wrote:\n\n> > On 21 Jul 2021, at 03:49, Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> \n> > I attached the updated patch v2, which includes a comment fix and a TAP test.\n> \n> This patch fails the TAP test for pgbench:\n\nThank you for pointing it out!\nI attached the updated patch.\n\nRegards,\nYugo Nagata\n\n> # Tests were run but no plan was declared and done_testing() was not seen.\n> # Looks like your test exited with 25 just after 224.\n> t/001_pgbench_with_server.pl ..\n> Dubious, test returned 25 (wstat 6400, 0x1900)\n> All 224 subtests passed\n> t/002_pgbench_no_server.pl .... ok\n> Test Summary Report\n> -------------------\n> t/001_pgbench_with_server.pl (Wstat: 6400 Tests: 224 Failed: 0)\n> Non-zero exit status: 25\n> Parse errors: No plan found in TAP output\n> Files=2, Tests=426, 3 wallclock secs ( 0.04 usr 0.00 sys + 1.20 cusr 0.36 csys = 1.60 CPU)\n> Result: FAIL\n> \n> --\n> Daniel Gustafsson\t\thttps://vmware.com/\n> \n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>",
"msg_date": "Tue, 16 Nov 2021 02:26:43 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: pgbench: using prepared BEGIN statement in a pipeline could\n cause an error"
},
{
"msg_contents": "At Tue, 16 Nov 2021 02:26:43 +0900, Yugo NAGATA <nagata@sraoss.co.jp> wrote in \n> Thank you for pointing it out!\n> I attached the updated patch.\n\nI think we want more elabolative comment for the new place of\npreparing as you mentioned in the first mail.\n\nAt Fri, 16 Jul 2021 15:30:13 +0900, Yugo NAGATA <nagata@sraoss.co.jp> wrote in \n> One way to avoid these errors is to send Parse messages before\n> pipeline mode starts.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 27 Jan 2022 17:50:25 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench: using prepared BEGIN statement in a pipeline could\n cause an error"
},
{
"msg_contents": "Hi Horiguchi-san,\n\nOn Thu, 27 Jan 2022 17:50:25 +0900 (JST)\nKyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n\n> At Tue, 16 Nov 2021 02:26:43 +0900, Yugo NAGATA <nagata@sraoss.co.jp> wrote in \n> > Thank you for pointing it out!\n> > I attached the updated patch.\n> \n> I think we want more elabolative comment for the new place of\n> preparing as you mentioned in the first mail.\n\nThank you for your suggestion.\n\nI added comments on the prepareCommands() call as in the updated patch.\n\nRegards,\nYugo Nagata\n\n\nYugo NAGATA <nagata@sraoss.co.jp>",
"msg_date": "Tue, 1 Mar 2022 15:55:59 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: pgbench: using prepared BEGIN statement in a pipeline could\n cause an error"
},
{
"msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n>> [...] One way to avoid these errors is to send Parse messages before \n>> pipeline mode starts. I attached a patch to fix to prepare commands at \n>> starting of a script instead of at the first execution of the command.\n\n> ISTM that moving prepare out of command execution is a good idea, so I'm \n> in favor of this approach: the code is simpler and cleaner.\n> ISTM that a minor impact is that the preparation is not counted in the \n> command performance statistics. I do not think that it is a problem, even \n> if it would change detailed results under -C -r -M prepared.\n\nI am not convinced this is a great idea. The current behavior is that\na statement is not prepared until it's about to be executed, and I think\nwe chose that deliberately to avoid semantic differences between prepared\nand not-prepared mode. For example, if a script looks like\n\nCREATE FUNCTION foo(...) ...;\nSELECT foo(...);\nDROP FUNCTION foo;\n\ntrying to prepare the SELECT in advance would lead to failure.\n\nWe could perhaps get away with preparing the commands within a pipeline\njust before we start to execute the pipeline, but it looks to me like\nthis patch tries to prepare the entire script in advance.\n\nBTW, the cfbot says the patch is failing to apply anyway ...\nI think it was sideswiped by 4a39f87ac.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 25 Mar 2022 16:19:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgbench: using prepared BEGIN statement in a pipeline could cause\n an error"
},
{
"msg_contents": "On Fri, 25 Mar 2022 16:19:54 -0400\nTom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Fabien COELHO <coelho@cri.ensmp.fr> writes:\n> >> [...] One way to avoid these errors is to send Parse messages before \n> >> pipeline mode starts. I attached a patch to fix to prepare commands at \n> >> starting of a script instead of at the first execution of the command.\n> \n> > ISTM that moving prepare out of command execution is a good idea, so I'm \n> > in favor of this approach: the code is simpler and cleaner.\n> > ISTM that a minor impact is that the preparation is not counted in the \n> > command performance statistics. I do not think that it is a problem, even \n> > if it would change detailed results under -C -r -M prepared.\n> \n> I am not convinced this is a great idea. The current behavior is that\n> a statement is not prepared until it's about to be executed, and I think\n> we chose that deliberately to avoid semantic differences between prepared\n> and not-prepared mode. For example, if a script looks like\n> \n> CREATE FUNCTION foo(...) ...;\n> SELECT foo(...);\n> DROP FUNCTION foo;\n> \n> trying to prepare the SELECT in advance would lead to failure.\n>\n> We could perhaps get away with preparing the commands within a pipeline\n> just before we start to execute the pipeline, but it looks to me like\n> this patch tries to prepare the entire script in advance.\n> \nWell, the semantic differences is already in the current behavior.\nCurrently, pgbench fails to execute the above script in prepared mode\nbecause it prepares the entire script in advance just before the first\ncommand execution. This patch does not change the semantic.\n\n> BTW, the cfbot says the patch is failing to apply anyway ...\n> I think it was sideswiped by 4a39f87ac.\n\nI attached the rebased patch.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>",
"msg_date": "Mon, 28 Mar 2022 12:33:16 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: pgbench: using prepared BEGIN statement in a pipeline could\n cause an error"
},
{
"msg_contents": "On Mon, Mar 28, 2022 at 8:35 AM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n\n> On Fri, 25 Mar 2022 16:19:54 -0400\n> Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> > Fabien COELHO <coelho@cri.ensmp.fr> writes:\n> > >> [...] One way to avoid these errors is to send Parse messages before\n> > >> pipeline mode starts. I attached a patch to fix to prepare commands\n> at\n> > >> starting of a script instead of at the first execution of the command.\n> >\n> > > ISTM that moving prepare out of command execution is a good idea, so\n> I'm\n> > > in favor of this approach: the code is simpler and cleaner.\n> > > ISTM that a minor impact is that the preparation is not counted in the\n> > > command performance statistics. I do not think that it is a problem,\n> even\n> > > if it would change detailed results under -C -r -M prepared.\n> >\n> > I am not convinced this is a great idea. The current behavior is that\n> > a statement is not prepared until it's about to be executed, and I think\n> > we chose that deliberately to avoid semantic differences between prepared\n> > and not-prepared mode. For example, if a script looks like\n> >\n> > CREATE FUNCTION foo(...) ...;\n> > SELECT foo(...);\n> > DROP FUNCTION foo;\n> >\n> > trying to prepare the SELECT in advance would lead to failure.\n> >\n> > We could perhaps get away with preparing the commands within a pipeline\n> > just before we start to execute the pipeline, but it looks to me like\n> > this patch tries to prepare the entire script in advance.\n> >\n> Well, the semantic differences is already in the current behavior.\n> Currently, pgbench fails to execute the above script in prepared mode\n> because it prepares the entire script in advance just before the first\n> command execution. 
This patch does not change the semantic.\n>\n> > BTW, the cfbot says the patch is failing to apply anyway ...\n> > I think it was sideswiped by 4a39f87ac.\n>\n> I attached the rebased patch.\n>\n> Regards,\n> Yugo Nagata\n>\n> --\n> Yugo NAGATA <nagata@sraoss.co.jp>\n>\n\nHi Kyotaro Horiguchi, Fabien Coelho, Daniel Gustafsson,\n\nSince you haven't had time to write a review the last many days, the author\nreplied\nwith a rebased patch for a long time and never heard. We've taken your name\noff the reviewer list for this patch. Of course, you are still welcome to\nreview it if you can\nfind the time. We're removing your name so that other reviewers know the\npatch still needs\nattention. We understand that day jobs and other things get in the way of\ndoing patch\nreviews when you want to, so please come back and review a patch or two\nlater when you\nhave more time.\n\n-- \nIbrar Ahmed\n\nOn Mon, Mar 28, 2022 at 8:35 AM Yugo NAGATA <nagata@sraoss.co.jp> wrote:On Fri, 25 Mar 2022 16:19:54 -0400\nTom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Fabien COELHO <coelho@cri.ensmp.fr> writes:\n> >> [...] One way to avoid these errors is to send Parse messages before \n> >> pipeline mode starts. I attached a patch to fix to prepare commands at \n> >> starting of a script instead of at the first execution of the command.\n> \n> > ISTM that moving prepare out of command execution is a good idea, so I'm \n> > in favor of this approach: the code is simpler and cleaner.\n> > ISTM that a minor impact is that the preparation is not counted in the \n> > command performance statistics. I do not think that it is a problem, even \n> > if it would change detailed results under -C -r -M prepared.\n> \n> I am not convinced this is a great idea. The current behavior is that\n> a statement is not prepared until it's about to be executed, and I think\n> we chose that deliberately to avoid semantic differences between prepared\n> and not-prepared mode. 
For example, if a script looks like\n> \n> CREATE FUNCTION foo(...) ...;\n> SELECT foo(...);\n> DROP FUNCTION foo;\n> \n> trying to prepare the SELECT in advance would lead to failure.\n>\n> We could perhaps get away with preparing the commands within a pipeline\n> just before we start to execute the pipeline, but it looks to me like\n> this patch tries to prepare the entire script in advance.\n> \nWell, the semantic differences is already in the current behavior.\nCurrently, pgbench fails to execute the above script in prepared mode\nbecause it prepares the entire script in advance just before the first\ncommand execution. This patch does not change the semantic.\n\n> BTW, the cfbot says the patch is failing to apply anyway ...\n> I think it was sideswiped by 4a39f87ac.\n\nI attached the rebased patch.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\nHi Kyotaro Horiguchi, Fabien Coelho, Daniel Gustafsson,Since you haven't had time to write a review the last many days, the author repliedwith a rebased patch for a long time and never heard. We've taken your nameoff the reviewer list for this patch. Of course, you are still welcome to review it if you canfind the time. We're removing your name so that other reviewers know the patch still needsattention. We understand that day jobs and other things get in the way of doing patchreviews when you want to, so please come back and review a patch or two later when youhave more time.-- Ibrar Ahmed",
"msg_date": "Sat, 3 Sep 2022 10:36:37 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench: using prepared BEGIN statement in a pipeline could cause\n an error"
},
{
"msg_contents": "Hi,\n\nOn Sat, Sep 03, 2022 at 10:36:37AM +0500, Ibrar Ahmed wrote:\n>\n> Hi Kyotaro Horiguchi, Fabien Coelho, Daniel Gustafsson,\n>\n> Since you haven't had time to write a review the last many days, the author\n> replied\n> with a rebased patch for a long time and never heard. We've taken your name\n> off the reviewer list for this patch. Of course, you are still welcome to\n> review it if you can\n> find the time. We're removing your name so that other reviewers know the\n> patch still needs\n> attention. We understand that day jobs and other things get in the way of\n> doing patch\n> reviews when you want to, so please come back and review a patch or two\n> later when you\n> have more time.\n\nI thought that we decided not to remove assigned reviewers from a CF entry,\neven if they didn't reply recently? See the discussion around\nhttps://www.postgresql.org/message-id/CA%2BTgmoZSBNhX0zCkG5T5KiQize9Aq4%2Bec%2BuqLcfBhm_%2B12MbQA%40mail.gmail.com\n\n\n",
"msg_date": "Sat, 3 Sep 2022 15:09:03 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench: using prepared BEGIN statement in a pipeline could\n cause an error"
},
{
"msg_contents": "On Sat, Sep 3, 2022 at 12:09 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> Hi,\n>\n> On Sat, Sep 03, 2022 at 10:36:37AM +0500, Ibrar Ahmed wrote:\n> >\n> > Hi Kyotaro Horiguchi, Fabien Coelho, Daniel Gustafsson,\n> >\n> > Since you haven't had time to write a review the last many days, the\n> author\n> > replied\n> > with a rebased patch for a long time and never heard. We've taken your\n> name\n> > off the reviewer list for this patch. Of course, you are still welcome to\n> > review it if you can\n> > find the time. We're removing your name so that other reviewers know the\n> > patch still needs\n> > attention. We understand that day jobs and other things get in the way of\n> > doing patch\n> > reviews when you want to, so please come back and review a patch or two\n> > later when you\n> > have more time.\n>\n> I thought that we decided not to remove assigned reviewers from a CF entry,\n> even if they didn't reply recently? See the discussion around\n>\n> https://www.postgresql.org/message-id/CA%2BTgmoZSBNhX0zCkG5T5KiQize9Aq4%2Bec%2BuqLcfBhm_%2B12MbQA%40mail.gmail.com\n>\n\nAh, ok, thanks for the clarification. I will add them back.\n\n@Jacob Champion, we need to update the CommitFest Checklist [1] document\naccordingly.\n\n\n\n\n\n*\"Reviewer Clear [reviewer name]:*\n\n* Since you haven't had time to write a review of [patch] in the last 5\ndays, we've taken your name off the reviewer list for this patch.\"*\n\n\n[1] https://wiki.postgresql.org/wiki/CommitFest_Checklist\n\n-- \nIbrar Ahmed\n\nOn Sat, Sep 3, 2022 at 12:09 PM Julien Rouhaud <rjuju123@gmail.com> wrote:Hi,\n\nOn Sat, Sep 03, 2022 at 10:36:37AM +0500, Ibrar Ahmed wrote:\n>\n> Hi Kyotaro Horiguchi, Fabien Coelho, Daniel Gustafsson,\n>\n> Since you haven't had time to write a review the last many days, the author\n> replied\n> with a rebased patch for a long time and never heard. We've taken your name\n> off the reviewer list for this patch. 
Of course, you are still welcome to\n> review it if you can\n> find the time. We're removing your name so that other reviewers know the\n> patch still needs\n> attention. We understand that day jobs and other things get in the way of\n> doing patch\n> reviews when you want to, so please come back and review a patch or two\n> later when you\n> have more time.\n\nI thought that we decided not to remove assigned reviewers from a CF entry,\neven if they didn't reply recently? See the discussion around\nhttps://www.postgresql.org/message-id/CA%2BTgmoZSBNhX0zCkG5T5KiQize9Aq4%2Bec%2BuqLcfBhm_%2B12MbQA%40mail.gmail.com\nAh, ok, thanks for the clarification. I will add them back. @Jacob Champion, we need to update the CommitFest Checklist [1] document accordingly.\"Reviewer Clear [reviewer name]: Since you haven't had time to write a review of [patch] in the last 5 days, we've taken your name off the reviewer list for this patch.\"[1] https://wiki.postgresql.org/wiki/CommitFest_Checklist-- Ibrar Ahmed",
"msg_date": "Sun, 4 Sep 2022 12:56:58 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench: using prepared BEGIN statement in a pipeline could cause\n an error"
},
{
"msg_contents": "On 2022-Mar-28, Yugo NAGATA wrote:\n\n> On Fri, 25 Mar 2022 16:19:54 -0400\n> Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> > I am not convinced this is a great idea. The current behavior is that\n> > a statement is not prepared until it's about to be executed, and I think\n> > we chose that deliberately to avoid semantic differences between prepared\n> > and not-prepared mode. For example, if a script looks like\n> > \n> > CREATE FUNCTION foo(...) ...;\n> > SELECT foo(...);\n> > DROP FUNCTION foo;\n> > \n> > trying to prepare the SELECT in advance would lead to failure.\n> >\n> > We could perhaps get away with preparing the commands within a pipeline\n> > just before we start to execute the pipeline, but it looks to me like\n> > this patch tries to prepare the entire script in advance.\n\nMaybe it would work to have one extra boolean in struct Command, indicating\nthat the i-th command in the script is inside a pipeline; in -M\nprepared, issue PREPARE for each command marked with that flag ahead of\ntime, and for all other commands, do as today. That way, we don't\nchange behavior for anything except those commands that need the change.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Digital and video cameras have this adjustment and film cameras don't for the\nsame reason dogs and cats lick themselves: because they can.\" (Ken Rockwell)\n\n\n",
"msg_date": "Mon, 12 Sep 2022 17:03:43 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: pgbench: using prepared BEGIN statement in a pipeline could\n cause an error"
},
{
"msg_contents": "Hi,\n\nOn Mon, 12 Sep 2022 17:03:43 +0200\nAlvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> On 2022-Mar-28, Yugo NAGATA wrote:\n> \n> > On Fri, 25 Mar 2022 16:19:54 -0400\n> > Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> > > I am not convinced this is a great idea. The current behavior is that\n> > > a statement is not prepared until it's about to be executed, and I think\n> > > we chose that deliberately to avoid semantic differences between prepared\n> > > and not-prepared mode. For example, if a script looks like\n> > > \n> > > CREATE FUNCTION foo(...) ...;\n> > > SELECT foo(...);\n> > > DROP FUNCTION foo;\n> > > \n> > > trying to prepare the SELECT in advance would lead to failure.\n> > >\n> > > We could perhaps get away with preparing the commands within a pipeline\n> > > just before we start to execute the pipeline, but it looks to me like\n> > > this patch tries to prepare the entire script in advance.\n> \n> Maybe it would work to have one extra boolean in struct Command, indicating\n> that the i-th command in the script is inside a pipeline; in -M\n> prepared, issue PREPARE for each command marked with that flag ahead of\n> time, and for all other commands, do as today. That way, we don't\n> change behavior for anything except those commands that need the change.\n\nWell, I still don't understand why we need to prepare only \"the\ncommands within a pipeline\" before starting pipeline. In the current\nbehavior, the entire script is prepared in advance just before executing\nthe first SQL command in the script, instead of preparing each command\none by one. This patch also prepare the entire script in advance, so\nthere is no behavioural change in this sense.\n\nHowever, there are a few behavioural changes. One is that the preparation\nis not counted in the command performance statistics as Fabien mentioned.\nAnother is that all meta-commands including \\shell and \\sleep etc. 
are\nexecuted before the preparation.\n\nTo reduce impact of these changes, I updated the patch to prepare the\ncommands just before executing the first SQL command or \\startpipeline\nmeta-command instead of at the beginning of the script. \n\nRegards,\nYugo Nagata\n\n> \n> -- \n> Álvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n> \"Digital and video cameras have this adjustment and film cameras don't for the\n> same reason dogs and cats lick themselves: because they can.\" (Ken Rockwell)\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>",
"msg_date": "Fri, 30 Sep 2022 10:07:43 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: pgbench: using prepared BEGIN statement in a pipeline could\n cause an error"
},
{
"msg_contents": "I'm writing my own patch for this problem. While playing around with\nit, I noticed this:\n\nstruct Command {\n PQExpBufferData lines; /* 0 24 */\n char * first_line; /* 24 8 */\n int type; /* 32 4 */\n MetaCommand meta; /* 36 4 */\n int argc; /* 40 4 */\n\n /* XXX 4 bytes hole, try to pack */\n\n char * argv[256]; /* 48 2048 */\n /* --- cacheline 32 boundary (2048 bytes) was 48 bytes ago --- */\n char * varprefix; /* 2096 8 */\n PgBenchExpr * expr; /* 2104 8 */\n /* --- cacheline 33 boundary (2112 bytes) --- */\n SimpleStats stats; /* 2112 40 */\n int64 retries; /* 2152 8 */\n int64 failures; /* 2160 8 */\n\n /* size: 2168, cachelines: 34, members: 11 */\n /* sum members: 2164, holes: 1, sum holes: 4 */\n /* last cacheline: 56 bytes */\n};\n\nNot great. I suppose this makes pgbench slower than it needs to be.\nCan we do better?\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Nunca se desea ardientemente lo que solo se desea por razón\" (F. Alexandre)\n\n\n",
"msg_date": "Mon, 6 Feb 2023 11:11:32 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: pgbench: using prepared BEGIN statement in a pipeline could\n cause an error"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> I'm writing my own patch for this problem. While playing around with\n> it, I noticed this:\n\n> struct Command {\n> /* size: 2168, cachelines: 34, members: 11 */\n> /* sum members: 2164, holes: 1, sum holes: 4 */\n> /* last cacheline: 56 bytes */\n> };\n\nI think the original intent was for argv[] to be at the end,\nwhich fell victim to ye olde add-at-the-end antipattern.\nCache-friendliness-wise, putting it back to the end would\nlikely be enough. But turning it into a variable-size array\nwould be better from a functionality standpoint.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 06 Feb 2023 10:23:01 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgbench: using prepared BEGIN statement in a pipeline could cause\n an error"
},
{
"msg_contents": "On 2022-Sep-30, Yugo NAGATA wrote:\n\n> Well, I still don't understand why we need to prepare only \"the\n> commands within a pipeline\" before starting pipeline. In the current\n> behavior, the entire script is prepared in advance just before executing\n> the first SQL command in the script, instead of preparing each command\n> one by one. This patch also prepare the entire script in advance, so\n> there is no behavioural change in this sense.\n> \n> However, there are a few behavioural changes. One is that the preparation\n> is not counted in the command performance statistics as Fabien mentioned.\n> Another is that all meta-commands including \\shell and \\sleep etc. are\n> executed before the preparation.\n> \n> To reduce impact of these changes, I updated the patch to prepare the\n> commands just before executing the first SQL command or \\startpipeline\n> meta-command instead of at the beginning of the script. \n\nI propose instead the following: each command is prepared just before\nit's executed, as previously, and if we see a \\startpipeline, then we\nprepare all commands starting with the one just after, and until the\n\\endpipeline.\n\nI didn't test additional cases other than the one you submitted.\n\nTesting this I noticed that pg_log_debug et al don't support\nmultithreading very well -- the lines are interspersed.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 8 Feb 2023 13:09:28 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: pgbench: using prepared BEGIN statement in a pipeline could\n cause an error"
},
{
"msg_contents": "On 2023-Feb-08, Alvaro Herrera wrote:\n\n> I propose instead the following: each command is prepared just before\n> it's executed, as previously, and if we see a \\startpipeline, then we\n> prepare all commands starting with the one just after, and until the\n> \\endpipeline.\n\nHere's the patch.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/",
"msg_date": "Wed, 8 Feb 2023 13:10:40 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: pgbench: using prepared BEGIN statement in a pipeline could\n cause an error"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-08 13:10:40 +0100, Alvaro Herrera wrote:\n> On 2023-Feb-08, Alvaro Herrera wrote:\n> \n> > I propose instead the following: each command is prepared just before\n> > it's executed, as previously, and if we see a \\startpipeline, then we\n> > prepare all commands starting with the one just after, and until the\n> > \\endpipeline.\n> \n> Here's the patch.\n\nThere's something wrong with the patch, it reliably fails with core dumps:\nhttps://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest%2F42%2F3260\n\nExample crash:\nhttps://api.cirrus-ci.com/v1/task/4922406553255936/logs/cores.log\nhttps://api.cirrus-ci.com/v1/artifact/task/6611256413519872/crashlog/crashlog-pgbench.EXE_0750_2023-02-13_14-07-06-189.txt\n\nAndres\n\n\n",
"msg_date": "Mon, 13 Feb 2023 16:20:04 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pgbench: using prepared BEGIN statement in a pipeline could\n cause an error"
},
{
"msg_contents": "On 2023-Feb-13, Andres Freund wrote:\n\n> There's something wrong with the patch, it reliably fails with core dumps:\n> https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest%2F42%2F3260\n\nI think this would happen on machines where sizeof(bool) is not 1 (which\nmine is evidently not). Fixed.\n\nIn addition, there was the problem that the psprintf() to generate the\ncommand name would race against each other if you had multiple threads.\nI changed the code so that the name to prepare each statement under is\ngenerated when the Command struct is first initialized, which occurs\nbefore the threads are started. One small issue is that now we use a\nsingle counter for all commands of all scripts, rather than a\nscript-local counter. This doesn't seem at all important.\n\n\nI did realize that Nagata-san was right that we've always prepared the\nwhole script in advance; that behavior was there already in commit\n49639a7b2c52 that introduced -Mprepared. We've never done each command\njust before executing it.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\nY una voz del caos me habló y me dijo\n\"Sonríe y sé feliz, podría ser peor\".\nY sonreí. Y fui feliz.\nY fue peor.",
"msg_date": "Fri, 17 Feb 2023 21:35:12 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: pgbench: using prepared BEGIN statement in a pipeline could\n cause an error"
},
{
"msg_contents": "Found one last problem: if you do \"-f foobar.sql -M prepared\" in that\norder, then the prepare fails because the statement names would not be\nassigned when the file is parsed. This coding only supported doing\n\"-M prepared -f foobar.sql\", which funnily enough is the only one that\nPostgreSQL/Cluster.pm->pgbench() supports. So I moved the prepared\nstatement name generation to the postprocess step.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"La rebeldía es la virtud original del hombre\" (Arthur Schopenhauer)",
"msg_date": "Mon, 20 Feb 2023 21:17:47 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: pgbench: using prepared BEGIN statement in a pipeline could\n cause an error"
},
{
"msg_contents": "On 2023-Feb-20, Alvaro Herrera wrote:\n\n> Found one last problem: if you do \"-f foobar.sql -M prepared\" in that\n> order, then the prepare fails because the statement names would not be\n> assigned when the file is parsed. This coding only supported doing\n> \"-M prepared -f foobar.sql\", which funnily enough is the only one that\n> PostgreSQL/Cluster.pm->pgbench() supports. So I moved the prepared\n> statement name generation to the postprocess step.\n\nPushed to all three branches -- thanks, Nagata-san, for diagnosing the\nissue.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"How strange it is to find the words \"Perl\" and \"saner\" in such close\nproximity, with no apparent sense of irony. I doubt that Larry himself\ncould have managed it.\" (ncm, http://lwn.net/Articles/174769/)\n\n\n",
"msg_date": "Tue, 21 Feb 2023 17:32:49 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: pgbench: using prepared BEGIN statement in a pipeline could\n cause an error"
},
{
"msg_contents": "Hello Alvaro,\n\n21.02.2023 19:32, Alvaro Herrera wrote:\n> On 2023-Feb-20, Alvaro Herrera wrote:\n>\n>> Found one last problem: if you do \"-f foobar.sql -M prepared\" in that\n>> order, then the prepare fails because the statement names would not be\n>> assigned when the file is parsed. This coding only supported doing\n>> \"-M prepared -f foobar.sql\", which funnily enough is the only one that\n>> PostgreSQL/Cluster.pm->pgbench() supports. So I moved the prepared\n>> statement name generation to the postprocess step.\n> Pushed to all three branches -- thanks, Nagata-san, for diagnosing the\n> issue.\n\nStarting from 038f586d5, the following script:\necho \"\n\\startpipeline\n\\endpipeline\n\" >test.sql\npgbench -n -M prepared -f test.sql\n\nleads to the pgbench's segfault:\nCore was generated by `pgbench -n -M prepared -f test.sql'.\nProgram terminated with signal SIGSEGV, Segmentation fault.\n\nwarning: Section `.reg-xstate/2327306' in core file too small.\n#0 0x0000555a402546b4 in prepareCommandsInPipeline (st=st@entry=0x555a409d62e0) at pgbench.c:3130\n3130 st->prepared[st->use_file][st->command] = true;\n(gdb) bt\n#0 0x0000555a402546b4 in prepareCommandsInPipeline (st=st@entry=0x555a409d62e0) at pgbench.c:3130\n#1 0x0000555a40257fca in executeMetaCommand (st=st@entry=0x555a409d62e0, now=now@entry=0x7ffdd46eff58)\n at pgbench.c:4413\n#2 0x0000555a402585ce in advanceConnectionState (thread=thread@entry=0x555a409d6580, st=st@entry=0x555a409d62e0,\n agg=agg@entry=0x7ffdd46f0090) at pgbench.c:3807\n#3 0x0000555a40259564 in threadRun (arg=arg@entry=0x555a409d6580) at pgbench.c:7535\n#4 0x0000555a4025ca40 in main (argc=<optimized out>, argv=<optimized out>) at pgbench.c:7253\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Sat, 20 May 2023 19:00:01 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench: using prepared BEGIN statement in a pipeline could cause\n an error"
},
{
"msg_contents": "On 2023-May-20, Alexander Lakhin wrote:\n\n> Starting from 038f586d5, the following script:\n> echo \"\n> \\startpipeline\n> \\endpipeline\n> \" >test.sql\n> pgbench -n -M prepared -f test.sql\n> \n> leads to the pgbench's segfault:\n\nHah, yeah, that's because an empty pipeline never calls the code to\nallocate the flag array. Here's the trivial fix.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"El hombre nunca sabe de lo que es capaz hasta que lo intenta\" (C. Dickens)",
"msg_date": "Mon, 22 May 2023 13:49:47 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: pgbench: using prepared BEGIN statement in a pipeline could\n cause an error"
},
{
"msg_contents": "On 2023-May-22, Alvaro Herrera wrote:\n\n> Hah, yeah, that's because an empty pipeline never calls the code to\n> allocate the flag array. Here's the trivial fix.\n\nPushed to both branches, thanks for the report.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 25 May 2023 12:39:19 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: pgbench: using prepared BEGIN statement in a pipeline could\n cause an error"
}
] |
[
{
"msg_contents": "While I looked into a .map file in src/backend/utils/mb/Unicode, I\nnoticed a typo in it.\n\n > static const pg_mb_radix_tree euc_jp_from_unicode_tree =\n > {\n > ..\n > 0x0000, /* offset of table for 1-byte inputs */\n > ...\n > 0x0040, /* offset of table for 2-byte inputs */\n > ...\n > 0x02c3, /* offset of table for 3-byte inputs */\n > ...\n!> 0x0000, /* offset of table for 3-byte inputs */\n > 0x00, /* b4_1_lower */\n > 0x00, /* b4_1_upper */\n > ...\n > };\n\nYeah, the line above prefixed by '!' is apparently a typo of \"4-byte\ninputs\", which comes from a typo in convutils.pm.\n\nFortunately, \"make maintainer-clean; make all\" in the directory results\nin no other differences, so we can apply the attached patch to fix all\nthe propagated typos.\n\nI don't think a backpatch is needed.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Fri, 16 Jul 2021 17:02:09 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "A (but copied many) typo of char-mapping tables"
},
{
"msg_contents": "On 16.07.21 10:02, Kyotaro Horiguchi wrote:\n> While I looked into a .map file in src/backend/utils/mb/Unicode, I\n> noticed a typo in it.\n> \n> > static const pg_mb_radix_tree euc_jp_from_unicode_tree =\n> > {\n> > ..\n> > 0x0000, /* offset of table for 1-byte inputs */\n> > ...\n> > 0x0040, /* offset of table for 2-byte inputs */\n> > ...\n> > 0x02c3, /* offset of table for 3-byte inputs */\n> > ...\n> !> 0x0000, /* offset of table for 3-byte inputs */\n> > 0x00, /* b4_1_lower */\n> > 0x00, /* b4_1_upper */\n> > ...\n> > };\n> \n> Yeah, the line above prefixed by '!' is apparently a typo of \"4-byte\n> inputs\", which comes from a typo in convutils.pm.\n\nfixed, thanks\n\n\n",
"msg_date": "Thu, 22 Jul 2021 09:45:47 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: A (but copied many) typo of char-mapping tables"
}
] |
[
{
"msg_contents": "\nOn Fri, 16 Jul 2021 at 16:26, Japin Li <japinli@hotmail.com> wrote:\n> Hi, hackers\n>\n> When I fix a bug about ALTER SUBSCRIPTION ... SET (slot_name) [1], Ranier Vilela\n> finds that ReplicationSlotValidateName() has redundant strlen() call, Since it's\n> not related to that problem, so I start a new thread to discuss it.\n>\n> [1] - https://www.postgresql.org/message-id/MEYP282MB1669CBD98E721C77CA696499B61A9%40MEYP282MB1669.AUSP282.PROD.OUTLOOK.COM\n\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Fri, 16 Jul 2021 16:35:25 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Remove redundant strlen call in ReplicationSlotValidateName"
},
{
"msg_contents": "On Fri, 16 Jul 2021 at 20:35, Japin Li <japinli@hotmail.com> wrote:\n> > When I fix a bug about ALTER SUBSCRIPTION ... SET (slot_name) [1], Ranier Vilela\n> > finds that ReplicationSlotValidateName() has redundant strlen() call, Since it's\n> > not related to that problem, so I start a new thread to discuss it.\n\nI think this is a waste of time. The first strlen() call is just\nchecking for an empty string. I imagine all compilers would just\noptimise that to checking if the first char is '\\0';\n\nhttps://godbolt.org/z/q58EGYMfM\n\nDavid\n\n\n",
"msg_date": "Fri, 16 Jul 2021 22:05:29 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove redundant strlen call in ReplicationSlotValidateName"
},
{
"msg_contents": "\nOn Fri, 16 Jul 2021 at 18:05, David Rowley <dgrowleyml@gmail.com> wrote:\n> On Fri, 16 Jul 2021 at 20:35, Japin Li <japinli@hotmail.com> wrote:\n>> > When I fix a bug about ALTER SUBSCRIPTION ... SET (slot_name) [1], Ranier Vilela\n>> > finds that ReplicationSlotValidateName() has redundant strlen() call, Since it's\n>> > not related to that problem, so I start a new thread to discuss it.\n>\n> I think this is a waste of time. The first strlen() call is just\n> checking for an empty string. I imagine all compilers would just\n> optimise that to checking if the first char is '\\0';\n>\n> https://godbolt.org/z/q58EGYMfM\n>\n\nThanks for your review; this tool is amazing. The two ways of writing it\nmight compile differently on some compilers. As Amit Kapila said, it might\nnot help reduce the overhead.\n\nhttps://godbolt.org/z/j5on6Khxb\n\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Fri, 16 Jul 2021 18:42:38 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Remove redundant strlen call in ReplicationSlotValidateName"
},
{
"msg_contents": "On Fri, Jul 16, 2021 at 7:05 AM David Rowley <dgrowleyml@gmail.com>\nwrote:\n\n> On Fri, 16 Jul 2021 at 20:35, Japin Li <japinli@hotmail.com> wrote:\n> > > When I fix a bug about ALTER SUBSCRIPTION ... SET (slot_name) [1],\n> Ranier Vilela\n> > > finds that ReplicationSlotValidateName() has redundant strlen() call,\n> Since it's\n> > > not related to that problem, so I start a new thread to discuss it.\n>\n> I think this is a waste of time. The first strlen() call is just\n> checking for an empty string. I imagine all compilers would just\n> optimise that to checking if the first char is '\\0';\n>\nI think with very simple functions, the compiler can do the job.\n\nBut it's not always like that, I think.\n\nhttps://godbolt.org/z/1jdW3zT58\n\nWith gcc 11, I can see clearly different ASM.\n\nstrlen1(char const*):\n sub rsp, 8\n cmp BYTE PTR [rdi], 0\n je .L8\n call strlen\n mov r8, rax\n xor eax, eax\n cmp r8, 64\n ja .L9\n\nstrlen2(char const*):\n sub rsp, 8\n call strlen\n test eax, eax\n je .L15\n xor r8d, r8d\n cmp eax, 64\n jg .L16\n\nTo me, strlen2's ASM is much more compact.\nAnd since some functions using strlen are on hot paths, it's worth it.\n\nregards,\nRanier Vilela",
"msg_date": "Fri, 16 Jul 2021 07:55:39 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove redundant strlen call in ReplicationSlotValidateName"
},
{
"msg_contents": "On 2021-Jul-16, David Rowley wrote:\n\n> On Fri, 16 Jul 2021 at 20:35, Japin Li <japinli@hotmail.com> wrote:\n> > > When I fix a bug about ALTER SUBSCRIPTION ... SET (slot_name) [1], Ranier Vilela\n> > > finds that ReplicationSlotValidateName() has redundant strlen() call, Since it's\n> > > not related to that problem, so I start a new thread to discuss it.\n> \n> I think this is a waste of time. The first strlen() call is just\n> checking for an empty string. I imagine all compilers would just\n> optimise that to checking if the first char is '\\0';\n\nI could find the following idioms\n\n95 times: var[0] == '\\0'\n146 times: *var == '\\0'\n35 times: strlen(var) == 0\n\nResp.\ngit grep \"[a-zA-Z_]*\\[0\\] == '\\\\\\\\0\"\ngit grep \"\\*[a-zA-Z_]* == '\\\\\\\\0\"\ngit grep \"strlen([^)]*) == 0\"\n\nSee https://postgr.es/m/13847.1587332283@sss.pgh.pa.us about replacing\nstrlen with a check on first byte being zero. So still not Ranier's\npatch, but rather the attached. I doubt this change is worth committing\non its own, though, since performance-wise it doesn't matter at all; if\nsomebody were to make it so that all \"strlen(foo) == 0\" occurrences were\nchanged to use the test on byte 0, that could be said to be establishing\na consistent style, which might be more palatable.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Just treat us the way you want to be treated + some extra allowance\n for ignorance.\" (Michael Brusser)",
"msg_date": "Fri, 16 Jul 2021 11:37:34 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Remove redundant strlen call in ReplicationSlotValidateName"
},
{
"msg_contents": "On Fri, Jul 16, 2021 at 12:37 PM Alvaro Herrera <\nalvherre@alvh.no-ip.org> wrote:\n\n> On 2021-Jul-16, David Rowley wrote:\n>\n> > On Fri, 16 Jul 2021 at 20:35, Japin Li <japinli@hotmail.com> wrote:\n> > > > When I fix a bug about ALTER SUBSCRIPTION ... SET (slot_name) [1],\n> Ranier Vilela\n> > > > finds that ReplicationSlotValidateName() has redundant strlen()\n> call, Since it's\n> > > > not related to that problem, so I start a new thread to discuss it.\n> >\n> > I think this is a waste of time. The first strlen() call is just\n> > checking for an empty string. I imagine all compilers would just\n> > optimise that to checking if the first char is '\\0';\n>\n> I could find the following idioms\n>\n> 95 times: var[0] == '\\0'\n> 146 times: *var == '\\0'\n> 35 times: strlen(var) == 0\n>\n> Resp.\n> git grep \"[a-zA-Z_]*\\[0\\] == '\\\\\\\\0\"\n> git grep \"\\*[a-zA-Z_]* == '\\\\\\\\0\"\n> git grep \"strlen([^)]*) == 0\"\n>\n> See https://postgr.es/m/13847.1587332283@sss.pgh.pa.us about replacing\n> strlen with a check on first byte being zero. So still not Ranier's\n> patch, but rather the attached. I doubt this change is worth committing\n> on its own, though, since performance-wise it doesn't matter at all; if\n> somebody were to make it so that all \"strlen(foo) == 0\" occurrences were\n> changed to use the test on byte 0, that could be said to be establishing\n> a consistent style, which might be more palatable.\n>\nYeah, it can be considered a refactoring.\n\nIMHO, nothing in C is free.\nI do not think that this will improve 1% generally, but some functions can\ngain.\n\nAnother example I can mention is this little Lua code, in which I made\ncomparisons between the generated asms some time ago.\n\np[0] = luaS_newlstr(L, str, strlen(str));\n\nwith strlen (msvc 64 bits):\ninc r8\ncmp BYTE PTR [r11+r8], 0\njne SHORT $LL19@luaS_new\nmov rdx, r11\nmov rcx, rdi\ncall luaS_newlstr\n\nwithout strlen (msvc 64 bits):\nmov r8, rsi\nmov rdx, r11\nmov QWORD PTR [rbx+8], rax\nmov rcx, rdi\ncall luaS_newlstr\n\nOf course, that doesn't mean anything about Postgres, but it shows what I'm\nsaying about using strlen.\nClearly I can see that its usage is not always free.\n\nIf there is some interest in actually changing that in Postgres, I can\nprepare a patch for the places where static analyzers claim a performance\nloss.\nBut without any reason to backpatch, it can only be considered as\nrefactoring.\n\nregards,\nRanier Vilela",
"msg_date": "Fri, 16 Jul 2021 13:18:20 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove redundant strlen call in ReplicationSlotValidateName"
},
{
"msg_contents": "On Fri, Jul 16, 2021 at 1:18 PM Ranier Vilela <ranier.vf@gmail.com>\nwrote:\n\n> On Fri, Jul 16, 2021 at 12:37 PM Alvaro Herrera <\n> alvherre@alvh.no-ip.org> wrote:\n>\n>> On 2021-Jul-16, David Rowley wrote:\n>>\n>> > On Fri, 16 Jul 2021 at 20:35, Japin Li <japinli@hotmail.com> wrote:\n>> > > > When I fix a bug about ALTER SUBSCRIPTION ... SET (slot_name) [1],\n>> Ranier Vilela\n>> > > > finds that ReplicationSlotValidateName() has redundant strlen()\n>> call, Since it's\n>> > > > not related to that problem, so I start a new thread to discuss it.\n>> >\n>> > I think this is a waste of time. The first strlen() call is just\n>> > checking for an empty string. I imagine all compilers would just\n>> > optimise that to checking if the first char is '\\0';\n>>\n>> I could find the following idioms\n>>\n>> 95 times: var[0] == '\\0'\n>> 146 times: *var == '\\0'\n>> 35 times: strlen(var) == 0\n>>\n>> Resp.\n>> git grep \"[a-zA-Z_]*\\[0\\] == '\\\\\\\\0\"\n>> git grep \"\\*[a-zA-Z_]* == '\\\\\\\\0\"\n>> git grep \"strlen([^)]*) == 0\"\n>>\n>> See https://postgr.es/m/13847.1587332283@sss.pgh.pa.us about replacing\n>> strlen with a check on first byte being zero. So still not Ranier's\n>> patch, but rather the attached. 
I doubt this change is worth committing\n>> on its own, though, since performance-wise it doesn't matter at all; if\n>> somebody were to make it so that all \"strlen(foo) == 0\" occurrences were\n>> changed to use the test on byte 0, that could be said to be establishing\n>> a consistent style, which might be more palatable.\n>>\n> Yeah, it can be considered a refactoring.\n>\n> IMHO, nothing in C is free.\n> I do not think that this will improve 1% generally, but some functions can\n> gain.\n>\n> Another example I can mention is this little Lua code, in which I made\n> comparisons between the generated asms some time ago.\n>\n> p[0] = luaS_newlstr(L, str, strlen(str));\n>\n> with strlen (msvc 64 bits):\n> inc r8\n> cmp BYTE PTR [r11+r8], 0\n> jne SHORT $LL19@luaS_new\n> mov rdx, r11\n> mov rcx, rdi\n> call luaS_newlstr\n>\n> without strlen (msvc 64 bits):\n> mov r8, rsi\n> mov rdx, r11\n> mov QWORD PTR [rbx+8], rax\n> mov rcx, rdi\n> call luaS_newlstr\n>\n> Of course, that doesn't mean anything about Postgres, but it shows what I'm\n> saying about using strlen.\n> Clearly I can see that its usage is not always free.\n>\n> If there is some interest in actually changing that in Postgres, I can\n> prepare a patch for the places where static analyzers claim a performance\n> loss.\n>\nI did the patch, but to my surprise, the results weren't so good.\nDespite the claim of a tiny improvement in performance, I didn't expect\nany slowdown.\nI put a counter in pg_regress.c, summing the results of each test and did\nit three times for HEAD and for the patch.\nSome tests were better, but others were bad.\nTests such as comments, for example, show 180%, combocid 174%, dbsize 165%,\nxmlmap 136%, lock 134%.\n\n\nHEAD 1 HEAD 2 HEAD 3 HEAD avg\npatch strlen1 patch strlen2 patch strlen3 patch avg\n\n\n\n\n\n\n\n\n\n\n\ntablespace 326 328 339 331\n321 320 335 325,333333333333 101,74%\nboolean 63 48 62 57,6666666666667\n63 39 62 54,6666666666667 105,49%\nchar 47 25 24 32\n33 21 25 26,3333333333333 121,52%\nname 30 33 26 
29,6666666666667\n34 26 62 40,6666666666667 72,95%\nvarchar 43 47 26 38,6666666666667\n65 31 55 50,3333333333333 76,82%\ntext 51 43 53 49\n37 57 54 49,3333333333333 99,32%\nint2 29 66 46 47\n57 58 38 51 92,16%\nint4 55 35 46 45,3333333333333\n51 54 70 58,3333333333333 77,71%\nint8 103 85 111 99,6666666666667\n79 96 91 88,6666666666667 112,41%\noid 70 85 71 75,3333333333333\n45 64 80 63 119,58%\nfloat4 68 74 86 76\n84 68 49 67 113,43%\nfloat8 98 87 109 98\n95 98 88 93,6666666666667 104,63%\nbit 99 84 91 91,3333333333333\n108 96 92 98,6666666666667 92,57%\nnumeric 414 392 393 399,666666666667\n379 399 411 396,333333333333 100,84%\ntxid 82 84 63 76,3333333333333\n71 79 56 68,6666666666667 111,17%\nuuid 48 96 76 73,3333333333333\n79 76 67 74 99,10%\nenum 101 125 102 109,333333333333\n113 117 113 114,333333333333 95,63%\nmoney 81 77 79 79\n45 66 77 62,6666666666667 126,06%\nrangetypes 404 411 415 410\n426 404 421 417 98,32%\npg_lsn 60 63 69 64\n69 57 51 59 108,47%\nregproc 69 52 60 60,3333333333333\n79 57 76 70,6666666666667 85,38%\nstrings 145 152 139 145,333333333333\n170 138 136 148 98,20%\nnumerology 46 35 31 37,3333333333333\n34 35 33 34 109,80%\npoint 45 37 45 42,3333333333333\n52 38 76 55,3333333333333 76,51%\nlseg 25 15 19 19,6666666666667\n33 37 19 29,6666666666667 66,29%\nline 31 30 24 28,3333333333333\n20 24 32 25,3333333333333 111,84%\nbox 259 254 266 259,666666666667\n255 307 305 289 89,85%\npath 25 20 52 32,3333333333333\n35 57 25 39 82,91%\npolygon 274 269 271 271,333333333333\n247 258 315 273,333333333333 99,27%\ncircle 35 52 55 47,3333333333333\n43 33 51 42,3333333333333 111,81%\ndate 108 82 90 93,3333333333333\n106 101 104 103,666666666667 90,03%\ntime 67 62 39 56\n69 34 36 46,3333333333333 120,86%\ntimetz 38 62 41 47\n36 75 33 48 97,92%\ntimestamp 416 412 425 417,666666666667\n451 431 357 413 101,13%\ntimestamptz 479 468 474 473,666666666667\n503 461 411 458,333333333333 103,35%\ninterval 81 104 93 92,6666666666667\n69 96 79 81,3333333333333 
113,93%\ninet 72 74 108 84,6666666666667\n83 80 85 82,6666666666667 102,42%\nmacaddr 37 61 43 47\n59 74 72 68,3333333333333 68,78%\nmacaddr8 54 61 64 59,6666666666667\n39 78 78 65 91,79%\nmultirangetypes 278 271 290 279,666666666667\n290 333 265 296 94,48%\ncreate_function_0 66 37 59 54\n48 31 39 39,3333333333333 137,29%\ngeometry 153 136 133 140,666666666667\n131 137 148 138,666666666667 101,44%\nhorology 102 132 98 110,666666666667\n108 101 97 102 108,50%\ntstypes 81 69 60 70\n86 56 54 65,3333333333333 107,14%\nregex 498 515 517 510\n499 486 510 498,333333333333 102,34%\ntype_sanity 153 132 134 139,666666666667\n161 130 144 145 96,32%\nopr_sanity 389 377 381 382,333333333333\n387 387 363 379 100,88%\nmisc_sanity 33 37 37 35,6666666666667\n47 34 38 39,6666666666667 89,92%\ncomments 19 56 35 36,6666666666667\n30 17 14 20,3333333333333 180,33%\nexpressions 62 44 67 57,6666666666667\n52 40 69 53,6666666666667 107,45%\nunicode 49 52 21 40,6666666666667\n53 46 17 38,6666666666667 105,17%\nxid 46 45 40 43,6666666666667\n83 81 38 67,3333333333333 64,85%\nmvcc 98 107 111 105,333333333333\n95 116 94 101,666666666667 103,61%\ncreate_function_1 9 10 10 9,66666666666667\n10 10 9 9,66666666666667 100,00%\ncreate_type 29 28 30 29\n28 29 29 28,6666666666667 101,16%\ncreate_table 365 364 364 364,333333333333\n365 363 360 362,666666666667 100,46%\ncreate_function_2 13 12 12 12,3333333333333\n13 12 12 12,3333333333333 100,00%\ncopy 461 450 458 456,333333333333\n463 454 450 455,666666666667 100,15%\ncopyselect 29 29 32 30\n30 27 29 28,6666666666667 104,65%\ncopydml 35 35 35 35\n37 35 33 35 100,00%\ninsert 303 309 338 316,666666666667\n305 287 305 299 105,91%\ninsert_conflict 142 141 176 153\n150 137 154 147 104,08%\ncreate_misc 94 93 96 94,3333333333333\n100 94 114 102,666666666667 91,88%\ncreate_operator 24 27 22 24,3333333333333\n28 26 24 26 93,59%\ncreate_procedure 62 64 65 63,6666666666667\n73 63 82 72,6666666666667 87,61%\ncreate_index 880 933 866 893\n896 864 904 888 
100,56%\ncreate_index_spgist 582 512 574 556\n598 588 515 567 98,06%\ncreate_view 341 250 329 306,666666666667\n286 334 292 304 100,88%\nindex_including 183 207 204 198\n215 143 163 173,666666666667 114,01%\nindex_including_gist 307 362 364 344,333333333333\n413 260 278 317 108,62%\ncreate_aggregate 62 69 90 73,6666666666667\n60 100 107 89 82,77%\ncreate_function_3 146 162 173 160,333333333333\n151 144 181 158,666666666667 101,05%\ncreate_cast 26 29 22 25,6666666666667\n24 18 71 37,6666666666667 68,14%\nconstraints 210 218 216 214,666666666667\n239 227 195 220,333333333333 97,43%\ntriggers 856 920 882 886\n889 871 861 873,666666666667 101,41%\nselect 103 133 101 112,333333333333\n92 133 140 121,666666666667 92,33%\ninherit 615 627 671 637,666666666667\n642 646 616 634,666666666667 100,47%\ntyped_table 146 129 131 135,333333333333\n115 121 116 117,333333333333 115,34%\nvacuum 494 417 455 455,333333333333\n442 451 402 431,666666666667 105,48%\ndrop_if_exists 115 84 83 94\n138 63 57 86 109,30%\nupdatable_views 645 630 678 651\n635 644 647 642 101,40%\nroleattributes 119 95 80 98\n105 134 90 109,666666666667 89,36%\ncreate_am 156 114 126 132\n155 167 191 171 77,19%\nhash_func 83 52 61 65,3333333333333\n50 50 91 63,6666666666667 102,62%\nerrors 54 80 63 65,6666666666667\n91 33 41 55 119,39%\ninfinite_recurse 312 321 361 331,333333333333\n329 341 401 357 92,81%\nsanity_check 136 134 134 134,666666666667\n135 134 134 134,333333333333 100,25%\nselect_into 91 139 66 98,6666666666667\n81 118 78 92,3333333333333 106,86%\nselect_distinct 228 202 181 203,666666666667\n242 238 214 231,333333333333 88,04%\nselect_distinct_on 27 38 60 41,6666666666667\n44 36 24 34,6666666666667 120,19%\nselect_implicit 77 48 47 57,3333333333333\n44 51 94 63 91,01%\nselect_having 38 48 75 53,6666666666667\n50 64 39 51 105,23%\nsubselect 274 297 305 292\n316 351 310 325,666666666667 89,66%\nunion 308 285 289 294\n351 289 281 307 95,77%\ncase 53 137 47 79\n81 56 149 95,3333333333333 82,87%\njoin 685 
718 736 713\n690 678 793 720,333333333333 98,98%\naggregates 706 675 697 692,666666666667\n802 680 728 736,666666666667 94,03%\ntransactions 285 182 273 246,666666666667\n159 247 251 219 112,63%\nrandom 44 67 46 52,3333333333333\n43 82 50 58,3333333333333 89,71%\nportals 264 265 189 239,333333333333\n263 247 225 245 97,69%\narrays 379 355 358 364\n352 398 339 363 100,28%\nbtree_index 1550 1532 1476 1519,33333333333\n1708 1565 1557 1610 94,37%\nhash_index 436 480 477 464,333333333333\n464 447 440 450,333333333333 103,11%\nupdate 468 464 464 465,333333333333\n522 464 472 486 95,75%\ndelete 112 43 48 67,6666666666667\n101 89 41 77 87,88%\nnamespace 96 50 56 67,3333333333333\n136 52 49 79 85,23%\nprepared_xacts 185 109 160 151,333333333333\n161 127 175 154,333333333333 98,06%\nbrin 1689 1719 1740 1716\n1777 1671 1704 1717,33333333333 99,92%\ngin 1166 1213 1139 1172,66666666667\n1215 1243 1095 1184,33333333333 99,01%\ngist 1052 1045 1073 1056,66666666667\n1029 1072 1064 1055 100,16%\nspgist 1013 1029 1094 1045,33333333333\n1009 946 994 983 106,34%\nprivileges 876 863 904 881\n924 942 1008 958 91,96%\ninit_privs 27 20 25 24\n44 18 17 26,3333333333333 91,14%\nsecurity_label 43 49 63 51,6666666666667\n89 46 170 101,666666666667 50,82%\ncollate 315 393 346 351,333333333333\n319 222 341 294 119,50%\nmatview 671 699 663 677,666666666667\n698 676 723 699 96,95%\nlock 220 171 177 189,333333333333\n112 132 179 141 134,28%\nreplica_identity 294 248 235 259\n314 229 428 323,666666666667 80,02%\nrowsecurity 869 888 832 863\n853 919 933 901,666666666667 95,71%\nobject_address 244 210 202 218,666666666667\n257 224 301 260,666666666667 83,89%\ntablesample 154 164 194 170,666666666667\n217 149 169 178,333333333333 95,70%\ngroupingsets 644 606 640 630\n564 637 665 622 101,29%\ndrop_operator 98 125 55 92,6666666666667\n54 72 76 67,3333333333333 137,62%\npassword 585 572 535 564\n527 538 465 510 110,59%\nidentity 529 435 499 487,666666666667\n504 571 453 509,333333333333 95,75%\ngenerated 
777 672 759 736\n830 744 734 769,333333333333 95,67%\njoin_hash 1672 1706 1726 1701,33333333333\n1761 1667 1691 1706,33333333333 99,71%\nbrin_bloom 97 97 131 108,333333333333\n99 109 97 101,666666666667 106,56%\nbrin_multi 236 234 269 246,333333333333\n245 236 226 235,666666666667 104,53%\ncreate_table_like 281 277 257 271,666666666667\n283 277 280 280 97,02%\nalter_generic 124 116 126 122\n119 131 137 129 94,57%\nalter_operator 33 28 31 30,6666666666667\n61 36 50 49 62,59%\nmisc 161 172 157 163,333333333333\n176 160 167 167,666666666667 97,42%\nasync 19 13 40 24\n33 60 22 38,3333333333333 62,61%\ndbsize 73 31 22 42\n24 19 33 25,3333333333333 165,79%\nmisc_functions 84 79 84 82,3333333333333\n94 89 92 91,6666666666667 89,82%\nsysviews 97 108 86 97\n91 88 92 90,3333333333333 107,38%\ntsrf 69 72 80 73,6666666666667\n60 96 84 80 92,08%\ntid 84 37 38 53\n47 73 57 59 89,83%\ntidscan 75 61 81 72,3333333333333\n98 77 96 90,3333333333333 80,07%\ntidrangescan 83 49 76 69,3333333333333\n47 69 63 59,6666666666667 116,20%\ncollate.icu.utf8 30 37 29 32\n16 60 27 34,3333333333333 93,20%\nincremental_sort 174 176 163 171\n160 164 179 167,666666666667 101,99%\nrules 303 383 315 333,666666666667\n385 376 341 367,333333333333 90,83%\npsql 253 315 253 273,666666666667\n329 243 297 289,666666666667 94,48%\npsql_crosstab 35 31 30 32\n49 33 33 38,3333333333333 83,48%\namutils 18 20 18 18,6666666666667\n34 20 18 24 77,78%\nstats_ext 1530 1568 1572 1556,66666666667\n1576 1605 1571 1584 98,27%\ncollate.linux.utf8 10 11 11 10,6666666666667\n17 10 11 12,6666666666667 84,21%\nselect_parallel 786 786 797 789,666666666667\n797 807 826 810 97,49%\nwrite_parallel 78 80 80 79,3333333333333\n79 79 84 80,6666666666667 98,35%\npublication 76 83 81 80\n74 75 74 74,3333333333333 107,62%\nsubscription 41 48 48 45,6666666666667\n40 43 43 42 108,73%\nselect_views 201 222 199 207,333333333333\n188 205 204 199 104,19%\nportals_p2 57 38 41 45,3333333333333\n35 44 34 37,6666666666667 120,35%\nforeign_key 969 
955 965 963\n925 971 947 947,666666666667 101,62%\ncluster 407 400 408 405\n417 439 398 418 96,89%\ndependency 107 212 112 143,666666666667\n178 147 164 163 88,14%\nguc 101 184 168 151\n177 181 161 173 87,28%\nbitmapops 448 443 429 440\n444 460 446 450 97,78%\ncombocid 156 144 150 150\n46 162 50 86 174,42%\ntsearch 396 443 424 421\n404 434 415 417,666666666667 100,80%\ntsdicts 137 170 166 157,666666666667\n98 157 130 128,333333333333 122,86%\nforeign_data 633 637 636 635,333333333333\n647 648 629 641,333333333333 99,06%\nwindow 374 428 391 397,666666666667\n353 374 355 360,666666666667 110,26%\nxmlmap 108 51 113 90,6666666666667\n85 45 69 66,3333333333333 136,68%\nfunctional_deps 177 169 125 157\n145 162 144 150,333333333333 104,43%\nadvisory_lock 71 68 79 72,6666666666667\n72 57 86 71,6666666666667 101,40%\nindirect_toast 541 522 554 539\n567 627 596 596,666666666667 90,34%\nequivclass 123 197 118 146\n184 178 165 175,666666666667 83,11%\njson 106 105 111 107,333333333333\n116 108 110 111,333333333333 96,41%\njsonb 246 253 256 251,666666666667\n262 253 256 257 97,92%\njson_encoding 17 16 16 16,3333333333333\n17 15 19 17 96,08%\njsonpath 31 32 31 31,3333333333333\n32 32 32 32 97,92%\njsonpath_encoding 15 14 14 14,3333333333333\n15 15 14 14,6666666666667 97,73%\njsonb_jsonpath 81 81 84 82\n83 82 84 83 98,80%\nplancache 107 103 145 118,333333333333\n131 101 120 117,333333333333 100,85%\nlimit 186 130 127 147,666666666667\n138 167 138 147,666666666667 100,00%\nplpgsql 725 713 739 725,666666666667\n726 699 722 715,666666666667 101,40%\ncopy2 324 227 187 246\n226 137 226 196,333333333333 125,30%\ntemp 215 236 251 234\n208 178 190 192 121,88%\ndomain 380 368 391 379,666666666667\n371 351 351 357,666666666667 106,15%\nrangefuncs 379 336 371 362\n354 373 359 362 100,00%\nprepare 164 175 131 156,666666666667\n87 179 149 138,333333333333 113,25%\nconversion 190 113 198 167\n186 259 221 222 75,23%\ntruncate 349 362 365 358,666666666667\n285 342 305 310,666666666667 
115,45%\nalter_table 1419 1415 1363 1399\n1406 1409 1402 1405,66666666667 99,53%\nsequence 260 227 312 266,333333333333\n236 264 212 237,333333333333 112,22%\npolymorphism 298 341 357 332\n342 341 323 335,333333333333 99,01%\nrowtypes 291 266 284 280,333333333333\n281 254 309 281,333333333333 99,64%\nreturning 146 92 64 100,666666666667\n106 243 108 152,333333333333 66,08%\nlargeobject 365 447 415 409\n426 494 426 448,666666666667 91,16%\nwith 385 329 360 358\n345 286 349 326,666666666667 109,59%\nxml 160 222 217 199,666666666667\n289 253 228 256,666666666667 77,79%\npartition_join 865 861 861 862,333333333333\n816 924 872 870,666666666667 99,04%\npartition_prune 783 719 791 764,333333333333\n758 758 781 765,666666666667 99,83%\nreloptions 96 84 78 86\n74 74 87 78,3333333333333 109,79%\nhash_part 51 43 39 44,3333333333333\n46 57 58 53,6666666666667 82,61%\nindexing 702 707 685 698\n737 714 739 730 95,62%\npartition_aggregate 949 939 884 924\n960 957 915 944 97,88%\npartition_info 96 111 77 94,6666666666667\n111 83 104 99,3333333333333 95,30%\ntuplesort 1359 1339 1387 1361,66666666667\n1404 1381 1359 1381,33333333333 98,58%\nexplain 97 101 82 93,3333333333333\n102 102 74 92,6666666666667 100,72%\ncompression 164 157 162 161\n154 149 156 153 105,23%\nmemoize 90 95 87 90,6666666666667\n80 75 84 79,6666666666667 113,81%\nevent_trigger 126 126 135 129\n127 111 112 116,666666666667 110,57%\noidjoins 257 240 252 249,666666666667\n245 243 245 244,333333333333 102,18%\nfast_default 146 145 145 145,333333333333\n148 145 162 151,666666666667 95,82%\nstats 615 611 611 612,333333333333\n621 613 611 615 99,57%\n\nSo I'm posting the patch here, merely as an illustration of my findings.\nPerhaps someone with a better understanding of the process of translating C\nto asm can have an explanation.\nIs it worth it to change only where there has been improvement?\n\nregards,\nRanier Vilela",
"msg_date": "Sun, 18 Jul 2021 10:08:49 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove redundant strlen call in ReplicationSlotValidateName"
},
{
"msg_contents": "On Sun, Jul 18, 2021 at 11:09 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n\n> ...\n>\nI did the patch, but to my surprise, the results weren't so good.\n> Despite that claiming a tiny improvement in performance, I didn't expect\n> any slowdown.\n> I put a counter in pg_regress.c, summing the results of each test and did\n> it three times for HEAD and for the patch.\n> Some tests were better, but others were bad.\n> Tests comments per example, show 180%, combocid 174%, dbize 165%, xmlmap\n> 136%, lock 134%.\n>\n> ... ...\n>\n> So I'm posting the patch here, merely as an illustration of my findings.\n> Perhaps someone with a better understanding of the process of translating\n> C to asm can have an explanation.\n> Is it worth it to change only where there has been improvement?\n>\n>\nMy guess is that your hypothetical performance improvement has been\ncompletely swamped by the natural variations of each run.\n\nFor example,\ndrop_if_exists 115 84 83 94\n138 63 57 86 109,30%\n\nThose numbers are all over the place, so I doubt the results are really\nsaying anything at all about what is better/worse, because I think you have\nzero chance to notice a couple of nanoseconds of improvement within the\nnoise when each run is varying from 57 to 138 ms.\n\nIMO the only conclusion you can draw from your results is that any\nperformance gain is too small to be observable.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Mon, 19 Jul 2021 10:23:19 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove redundant strlen call in ReplicationSlotValidateName"
},
{
"msg_contents": "Em dom., 18 de jul. de 2021 às 21:23, Peter Smith <smithpb2250@gmail.com>\nescreveu:\n\n>\n>\n> On Sun, Jul 18, 2021 at 11:09 PM Ranier Vilela <ranier.vf@gmail.com>\n> wrote:\n>\n>> ...\n>>\n> I did the patch, but to my surprise, the results weren't so good.\n>> Despite that claiming a tiny improvement in performance, I didn't expect\n>> any slowdown.\n>> I put a counter in pg_regress.c, summing the results of each test and did\n>> it three times for HEAD and for the patch.\n>> Some tests were better, but others were bad.\n>> Tests comments per example, show 180%, combocid 174%, dbize 165%, xmlmap\n>> 136%, lock 134%.\n>>\n>> ... ...\n>>\n>> So I'm posting the patch here, merely as an illustration of my findings.\n>> Perhaps someone with a better understanding of the process of translating\n>> C to asm can have an explanation.\n>> Is it worth it to change only where there has been improvement?\n>>\n>>\n> My guess is that your hypothetical performance improvement has been\n> completely swamped by the natural variations of each run.\n>\n> For example,\n> drop_if_exists 115 84 83 94\n> 138 63 57 86 109,30%\n>\n> Those numbers are all over the place, so I doubt the results are really\n> saying anything at all about what is better/worse, because I think you have\n> zero chance to notice a couple of nanoseconds of improvement within the\n> noise when each run is varying from 57 to 138 ms.\n>\n> IMO the only conclusion you can draw from your results is that any\n> performance gain is too small to be observable.\n>\nThanks Peter for your explanations.\n\nI can conclude then that the test results are not a reference for\nperformance/regression.\nSo the patch serves as a refactoring, without any further indication.\n\nregards,\nRanier Vilela",
"msg_date": "Sun, 18 Jul 2021 21:38:30 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove redundant strlen call in ReplicationSlotValidateName"
}
] |
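Peter Smith's objection in the thread above — that run-to-run noise swamps any nanosecond-scale gain — can be checked with a short calculation. The following standalone Python sketch is not part of any posted patch; it only reuses the `drop_if_exists` timings quoted in the thread, and shows that the spread within each set of three runs is several times larger than the difference between the means:

```python
import statistics

# Regression-test timings (ms) for "drop_if_exists", as quoted in the
# thread: three runs on HEAD and three runs with the patch applied.
head = [115, 84, 83]
patched = [138, 63, 57]

head_mean = statistics.mean(head)        # 94 ms
patched_mean = statistics.mean(patched)  # 86 ms
spread = max(patched) - min(patched)     # 81 ms of run-to-run variation

# A removed strlen() costs on the order of nanoseconds, so an 8 ms
# difference in means is far below the noise floor of this benchmark.
print(f"HEAD mean:    {head_mean:.0f} ms (stdev {statistics.stdev(head):.1f} ms)")
print(f"patched mean: {patched_mean:.0f} ms (stdev {statistics.stdev(patched):.1f} ms)")
print(f"patched runs vary over a {spread} ms range")
```

The standard deviation of the patched runs (about 45 ms) dwarfs the 8 ms difference in means, which is exactly the point made in the thread: any real gain is too small to observe with this methodology.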
[
{
"msg_contents": "Hi,\n\nCascade and restrict options are supported for drop statistics syntax:\ndrop statistics stat1 cascade;\ndrop statistics stat2 restrict;\n\nThe documentation for this was missing, attached a patch which\nincludes the documentation for these options.\n\nRegards,\nVignesh",
"msg_date": "Fri, 16 Jul 2021 20:12:56 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Added documentation for cascade and restrict option of drop\n statistics"
},
{
"msg_contents": "On Fri, Jul 16, 2021 at 08:12:56PM +0530, vignesh C wrote:\n> Cascade and restrict options are supported for drop statistics syntax:\n> drop statistics stat1 cascade;\n> drop statistics stat2 restrict;\n> \n> The documentation for this was missing, attached a patch which\n> includes the documentation for these options.\n\nIndeed, good catch. The other commands document that, so let's fix\nit.\n--\nMichael",
"msg_date": "Sun, 18 Jul 2021 12:37:48 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Added documentation for cascade and restrict option of drop\n statistics"
},
{
"msg_contents": "On Sun, Jul 18, 2021 at 12:37:48PM +0900, Michael Paquier wrote:\n> Indeed, good catch. The other commands document that, so let's fix\n> it.\n\nApplied.\n--\nMichael",
"msg_date": "Mon, 19 Jul 2021 12:46:11 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Added documentation for cascade and restrict option of drop\n statistics"
},
{
"msg_contents": "On Mon, 19 Jul 2021, 09:16 Michael Paquier, <michael@paquier.xyz> wrote:\n>\n> On Sun, Jul 18, 2021 at 12:37:48PM +0900, Michael Paquier wrote:\n> > Indeed, good catch. The other commands document that, so let's fix\n> > it.\n>\n> Applied.\n\nThanks for committing this patch.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Mon, 19 Jul 2021 21:01:04 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Added documentation for cascade and restrict option of drop\n statistics"
}
] |
[
{
"msg_contents": "Hi Everyone,\n\nWe would like to propose the below 2 new plpgsql diagnostic items,\nrelated to parsing. Because, the current diag items are not providing\nthe useful diagnostics about the dynamic SQL statements.\n\n1. PG_PARSE_SQL_STATEMENT (returns parse failed sql statement)\n2. PG_PARSE_SQL_STATEMENT_POSITION (returns parse failed sql text cursor\nposition)\n\nConsider the below example, which is an invalid SQL statement.\n\npostgres=# SELECT 1 JOIN SELECT 2;\nERROR: syntax error at or near \"JOIN\"\nLINE 1: SELECT 1 JOIN SELECT 2;\n ^\nHere, there is a syntax error at JOIN clause,\nand also we are getting the syntax error position(^ symbol, the position of\nJOIN clause).\nThis will be helpful, while dealing with long queries.\n\nNow, if we run the same statement as a dynamic SQL(by using EXECUTE <sql\nstatement>),\nthen it seems we are not getting the text cursor position,\nand the SQL statement which is failing at parse level.\n\nPlease find the below example.\n\npostgres=# SELECT exec_me('SELECT 1 JOIN SELECT 2');\nNOTICE: RETURNED_SQLSTATE 42601\nNOTICE: COLUMN_NAME\nNOTICE: CONSTRAINT_NAME\nNOTICE: PG_DATATYPE_NAME\nNOTICE: MESSAGE_TEXT syntax error at or near \"JOIN\"\nNOTICE: TABLE_NAME\nNOTICE: SCHEMA_NAME\nNOTICE: PG_EXCEPTION_DETAIL\nNOTICE: PG_EXCEPTION_HINT\nNOTICE: PG_EXCEPTION_CONTEXT PL/pgSQL function exec_me(text) line 18 at\nEXECUTE\nNOTICE: PG_CONTEXT PL/pgSQL function exec_me(text) line 21 at GET STACKED\nDIAGNOSTICS\n exec_me\n---------\n\n(1 row)\n\n From the above results, by using all the existing diag items, we are unable\nto get the position of \"JOIN\" in the submitted SQL statement.\nBy using these proposed diag items, we will be getting the required\ninformation,\nwhich will be helpful while running long SQL statements as dynamic SQL\nstatements.\n\nPlease find the below example.\n\npostgres=# SELECT exec_me('SELECT 1 JOIN SELECT 2');\nNOTICE: PG_PARSE_SQL_STATEMENT SELECT 1 JOIN SELECT 2\nNOTICE: 
PG_PARSE_SQL_STATEMENT_POSITION 10\n exec_me\n---------\n\n(1 row)\n\n From the above results, by using these diag items,\nwe are able to get what is failing and it's position as well.\nThis information will be much helpful to debug the issue,\nwhile a long running SQL statement is running as a dynamic SQL statement.\n\nWe are attaching the patch for this proposal, and will be looking for your\ninputs.\n\nRegards,\nDinesh Kumar",
"msg_date": "Sat, 17 Jul 2021 01:17:01 +0530",
"msg_from": "Dinesh Chemuduru <dinesh.kumar@migops.com>",
"msg_from_op": true,
"msg_subject": "[PROPOSAL] new diagnostic items for the dynamic sql"
},
{
"msg_contents": "Hi\n\npá 16. 7. 2021 v 21:47 odesílatel Dinesh Chemuduru <dinesh.kumar@migops.com>\nnapsal:\n\n> Hi Everyone,\n>\n> We would like to propose the below 2 new plpgsql diagnostic items,\n> related to parsing. Because, the current diag items are not providing\n> the useful diagnostics about the dynamic SQL statements.\n>\n> 1. PG_PARSE_SQL_STATEMENT (returns parse failed sql statement)\n> 2. PG_PARSE_SQL_STATEMENT_POSITION (returns parse failed sql text cursor\n> position)\n>\n> Consider the below example, which is an invalid SQL statement.\n>\n> postgres=# SELECT 1 JOIN SELECT 2;\n> ERROR: syntax error at or near \"JOIN\"\n> LINE 1: SELECT 1 JOIN SELECT 2;\n> ^\n> Here, there is a syntax error at JOIN clause,\n> and also we are getting the syntax error position(^ symbol, the position\n> of JOIN clause).\n> This will be helpful, while dealing with long queries.\n>\n> Now, if we run the same statement as a dynamic SQL(by using EXECUTE <sql\n> statement>),\n> then it seems we are not getting the text cursor position,\n> and the SQL statement which is failing at parse level.\n>\n> Please find the below example.\n>\n> postgres=# SELECT exec_me('SELECT 1 JOIN SELECT 2');\n> NOTICE: RETURNED_SQLSTATE 42601\n> NOTICE: COLUMN_NAME\n> NOTICE: CONSTRAINT_NAME\n> NOTICE: PG_DATATYPE_NAME\n> NOTICE: MESSAGE_TEXT syntax error at or near \"JOIN\"\n> NOTICE: TABLE_NAME\n> NOTICE: SCHEMA_NAME\n> NOTICE: PG_EXCEPTION_DETAIL\n> NOTICE: PG_EXCEPTION_HINT\n> NOTICE: PG_EXCEPTION_CONTEXT PL/pgSQL function exec_me(text) line 18 at\n> EXECUTE\n> NOTICE: PG_CONTEXT PL/pgSQL function exec_me(text) line 21 at GET STACKED\n> DIAGNOSTICS\n> exec_me\n> ---------\n>\n> (1 row)\n>\n> From the above results, by using all the existing diag items, we are unable\n> to get the position of \"JOIN\" in the submitted SQL statement.\n> By using these proposed diag items, we will be getting the required\n> information,\n> which will be helpful while running long SQL statements as dynamic SQL\n> statements.\n>\n> Please find the below example.\n>\n> postgres=# SELECT exec_me('SELECT 1 JOIN SELECT 2');\n> NOTICE: PG_PARSE_SQL_STATEMENT SELECT 1 JOIN SELECT 2\n> NOTICE: PG_PARSE_SQL_STATEMENT_POSITION 10\n> exec_me\n> ---------\n>\n> (1 row)\n>\n> From the above results, by using these diag items,\n> we are able to get what is failing and it's position as well.\n> This information will be much helpful to debug the issue,\n> while a long running SQL statement is running as a dynamic SQL statement.\n>\n> We are attaching the patch for this proposal, and will be looking for your\n> inputs.\n>\n\n+1 It is good idea. I am not sure if the used names are good. I propose\n\nPG_SQL_TEXT and PG_ERROR_LOCATION\n\nRegards\n\nPavel\n\n\n\n> Regards,\n> Dinesh Kumar\n>",
"msg_date": "Fri, 16 Jul 2021 21:58:38 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PROPOSAL] new diagnostic items for the dynamic sql"
},
{
"msg_contents": "On Sat, 17 Jul 2021 at 01:29, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n\n> Hi\n>\n> pá 16. 7. 2021 v 21:47 odesílatel Dinesh Chemuduru <\n> dinesh.kumar@migops.com> napsal:\n>\n>> Hi Everyone,\n>>\n>> We would like to propose the below 2 new plpgsql diagnostic items,\n>> related to parsing. Because, the current diag items are not providing\n>> the useful diagnostics about the dynamic SQL statements.\n>>\n>> 1. PG_PARSE_SQL_STATEMENT (returns parse failed sql statement)\n>> 2. PG_PARSE_SQL_STATEMENT_POSITION (returns parse failed sql text cursor\n>> position)\n>>\n>> Consider the below example, which is an invalid SQL statement.\n>>\n>> postgres=# SELECT 1 JOIN SELECT 2;\n>> ERROR: syntax error at or near \"JOIN\"\n>> LINE 1: SELECT 1 JOIN SELECT 2;\n>> ^\n>> Here, there is a syntax error at JOIN clause,\n>> and also we are getting the syntax error position(^ symbol, the position\n>> of JOIN clause).\n>> This will be helpful, while dealing with long queries.\n>>\n>> Now, if we run the same statement as a dynamic SQL(by using EXECUTE <sql\n>> statement>),\n>> then it seems we are not getting the text cursor position,\n>> and the SQL statement which is failing at parse level.\n>>\n>> Please find the below example.\n>>\n>> postgres=# SELECT exec_me('SELECT 1 JOIN SELECT 2');\n>> NOTICE: RETURNED_SQLSTATE 42601\n>> NOTICE: COLUMN_NAME\n>> NOTICE: CONSTRAINT_NAME\n>> NOTICE: PG_DATATYPE_NAME\n>> NOTICE: MESSAGE_TEXT syntax error at or near \"JOIN\"\n>> NOTICE: TABLE_NAME\n>> NOTICE: SCHEMA_NAME\n>> NOTICE: PG_EXCEPTION_DETAIL\n>> NOTICE: PG_EXCEPTION_HINT\n>> NOTICE: PG_EXCEPTION_CONTEXT PL/pgSQL function exec_me(text) line 18 at\n>> EXECUTE\n>> NOTICE: PG_CONTEXT PL/pgSQL function exec_me(text) line 21 at GET\n>> STACKED DIAGNOSTICS\n>> exec_me\n>> ---------\n>>\n>> (1 row)\n>>\n>> From the above results, by using all the existing diag items, we are\n>> unable to get the position of \"JOIN\" in the submitted SQL statement.\n>> By using these proposed diag items, we will be getting the required\n>> information,\n>> which will be helpful while running long SQL statements as dynamic SQL\n>> statements.\n>>\n>> Please find the below example.\n>>\n>> postgres=# SELECT exec_me('SELECT 1 JOIN SELECT 2');\n>> NOTICE: PG_PARSE_SQL_STATEMENT SELECT 1 JOIN SELECT 2\n>> NOTICE: PG_PARSE_SQL_STATEMENT_POSITION 10\n>> exec_me\n>> ---------\n>>\n>> (1 row)\n>>\n>> From the above results, by using these diag items,\n>> we are able to get what is failing and it's position as well.\n>> This information will be much helpful to debug the issue,\n>> while a long running SQL statement is running as a dynamic SQL statement.\n>>\n>> We are attaching the patch for this proposal, and will be looking for\n>> your inputs.\n>>\n>\n> +1 It is good idea. I am not sure if the used names are good. I propose\n>\n> PG_SQL_TEXT and PG_ERROR_LOCATION\n>\n> Regards\n>\n> Pavel\n>\n>\nThanks Pavel,\n\nSorry for the late reply.\n\nThe proposed diag items are `PG_SQL_TEXT`, `PG_ERROR_LOCATION` are much\nbetter and generic.\n\nBut, as we are only dealing with the parsing failure, I thought of adding\nthat to the diag name.\n\nRegards,\nDinesh Kumar\n\n\n>\n>\n>> Regards,\n>> Dinesh Kumar\n>>\n>",
"msg_date": "Sun, 25 Jul 2021 16:22:03 +0530",
"msg_from": "Dinesh Chemuduru <dinesh.kumar@migops.com>",
"msg_from_op": true,
"msg_subject": "Re: [PROPOSAL] new diagnostic items for the dynamic sql"
},
{
"msg_contents": "ne 25. 7. 2021 v 12:52 odesílatel Dinesh Chemuduru <dinesh.kumar@migops.com>\nnapsal:\n\n> On Sat, 17 Jul 2021 at 01:29, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n>\n>> Hi\n>>\n>> pá 16. 7. 2021 v 21:47 odesílatel Dinesh Chemuduru <\n>> dinesh.kumar@migops.com> napsal:\n>>\n>>> Hi Everyone,\n>>>\n>>> We would like to propose the below 2 new plpgsql diagnostic items,\n>>> related to parsing. Because, the current diag items are not providing\n>>> the useful diagnostics about the dynamic SQL statements.\n>>>\n>>> 1. PG_PARSE_SQL_STATEMENT (returns parse failed sql statement)\n>>> 2. PG_PARSE_SQL_STATEMENT_POSITION (returns parse failed sql text cursor\n>>> position)\n>>>\n>>> Consider the below example, which is an invalid SQL statement.\n>>>\n>>> postgres=# SELECT 1 JOIN SELECT 2;\n>>> ERROR: syntax error at or near \"JOIN\"\n>>> LINE 1: SELECT 1 JOIN SELECT 2;\n>>> ^\n>>> Here, there is a syntax error at JOIN clause,\n>>> and also we are getting the syntax error position(^ symbol, the position\n>>> of JOIN clause).\n>>> This will be helpful, while dealing with long queries.\n>>>\n>>> Now, if we run the same statement as a dynamic SQL(by using EXECUTE <sql\n>>> statement>),\n>>> then it seems we are not getting the text cursor position,\n>>> and the SQL statement which is failing at parse level.\n>>>\n>>> Please find the below example.\n>>>\n>>> postgres=# SELECT exec_me('SELECT 1 JOIN SELECT 2');\n>>> NOTICE: RETURNED_SQLSTATE 42601\n>>> NOTICE: COLUMN_NAME\n>>> NOTICE: CONSTRAINT_NAME\n>>> NOTICE: PG_DATATYPE_NAME\n>>> NOTICE: MESSAGE_TEXT syntax error at or near \"JOIN\"\n>>> NOTICE: TABLE_NAME\n>>> NOTICE: SCHEMA_NAME\n>>> NOTICE: PG_EXCEPTION_DETAIL\n>>> NOTICE: PG_EXCEPTION_HINT\n>>> NOTICE: PG_EXCEPTION_CONTEXT PL/pgSQL function exec_me(text) line 18 at\n>>> EXECUTE\n>>> NOTICE: PG_CONTEXT PL/pgSQL function exec_me(text) line 21 at GET\n>>> STACKED DIAGNOSTICS\n>>> exec_me\n>>> ---------\n>>>\n>>> (1 row)\n>>>\n>>> From the above results, by using all the existing diag items, we are\n>>> unable to get the position of \"JOIN\" in the submitted SQL statement.\n>>> By using these proposed diag items, we will be getting the required\n>>> information,\n>>> which will be helpful while running long SQL statements as dynamic SQL\n>>> statements.\n>>>\n>>> Please find the below example.\n>>>\n>>> postgres=# SELECT exec_me('SELECT 1 JOIN SELECT 2');\n>>> NOTICE: PG_PARSE_SQL_STATEMENT SELECT 1 JOIN SELECT 2\n>>> NOTICE: PG_PARSE_SQL_STATEMENT_POSITION 10\n>>> exec_me\n>>> ---------\n>>>\n>>> (1 row)\n>>>\n>>> From the above results, by using these diag items,\n>>> we are able to get what is failing and it's position as well.\n>>> This information will be much helpful to debug the issue,\n>>> while a long running SQL statement is running as a dynamic SQL\n>>> statement.\n>>>\n>>> We are attaching the patch for this proposal, and will be looking for\n>>> your inputs.\n>>>\n>>\n>> +1 It is good idea. I am not sure if the used names are good. I propose\n>>\n>> PG_SQL_TEXT and PG_ERROR_LOCATION\n>>\n>> Regards\n>>\n>> Pavel\n>>\n>>\n> Thanks Pavel,\n>\n> Sorry for the late reply.\n>\n> The proposed diag items are `PG_SQL_TEXT`, `PG_ERROR_LOCATION` are much\n> better and generic.\n>\n> But, as we are only dealing with the parsing failure, I thought of adding\n> that to the diag name.\n>\n\nI understand. But parsing is only one case - and these variables can be\nused for any case. Sure, ***we don't want*** to have PG_PARSE_SQL_TEXT,\nPG_ANALYZE_SQL_TEXT, PG_EXECUTION_SQL_TEXT ...\n\nThe idea is good, and you found the case, where it has benefits for users.\nNaming is hard.\n\nRegards\n\nPavel\n\n\n> Regards,\n> Dinesh Kumar\n>\n>\n>>\n>>\n>>> Regards,\n>>> Dinesh Kumar\n>>>\n>>",
"msg_date": "Sun, 25 Jul 2021 13:03:35 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PROPOSAL] new diagnostic items for the dynamic sql"
},
{
"msg_contents": "On Sun, 25 Jul 2021 at 16:34, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n\n>\n>\n> ne 25. 7. 2021 v 12:52 odesílatel Dinesh Chemuduru <\n> dinesh.kumar@migops.com> napsal:\n>\n>> On Sat, 17 Jul 2021 at 01:29, Pavel Stehule <pavel.stehule@gmail.com>\n>> wrote:\n>>\n>>> Hi\n>>>\n>>> pá 16. 7. 2021 v 21:47 odesílatel Dinesh Chemuduru <\n>>> dinesh.kumar@migops.com> napsal:\n>>>\n>>>> Hi Everyone,\n>>>>\n>>>> We would like to propose the below 2 new plpgsql diagnostic items,\n>>>> related to parsing. Because, the current diag items are not providing\n>>>> the useful diagnostics about the dynamic SQL statements.\n>>>>\n>>>> 1. PG_PARSE_SQL_STATEMENT (returns parse failed sql statement)\n>>>> 2. PG_PARSE_SQL_STATEMENT_POSITION (returns parse failed sql text\n>>>> cursor position)\n>>>>\n>>>> Consider the below example, which is an invalid SQL statement.\n>>>>\n>>>> postgres=# SELECT 1 JOIN SELECT 2;\n>>>> ERROR: syntax error at or near \"JOIN\"\n>>>> LINE 1: SELECT 1 JOIN SELECT 2;\n>>>> ^\n>>>> Here, there is a syntax error at JOIN clause,\n>>>> and also we are getting the syntax error position(^ symbol, the\n>>>> position of JOIN clause).\n>>>> This will be helpful, while dealing with long queries.\n>>>>\n>>>> Now, if we run the same statement as a dynamic SQL(by using EXECUTE\n>>>> <sql statement>),\n>>>> then it seems we are not getting the text cursor position,\n>>>> and the SQL statement which is failing at parse level.\n>>>>\n>>>> Please find the below example.\n>>>>\n>>>> postgres=# SELECT exec_me('SELECT 1 JOIN SELECT 2');\n>>>> NOTICE: RETURNED_SQLSTATE 42601\n>>>> NOTICE: COLUMN_NAME\n>>>> NOTICE: CONSTRAINT_NAME\n>>>> NOTICE: PG_DATATYPE_NAME\n>>>> NOTICE: MESSAGE_TEXT syntax error at or near \"JOIN\"\n>>>> NOTICE: TABLE_NAME\n>>>> NOTICE: SCHEMA_NAME\n>>>> NOTICE: PG_EXCEPTION_DETAIL\n>>>> NOTICE: PG_EXCEPTION_HINT\n>>>> NOTICE: PG_EXCEPTION_CONTEXT PL/pgSQL function exec_me(text) line 18\n>>>> at EXECUTE\n>>>> NOTICE: PG_CONTEXT 
PL/pgSQL function exec_me(text) line 21 at GET\n>>>> STACKED DIAGNOSTICS\n>>>> exec_me\n>>>> ---------\n>>>>\n>>>> (1 row)\n>>>>\n>>>> From the above results, by using all the existing diag items, we are\n>>>> unable to get the position of \"JOIN\" in the submitted SQL statement.\n>>>> By using these proposed diag items, we will be getting the required\n>>>> information,\n>>>> which will be helpful while running long SQL statements as dynamic SQL\n>>>> statements.\n>>>>\n>>>> Please find the below example.\n>>>>\n>>>> postgres=# SELECT exec_me('SELECT 1 JOIN SELECT 2');\n>>>> NOTICE: PG_PARSE_SQL_STATEMENT SELECT 1 JOIN SELECT 2\n>>>> NOTICE: PG_PARSE_SQL_STATEMENT_POSITION 10\n>>>> exec_me\n>>>> ---------\n>>>>\n>>>> (1 row)\n>>>>\n>>>> From the above results, by using these diag items,\n>>>> we are able to get what is failing and it's position as well.\n>>>> This information will be much helpful to debug the issue,\n>>>> while a long running SQL statement is running as a dynamic SQL\n>>>> statement.\n>>>>\n>>>> We are attaching the patch for this proposal, and will be looking for\n>>>> your inputs.\n>>>>\n>>>\n>>> +1 It is good idea. I am not sure if the used names are good. I propose\n>>>\n>>> PG_SQL_TEXT and PG_ERROR_LOCATION\n>>>\n>>> Regards\n>>>\n>>> Pavel\n>>>\n>>>\n>> Thanks Pavel,\n>>\n>> Sorry for the late reply.\n>>\n>> The proposed diag items are `PG_SQL_TEXT`, `PG_ERROR_LOCATION` are much\n>> better and generic.\n>>\n>> But, as we are only dealing with the parsing failure, I thought of adding\n>> that to the diag name.\n>>\n>\n> I understand. But parsing is only one case - and these variables can be\n> used for any case. 
Sure, we don't want to have PG_PARSE_SQL_TEXT,\n> PG_ANALYZE_SQL_TEXT, PG_EXECUTION_SQL_TEXT ...\n>\n> The idea is good, and you found the case, where it has benefits for users.\n> Naming is hard.\n>\n>\nThanks for your time and suggestions Pavel.\nI updated the patch as per the suggestions, and attached it here for\nfurther inputs.\n\nRegards,\nDinesh Kumar\n\n\n\n> Regards\n>\n> Pavel\n>\n>\n>> Regards,\n>> Dinesh Kumar\n>>\n>>\n>>>\n>>>\n>>>> Regards,\n>>>> Dinesh Kumar\n>>>>\n>>>",
"msg_date": "Fri, 20 Aug 2021 13:54:16 +0530",
"msg_from": "Dinesh Chemuduru <dinesh.kumar@migops.com>",
"msg_from_op": true,
"msg_subject": "Re: [PROPOSAL] new diagnostic items for the dynamic sql"
},
{
"msg_contents": "Hi\n\npá 20. 8. 2021 v 10:24 odesílatel Dinesh Chemuduru <dinesh.kumar@migops.com>\nnapsal:\n\n> On Sun, 25 Jul 2021 at 16:34, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n>\n\nplease, can you register this patch to commitfest app\n\nhttps://commitfest.postgresql.org/34/\n\nRegards\n\nPavel\n\n>\n>>\n>> ne 25. 7. 2021 v 12:52 odesílatel Dinesh Chemuduru <\n>> dinesh.kumar@migops.com> napsal:\n>>\n>>> On Sat, 17 Jul 2021 at 01:29, Pavel Stehule <pavel.stehule@gmail.com>\n>>> wrote:\n>>>\n>>>> Hi\n>>>>\n>>>> pá 16. 7. 2021 v 21:47 odesílatel Dinesh Chemuduru <\n>>>> dinesh.kumar@migops.com> napsal:\n>>>>\n>>>>> Hi Everyone,\n>>>>>\n>>>>> We would like to propose the below 2 new plpgsql diagnostic items,\n>>>>> related to parsing. Because, the current diag items are not providing\n>>>>> the useful diagnostics about the dynamic SQL statements.\n>>>>>\n>>>>> 1. PG_PARSE_SQL_STATEMENT (returns parse failed sql statement)\n>>>>> 2. PG_PARSE_SQL_STATEMENT_POSITION (returns parse failed sql text\n>>>>> cursor position)\n>>>>>\n>>>>> Consider the below example, which is an invalid SQL statement.\n>>>>>\n>>>>> postgres=# SELECT 1 JOIN SELECT 2;\n>>>>> ERROR: syntax error at or near \"JOIN\"\n>>>>> LINE 1: SELECT 1 JOIN SELECT 2;\n>>>>> ^\n>>>>> Here, there is a syntax error at JOIN clause,\n>>>>> and also we are getting the syntax error position(^ symbol, the\n>>>>> position of JOIN clause).\n>>>>> This will be helpful, while dealing with long queries.\n>>>>>\n>>>>> Now, if we run the same statement as a dynamic SQL(by using EXECUTE\n>>>>> <sql statement>),\n>>>>> then it seems we are not getting the text cursor position,\n>>>>> and the SQL statement which is failing at parse level.\n>>>>>\n>>>>> Please find the below example.\n>>>>>\n>>>>> postgres=# SELECT exec_me('SELECT 1 JOIN SELECT 2');\n>>>>> NOTICE: RETURNED_SQLSTATE 42601\n>>>>> NOTICE: COLUMN_NAME\n>>>>> NOTICE: CONSTRAINT_NAME\n>>>>> NOTICE: PG_DATATYPE_NAME\n>>>>> NOTICE: MESSAGE_TEXT syntax 
error at or near \"JOIN\"\n>>>>> NOTICE: TABLE_NAME\n>>>>> NOTICE: SCHEMA_NAME\n>>>>> NOTICE: PG_EXCEPTION_DETAIL\n>>>>> NOTICE: PG_EXCEPTION_HINT\n>>>>> NOTICE: PG_EXCEPTION_CONTEXT PL/pgSQL function exec_me(text) line 18\n>>>>> at EXECUTE\n>>>>> NOTICE: PG_CONTEXT PL/pgSQL function exec_me(text) line 21 at GET\n>>>>> STACKED DIAGNOSTICS\n>>>>> exec_me\n>>>>> ---------\n>>>>>\n>>>>> (1 row)\n>>>>>\n>>>>> From the above results, by using all the existing diag items, we are\n>>>>> unable to get the position of \"JOIN\" in the submitted SQL statement.\n>>>>> By using these proposed diag items, we will be getting the required\n>>>>> information,\n>>>>> which will be helpful while running long SQL statements as dynamic SQL\n>>>>> statements.\n>>>>>\n>>>>> Please find the below example.\n>>>>>\n>>>>> postgres=# SELECT exec_me('SELECT 1 JOIN SELECT 2');\n>>>>> NOTICE: PG_PARSE_SQL_STATEMENT SELECT 1 JOIN SELECT 2\n>>>>> NOTICE: PG_PARSE_SQL_STATEMENT_POSITION 10\n>>>>> exec_me\n>>>>> ---------\n>>>>>\n>>>>> (1 row)\n>>>>>\n>>>>> From the above results, by using these diag items,\n>>>>> we are able to get what is failing and it's position as well.\n>>>>> This information will be much helpful to debug the issue,\n>>>>> while a long running SQL statement is running as a dynamic SQL\n>>>>> statement.\n>>>>>\n>>>>> We are attaching the patch for this proposal, and will be looking for\n>>>>> your inputs.\n>>>>>\n>>>>\n>>>> +1 It is good idea. I am not sure if the used names are good. I propose\n>>>>\n>>>> PG_SQL_TEXT and PG_ERROR_LOCATION\n>>>>\n>>>> Regards\n>>>>\n>>>> Pavel\n>>>>\n>>>>\n>>> Thanks Pavel,\n>>>\n>>> Sorry for the late reply.\n>>>\n>>> The proposed diag items are `PG_SQL_TEXT`, `PG_ERROR_LOCATION` are much\n>>> better and generic.\n>>>\n>>> But, as we are only dealing with the parsing failure, I thought of\n>>> adding that to the diag name.\n>>>\n>>\n>> I understand. But parsing is only one case - and these variables can be\n>> used for any case. 
Sure, we don't want to have PG_PARSE_SQL_TEXT,\n>> PG_ANALYZE_SQL_TEXT, PG_EXECUTION_SQL_TEXT ...\n>>\n>> The idea is good, and you found the case, where it has benefits for\n>> users. Naming is hard.\n>>\n>>\n> Thanks for your time and suggestions Pavel.\n> I updated the patch as per the suggestions, and attached it here for\n> further inputs.\n>\n> Regards,\n> Dinesh Kumar\n>\n>\n>\n>> Regards\n>>\n>> Pavel\n>>\n>>\n>>> Regards,\n>>> Dinesh Kumar\n>>>\n>>>\n>>>>\n>>>>\n>>>>> Regards,\n>>>>> Dinesh Kumar\n>>>>>\n>>>>",
"msg_date": "Mon, 23 Aug 2021 20:48:54 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PROPOSAL] new diagnostic items for the dynamic sql"
},
{
"msg_contents": "Hi Pavel,\n\nOn Tue, 24 Aug 2021 at 00:19, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n\n> Hi\n>\n> pá 20. 8. 2021 v 10:24 odesílatel Dinesh Chemuduru <\n> dinesh.kumar@migops.com> napsal:\n>\n>> On Sun, 25 Jul 2021 at 16:34, Pavel Stehule <pavel.stehule@gmail.com>\n>> wrote:\n>>\n>\n> please, can you register this patch to commitfest app\n>\n> https://commitfest.postgresql.org/34/\n>\n> This patch is registered\n\nhttps://commitfest.postgresql.org/34/3258/\n\n\n> Regards\n>\n> Pavel\n>\n>>\n>>>\n>>> ne 25. 7. 2021 v 12:52 odesílatel Dinesh Chemuduru <\n>>> dinesh.kumar@migops.com> napsal:\n>>>\n>>>> On Sat, 17 Jul 2021 at 01:29, Pavel Stehule <pavel.stehule@gmail.com>\n>>>> wrote:\n>>>>\n>>>>> Hi\n>>>>>\n>>>>> pá 16. 7. 2021 v 21:47 odesílatel Dinesh Chemuduru <\n>>>>> dinesh.kumar@migops.com> napsal:\n>>>>>\n>>>>>> Hi Everyone,\n>>>>>>\n>>>>>> We would like to propose the below 2 new plpgsql diagnostic items,\n>>>>>> related to parsing. Because, the current diag items are not providing\n>>>>>> the useful diagnostics about the dynamic SQL statements.\n>>>>>>\n>>>>>> 1. PG_PARSE_SQL_STATEMENT (returns parse failed sql statement)\n>>>>>> 2. 
PG_PARSE_SQL_STATEMENT_POSITION (returns parse failed sql text\n>>>>>> cursor position)\n>>>>>>\n>>>>>> Consider the below example, which is an invalid SQL statement.\n>>>>>>\n>>>>>> postgres=# SELECT 1 JOIN SELECT 2;\n>>>>>> ERROR: syntax error at or near \"JOIN\"\n>>>>>> LINE 1: SELECT 1 JOIN SELECT 2;\n>>>>>> ^\n>>>>>> Here, there is a syntax error at JOIN clause,\n>>>>>> and also we are getting the syntax error position(^ symbol, the\n>>>>>> position of JOIN clause).\n>>>>>> This will be helpful, while dealing with long queries.\n>>>>>>\n>>>>>> Now, if we run the same statement as a dynamic SQL(by using EXECUTE\n>>>>>> <sql statement>),\n>>>>>> then it seems we are not getting the text cursor position,\n>>>>>> and the SQL statement which is failing at parse level.\n>>>>>>\n>>>>>> Please find the below example.\n>>>>>>\n>>>>>> postgres=# SELECT exec_me('SELECT 1 JOIN SELECT 2');\n>>>>>> NOTICE: RETURNED_SQLSTATE 42601\n>>>>>> NOTICE: COLUMN_NAME\n>>>>>> NOTICE: CONSTRAINT_NAME\n>>>>>> NOTICE: PG_DATATYPE_NAME\n>>>>>> NOTICE: MESSAGE_TEXT syntax error at or near \"JOIN\"\n>>>>>> NOTICE: TABLE_NAME\n>>>>>> NOTICE: SCHEMA_NAME\n>>>>>> NOTICE: PG_EXCEPTION_DETAIL\n>>>>>> NOTICE: PG_EXCEPTION_HINT\n>>>>>> NOTICE: PG_EXCEPTION_CONTEXT PL/pgSQL function exec_me(text) line 18\n>>>>>> at EXECUTE\n>>>>>> NOTICE: PG_CONTEXT PL/pgSQL function exec_me(text) line 21 at GET\n>>>>>> STACKED DIAGNOSTICS\n>>>>>> exec_me\n>>>>>> ---------\n>>>>>>\n>>>>>> (1 row)\n>>>>>>\n>>>>>> From the above results, by using all the existing diag items, we are\n>>>>>> unable to get the position of \"JOIN\" in the submitted SQL statement.\n>>>>>> By using these proposed diag items, we will be getting the required\n>>>>>> information,\n>>>>>> which will be helpful while running long SQL statements as dynamic\n>>>>>> SQL statements.\n>>>>>>\n>>>>>> Please find the below example.\n>>>>>>\n>>>>>> postgres=# SELECT exec_me('SELECT 1 JOIN SELECT 2');\n>>>>>> NOTICE: PG_PARSE_SQL_STATEMENT SELECT 1 JOIN 
SELECT 2\n>>>>>> NOTICE: PG_PARSE_SQL_STATEMENT_POSITION 10\n>>>>>> exec_me\n>>>>>> ---------\n>>>>>>\n>>>>>> (1 row)\n>>>>>>\n>>>>>> From the above results, by using these diag items,\n>>>>>> we are able to get what is failing and it's position as well.\n>>>>>> This information will be much helpful to debug the issue,\n>>>>>> while a long running SQL statement is running as a dynamic SQL\n>>>>>> statement.\n>>>>>>\n>>>>>> We are attaching the patch for this proposal, and will be looking for\n>>>>>> your inputs.\n>>>>>>\n>>>>>\n>>>>> +1 It is good idea. I am not sure if the used names are good. I\n>>>>> propose\n>>>>>\n>>>>> PG_SQL_TEXT and PG_ERROR_LOCATION\n>>>>>\n>>>>> Regards\n>>>>>\n>>>>> Pavel\n>>>>>\n>>>>>\n>>>> Thanks Pavel,\n>>>>\n>>>> Sorry for the late reply.\n>>>>\n>>>> The proposed diag items are `PG_SQL_TEXT`, `PG_ERROR_LOCATION` are much\n>>>> better and generic.\n>>>>\n>>>> But, as we are only dealing with the parsing failure, I thought of\n>>>> adding that to the diag name.\n>>>>\n>>>\n>>> I understand. But parsing is only one case - and these variables can be\n>>> used for any case. Sure, we don't want to have PG_PARSE_SQL_TEXT,\n>>> PG_ANALYZE_SQL_TEXT, PG_EXECUTION_SQL_TEXT ...\n>>>\n>>> The idea is good, and you found the case, where it has benefits for\n>>> users. Naming is hard.\n>>>\n>>>\n>> Thanks for your time and suggestions Pavel.\n>> I updated the patch as per the suggestions, and attached it here for\n>> further inputs.\n>>\n>> Regards,\n>> Dinesh Kumar\n>>\n>>\n>>\n>>> Regards\n>>>\n>>> Pavel\n>>>\n>>>\n>>>> Regards,\n>>>> Dinesh Kumar\n>>>>\n>>>>\n>>>>>\n>>>>>\n>>>>>> Regards,\n>>>>>> Dinesh Kumar\n>>>>>>\n>>>>>",
"msg_date": "Tue, 24 Aug 2021 11:46:33 +0530",
"msg_from": "Dinesh Chemuduru <dinesh.kumar@migops.com>",
"msg_from_op": true,
"msg_subject": "Re: [PROPOSAL] new diagnostic items for the dynamic sql"
},
{
"msg_contents": "út 24. 8. 2021 v 8:16 odesílatel Dinesh Chemuduru <dinesh.kumar@migops.com>\nnapsal:\n\n> Hi Pavel,\n>\n> On Tue, 24 Aug 2021 at 00:19, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n>\n>> Hi\n>>\n>> pá 20. 8. 2021 v 10:24 odesílatel Dinesh Chemuduru <\n>> dinesh.kumar@migops.com> napsal:\n>>\n>>> On Sun, 25 Jul 2021 at 16:34, Pavel Stehule <pavel.stehule@gmail.com>\n>>> wrote:\n>>>\n>>\n>> please, can you register this patch to commitfest app\n>>\n>> https://commitfest.postgresql.org/34/\n>>\n>> This patch is registered\n>\n> https://commitfest.postgresql.org/34/3258/\n>\n\nok, I looked it over.\n\nRegards\n\nPavel\n\n\n>\n>> Regards\n>>\n>> Pavel\n>>\n>>>\n>>>>\n>>>> ne 25. 7. 2021 v 12:52 odesílatel Dinesh Chemuduru <\n>>>> dinesh.kumar@migops.com> napsal:\n>>>>\n>>>>> On Sat, 17 Jul 2021 at 01:29, Pavel Stehule <pavel.stehule@gmail.com>\n>>>>> wrote:\n>>>>>\n>>>>>> Hi\n>>>>>>\n>>>>>> pá 16. 7. 2021 v 21:47 odesílatel Dinesh Chemuduru <\n>>>>>> dinesh.kumar@migops.com> napsal:\n>>>>>>\n>>>>>>> Hi Everyone,\n>>>>>>>\n>>>>>>> We would like to propose the below 2 new plpgsql diagnostic items,\n>>>>>>> related to parsing. Because, the current diag items are not providing\n>>>>>>> the useful diagnostics about the dynamic SQL statements.\n>>>>>>>\n>>>>>>> 1. PG_PARSE_SQL_STATEMENT (returns parse failed sql statement)\n>>>>>>> 2. 
PG_PARSE_SQL_STATEMENT_POSITION (returns parse failed sql text\n>>>>>>> cursor position)\n>>>>>>>\n>>>>>>> Consider the below example, which is an invalid SQL statement.\n>>>>>>>\n>>>>>>> postgres=# SELECT 1 JOIN SELECT 2;\n>>>>>>> ERROR: syntax error at or near \"JOIN\"\n>>>>>>> LINE 1: SELECT 1 JOIN SELECT 2;\n>>>>>>> ^\n>>>>>>> Here, there is a syntax error at JOIN clause,\n>>>>>>> and also we are getting the syntax error position(^ symbol, the\n>>>>>>> position of JOIN clause).\n>>>>>>> This will be helpful, while dealing with long queries.\n>>>>>>>\n>>>>>>> Now, if we run the same statement as a dynamic SQL(by using EXECUTE\n>>>>>>> <sql statement>),\n>>>>>>> then it seems we are not getting the text cursor position,\n>>>>>>> and the SQL statement which is failing at parse level.\n>>>>>>>\n>>>>>>> Please find the below example.\n>>>>>>>\n>>>>>>> postgres=# SELECT exec_me('SELECT 1 JOIN SELECT 2');\n>>>>>>> NOTICE: RETURNED_SQLSTATE 42601\n>>>>>>> NOTICE: COLUMN_NAME\n>>>>>>> NOTICE: CONSTRAINT_NAME\n>>>>>>> NOTICE: PG_DATATYPE_NAME\n>>>>>>> NOTICE: MESSAGE_TEXT syntax error at or near \"JOIN\"\n>>>>>>> NOTICE: TABLE_NAME\n>>>>>>> NOTICE: SCHEMA_NAME\n>>>>>>> NOTICE: PG_EXCEPTION_DETAIL\n>>>>>>> NOTICE: PG_EXCEPTION_HINT\n>>>>>>> NOTICE: PG_EXCEPTION_CONTEXT PL/pgSQL function exec_me(text) line\n>>>>>>> 18 at EXECUTE\n>>>>>>> NOTICE: PG_CONTEXT PL/pgSQL function exec_me(text) line 21 at GET\n>>>>>>> STACKED DIAGNOSTICS\n>>>>>>> exec_me\n>>>>>>> ---------\n>>>>>>>\n>>>>>>> (1 row)\n>>>>>>>\n>>>>>>> From the above results, by using all the existing diag items, we are\n>>>>>>> unable to get the position of \"JOIN\" in the submitted SQL statement.\n>>>>>>> By using these proposed diag items, we will be getting the required\n>>>>>>> information,\n>>>>>>> which will be helpful while running long SQL statements as dynamic\n>>>>>>> SQL statements.\n>>>>>>>\n>>>>>>> Please find the below example.\n>>>>>>>\n>>>>>>> postgres=# SELECT exec_me('SELECT 1 JOIN SELECT 
2');\n>>>>>>> NOTICE: PG_PARSE_SQL_STATEMENT SELECT 1 JOIN SELECT 2\n>>>>>>> NOTICE: PG_PARSE_SQL_STATEMENT_POSITION 10\n>>>>>>> exec_me\n>>>>>>> ---------\n>>>>>>>\n>>>>>>> (1 row)\n>>>>>>>\n>>>>>>> From the above results, by using these diag items,\n>>>>>>> we are able to get what is failing and it's position as well.\n>>>>>>> This information will be much helpful to debug the issue,\n>>>>>>> while a long running SQL statement is running as a dynamic SQL\n>>>>>>> statement.\n>>>>>>>\n>>>>>>> We are attaching the patch for this proposal, and will be looking\n>>>>>>> for your inputs.\n>>>>>>>\n>>>>>>\n>>>>>> +1 It is good idea. I am not sure if the used names are good. I\n>>>>>> propose\n>>>>>>\n>>>>>> PG_SQL_TEXT and PG_ERROR_LOCATION\n>>>>>>\n>>>>>> Regards\n>>>>>>\n>>>>>> Pavel\n>>>>>>\n>>>>>>\n>>>>> Thanks Pavel,\n>>>>>\n>>>>> Sorry for the late reply.\n>>>>>\n>>>>> The proposed diag items are `PG_SQL_TEXT`, `PG_ERROR_LOCATION` are\n>>>>> much better and generic.\n>>>>>\n>>>>> But, as we are only dealing with the parsing failure, I thought of\n>>>>> adding that to the diag name.\n>>>>>\n>>>>\n>>>> I understand. But parsing is only one case - and these variables can be\n>>>> used for any case. Sure, we don't want to have PG_PARSE_SQL_TEXT,\n>>>> PG_ANALYZE_SQL_TEXT, PG_EXECUTION_SQL_TEXT ...\n>>>>\n>>>> The idea is good, and you found the case, where it has benefits for\n>>>> users. Naming is hard.\n>>>>\n>>>>\n>>> Thanks for your time and suggestions Pavel.\n>>> I updated the patch as per the suggestions, and attached it here for\n>>> further inputs.\n>>>\n>>> Regards,\n>>> Dinesh Kumar\n>>>\n>>>\n>>>\n>>>> Regards\n>>>>\n>>>> Pavel\n>>>>\n>>>>\n>>>>> Regards,\n>>>>> Dinesh Kumar\n>>>>>\n>>>>>\n>>>>>>\n>>>>>>\n>>>>>>> Regards,\n>>>>>>> Dinesh Kumar\n>>>>>>>\n>>>>>>",
"msg_date": "Tue, 24 Aug 2021 08:21:39 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PROPOSAL] new diagnostic items for the dynamic sql"
},
{
"msg_contents": "Hi\n\nI tested the last patch, and I think I found unwanted behavior.\n\nThe value of PG_SQL_TEXT is not empty only when the error is related to the\nparser stage. When the error is raised in the query evaluation stage, then\nthe value is empty.\nI think this is too confusing. PL/pgSQL is a high level language, and the\nbehaviour should be consistent independent of internal implementation. I am\nafraid this feature requires much more work.\n\npostgres=# DO $$\nDECLARE\n err_sql_stmt TEXT;\n err_sql_pos INT;\nBEGIN\n EXECUTE 'SELECT 1/0';\nEXCEPTION\n WHEN OTHERS THEN\n GET STACKED DIAGNOSTICS\n err_sql_stmt = PG_SQL_TEXT,\n err_sql_pos = PG_ERROR_LOCATION;\n RAISE NOTICE 'exception sql \"%\"', err_sql_stmt;\n RAISE NOTICE 'exception sql position %', err_sql_pos;\nEND;\n$$;\nNOTICE: exception sql \"\"\nNOTICE: exception sql position 0\nDO\n\nFor this case, the empty result is not acceptable in this language. It is\ntoo confusing. The implemented behaviour is well described in regress\ntests, but I don't think it is user (developer) friendly. The location\nfield is not important, and can be 0 some times. But query text should be\nnot empty in all possible cases related to any query evaluation. I think\nthis can be a nice and useful feature, but the behavior should be\nconsistent.\n\nSecond, but minor, objection to this patch is zero formatting in a regress\ntest.\n\nRegards\n\nPavel",
"msg_date": "Thu, 9 Sep 2021 07:37:06 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PROPOSAL] new diagnostic items for the dynamic sql"
},
{
"msg_contents": "On Thu, 9 Sept 2021 at 11:07, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n\n> Hi\n>\n> I tested the last patch, and I think I found unwanted behavior.\n>\n> The value of PG_SQL_TEXT is not empty only when the error is related to\n> the parser stage. When the error is raised in the query evaluation stage,\n> then the value is empty.\n> I think this is too confusing. PL/pgSQL is a high level language, and the\n> behaviour should be consistent independent of internal implementation. I am\n> afraid this feature requires much more work.\n>\n> postgres=# DO $$\n> DECLARE\n> err_sql_stmt TEXT;\n> err_sql_pos INT;\n> BEGIN\n> EXECUTE 'SELECT 1/0';\n> EXCEPTION\n> WHEN OTHERS THEN\n> GET STACKED DIAGNOSTICS\n> err_sql_stmt = PG_SQL_TEXT,\n> err_sql_pos = PG_ERROR_LOCATION;\n> RAISE NOTICE 'exception sql \"%\"', err_sql_stmt;\n> RAISE NOTICE 'exception sql position %', err_sql_pos;\n> END;\n> $$;\n> NOTICE: exception sql \"\"\n> NOTICE: exception sql position 0\n> DO\n>\n> For this case, the empty result is not acceptable in this language. It is\n> too confusing. The implemented behaviour is well described in regress\n> tests, but I don't think it is user (developer) friendly. The location\n> field is not important, and can be 0 some times. But query text should be\n> not empty in all possible cases related to any query evaluation. 
I think\n> this can be a nice and useful feature, but the behavior should be\n> consistent.\n>\n> Thanks for your time in evaluating this patch.\nLet me try to fix the suggested case(I tried to fix this case in the past,\nbut this time I will try to spend more time on this), and will submit a new\npatch.\n\n\n\n> Second, but minor, objection to this patch is zero formatting in a regress\n> test.\n>\n> Regards\n>\n> Pavel\n>\n>",
"msg_date": "Thu, 9 Sep 2021 11:53:26 +0530",
"msg_from": "Dinesh Chemuduru <dinesh.kumar@migops.com>",
"msg_from_op": true,
"msg_subject": "Re: [PROPOSAL] new diagnostic items for the dynamic sql"
},
{
"msg_contents": "čt 9. 9. 2021 v 8:23 odesílatel Dinesh Chemuduru <dinesh.kumar@migops.com>\nnapsal:\n\n>\n>\n> On Thu, 9 Sept 2021 at 11:07, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n>\n>> Hi\n>>\n>> I tested the last patch, and I think I found unwanted behavior.\n>>\n>> The value of PG_SQL_TEXT is not empty only when the error is related to\n>> the parser stage. When the error is raised in the query evaluation stage,\n>> then the value is empty.\n>> I think this is too confusing. PL/pgSQL is a high level language, and the\n>> behaviour should be consistent independent of internal implementation. I am\n>> afraid this feature requires much more work.\n>>\n>> postgres=# DO $$\n>> DECLARE\n>> err_sql_stmt TEXT;\n>> err_sql_pos INT;\n>> BEGIN\n>> EXECUTE 'SELECT 1/0';\n>> EXCEPTION\n>> WHEN OTHERS THEN\n>> GET STACKED DIAGNOSTICS\n>> err_sql_stmt = PG_SQL_TEXT,\n>> err_sql_pos = PG_ERROR_LOCATION;\n>> RAISE NOTICE 'exception sql \"%\"', err_sql_stmt;\n>> RAISE NOTICE 'exception sql position %', err_sql_pos;\n>> END;\n>> $$;\n>> NOTICE: exception sql \"\"\n>> NOTICE: exception sql position 0\n>> DO\n>>\n>> For this case, the empty result is not acceptable in this language. It is\n>> too confusing. The implemented behaviour is well described in regress\n>> tests, but I don't think it is user (developer) friendly. The location\n>> field is not important, and can be 0 some times. But query text should be\n>> not empty in all possible cases related to any query evaluation. I think\n>> this can be a nice and useful feature, but the behavior should be\n>> consistent.\n>>\n>> Thanks for your time in evaluating this patch.\n> Let me try to fix the suggested case(I tried to fix this case in the past,\n> but this time I will try to spend more time on this), and will submit a new\n> patch.\n>\n\nsure\n\nPavel\n\n\n>\n>\n>> Second, but minor, objection to this patch is zero formatting in a\n>> regress test.\n>>\n>> Regards\n>>\n>> Pavel\n>>\n>>",
"msg_date": "Thu, 9 Sep 2021 09:39:41 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PROPOSAL] new diagnostic items for the dynamic sql"
},
{
"msg_contents": "> On 9 Sep 2021, at 08:23, Dinesh Chemuduru <dinesh.kumar@migops.com> wrote:\n\n> Let me try to fix the suggested case(I tried to fix this case in the past, but this time I will try to spend more time on this), and will submit a new patch.\n\nThis CF entry is marked Waiting on Author, have you had the chance to prepare a\nnew version of the patch addressing the comments from Pavel?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Thu, 4 Nov 2021 13:09:59 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [PROPOSAL] new diagnostic items for the dynamic sql"
},
{
"msg_contents": "Hi Daniel,\n\nThank you for your follow up, and attaching a new patch which addresses\nPavel's comments.\nLet me know If I miss anything here.\n\n\nOn Thu, 4 Nov 2021 at 17:40, Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> > On 9 Sep 2021, at 08:23, Dinesh Chemuduru <dinesh.kumar@migops.com>\n> wrote:\n>\n> > Let me try to fix the suggested case(I tried to fix this case in the\n> past, but this time I will try to spend more time on this), and will submit\n> a new patch.\n>\n> This CF entry is marked Waiting on Author, have you had the chance to\n> prepare a\n> new version of the patch addressing the comments from Pavel?\n>\n> --\n> Daniel Gustafsson https://vmware.com/\n>\n>",
"msg_date": "Fri, 5 Nov 2021 23:57:29 +0530",
"msg_from": "Dinesh Chemuduru <dinesh.kumar@migops.com>",
"msg_from_op": true,
"msg_subject": "Re: [PROPOSAL] new diagnostic items for the dynamic sql"
},
{
"msg_contents": "Hi\n\npá 5. 11. 2021 v 19:27 odesílatel Dinesh Chemuduru <dinesh.kumar@migops.com>\nnapsal:\n\n> Hi Daniel,\n>\n> Thank you for your follow up, and attaching a new patch which addresses\n> Pavel's comments.\n> Let me know If I miss anything here.\n>\n>\n> On Thu, 4 Nov 2021 at 17:40, Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n>> > On 9 Sep 2021, at 08:23, Dinesh Chemuduru <dinesh.kumar@migops.com>\n>> wrote:\n>>\n>> > Let me try to fix the suggested case(I tried to fix this case in the\n>> past, but this time I will try to spend more time on this), and will submit\n>> a new patch.\n>>\n>> This CF entry is marked Waiting on Author, have you had the chance to\n>> prepare a\n>> new version of the patch addressing the comments from Pavel?\n>>\n>\nI think the functionality is correct. I am not sure about implementation\n\n1. Is it necessary to introduce a new field \"current_query\"? Cannot be used\n\"internalquery\" instead? Introducing a new field opens some questions -\nwhat is difference between internalquery and current_query, and where and\nwhen have to be used first and when second? ErrorData is a fundamental\ngeneric structure for Postgres, and can be confusing to enhance it by one\nfield just for one purpose, that is not used across Postgres.\n\n2. The name set_current_err_query is not consistent with names in elog.c -\nprobably something like errquery or set_errquery or set_errcurrent_query\ncan be more consistent with other names.\n\nRegards\n\nPavel\n\n\n\n>> --\n>> Daniel Gustafsson https://vmware.com/\n>>\n>>",
"msg_date": "Sun, 7 Nov 2021 08:23:01 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PROPOSAL] new diagnostic items for the dynamic sql"
},
{
"msg_contents": "Hi Pavel,\n\nOn Sun, 7 Nov 2021 at 12:53, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n\n> Hi\n>\n> pá 5. 11. 2021 v 19:27 odesílatel Dinesh Chemuduru <\n> dinesh.kumar@migops.com> napsal:\n>\n>> Hi Daniel,\n>>\n>> Thank you for your follow up, and attaching a new patch which addresses\n>> Pavel's comments.\n>> Let me know If I miss anything here.\n>>\n>>\n>> On Thu, 4 Nov 2021 at 17:40, Daniel Gustafsson <daniel@yesql.se> wrote:\n>>\n>>> > On 9 Sep 2021, at 08:23, Dinesh Chemuduru <dinesh.kumar@migops.com>\n>>> wrote:\n>>>\n>>> > Let me try to fix the suggested case(I tried to fix this case in the\n>>> past, but this time I will try to spend more time on this), and will submit\n>>> a new patch.\n>>>\n>>> This CF entry is marked Waiting on Author, have you had the chance to\n>>> prepare a\n>>> new version of the patch addressing the comments from Pavel?\n>>>\n>>\n> I think the functionality is correct. I am not sure about implementation\n>\n>\nThank you for your time in validating this patch.\n\n\n> 1. Is it necessary to introduce a new field \"current_query\"? Cannot be\n> used \"internalquery\" instead? Introducing a new field opens some questions\n> - what is difference between internalquery and current_query, and where and\n> when have to be used first and when second? 
ErrorData is a fundamental\n> generic structure for Postgres, and can be confusing to enhance it by one\n> field just for one purpose, that is not used across Postgres.\n>\n> Internalquery is the one, which was generated by another command.\nFor example, \"DROP <OBJECT> CASCADE\"(current_query) will generate many\ninternal query statements for each of the dependent objects.\n\nAt this moment, we do save the current_query in PG_CONTEXT.\nAs we know, PG_CONTEXT returns the whole statements as stacktrace and it's\nhard to fetch the required SQL from it.\n\nI failed to see another way to access the current_query apart from the\nPG_CONTEXT.\nHence, we have introduced the new member \"current_query\" to the \"ErrorData\"\nobject.\n\nWe tested the internalquery for this purpose, but there are few(10+ unit\ntest) test cases that failed.\nThis is also another reason we added this new member to the \"ErrorData\",\nand with the current patch all test cases are successful,\nand we are not disturbing the existing functionality.\n\n\n> 2. The name set_current_err_query is not consistent with names in elog.c -\n> probably something like errquery or set_errquery or set_errcurrent_query\n> can be more consistent with other names.\n>\n> Updated as per your suggestion\n\nPlease find the new patch version.\n\n\n> Regards\n>\n> Pavel\n>\n>\n>\n>>> --\n>>> Daniel Gustafsson https://vmware.com/\n>>>\n>>>",
"msg_date": "Sun, 7 Nov 2021 18:53:19 +0530",
"msg_from": "Dinesh Chemuduru <dinesh.kumar@migops.com>",
"msg_from_op": true,
"msg_subject": "Re: [PROPOSAL] new diagnostic items for the dynamic sql"
},
{
"msg_contents": "On Sun, Nov 7, 2021 at 5:23 AM Dinesh Chemuduru <dinesh.kumar@migops.com>\nwrote:\n\n> Hi Pavel,\n>\n> On Sun, 7 Nov 2021 at 12:53, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n>\n>> Hi\n>>\n>> pá 5. 11. 2021 v 19:27 odesílatel Dinesh Chemuduru <\n>> dinesh.kumar@migops.com> napsal:\n>>\n>>> Hi Daniel,\n>>>\n>>> Thank you for your follow up, and attaching a new patch which addresses\n>>> Pavel's comments.\n>>> Let me know If I miss anything here.\n>>>\n>>>\n>>> On Thu, 4 Nov 2021 at 17:40, Daniel Gustafsson <daniel@yesql.se> wrote:\n>>>\n>>>> > On 9 Sep 2021, at 08:23, Dinesh Chemuduru <dinesh.kumar@migops.com>\n>>>> wrote:\n>>>>\n>>>> > Let me try to fix the suggested case(I tried to fix this case in the\n>>>> past, but this time I will try to spend more time on this), and will submit\n>>>> a new patch.\n>>>>\n>>>> This CF entry is marked Waiting on Author, have you had the chance to\n>>>> prepare a\n>>>> new version of the patch addressing the comments from Pavel?\n>>>>\n>>>\n>> I think the functionality is correct. I am not sure about implementation\n>>\n>>\n> Thank you for your time in validating this patch.\n>\n>\n>> 1. Is it necessary to introduce a new field \"current_query\"? Cannot be\n>> used \"internalquery\" instead? Introducing a new field opens some questions\n>> - what is difference between internalquery and current_query, and where and\n>> when have to be used first and when second? 
ErrorData is a fundamental\n>> generic structure for Postgres, and can be confusing to enhance it by one\n>> field just for one purpose, that is not used across Postgres.\n>>\n>> Internalquery is the one, which was generated by another command.\n> For example, \"DROP <OBJECT> CASCADE\"(current_query) will generate many\n> internal query statements for each of the dependent objects.\n>\n> At this moment, we do save the current_query in PG_CONTEXT.\n> As we know, PG_CONTEXT returns the whole statements as stacktrace and it's\n> hard to fetch the required SQL from it.\n>\n> I failed to see another way to access the current_query apart from the\n> PG_CONTEXT.\n> Hence, we have introduced the new member \"current_query\" to the\n> \"ErrorData\" object.\n>\n> We tested the internalquery for this purpose, but there are few(10+ unit\n> test) test cases that failed.\n> This is also another reason we added this new member to the \"ErrorData\",\n> and with the current patch all test cases are successful,\n> and we are not disturbing the existing functionality.\n>\n>\n>> 2. 
The name set_current_err_query is not consistent with names in elog.c\n>> - probably something like errquery or set_errquery or set_errcurrent_query\n>> can be more consistent with other names.\n>>\n>> Updated as per your suggestion\n>\n> Please find the new patch version.\n>\n>\n>> Regards\n>>\n>> Pavel\n>>\n>>\n>>\n>>>> --\n>>>> Daniel Gustafsson https://vmware.com/\n>>>>\n>>>> Hi,\n\n+set_errcurrent_query (const char *query)\n\nYou can remove the space prior to (.\nI wonder if the new field can be named current_err_query because that's\nwhat the setter implies.\ncurrent_query may give the impression that the field can store normal query\n(which doesn't cause exception).\nThe following code implies that only one of internalquery and current_query\nwould be set.\n\n+ if (estate->cur_error->internalquery)\n+ exec_assign_c_string(estate, var,\nestate->cur_error->internalquery);\n+ else\n+ exec_assign_c_string(estate, var,\nestate->cur_error->current_query);\n\nCheers",
"msg_date": "Sun, 7 Nov 2021 06:48:24 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: [PROPOSAL] new diagnostic items for the dynamic sql"
},
{
"msg_contents": ">\n>\n>\n> +set_errcurrent_query (const char *query)\n>\n> You can remove the space prior to (.\n> I wonder if the new field can be named current_err_query because that's\n> what the setter implies.\n> current_query may give the impression that the field can store normal\n> query (which doesn't cause exception).\n> The following code implies that only one of internalquery and\n> current_query would be set.\n>\n\nyes, I think so current_query is not a good name too. Maybe query can be\ngood enough - all in ErrorData is related to error\n\n\n\n> + if (estate->cur_error->internalquery)\n> + exec_assign_c_string(estate, var,\n> estate->cur_error->internalquery);\n> + else\n> + exec_assign_c_string(estate, var,\n> estate->cur_error->current_query);\n>\n> Cheers\n>",
"msg_date": "Mon, 8 Nov 2021 05:07:03 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PROPOSAL] new diagnostic items for the dynamic sql"
},
{
"msg_contents": "po 8. 11. 2021 v 5:07 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n>\n>>\n>> +set_errcurrent_query (const char *query)\n>>\n>> You can remove the space prior to (.\n>> I wonder if the new field can be named current_err_query because that's\n>> what the setter implies.\n>> current_query may give the impression that the field can store normal\n>> query (which doesn't cause exception).\n>> The following code implies that only one of internalquery and\n>> current_query would be set.\n>>\n>\n> yes, I think so current_query is not a good name too. Maybe query can be\n> good enough - all in ErrorData is related to error\n>\n\nso the name of field can be query, and routine for setting errquery or\nset_errquery\n\n\n>\n>\n>> + if (estate->cur_error->internalquery)\n>> + exec_assign_c_string(estate, var,\n>> estate->cur_error->internalquery);\n>> + else\n>> + exec_assign_c_string(estate, var,\n>> estate->cur_error->current_query);\n>>\n>> Cheers\n>>\n>",
"msg_date": "Mon, 8 Nov 2021 05:24:24 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PROPOSAL] new diagnostic items for the dynamic sql"
},
{
"msg_contents": "po 8. 11. 2021 v 5:24 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n>\n>\n> po 8. 11. 2021 v 5:07 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\n> napsal:\n>\n>>\n>>>\n>>> +set_errcurrent_query (const char *query)\n>>>\n>>> You can remove the space prior to (.\n>>> I wonder if the new field can be named current_err_query because that's\n>>> what the setter implies.\n>>> current_query may give the impression that the field can store normal\n>>> query (which doesn't cause exception).\n>>> The following code implies that only one of internalquery and\n>>> current_query would be set.\n>>>\n>>\n>> yes, I think so current_query is not a good name too. Maybe query can be\n>> good enough - all in ErrorData is related to error\n>>\n>\n> so the name of field can be query, and routine for setting errquery or\n> set_errquery\n>\n\nand this part is not correct\n\n<--><-->switch (carg->mode)\n<--><-->{\n<--><--><-->case RAW_PARSE_PLPGSQL_EXPR:\n<--><--><--><-->errcontext(\"SQL expression \\\"%s\\\"\", query);\n<--><--><--><-->break;\n<--><--><-->case RAW_PARSE_PLPGSQL_ASSIGN1:\n<--><--><-->case RAW_PARSE_PLPGSQL_ASSIGN2:\n<--><--><-->case RAW_PARSE_PLPGSQL_ASSIGN3:\n<--><--><--><-->errcontext(\"PL/pgSQL assignment \\\"%s\\\"\", query);\n<--><--><--><-->break;\n<--><--><-->default:\n<--><--><--><-->set_errcurrent_query(query);\n<--><--><--><-->errcontext(\"SQL statement \\\"%s\\\"\", query);\n<--><--><--><-->break;\n<--><-->}\n<-->}\n\nset_errcurrent_query should be outside the switch\n\nWe want PG_SQL_TEXT for assign statements too\n\n_t := (select ...);\n\nRegards\n\nPavel",
"msg_date": "Mon, 8 Nov 2021 05:33:04 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PROPOSAL] new diagnostic items for the dynamic sql"
},
{
"msg_contents": "Thanks Zhihong/Pavel,\n\nOn Mon, 8 Nov 2021 at 10:03, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n\n>\n>\n> po 8. 11. 2021 v 5:24 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\n> napsal:\n>\n>>\n>>\n>> po 8. 11. 2021 v 5:07 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\n>> napsal:\n>>\n>>>\n>>>>\n>>>> +set_errcurrent_query (const char *query)\n>>>>\n>>>> You can remove the space prior to (.\n>>>> I wonder if the new field can be named current_err_query because that's\n>>>> what the setter implies.\n>>>> current_query may give the impression that the field can store normal\n>>>> query (which doesn't cause exception).\n>>>> The following code implies that only one of internalquery and\n>>>> current_query would be set.\n>>>>\n>>>\n>>> yes, I think so current_query is not a good name too. Maybe query can be\n>>> good enough - all in ErrorData is related to error\n>>>\n>>\n>> so the name of field can be query, and routine for setting errquery or\n>> set_errquery\n>>\n>\n> and this part is not correct\n>\n> <--><-->switch (carg->mode)\n> <--><-->{\n> <--><--><-->case RAW_PARSE_PLPGSQL_EXPR:\n> <--><--><--><-->errcontext(\"SQL expression \\\"%s\\\"\", query);\n> <--><--><--><-->break;\n> <--><--><-->case RAW_PARSE_PLPGSQL_ASSIGN1:\n> <--><--><-->case RAW_PARSE_PLPGSQL_ASSIGN2:\n> <--><--><-->case RAW_PARSE_PLPGSQL_ASSIGN3:\n> <--><--><--><-->errcontext(\"PL/pgSQL assignment \\\"%s\\\"\", query);\n> <--><--><--><-->break;\n> <--><--><-->default:\n> <--><--><--><-->set_errcurrent_query(query);\n> <--><--><--><-->errcontext(\"SQL statement \\\"%s\\\"\", query);\n> <--><--><--><-->break;\n> <--><-->}\n> <-->}\n>\n> set_errcurrent_query should be outside the switch\n>\n> We want PG_SQL_TEXT for assign statements too\n>\n> _t := (select ...);\n>\n> Please find the new patch, which has the suggested changes.\n\n\n\n> Regards\n>\n> Pavel\n>",
"msg_date": "Mon, 8 Nov 2021 14:26:59 +0530",
"msg_from": "Dinesh Chemuduru <dinesh.kumar@migops.com>",
"msg_from_op": true,
"msg_subject": "Re: [PROPOSAL] new diagnostic items for the dynamic sql"
},
{
"msg_contents": "Hi\n\npo 8. 11. 2021 v 9:57 odesílatel Dinesh Chemuduru <dinesh.kumar@migops.com>\nnapsal:\n\n> Thanks Zhihong/Pavel,\n>\n> On Mon, 8 Nov 2021 at 10:03, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n>\n>>\n>>\n>> po 8. 11. 2021 v 5:24 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\n>> napsal:\n>>\n>>>\n>>>\n>>> po 8. 11. 2021 v 5:07 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\n>>> napsal:\n>>>\n>>>>\n>>>>>\n>>>>> +set_errcurrent_query (const char *query)\n>>>>>\n>>>>> You can remove the space prior to (.\n>>>>> I wonder if the new field can be named current_err_query because\n>>>>> that's what the setter implies.\n>>>>> current_query may give the impression that the field can store normal\n>>>>> query (which doesn't cause exception).\n>>>>> The following code implies that only one of internalquery and\n>>>>> current_query would be set.\n>>>>>\n>>>>\n>>>> yes, I think so current_query is not a good name too. Maybe query can\n>>>> be good enough - all in ErrorData is related to error\n>>>>\n>>>\n>>> so the name of field can be query, and routine for setting errquery or\n>>> set_errquery\n>>>\n>>\n>> and this part is not correct\n>>\n>> <--><-->switch (carg->mode)\n>> <--><-->{\n>> <--><--><-->case RAW_PARSE_PLPGSQL_EXPR:\n>> <--><--><--><-->errcontext(\"SQL expression \\\"%s\\\"\", query);\n>> <--><--><--><-->break;\n>> <--><--><-->case RAW_PARSE_PLPGSQL_ASSIGN1:\n>> <--><--><-->case RAW_PARSE_PLPGSQL_ASSIGN2:\n>> <--><--><-->case RAW_PARSE_PLPGSQL_ASSIGN3:\n>> <--><--><--><-->errcontext(\"PL/pgSQL assignment \\\"%s\\\"\", query);\n>> <--><--><--><-->break;\n>> <--><--><-->default:\n>> <--><--><--><-->set_errcurrent_query(query);\n>> <--><--><--><-->errcontext(\"SQL statement \\\"%s\\\"\", query);\n>> <--><--><--><-->break;\n>> <--><-->}\n>> <-->}\n>>\n>> set_errcurrent_query should be outside the switch\n>>\n>> We want PG_SQL_TEXT for assign statements too\n>>\n>> _t := (select ...);\n>>\n>> Please find the new patch, which has the 
suggested changes.\n>\n\nNow, I have only minor objection\n+ <row>\n+ <entry><literal>PG_SQL_TEXT</literal></entry>\n+ <entry><type>text</type></entry>\n+ <entry>invalid sql statement, if any</entry>\n+ </row>\n+ <row>\n+ <entry><literal>PG_ERROR_LOCATION</literal></entry>\n+ <entry><type>text</type></entry>\n+ <entry>invalid dynamic sql statement's text cursor position, if\nany</entry>\n+ </row>\n\nI think so an these text should be little bit enhanced\n\n\n\"the text of query or command of invalid sql statement (dynamic or\nembedded)\"\n\nand\n\n\"the location of error of invalid dynamic text, if any\"\n\nI am not a native speaker, so I am sure my proposal can be enhanced too.\n\nI have not any other objections\n\nall tests passed without any problem\n\nRegards\n\nPavel\n\n\n\n>\n>\n>> Regards\n>>\n>> Pavel\n>>\n>",
"msg_date": "Tue, 9 Nov 2021 09:01:46 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PROPOSAL] new diagnostic items for the dynamic sql"
},
{
"msg_contents": "Thanks for your time Pavel\n\n> On 09-Nov-2021, at 13:32, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> \n> \n> Hi\n> \n> po 8. 11. 2021 v 9:57 odesílatel Dinesh Chemuduru <dinesh.kumar@migops.com> napsal:\n>> Thanks Zhihong/Pavel,\n>> \n>>> On Mon, 8 Nov 2021 at 10:03, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>>> \n>>> \n>>> po 8. 11. 2021 v 5:24 odesílatel Pavel Stehule <pavel.stehule@gmail.com> napsal:\n>>>> \n>>>> \n>>>> po 8. 11. 2021 v 5:07 odesílatel Pavel Stehule <pavel.stehule@gmail.com> napsal:\n>>>>>> \n>>>>>> \n>>>>>> +set_errcurrent_query (const char *query)\n>>>>>> \n>>>>>> You can remove the space prior to (.\n>>>>>> I wonder if the new field can be named current_err_query because that's what the setter implies.\n>>>>>> current_query may give the impression that the field can store normal query (which doesn't cause exception).\n>>>>>> The following code implies that only one of internalquery and current_query would be set.\n>>>>> \n>>>>> yes, I think so current_query is not a good name too. 
Maybe query can be good enough - all in ErrorData is related to error\n>>>> \n>>>> so the name of field can be query, and routine for setting errquery or set_errquery\n>>> \n>>> and this part is not correct\n>>> \n>>> <--><-->switch (carg->mode)\n>>> <--><-->{\n>>> <--><--><-->case RAW_PARSE_PLPGSQL_EXPR:\n>>> <--><--><--><-->errcontext(\"SQL expression \\\"%s\\\"\", query);\n>>> <--><--><--><-->break;\n>>> <--><--><-->case RAW_PARSE_PLPGSQL_ASSIGN1:\n>>> <--><--><-->case RAW_PARSE_PLPGSQL_ASSIGN2:\n>>> <--><--><-->case RAW_PARSE_PLPGSQL_ASSIGN3:\n>>> <--><--><--><-->errcontext(\"PL/pgSQL assignment \\\"%s\\\"\", query);\n>>> <--><--><--><-->break;\n>>> <--><--><-->default:\n>>> <--><--><--><-->set_errcurrent_query(query);\n>>> <--><--><--><-->errcontext(\"SQL statement \\\"%s\\\"\", query);\n>>> <--><--><--><-->break;\n>>> <--><-->}\n>>> <-->}\n>>> \n>>> set_errcurrent_query should be outside the switch \n>>> \n>>> We want PG_SQL_TEXT for assign statements too\n>>> \n>>> _t := (select ...);\n>>> \n>> Please find the new patch, which has the suggested changes.\n> \n> Now, I have only minor objection\n> + <row>\n> + <entry><literal>PG_SQL_TEXT</literal></entry>\n> + <entry><type>text</type></entry>\n> + <entry>invalid sql statement, if any</entry>\n> + </row>\n> + <row>\n> + <entry><literal>PG_ERROR_LOCATION</literal></entry>\n> + <entry><type>text</type></entry>\n> + <entry>invalid dynamic sql statement's text cursor position, if any</entry>\n> + </row>\n> \n> I think so an these text should be little bit enhanced \n> \n> \"the text of query or command of invalid sql statement (dynamic or embedded)\"\n> \n> and\n> \n> \"the location of error of invalid dynamic text, if any\"\n> \n> I am not a native speaker, so I am sure my proposal can be enhanced too.\n\nThe proposed statements are much clear, but will wait for other’s suggestion, and will fix it accordingly.\n\n> I have not any other objections\n> \n> all tests passed without any problem\n> \n> Regards\n> \n> 
Pavel\n> \n> \n>> \n>> \n>>> Regards\n>>> \n>>> Pavel",
"msg_date": "Wed, 10 Nov 2021 13:58:58 +0530",
"msg_from": "Dinesh Chemuduru <dinesh.kumar@migops.com>",
"msg_from_op": true,
"msg_subject": "Re: [PROPOSAL] new diagnostic items for the dynamic sql"
},
{
"msg_contents": "On Wed, Nov 10, 2021 at 01:58:58PM +0530, Dinesh Chemuduru wrote:\n> The proposed statements are much clear, but will wait for other’s\n> suggestion, and will fix it accordingly.\n\nThis update was three weeks ago, and no new version has been\nprovided, so I am marking this as returned with feedback in the CF\napp. If you can work more on this proposal and send an updated patch,\nplease feel free to resubmit.\n--\nMichael",
"msg_date": "Fri, 3 Dec 2021 16:39:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PROPOSAL] new diagnostic items for the dynamic sql"
},
{
"msg_contents": "Hi Michael,\n\nAttaching the latest patch here(It's the recent patch), and looking for\nmore suggestions/inputs from the team.\n\nOn Fri, 3 Dec 2021 at 13:09, Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Wed, Nov 10, 2021 at 01:58:58PM +0530, Dinesh Chemuduru wrote:\n> > The proposed statements are much clear, but will wait for other’s\n> > suggestion, and will fix it accordingly.\n>\n> This update was three weeks ago, and no new version has been\n> provided, so I am marking this as returned with feedback in the CF\n> app. If you can work more on this proposal and send an updated patch,\n> please feel free to resubmit.\n> --\n> Michael\n>",
"msg_date": "Fri, 3 Dec 2021 16:45:27 +0530",
"msg_from": "Dinesh Chemuduru <dinesh.kumar@migops.com>",
"msg_from_op": true,
"msg_subject": "Re: [PROPOSAL] new diagnostic items for the dynamic sql"
},
{
"msg_contents": "On Fri, Dec 3, 2021 at 3:15 AM Dinesh Chemuduru <dinesh.kumar@migops.com>\nwrote:\n\n> Hi Michael,\n>\n> Attaching the latest patch here(It's the recent patch), and looking for\n> more suggestions/inputs from the team.\n>\n> On Fri, 3 Dec 2021 at 13:09, Michael Paquier <michael@paquier.xyz> wrote:\n>\n>> On Wed, Nov 10, 2021 at 01:58:58PM +0530, Dinesh Chemuduru wrote:\n>> > The proposed statements are much clear, but will wait for other’s\n>> > suggestion, and will fix it accordingly.\n>>\n>> This update was three weeks ago, and no new version has been\n>> provided, so I am marking this as returned with feedback in the CF\n>> app. If you can work more on this proposal and send an updated patch,\n>> please feel free to resubmit.\n>> --\n>> Michael\n>>\n> Hi,\n\n+int\n+set_errquery(const char *query)\n\nSince the return value is ignored, the return type can be void.\n\nCheers",
"msg_date": "Fri, 3 Dec 2021 08:35:58 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: [PROPOSAL] new diagnostic items for the dynamic sql"
},
{
"msg_contents": "On Fri, 3 Dec 2021 at 22:04, Zhihong Yu <zyu@yugabyte.com> wrote:\n\n>\n>\n> On Fri, Dec 3, 2021 at 3:15 AM Dinesh Chemuduru <dinesh.kumar@migops.com>\n> wrote:\n>\n>> Hi Michael,\n>>\n>> Attaching the latest patch here(It's the recent patch), and looking for\n>> more suggestions/inputs from the team.\n>>\n>> On Fri, 3 Dec 2021 at 13:09, Michael Paquier <michael@paquier.xyz> wrote:\n>>\n>>> On Wed, Nov 10, 2021 at 01:58:58PM +0530, Dinesh Chemuduru wrote:\n>>> > The proposed statements are much clear, but will wait for other’s\n>>> > suggestion, and will fix it accordingly.\n>>>\n>>> This update was three weeks ago, and no new version has been\n>>> provided, so I am marking this as returned with feedback in the CF\n>>> app. If you can work more on this proposal and send an updated patch,\n>>> please feel free to resubmit.\n>>> --\n>>> Michael\n>>>\n>> Hi,\n>\n> +int\n> +set_errquery(const char *query)\n>\n> Agreed,\n\nThe other error log relateds functions are also not following the void as\nreturn type and they are using the int.\nSo, I tried to submit the same behaviour.\n\nSee other error log related functions in src/backend/utils/error/elog.c\n\n\n> Since the return value is ignored, the return type can be void.\n>\n> Cheers\n>",
"msg_date": "Fri, 17 Dec 2021 10:54:50 +0530",
"msg_from": "Dinesh Chemuduru <dinesh.kumar@migops.com>",
"msg_from_op": true,
"msg_subject": "Re: [PROPOSAL] new diagnostic items for the dynamic sql"
},
{
"msg_contents": "Hi Everyone,\n\nLet me know if anything else is needed on my end\n\nOn Fri, 17 Dec 2021 at 10:54, Dinesh Chemuduru <dinesh.kumar@migops.com>\nwrote:\n\n>\n>\n> On Fri, 3 Dec 2021 at 22:04, Zhihong Yu <zyu@yugabyte.com> wrote:\n>\n>>\n>>\n>> On Fri, Dec 3, 2021 at 3:15 AM Dinesh Chemuduru <dinesh.kumar@migops.com>\n>> wrote:\n>>\n>>> Hi Michael,\n>>>\n>>> Attaching the latest patch here(It's the recent patch), and looking for\n>>> more suggestions/inputs from the team.\n>>>\n>>> On Fri, 3 Dec 2021 at 13:09, Michael Paquier <michael@paquier.xyz>\n>>> wrote:\n>>>\n>>>> On Wed, Nov 10, 2021 at 01:58:58PM +0530, Dinesh Chemuduru wrote:\n>>>> > The proposed statements are much clear, but will wait for other’s\n>>>> > suggestion, and will fix it accordingly.\n>>>>\n>>>> This update was three weeks ago, and no new version has been\n>>>> provided, so I am marking this as returned with feedback in the CF\n>>>> app. If you can work more on this proposal and send an updated patch,\n>>>> please feel free to resubmit.\n>>>> --\n>>>> Michael\n>>>>\n>>> Hi,\n>>\n>> +int\n>> +set_errquery(const char *query)\n>>\n>> Agreed,\n>\n> The other error log relateds functions are also not following the void as\n> return type and they are using the int.\n> So, I tried to submit the same behaviour.\n>\n> See other error log related functions in src/backend/utils/error/elog.c\n>\n>\n>> Since the return value is ignored, the return type can be void.\n>>\n>> Cheers\n>>\n>",
"msg_date": "Wed, 29 Dec 2021 13:09:29 +0530",
"msg_from": "Dinesh Chemuduru <dinesh.kumar@migops.com>",
"msg_from_op": true,
"msg_subject": "Re: [PROPOSAL] new diagnostic items for the dynamic sql"
},
{
"msg_contents": " From looking at this patch and its history [1, 2], I think momentum was\nprobably lost during the January CF, where this patch was unregistered\n(presumably by accident).\n\nI've carried it forward, but it needs some help to keep from stalling\nout. Definitely make sure it's rebased and up to date by the time the\nnext CF starts, to give it the best chance at getting additional review\n(if you haven't received any by then).\n\n--Jacob\n\n[1] https://commitfest.postgresql.org/34/3258/\n[2] https://commitfest.postgresql.org/38/3537/\n\n\n",
"msg_date": "Tue, 2 Aug 2022 15:09:55 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PROPOSAL] new diagnostic items for the dynamic sql"
},
{
"msg_contents": "On 8/2/22 15:09, Jacob Champion wrote:\n> I've carried it forward, but it needs some help to keep from stalling\n> out. Definitely make sure it's rebased and up to date by the time the\n> next CF starts, to give it the best chance at getting additional review\n> (if you haven't received any by then).\n\n...and Dinesh's email has just bounced back undelivered. :(\n\nAnybody interested in running with this? If no one speaks up, I think we\nshould return this as \"needs more interest\" before the next CF starts.\n\n--Jacob\n\n\n",
"msg_date": "Tue, 2 Aug 2022 15:13:03 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PROPOSAL] new diagnostic items for the dynamic sql"
},
{
"msg_contents": "Jacob Champion <jchampion@timescale.com> writes:\n> ...and Dinesh's email has just bounced back undelivered. :(\n\n> Anybody interested in running with this? If no one speaks up, I think we\n> should return this as \"needs more interest\" before the next CF starts.\n\nMeh ... the last versions of the patch were far too invasive for a\nuse-case that seemed pretty hypothetical to begin with. It was never\nexplained why somebody would be trying to debug dynamic SQL without\nuse of the reporting that already exists:\n\nregression=# do $$ begin\nregression$# execute 'SELECT 1 JOIN SELECT 2';\nregression$# end $$;\nERROR: syntax error at or near \"SELECT\"\nLINE 1: SELECT 1 JOIN SELECT 2\n ^\nQUERY: SELECT 1 JOIN SELECT 2\nCONTEXT: PL/pgSQL function inline_code_block line 2 at EXECUTE\n\npsql didn't provide that query text and cursor position out of thin air.\n\nNow admittedly, what it did base that on is the PG_DIAG_INTERNAL_QUERY and\nPG_DIAG_INTERNAL_POSITION fields of the error report, and the fact that\nthose aren't available to plpgsql error trapping logic is arguably a\ndeficiency. It's not a big deficiency, because what an EXCEPTION clause\nprobably ought to do in a case like this is just re-RAISE, which will\npreserve those fields in the eventual client error report. But maybe\nit's worth fixing.\n\nI think the real reason this patch stalled is that Pavel wanted the\ngoal posts moved into the next stadium. Rather than just duplicate\nthe functionality available in the wire protocol, he wanted some other\ndefinition entirely, hiding the fact that not every error report has\nthose fields. There isn't infrastructure for that, and I doubt that\nthis patch is enough to create it, even if there were consensus that\nthe definition is right. If we were to go forward, I'd recommend\nreverting to a wire-protocol-equivalent definition, and just returning\nNULL in the cases where the data isn't supplied.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 02 Aug 2022 18:55:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PROPOSAL] new diagnostic items for the dynamic sql"
},
{
"msg_contents": "2022年8月3日(水) 7:56 Tom Lane <tgl@sss.pgh.pa.us>:\n>\n> Jacob Champion <jchampion@timescale.com> writes:\n> > ...and Dinesh's email has just bounced back undelivered. :(\n>\n> > Anybody interested in running with this? If no one speaks up, I think we\n> > should return this as \"needs more interest\" before the next CF starts.\n>\n> Meh ... the last versions of the patch were far too invasive for a\n> use-case that seemed pretty hypothetical to begin with. It was never\n> explained why somebody would be trying to debug dynamic SQL without\n> use of the reporting that already exists:\n>\n> regression=# do $$ begin\n> regression$# execute 'SELECT 1 JOIN SELECT 2';\n> regression$# end $$;\n> ERROR: syntax error at or near \"SELECT\"\n> LINE 1: SELECT 1 JOIN SELECT 2\n> ^\n> QUERY: SELECT 1 JOIN SELECT 2\n> CONTEXT: PL/pgSQL function inline_code_block line 2 at EXECUTE\n>\n> psql didn't provide that query text and cursor position out of thin air.\n>\n> Now admittedly, what it did base that on is the PG_DIAG_INTERNAL_QUERY and\n> PG_DIAG_INTERNAL_POSITION fields of the error report, and the fact that\n> those aren't available to plpgsql error trapping logic is arguably a\n> deficiency. It's not a big deficiency, because what an EXCEPTION clause\n> probably ought to do in a case like this is just re-RAISE, which will\n> preserve those fields in the eventual client error report. But maybe\n> it's worth fixing.\n>\n> I think the real reason this patch stalled is that Pavel wanted the\n> goal posts moved into the next stadium. Rather than just duplicate\n> the functionality available in the wire protocol, he wanted some other\n> definition entirely, hiding the fact that not every error report has\n> those fields. There isn't infrastructure for that, and I doubt that\n> this patch is enough to create it, even if there were consensus that\n> the definition is right. 
If we were to go forward, I'd recommend\n> reverting to a wire-protocol-equivalent definition, and just returning\n> NULL in the cases where the data isn't supplied.\n\nI think given this patch has gone nowhere for the past year, we can mark\nit as returned with feedback. If there's potential for the items mentioned\nby Tom and someone wants to run with them, that'd be better done\nwith a fresh entry, maybe referencing this one.\n\n\nRegards\n\nIan Barwick\n\n\n",
"msg_date": "Sun, 11 Dec 2022 14:42:13 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PROPOSAL] new diagnostic items for the dynamic sql"
}
]
[
{
"msg_contents": "Hi,\n\nI just tried to use the slab allocator for a case where aset.c was\nbloating memory usage substantially. First: It worked wonders for memory\nusage, nearly eliminating overhead.\n\nBut it turned out to cause a *substantial* slowdown. With aset the\nallocator is barely in the profile. With slab the profile is dominated\nby allocator performance.\n\nslab:\nNOTICE: 00000: 100000000 ordered insertions in 5.216287 seconds, 19170724/sec\nLOCATION: bfm_test_insert_bulk, radix.c:2880\n Overhead Command Shared Object Symbol\n\n+ 28.27% postgres postgres [.] SlabAlloc\n+ 9.64% postgres bdbench.so [.] bfm_delete\n+ 9.03% postgres bdbench.so [.] bfm_set\n+ 8.39% postgres bdbench.so [.] bfm_lookup\n+ 8.36% postgres bdbench.so [.] bfm_set_leaf.constprop.0\n+ 8.16% postgres libc-2.31.so [.] __memmove_avx_unaligned_erms\n+ 6.88% postgres bdbench.so [.] bfm_delete_leaf\n+ 3.24% postgres libc-2.31.so [.] _int_malloc\n+ 2.58% postgres bdbench.so [.] bfm_tests\n+ 2.33% postgres postgres [.] SlabFree\n+ 1.29% postgres libc-2.31.so [.] _int_free\n+ 1.09% postgres libc-2.31.so [.] unlink_chunk.constprop.0\n\naset:\n\nNOTICE: 00000: 100000000 ordered insertions in 2.082602 seconds, 48016848/sec\nLOCATION: bfm_test_insert_bulk, radix.c:2880\n\n+ 16.43% postgres bdbench.so [.] bfm_lookup\n+ 15.38% postgres bdbench.so [.] bfm_delete\n+ 12.82% postgres libc-2.31.so [.] __memmove_avx_unaligned_erms\n+ 12.65% postgres bdbench.so [.] bfm_set\n+ 12.15% postgres bdbench.so [.] bfm_set_leaf.constprop.0\n+ 10.57% postgres bdbench.so [.] bfm_delete_leaf\n+ 4.05% postgres bdbench.so [.] bfm_tests\n+ 2.93% postgres [kernel.vmlinux] [k] clear_page_erms\n+ 1.59% postgres postgres [.] AllocSetAlloc\n+ 1.15% postgres bdbench.so [.] memmove@plt\n+ 1.06% postgres bdbench.so [.] 
bfm_grow_leaf_16\n\nOS:\nNOTICE: 00000: 100000000 ordered insertions in 2.089790 seconds, 47851690/sec\nLOCATION: bfm_test_insert_bulk, radix.c:2880\n\n\nThat is somewhat surprising - part of the promise of a slab allocator is\nthat it's fast...\n\n\nThis is caused by multiple issues, I think. Some of which seems fairly easy to\nfix.\n\n1) If allocations are short-lived slab.c, can end up constantly\nfreeing/initializing blocks. Which requires fairly expensively iterating over\nall potential chunks in the block and initializing it. Just to then free that\nmemory again after a small number of allocations. The extreme case of this is\nwhen there are phases of alloc/free of a single allocation.\n\nI \"fixed\" this by adding a few && slab->nblocks > 1 in SlabFree() and the\nproblem vastly reduced. Instead of a 0.4x slowdown it's 0.88x. Of course that\nonly works if the problem is with the only, so it's not a great\napproach. Perhaps just keeping the last allocated block around would work?\n\n\n2) SlabChunkIndex() in SlabFree() is slow. It requires a 64bit division, taking\nup ~50% of the cycles in SlabFree(). A 64bit div, according to [1] , has a\nlatency of 35-88 cycles on skylake-x (and a reverse throughput of 21-83,\ni.e. no parallelism). While it's getting a bit faster on icelake / zen 3, it's\nstill slow enough there to be very worrisome.\n\nI don't see a way to get around the division while keeping the freelist\nstructure as is. But:\n\nISTM that we only need the index because of the free-chunk list, right? Why\ndon't we make the chunk list use actual pointers? Is it concern that that'd\nincrease the minimum allocation size? If so, I see two ways around that:\nFirst, we could make the index just the offset from the start of the block,\nthat's much cheaper to calculate. 
Second, we could store the next pointer in\nSlabChunk->slab/block instead (making it a union) - while on the freelist we\ndon't need to dereference those, right?\n\nI suspect both would also make the block initialization a bit cheaper.\n\nThat should also accelerate SlabBlockGetChunk(), which currently shows up as\nan imul, which isn't exactly fast either (and uses a lot of execution ports).\n\n\n3) Every free/alloc needing to unlink from slab->freelist[i] and then relink\nshows up prominently in profiles. That's ~tripling the number of cachelines\ntouched in the happy path, with unpredictable accesses to boot.\n\nPerhaps we could reduce the precision of slab->freelist indexing to amortize\nthat cost? I.e. not have one slab->freelist entry for each nfree, but instead\nhave an upper limit on the number of freelists?\n\n\n4) Less of a performance, and more of a usability issue: The constant\nblock size strikes me as problematic. Most users of an allocator can\nsometimes be used with a small amount of data, and sometimes with a\nlarge amount.\n\nGreetings,\n\nAndres Freund\n\n[1] https://www.agner.org/optimize/instruction_tables.pdf\n\n\n",
"msg_date": "Sat, 17 Jul 2021 12:43:33 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "slab allocator performance issues"
},
{
"msg_contents": "Hi,\n\nOn 2021-07-17 12:43:33 -0700, Andres Freund wrote:\n> 2) SlabChunkIndex() in SlabFree() is slow. It requires a 64bit division, taking\n> up ~50% of the cycles in SlabFree(). A 64bit div, according to [1] , has a\n> latency of 35-88 cycles on skylake-x (and a reverse throughput of 21-83,\n> i.e. no parallelism). While it's getting a bit faster on icelake / zen 3, it's\n> still slow enough there to be very worrisome.\n> \n> I don't see a way to get around the division while keeping the freelist\n> structure as is. But:\n> \n> ISTM that we only need the index because of the free-chunk list, right? Why\n> don't we make the chunk list use actual pointers? Is it concern that that'd\n> increase the minimum allocation size? If so, I see two ways around that:\n> First, we could make the index just the offset from the start of the block,\n> that's much cheaper to calculate. Second, we could store the next pointer in\n> SlabChunk->slab/block instead (making it a union) - while on the freelist we\n> don't need to dereference those, right?\n> \n> I suspect both would also make the block initialization a bit cheaper.\n> \n> That should also accelerate SlabBlockGetChunk(), which currently shows up as\n> an imul, which isn't exactly fast either (and uses a lot of execution ports).\n\nOh - I just saw that effectively the allocation size already is a\nuintptr_t at minimum. I had only seen\n\n\t/* Make sure the linked list node fits inside a freed chunk */\n\tif (chunkSize < sizeof(int))\n\t\tchunkSize = sizeof(int);\nbut it's followed by\n\t/* chunk, including SLAB header (both addresses nicely aligned) */\n\tfullChunkSize = sizeof(SlabChunk) + MAXALIGN(chunkSize);\n\nwhich means we are reserving enough space for a pointer on just about\nany platform already? Seems we can just make that official and reserve\nspace for a pointer as part of the chunk size rounding up, instead of\nfullChunkSize?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 17 Jul 2021 12:53:07 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: slab allocator performance issues"
},
{
"msg_contents": "Hi,\n\nOn 7/17/21 9:43 PM, Andres Freund wrote:\n> Hi,\n> \n> I just tried to use the slab allocator for a case where aset.c was\n> bloating memory usage substantially. First: It worked wonders for memory\n> usage, nearly eliminating overhead.\n> \n> But it turned out to cause a *substantial* slowdown. With aset the\n> allocator is barely in the profile. With slab the profile is dominated\n> by allocator performance.\n> \n> slab:\n> NOTICE: 00000: 100000000 ordered insertions in 5.216287 seconds, 19170724/sec\n> LOCATION: bfm_test_insert_bulk, radix.c:2880\n> Overhead Command Shared Object Symbol\n> \n> + 28.27% postgres postgres [.] SlabAlloc\n> + 9.64% postgres bdbench.so [.] bfm_delete\n> + 9.03% postgres bdbench.so [.] bfm_set\n> + 8.39% postgres bdbench.so [.] bfm_lookup\n> + 8.36% postgres bdbench.so [.] bfm_set_leaf.constprop.0\n> + 8.16% postgres libc-2.31.so [.] __memmove_avx_unaligned_erms\n> + 6.88% postgres bdbench.so [.] bfm_delete_leaf\n> + 3.24% postgres libc-2.31.so [.] _int_malloc\n> + 2.58% postgres bdbench.so [.] bfm_tests\n> + 2.33% postgres postgres [.] SlabFree\n> + 1.29% postgres libc-2.31.so [.] _int_free\n> + 1.09% postgres libc-2.31.so [.] unlink_chunk.constprop.0\n> \n> aset:\n> \n> NOTICE: 00000: 100000000 ordered insertions in 2.082602 seconds, 48016848/sec\n> LOCATION: bfm_test_insert_bulk, radix.c:2880\n> \n> + 16.43% postgres bdbench.so [.] bfm_lookup\n> + 15.38% postgres bdbench.so [.] bfm_delete\n> + 12.82% postgres libc-2.31.so [.] __memmove_avx_unaligned_erms\n> + 12.65% postgres bdbench.so [.] bfm_set\n> + 12.15% postgres bdbench.so [.] bfm_set_leaf.constprop.0\n> + 10.57% postgres bdbench.so [.] bfm_delete_leaf\n> + 4.05% postgres bdbench.so [.] bfm_tests\n> + 2.93% postgres [kernel.vmlinux] [k] clear_page_erms\n> + 1.59% postgres postgres [.] AllocSetAlloc\n> + 1.15% postgres bdbench.so [.] memmove@plt\n> + 1.06% postgres bdbench.so [.] 
bfm_grow_leaf_16\n> \n> OS:\n> NOTICE: 00000: 100000000 ordered insertions in 2.089790 seconds, 47851690/sec\n> LOCATION: bfm_test_insert_bulk, radix.c:2880\n> \n> \n> That is somewhat surprising - part of the promise of a slab allocator is\n> that it's fast...\n> \n> \n> This is caused by multiple issues, I think. Some of which seems fairly easy to\n> fix.\n> \n> 1) If allocations are short-lived slab.c, can end up constantly\n> freeing/initializing blocks. Which requires fairly expensively iterating over\n> all potential chunks in the block and initializing it. Just to then free that\n> memory again after a small number of allocations. The extreme case of this is\n> when there are phases of alloc/free of a single allocation.\n> \n> I \"fixed\" this by adding a few && slab->nblocks > 1 in SlabFree() and the\n> problem vastly reduced. Instead of a 0.4x slowdown it's 0.88x. Of course that\n> only works if the problem is with the only, so it's not a great\n> approach. Perhaps just keeping the last allocated block around would work?\n> \n\n+1\n\nI think it makes perfect sense to not free the blocks immediately, and \nkeep one (or a small number) as a cache. I'm not sure why we decided not \nto have a \"keeper\" block, but I suspect memory consumption was my main \nconcern at that point. But I never expected the cost to be this high.\n\n> \n> 2) SlabChunkIndex() in SlabFree() is slow. It requires a 64bit division, taking\n> up ~50% of the cycles in SlabFree(). A 64bit div, according to [1] , has a\n> latency of 35-88 cycles on skylake-x (and a reverse throughput of 21-83,\n> i.e. no parallelism). While it's getting a bit faster on icelake / zen 3, it's\n> still slow enough there to be very worrisome.\n> \n> I don't see a way to get around the division while keeping the freelist\n> structure as is. But:\n> \n> ISTM that we only need the index because of the free-chunk list, right? Why\n> don't we make the chunk list use actual pointers? 
Is it concern that that'd\n> increase the minimum allocation size? If so, I see two ways around that:\n> First, we could make the index just the offset from the start of the block,\n> that's much cheaper to calculate. Second, we could store the next pointer in\n> SlabChunk->slab/block instead (making it a union) - while on the freelist we\n> don't need to dereference those, right?\n> \n> I suspect both would also make the block initialization a bit cheaper.\n> \n> That should also accelerate SlabBlockGetChunk(), which currently shows up as\n> an imul, which isn't exactly fast either (and uses a lot of execution ports).\n> \n\nHmm, I think you're right we could simply use the pointers, but I have \nnot tried that.\n\n> \n> 3) Every free/alloc needing to unlink from slab->freelist[i] and then relink\n> shows up prominently in profiles. That's ~tripling the number of cachelines\n> touched in the happy path, with unpredictable accesses to boot.\n> \n> Perhaps we could reduce the precision of slab->freelist indexing to amortize\n> that cost? I.e. not have one slab->freelist entry for each nfree, but instead\n> have an upper limit on the number of freelists?\n> \n\nYeah. The purpose of organizing the freelists like this is to prioritize \nthe \"more full\" blocks when allocating new chunks, in the hope that the \n\"less full\" blocks will end up empty and freed faster.\n\nBut this is naturally imprecise, and strongly depends on the workload, \nof course, and I bet for most cases a less precise approach would work \njust as fine.\n\nI'm not sure how exactly the upper limit you propose would work, but \nperhaps we could group the blocks for nfree ranges, say [0-15], [16-31] \nand so on. So after the alloc/free we'd calculate the new freelist index \nas (nfree/16) and only move the block if the index changed. This would \nreduce the overhead to 1/16 and probably even more in practice.\n\nOf course, we could also say we have e.g. 
8 freelists and work the \nranges backwards from that, I guess that's what you propose.\n\n> \n> 4) Less of a performance, and more of a usability issue: The constant\n> block size strikes me as problematic. Most users of an allocator can\n> sometimes be used with a small amount of data, and sometimes with a\n> large amount.\n> \n\nI doubt this is worth the effort, really. The constant block size makes \nvarious places much simpler (both to code and reason about), so this \nshould not make a huge difference in performance. And IMHO the block \nsize is mostly an implementation detail, so I don't see that as a \nusability issue.\n\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 17 Jul 2021 22:35:07 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: slab allocator performance issues"
},
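The pointer-vs-index idea from point 2 above can be illustrated with a small sketch. This is a toy model, not code from slab.c — the struct and function names are made up — but it shows why a union next-pointer costs no extra space while a chunk is free, and why a byte offset avoids the 64-bit division that SlabChunkIndex() pays for:

```c
/*
 * Toy sketch of point 2: link free chunks through a pointer stored in the
 * chunk header instead of identifying them by index.  While a chunk sits on
 * the freelist its owning-block pointer is dead, so a union adds no space.
 * All names here (slab_chunk, slab_block, ...) are illustrative.
 */
#include <assert.h>
#include <stddef.h>

struct slab_block;                  /* opaque toy block type */

typedef struct slab_chunk
{
    union
    {
        struct slab_block *block;   /* owning block, while allocated */
        struct slab_chunk *next;    /* next free chunk, while on freelist */
    } u;
} slab_chunk;

/* index via division - roughly what SlabFree() currently computes */
static size_t
chunk_index_div(const char *chunk, const char *block_start, size_t chunk_size)
{
    return (size_t) (chunk - block_start) / chunk_size;
}

/* plain byte offset from the block start - no division needed */
static size_t
chunk_offset(const char *chunk, const char *block_start)
{
    return (size_t) (chunk - block_start);
}
```

The union keeps the minimum allocation size at one pointer, and the offset variant compiles to a single subtraction instead of a `div`.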
{
"msg_contents": "Hi,\n\nOn 2021-07-17 22:35:07 +0200, Tomas Vondra wrote:\n> On 7/17/21 9:43 PM, Andres Freund wrote:\n> > 1) If allocations are short-lived slab.c, can end up constantly\n> > freeing/initializing blocks. Which requires fairly expensively iterating over\n> > all potential chunks in the block and initializing it. Just to then free that\n> > memory again after a small number of allocations. The extreme case of this is\n> > when there are phases of alloc/free of a single allocation.\n> > \n> > I \"fixed\" this by adding a few && slab->nblocks > 1 in SlabFree() and the\n> > problem vastly reduced. Instead of a 0.4x slowdown it's 0.88x. Of course that\n> > only works if the problem is with the only, so it's not a great\n> > approach. Perhaps just keeping the last allocated block around would work?\n> > \n> \n> +1\n> \n> I think it makes perfect sense to not free the blocks immediately, and keep\n> one (or a small number) as a cache. I'm not sure why we decided not to have\n> a \"keeper\" block, but I suspect memory consumption was my main concern at\n> that point. But I never expected the cost to be this high.\n\nI think one free block might be too low in some cases. It's pretty\ncommon to have workloads where the number of allocations is \"bursty\",\nand it's imo one case where one might justifiably want to use a slab\nallocator... Perhaps a portion of a high watermark? Or a portion of the\nin use blocks?\n\nHm. I wonder if we should just not populate the freelist eagerly, to\ndrive down the initialization cost. I.e. have a separate allocation path\nfor chunks that have never been allocated, by having a\nSlabBlock->free_offset or such.\n\nSure, it adds a branch to the allocation happy path, but it also makes the\nfirst allocation for a chunk cheaper, because there's no need to get the next\nelement from the freelist (adding a likely cache miss). 
And it should make the\nallocation of a new block faster by a lot.\n\n\n> > 2) SlabChunkIndex() in SlabFree() is slow. It requires a 64bit division, taking\n> > up ~50% of the cycles in SlabFree(). A 64bit div, according to [1] , has a\n> > latency of 35-88 cycles on skylake-x (and a reverse throughput of 21-83,\n> > i.e. no parallelism). While it's getting a bit faster on icelake / zen 3, it's\n> > still slow enough there to be very worrisome.\n> > \n> > I don't see a way to get around the division while keeping the freelist\n> > structure as is. But:\n> > \n> > ISTM that we only need the index because of the free-chunk list, right? Why\n> > don't we make the chunk list use actual pointers? Is it concern that that'd\n> > increase the minimum allocation size? If so, I see two ways around that:\n> > First, we could make the index just the offset from the start of the block,\n> > that's much cheaper to calculate. Second, we could store the next pointer in\n> > SlabChunk->slab/block instead (making it a union) - while on the freelist we\n> > don't need to dereference those, right?\n> > \n> > I suspect both would also make the block initialization a bit cheaper.\n> > \n> > That should also accelerate SlabBlockGetChunk(), which currently shows up as\n> > an imul, which isn't exactly fast either (and uses a lot of execution ports).\n> > \n> \n> Hmm, I think you're right we could simply use the pointers, but I have not\n> tried that.\n\nI quickly tried that, and it does seem to improve matters considerably. The\nblock initialization still shows up as expensive, but not as bad. The div and\nimul are gone (except in an assertion build right now). The list manipulation\nstill is visible.\n\n\n\n> > 3) Every free/alloc needing to unlink from slab->freelist[i] and then relink\n> > shows up prominently in profiles. 
That's ~tripling the number of cachelines\n> > touched in the happy path, with unpredictable accesses to boot.\n> > \n> > Perhaps we could reduce the precision of slab->freelist indexing to amortize\n> > that cost? I.e. not have one slab->freelist entry for each nfree, but instead\n> > have an upper limit on the number of freelists?\n> > \n> \n> Yeah. The purpose of organizing the freelists like this is to prioritize the\n> \"more full\" blocks when allocating new chunks, in the hope that the \"less\n> full\" blocks will end up empty and freed faster.\n> \n> But this is naturally imprecise, and strongly depends on the workload, of\n> course, and I bet for most cases a less precise approach would work just as\n> fine.\n> \n> I'm not sure how exactly the upper limit you propose would work, but perhaps\n> we could group the blocks for nfree ranges, say [0-15], [16-31] and so on.\n> So after the alloc/free we'd calculate the new freelist index as (nfree/16)\n> and only move the block if the index changed. This would reduce the\n> overhead to 1/16 and probably even more in practice.\n\n> Of course, we could also say we have e.g. 8 freelists and work the ranges\n> backwards from that, I guess that's what you propose.\n\nYea, I was thinking something along those lines. As you say, how about there's\nalways at most 8 freelists or such. During initialization we compute a shift\nthat distributes chunksPerBlock from 0 to 8. 
Most users of an allocator can\n> > sometimes be used with a small amount of data, and sometimes with a\n> > large amount.\n> > \n> \n> I doubt this is worth the effort, really. The constant block size makes\n> various places much simpler (both to code and reason about), so this should\n> not make a huge difference in performance. And IMHO the block size is mostly\n> an implementation detail, so I don't see that as a usability issue.\n\nHm? It's something the user has to specify, so it's not really an\nimplementation detail. It needs to be specified without sufficient\ninformation, as well, since externally one doesn't know how much memory the\nblock header and chunk headers + rounding up will use, so computing a good\nblock size isn't easy. I've wondered whether it should just be a count...\n\nWhy do you not think it's relevant for performance? Either one causes too much\nmemory usage by using a too large block size, wasting memory, or one ends up\nlosing perf through frequent allocations?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 17 Jul 2021 14:14:05 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: slab allocator performance issues"
},
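The shift-based bucketing Andres describes (relinking only when `block->nfree >> slab->freelist_shift` changes) can be sketched roughly as follows. The names (`slab_freelist_shift` and friends) and the exact rounding are assumptions for illustration, not taken from any actual patch:

```c
/*
 * Sketch of the "at most 8 freelists" scheme: at context creation, derive
 * a shift so that nfree values from 0..chunksPerBlock map onto no more
 * than 8 buckets; a block then only moves between freelists when its
 * bucket actually changes.  Illustrative names, not from slab.c.
 */
#include <assert.h>

#define SLAB_MAX_FREELISTS 8

static int
slab_freelist_shift(int chunks_per_block)
{
    int shift = 0;

    while ((chunks_per_block >> shift) >= SLAB_MAX_FREELISTS)
        shift++;
    return shift;
}

static int
slab_freelist_index(int nfree, int shift)
{
    return nfree >> shift;
}

/* relink only when decrementing nfree crosses a bucket boundary */
static int
needs_relink_on_alloc(int nfree, int shift)
{
    return (nfree >> shift) != ((nfree - 1) >> shift);
}
```

With 64 chunks per block this yields a shift of 4, i.e. five buckets, so a block is relinked at most once per 16 allocations or frees instead of on every one.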
{
"msg_contents": "\n\nOn 7/17/21 11:14 PM, Andres Freund wrote:\n> Hi,\n> \n> On 2021-07-17 22:35:07 +0200, Tomas Vondra wrote:\n>> On 7/17/21 9:43 PM, Andres Freund wrote:\n>>> 1) If allocations are short-lived slab.c, can end up constantly\n>>> freeing/initializing blocks. Which requires fairly expensively iterating over\n>>> all potential chunks in the block and initializing it. Just to then free that\n>>> memory again after a small number of allocations. The extreme case of this is\n>>> when there are phases of alloc/free of a single allocation.\n>>>\n>>> I \"fixed\" this by adding a few && slab->nblocks > 1 in SlabFree() and the\n>>> problem vastly reduced. Instead of a 0.4x slowdown it's 0.88x. Of course that\n>>> only works if the problem is with the only, so it's not a great\n>>> approach. Perhaps just keeping the last allocated block around would work?\n>>>\n>>\n>> +1\n>>\n>> I think it makes perfect sense to not free the blocks immediately, and keep\n>> one (or a small number) as a cache. I'm not sure why we decided not to have\n>> a \"keeper\" block, but I suspect memory consumption was my main concern at\n>> that point. But I never expected the cost to be this high.\n> \n> I think one free block might be too low in some cases. It's pretty\n> common to have workloads where the number of allocations is \"bursty\",\n> and it's imo one case where one might justifiably want to use a slab\n> allocator... Perhaps a portion of a high watermark? Or a portion of the\n> in use blocks?\n> \n\nI think the portion of watermark would be problematic for cases with one \nhuge transaction - that'll set a high watermark, and we'll keep way too \nmany free blocks. But the portion of in use blocks might work, I think.\n\n> Hm. I wonder if we should just not populate the freelist eagerly, to\n> drive down the initialization cost. I.e. 
have a separate allocation path\n> for chunks that have never been allocated, by having a\n> SlabBlock->free_offset or such.\n> \n> Sure, it adds a branch to the allocation happy path, but it also makes the\n> first allocation for a chunk cheaper, because there's no need to get the next\n> element from the freelist (adding a likely cache miss). And it should make the\n> allocation of a new block faster by a lot.\n>\n\nNot sure what you mean by 'not populate eagerly' so can't comment :-(\n\n> \n>>> 2) SlabChunkIndex() in SlabFree() is slow. It requires a 64bit division, taking\n>>> up ~50% of the cycles in SlabFree(). A 64bit div, according to [1] , has a\n>>> latency of 35-88 cycles on skylake-x (and a reverse throughput of 21-83,\n>>> i.e. no parallelism). While it's getting a bit faster on icelake / zen 3, it's\n>>> still slow enough there to be very worrisome.\n>>>\n>>> I don't see a way to get around the division while keeping the freelist\n>>> structure as is. But:\n>>>\n>>> ISTM that we only need the index because of the free-chunk list, right? Why\n>>> don't we make the chunk list use actual pointers? Is it concern that that'd\n>>> increase the minimum allocation size? If so, I see two ways around that:\n>>> First, we could make the index just the offset from the start of the block,\n>>> that's much cheaper to calculate. Second, we could store the next pointer in\n>>> SlabChunk->slab/block instead (making it a union) - while on the freelist we\n>>> don't need to dereference those, right?\n>>>\n>>> I suspect both would also make the block initialization a bit cheaper.\n>>>\n>>> That should also accelerate SlabBlockGetChunk(), which currently shows up as\n>>> an imul, which isn't exactly fast either (and uses a lot of execution ports).\n>>>\n>>\n>> Hmm, I think you're right we could simply use the pointers, but I have not\n>> tried that.\n> \n> I quickly tried that, and it does seem to improve matters considerably. 
The\n> block initialization still shows up as expensive, but not as bad. The div and\n> imul are gone (exept in an assertion build right now). The list manipulation\n> still is visible.\n> \n\nUnderstood. I didn't expect this to be a full solution.\n\n> \n>>> 3) Every free/alloc needing to unlink from slab->freelist[i] and then relink\n>>> shows up prominently in profiles. That's ~tripling the number of cachelines\n>>> touched in the happy path, with unpredictable accesses to boot.\n>>>\n>>> Perhaps we could reduce the precision of slab->freelist indexing to amortize\n>>> that cost? I.e. not have one slab->freelist entry for each nfree, but instead\n>>> have an upper limit on the number of freelists?\n>>>\n>>\n>> Yeah. The purpose of organizing the freelists like this is to prioritize the\n>> \"more full\" blocks\" when allocating new chunks, in the hope that the \"less\n>> full\" blocks will end up empty and freed faster.\n>>\n>> But this is naturally imprecise, and strongly depends on the workload, of\n>> course, and I bet for most cases a less precise approach would work just as\n>> fine.\n>>\n>> I'm not sure how exactly would the upper limit you propose work, but perhaps\n>> we could group the blocks for nfree ranges, say [0-15], [16-31] and so on.\n>> So after the alloc/free we'd calculate the new freelist index as (nfree/16)\n>> and only moved the block if the index changed. This would reduce the\n>> overhead to 1/16 and probably even more in practice.\n> \n>> Of course, we could also say we have e.g. 8 freelists and work the ranges\n>> backwards from that, I guess that's what you propose.\n> \n> Yea, I was thinking something along those lines. As you say, ow about there's\n> always at most 8 freelists or such. During initialization we compute a shift\n> that distributes chunksPerBlock from 0 to 8. 
Then we only need to perform list\n> manipulation if block->nfree >> slab->freelist_shift != --block->nfree >> slab->freelist_shift.\n> \n> That seems nice from a memory usage POV as well - for small and frequent\n> allocations needing chunksPerBlock freelists isn't great. The freelists are on\n> a fair number of cachelines right now.\n> \n\nAgreed.\n\n> \n>>> 4) Less of a performance, and more of a usability issue: The constant\n>>> block size strikes me as problematic. Most users of an allocator can\n>>> sometimes be used with a small amount of data, and sometimes with a\n>>> large amount.\n>>>\n>>\n>> I doubt this is worth the effort, really. The constant block size makes\n>> various places much simpler (both to code and reason about), so this should\n>> not make a huge difference in performance. And IMHO the block size is mostly\n>> an implementation detail, so I don't see that as a usability issue.\n> \n> Hm? It's something the user has to specify, so I it's not really an\n> implementation detail. It needs to be specified without sufficient\n> information, as well, since externally one doesn't know how much memory the\n> block header and chunk headers + rounding up will use, so computing a good\n> block size isn't easy. I've wondered whether it should just be a count...\n> \n\nI think this is mixing two problems - how to specify the block size, and \nwhether the block size is constant (as in slab) or grows over time (as \nin allocset).\n\nAs for specifying the block size, I agree maybe setting chunk count and \nderiving the bytes from that might be easier to use, for the reasons you \nmentioned.\n\nBut growing the block size seems problematic for long-lived contexts \nwith workloads that change a lot over time - imagine e.g. decoding many \nsmall transactions, with one huge transaction mixed in. The one huge \ntransaction will grow the block size, and we'll keep using it forever. 
\nBut in that case we might just as well allocate the large blocks \nfrom the start, I guess.\n\n> Why do you not think it's relevant for performance? Either one causes too much\n> memory usage by using a too large block size, wasting memory, or one ends up\n> losing perf through frequent allocations?\n> \n\nTrue. I simply would not expect this to make a huge difference - I may \nbe wrong, and I'm sure there are workloads where it matters. But I still \nthink it's easier to just use larger blocks than to make the slab code \nmore complex.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 18 Jul 2021 00:46:09 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: slab allocator performance issues"
},
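One way to picture the capped growth policy Andres floats later in the thread (letting the block size grow, but cutting growth off fairly low, around 16x) is a tiny policy function. This is purely illustrative; `next_block_size` and the cap constant are assumptions, not from any patch:

```c
/*
 * Hypothetical block-size growth policy: double the block size for each
 * new block, but never beyond 16x the initial size, so bursty workloads
 * adapt without the block size growing without bound.
 */
#include <assert.h>
#include <stddef.h>

#define SLAB_GROWTH_CAP 16      /* illustrative; "capping the growth fairly low" */

static size_t
next_block_size(size_t current, size_t initial)
{
    size_t cap = initial * SLAB_GROWTH_CAP;

    if (current >= cap)
        return cap;             /* already at the ceiling */
    return (current * 2 > cap) ? cap : current * 2;
}
```

A context that starts at 1 kB would step through 2, 4, 8 and 16 kB blocks and then stay there, which bounds the waste Tomas worries about for long-lived contexts after one huge transaction.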
{
"msg_contents": "Hi,\n\nOn 2021-07-18 00:46:09 +0200, Tomas Vondra wrote:\n> On 7/17/21 11:14 PM, Andres Freund wrote:\n> > Hm. I wonder if we should just not populate the freelist eagerly, to\n> > drive down the initialization cost. I.e. have a separate allocation path\n> > for chunks that have never been allocated, by having a\n> > SlabBlock->free_offset or such.\n> > \n> > Sure, it adds a branch to the allocation happy path, but it also makes the\n> > first allocation for a chunk cheaper, because there's no need to get the next\n> > element from the freelist (adding a likely cache miss). And it should make the\n> > allocation of a new block faster by a lot.\n> > \n> \n> Not sure what you mean by 'not populate eagerly' so can't comment :-(\n\nInstead of populating a linked list with all chunks upon creation of a block -\nwhich requires touching a fair bit of memory - keep a per-block pointer (or an\noffset) into \"unused\" area of the block. When allocating from the block and\nthere's still \"unused\" memory left, use that, instead of bothering with the\nfreelist.\n\nI tried that, and it nearly got slab up to the allocation/freeing performance\nof aset.c (while winning after allocation, due to the higher memory density).\n\n\n> > > > 4) Less of a performance, and more of a usability issue: The constant\n> > > > block size strikes me as problematic. Most users of an allocator can\n> > > > sometimes be used with a small amount of data, and sometimes with a\n> > > > large amount.\n> > > > \n> > > \n> > > I doubt this is worth the effort, really. The constant block size makes\n> > > various places much simpler (both to code and reason about), so this should\n> > > not make a huge difference in performance. And IMHO the block size is mostly\n> > > an implementation detail, so I don't see that as a usability issue.\n> > \n> > Hm? It's something the user has to specify, so it's not really an\n> > implementation detail. 
It needs to be specified without sufficient\n> > information, as well, since externally one doesn't know how much memory the\n> > block header and chunk headers + rounding up will use, so computing a good\n> > block size isn't easy. I've wondered whether it should just be a count...\n> > \n> \n> I think this is mixing two problems - how to specify the block size, and\n> whether the block size is constant (as in slab) or grows over time (as in\n> allocset).\n\nThat was in response to the \"implementation detail\" bit solely.\n\n\n> But growing the block size seems problematic for long-lived contexts with\n> workloads that change a lot over time - imagine e.g. decoding many small\n> transactions, with one huge transaction mixed in. The one huge transaction\n> will grow the block size, and we'll keep using it forever. But in that case\n> we might have just as well allocate the large blocks from the start, I\n> guess.\n\nI was thinking of capping the growth fairly low. I don't think after a 16x\ngrowth or so you're likely to still see allocation performance gains with\nslab. And I don't think that'd be too bad for decoding - we'd start with a\nsmall initial block size, and in many workloads that will be enough, and just\nworkloads where that doesn't suffice will adapt performance wise. And: Medium\nterm I wouldn't expect reorderbuffer.c to stay the only slab.c user...\n\n\n> > Why do you not think it's relevant for performance? Either one causes too much\n> > memory usage by using a too large block size, wasting memory, or one ends up\n> > loosing perf through frequent allocations?\n>\n> True. I simply would not expect this to make a huge difference - I may be\n> wrong, and I'm sure there are workloads where it matters. But I still think\n> it's easier to just use larger blocks than to make the slab code more\n> complex.\n\nIDK. I'm looking at using slab as part of a radix tree implementation right\nnow. Which I'd hope to be used in various different situations. 
So it's hard\nto choose the right block size - and it does seem to matter for performance.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 17 Jul 2021 16:10:19 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: slab allocator performance issues"
},
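The lazy-population scheme described above ("keep a per-block pointer into the unused area of the block") can be modeled in a few lines. This is a toy sketch with made-up names (`toy_block`, `toy_alloc`), not the actual slab.c change: a bump pointer serves never-allocated chunks, and the freelist is only consulted for recycled ones, so a new block needs no upfront chunk-list initialization.

```c
/*
 * Toy model of lazy freelist population: bump-allocate from the untouched
 * tail of the block, and only use the freelist for chunks that have been
 * freed.  Illustrative sketch only.
 */
#include <assert.h>
#include <stddef.h>

typedef struct toy_chunk
{
    struct toy_chunk *next;     /* freelist link while free */
} toy_chunk;

typedef struct toy_block
{
    char       *unused;         /* start of never-allocated space */
    char       *end;            /* one past the last byte of the block */
    toy_chunk  *freelist;       /* chunks returned via toy_free() */
} toy_block;

static void *
toy_alloc(toy_block *b, size_t chunk_size)
{
    /* recycled chunks first */
    if (b->freelist != NULL)
    {
        toy_chunk *c = b->freelist;

        b->freelist = c->next;
        return c;
    }
    /* otherwise bump into the untouched tail of the block */
    if (b->unused + chunk_size <= b->end)
    {
        void *p = b->unused;

        b->unused += chunk_size;
        return p;
    }
    return NULL;                /* block is full */
}

static void
toy_free(toy_block *b, void *p)
{
    toy_chunk *c = (toy_chunk *) p;

    c->next = b->freelist;
    b->freelist = c;
}
```

The extra branch sits on the allocation happy path, but a fresh block becomes usable after initializing three pointers rather than chunksPerBlock list entries.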
{
"msg_contents": "Hi,\n\nOn 2021-07-17 16:10:19 -0700, Andres Freund wrote:\n> Instead of populating a linked list with all chunks upon creation of a block -\n> which requires touching a fair bit of memory - keep a per-block pointer (or an\n> offset) into \"unused\" area of the block. When allocating from the block and\n> theres still \"unused\" memory left, use that, instead of bothering with the\n> freelist.\n> \n> I tried that, and it nearly got slab up to the allocation/freeing performance\n> of aset.c (while winning after allocation, due to the higher memory density).\n\nCombining that with limiting the number of freelists, and some\nmicrooptimizations, allocation performance is now on par.\n\nFreeing still seems to be a tad slower, mostly because SlabFree()\npractically is immediately stalled on fetching the block, whereas\nAllocSetFree() can happily speculate ahead and do work like computing\nthe freelist index. And then aset only needs to access memory inside the\ncontext - which is much more likely to be in cache than a freelist\ninside a block (there are many more).\n\nBut that's ok, I think. It's close and it's only a small share of the\noverall runtime of my workload...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 17 Jul 2021 18:06:00 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: slab allocator performance issues"
},
{
"msg_contents": "On 7/18/21 3:06 AM, Andres Freund wrote:\n> Hi,\n> \n> On 2021-07-17 16:10:19 -0700, Andres Freund wrote:\n>> Instead of populating a linked list with all chunks upon creation of a block -\n>> which requires touching a fair bit of memory - keep a per-block pointer (or an\n>> offset) into \"unused\" area of the block. When allocating from the block and\n>> there's still \"unused\" memory left, use that, instead of bothering with the\n>> freelist.\n>>\n>> I tried that, and it nearly got slab up to the allocation/freeing performance\n>> of aset.c (while winning after allocation, due to the higher memory density).\n> \n> Combining that with limiting the number of freelists, and some\n> microoptimizations, allocation performance is now on par.\n> \n> Freeing still seems to be a tad slower, mostly because SlabFree()\n> practically is immediately stalled on fetching the block, whereas\n> AllocSetFree() can happily speculate ahead and do work like computing\n> the freelist index. And then aset only needs to access memory inside the\n> context - which is much more likely to be in cache than a freelist\n> inside a block (there are many more).\n> \n> But that's ok, I think. It's close and it's only a small share of the\n> overall runtime of my workload...\n> \n\nSounds great! Thanks for investigating this and for the improvements.\n\nIt might be good to do some experiments to see how the changes affect \nmemory consumption for practical workloads. I'm willing to spend some \ntime on that, if needed.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 18 Jul 2021 19:23:31 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: slab allocator performance issues"
},
{
"msg_contents": "Hi,\n\nOn 2021-07-18 19:23:31 +0200, Tomas Vondra wrote:\n> Sounds great! Thanks for investigating this and for the improvements.\n> \n> It might be good to do some experiments to see how the changes affect memory\n> consumption for practical workloads. I'm willing to spend some time on that,\n> if needed.\n\nI've attached my changes. They're in a rough shape right now, but I\nthink good enough for an initial look.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Mon, 19 Jul 2021 13:56:03 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: slab allocator performance issues"
},
{
"msg_contents": "On Mon, Jul 19, 2021 at 17:56, Andres Freund <andres@anarazel.de>\nwrote:\n\n> Hi,\n>\n> On 2021-07-18 19:23:31 +0200, Tomas Vondra wrote:\n> > Sounds great! Thanks for investigating this and for the improvements.\n> >\n> > It might be good to do some experiments to see how the changes affect\n> memory\n> > consumption for practical workloads. I'm willing to spend some time on\n> that,\n> > if needed.\n>\n> I've attached my changes. They're in a rough shape right now, but I\n> think good enough for an initial look.\n>\nHi Andres, I took a look.\n\nPerhaps you would agree with me that in the most absolute of times, malloc\nwill not fail.\nSo it makes more sense to test:\nif (ret != NULL)\nthan\nif (ret == NULL)\n\nThat might help branch prediction.\nThis change also makes it possible\nto reduce the scope of some variables.\n\nExample:\n+static void * pg_noinline\n+AllocSetAllocLarge(AllocSet set, Size size, int flags)\n+{\n+ AllocBlock block;\n+ Size chunk_size;\n+ Size blksize;\n+\n+ /* check size, only allocation path where the limits could be hit */\n+ MemoryContextCheckSize(&set->header, size, flags);\n+\n+ AssertArg(AllocSetIsValid(set));\n+\n+ chunk_size = MAXALIGN(size);\n+ blksize = chunk_size + ALLOC_BLOCKHDRSZ + ALLOC_CHUNKHDRSZ;\n+ block = (AllocBlock) malloc(blksize);\n+ if (block != NULL)\n+ {\n+ AllocChunk chunk;\n+\n+ set->header.mem_allocated += blksize;\n+\n+ block->aset = set;\n+ block->freeptr = block->endptr = ((char *) block) + blksize;\n+\n+ /*\n+ * Stick the new block underneath the active allocation block, if\nany,\n+ * so that we don't lose the use of the space remaining therein.\n+ */\n+ if (set->blocks != NULL)\n+ {\n+ block->prev = set->blocks;\n+ block->next = set->blocks->next;\n+ if (block->next)\n+ block->next->prev = block;\n+ set->blocks->next = block;\n+ }\n+ else\n+ {\n+ block->prev = NULL;\n+ block->next = NULL;\n+ set->blocks = block;\n+ }\n+\n+ chunk = (AllocChunk) (((char *) block) + 
ALLOC_BLOCKHDRSZ);\n+ chunk->size = chunk_size;\n+\n+ return AllocSetAllocReturnChunk(set, size, chunk, chunk_size);\n+ }\n+\n+ return NULL;\n+}\n\nregards,\nRanier Vilela",
"msg_date": "Mon, 19 Jul 2021 19:03:41 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: slab allocator performance issues"
},
{
"msg_contents": "On Tue, 20 Jul 2021 at 10:04, Ranier Vilela <ranier.vf@gmail.com> wrote:\n> Perhaps you would agree with me that in the most absolute of times, malloc will not fail.\n> So it makes more sense to test:\n> if (ret != NULL)\n> than\n> if (ret == NULL)\n\nI think it'd be better to use unlikely() for that.\n\nDavid\n\n\n",
"msg_date": "Wed, 21 Jul 2021 02:15:07 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: slab allocator performance issues"
},
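For reference, a minimal sketch of what David's suggestion looks like in this pattern. PostgreSQL defines unlikely() in c.h on top of GCC's __builtin_expect; the fallback branch below is only for illustration, and `alloc_zeroed` is an invented example function, not from aset.c:

```c
/*
 * Sketch of using unlikely() on the malloc-failure test: the error check
 * stays as-is, but the compiler is told which way the branch almost
 * always goes, keeping the happy path as straight-line code.
 */
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#if defined(__GNUC__) || defined(__clang__)
#define unlikely(x) __builtin_expect((x) != 0, 0)
#else
#define unlikely(x) (x)
#endif

static void *
alloc_zeroed(size_t size)
{
    void *p = malloc(size);

    if (unlikely(p == NULL))
        return NULL;            /* rare path: malloc almost never fails */

    memset(p, 0, size);         /* happy path falls through the untaken branch */
    return p;
}
```

This keeps the conventional `== NULL` test readable while getting the same branch-layout benefit Ranier was after by inverting the condition.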
{
"msg_contents": "On Tue, Jul 20, 2021 at 11:15, David Rowley <dgrowleyml@gmail.com>\nwrote:\n\n> On Tue, 20 Jul 2021 at 10:04, Ranier Vilela <ranier.vf@gmail.com> wrote:\n> > Perhaps you would agree with me that in the most absolute of times,\n> malloc will not fail.\n> > So it makes more sense to test:\n> > if (ret != NULL)\n> > than\n> > if (ret == NULL)\n>\n> I think it'd be better to use unlikely() for that.\n>\nSure, it can be, but in this case, there is no way to reduce the scope.\n\nregards,\nRanier Vilela",
"msg_date": "Tue, 20 Jul 2021 11:24:50 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: slab allocator performance issues"
},
{
"msg_contents": "Hi,\n\nI spent a bit of time benchmarking this - the first patch adds an \nextension with three functions, each executing a slightly different \nallocation pattern:\n\n1) FIFO (allocates and frees in the same order)\n2) LIFO (frees in reverse order)\n3) random\n\nEach function can also do a custom number of iterations, each allocating \nand freeing certain number of chunks. The bench.sql script executes \nthree combinations (for each pattern)\n\n1) no loops\n2) increase: 100 loops, each freeing 10k chunks and allocating 15k\n3) decrease: 100 loops, each freeing 10k chunks and allocating 5k\n\nThe idea is to test simple one-time allocation, and workloads that mix \nallocations and frees (to see how the changes affect reuse etc.).\n\nThe script tests this with a range of block sizes (1k-32k) and chunk \nsizes (32B-512B).\n\nIn the attached .ods file with results, the \"comparison\" sheets are the \ninteresting ones - the last couple columns compare the main metrics for \nthe two patches (labeled patch-1 and patch-2) to master.\n\nOverall, the results look quite good - patch-1 is mostly on par with \nmaster, with maybe 5% variability in both directions. That's expected, \nconsidering the patch does not aim to improve performance.\n\nThe second patch brings some nice improvements - 30%-50% in most cases \n(for both allocation and free) seems pretty nice. But for the \"increase\" \nFIFO pattern (incrementally allocating/freeing more memory) there's a \nsignificant regression - particularly for the allocation time. In some \ncases (larger chunks, block size does not matter too much) it jumps from \n25ms to almost 200ms.\n\nThis seems unfortunate - the allocation pattern (FIFO, allocating more \nmemory over time) seems pretty common, and the slowdown is significant.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sun, 1 Aug 2021 19:59:18 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: slab allocator performance issues"
},
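The FIFO and LIFO patterns described above differ only in the order in which a batch of chunks is released; the random pattern simply shuffles that order. A miniature stand-alone sketch of the two deterministic patterns (hypothetical `run_fifo`/`run_lifo` helpers over plain malloc/free, not the palloc-based code in the benchmark extension):

```c
#include <assert.h>
#include <stdlib.h>

enum { NCHUNKS = 1024 };

/* FIFO: free the chunks in the same order they were allocated. */
int
run_fifo(size_t chunk_size)
{
    void *chunks[NCHUNKS];
    int   nalloc = 0;

    for (int i = 0; i < NCHUNKS; i++)
    {
        chunks[i] = malloc(chunk_size);
        if (chunks[i] != NULL)
            nalloc++;
    }
    for (int i = 0; i < NCHUNKS; i++)   /* oldest chunk freed first */
        free(chunks[i]);
    return nalloc;
}

/* LIFO: free in reverse order, like unwinding a stack. */
int
run_lifo(size_t chunk_size)
{
    void *chunks[NCHUNKS];
    int   nalloc = 0;

    for (int i = 0; i < NCHUNKS; i++)
    {
        chunks[i] = malloc(chunk_size);
        if (chunks[i] != NULL)
            nalloc++;
    }
    for (int i = NCHUNKS - 1; i >= 0; i--)  /* newest chunk freed first */
        free(chunks[i]);
    return nalloc;
}
```

Each helper returns how many allocations succeeded, so a driver can verify the workload ran to completion before timing it.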
{
"msg_contents": "Hi,\n\nOn 2021-08-01 19:59:18 +0200, Tomas Vondra wrote:\n> In the attached .ods file with results, the \"comparison\" sheets are the\n> interesting ones - the last couple columns compare the main metrics for the\n> two patches (labeled patch-1 and patch-2) to master.\n\nI assume with patch-1/2 you mean the ones after the benchmark patch\nitself?\n\n\n> Overall, the results look quite good - patch-1 is mostly on par with master,\n> with maybe 5% variability in both directions. That's expected, considering\n> the patch does not aim to improve performance.\n\nNot for slab anyway...\n\n\n> The second patch brings some nice improvements - 30%-50% in most cases (for\n> both allocation and free) seems pretty nice. But for the \"increase\" FIFO\n> pattern (incrementally allocating/freeing more memory) there's a significant\n> regression - particularly for the allocation time. In some cases (larger\n> chunks, block size does not matter too much) it jumps from 25ms to almost\n> 200ms.\n\nI'm not surprised to see some memory usage increase some, but that\ndegree of time overhead does surprise me. ISTM there's something wrong.\n\nIt'd probably worth benchmarking the different improvements inside the\nWIP: slab performance. patch. There's some that I'd expect to be all\naround improvements, whereas others likely aren't quite that clear\ncut. I assume you'd prefer that I split the patch up?\n\n\n> This seems unfortunate - the allocation pattern (FIFO, allocating more\n> memory over time) seems pretty common, and the slowdown is significant.\n\nDid you analyze what causes the regressions?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 1 Aug 2021 14:07:22 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: slab allocator performance issues"
},
{
"msg_contents": "\nOn 8/1/21 11:07 PM, Andres Freund wrote:\n> Hi,\n> \n> On 2021-08-01 19:59:18 +0200, Tomas Vondra wrote:\n>> In the attached .ods file with results, the \"comparison\" sheets are the\n>> interesting ones - the last couple columns compare the main metrics for the\n>> two patches (labeled patch-1 and patch-2) to master.\n> \n> I assume with patch-1/2 you mean the ones after the benchmark patch\n> itself?\n> \n\nYes, those are the two WIP patches you shared on 19/7.\n\n> \n>> Overall, the results look quite good - patch-1 is mostly on par with master,\n>> with maybe 5% variability in both directions. That's expected, considering\n>> the patch does not aim to improve performance.\n> \n> Not for slab anyway...\n> \n\nMaybe the hot/cold separation could have some effect, but probably not \nfor the workloads I've tested.\n\n> \n>> The second patch brings some nice improvements - 30%-50% in most cases (for\n>> both allocation and free) seems pretty nice. But for the \"increase\" FIFO\n>> pattern (incrementally allocating/freeing more memory) there's a significant\n>> regression - particularly for the allocation time. In some cases (larger\n>> chunks, block size does not matter too much) it jumps from 25ms to almost\n>> 200ms.\n> \n> I'm not surprised to see some memory usage increase some, but that\n> degree of time overhead does surprise me. ISTM there's something wrong.\n> \n\nYeah, the higher amount of allocated memory is due to the couple fields \nadded to the SlabBlock struct, but even that only affects a single case \nwith 480B chunks and 1kB blocks. Seems fine to me, especially if we end \nup growing the slab blocks.\n\nNot sure about the allocation time, though.\n\n> It'd probably worth benchmarking the different improvements inside the\n> WIP: slab performance. patch. There's some that I'd expect to be all\n> around improvements, whereas others likely aren't quite that clear\n> cut. 
I assume you'd prefer that I split the patch up?\n> \n\nYeah, if you split that patch into sensible parts, I'll benchmark those. \nAlso, we can add more interesting workloads if you have some ideas.\n\n> \n>> This seems unfortunate - the allocation pattern (FIFO, allocating more\n>> memory over time) seems pretty common, and the slowdown is significant.\n> \n> Did you analyze what causes the regressions?\n> \n\nNo, not yet. I'll run the same set of benchmarks for the Generation, \ndiscussed in the other thread, and then I'll investigate this. But if \nyou split the patch, that'd probably help.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 2 Aug 2021 00:01:25 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: slab allocator performance issues"
},
{
"msg_contents": "FWIW I tried running the benchmarks again, with some minor changes in\nthe extension code - most importantly, the time is counted in microsecs\n(instead of millisecs).\n\nI suspected the rounding might have been causing some measurement errors\n(essentially not counting anything below 1ms, because it rounds to 0),\nand the results are a bit different.\n\nOn the i5-2500k machine it's an improvement across the board, while on\nthe bigger Xeon e5-2620v3 machine it shows roughly the same regression\nfor the \"decreasing\" allocation pattern.\n\nThere's another issue in the benchmarking script - the queries are meant\nto do multiple runs for each combination of parameters, but it's written\nin a way that simply runs it once and then does cross product with the\ngenerate_series(1,5). I'll look into fixing that, but judging by the\nstability of results for similar chunk sizes it won't change much.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 3 Aug 2021 15:33:28 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: slab allocator performance issues"
},
{
"msg_contents": "Hi,\n\nI've been investigating the regressions in some of the benchmark \nresults, together with the generation context benchmarks [1].\n\nTurns out it's pretty difficult to benchmark this, because the results \nstrongly depend on what the backend did before. For example if I run \nslab_bench_fifo with the \"decreasing\" test for 32kB blocks and 512B \nchunks, I get this:\n\n select * from slab_bench_fifo(1000000, 32768, 512, 100, 10000, 5000);\n\n mem_allocated | alloc_ms | free_ms\n ---------------+----------+---------\n 528547840 | 155394 | 87440\n\n\ni.e. palloc() takes ~155ms and pfree() ~87ms (and these result are \nstable, the numbers don't change much with more runs).\n\nBut if I run a set of \"lifo\" tests in the backend first, the results \nlook like this:\n\n mem_allocated | alloc_ms | free_ms\n ---------------+----------+---------\n 528547840 | 41728 | 71524\n (1 row)\n\nso the pallocs are suddenly about ~4x faster. Clearly, what the backend \ndid before may have pretty dramatic impact on results, even for simple \nbenchmarks like this.\n\nNote: The benchmark was a single SQL script, running all the different \nworkloads in the same backend.\n\nI did a fair amount of perf profiling, and the main difference between \nthe slow and fast runs seems to be this:\n\n 0 page-faults:u \n\n 0 minor-faults:u \n\n 0 major-faults:u \n\n\nvs\n\n 20,634,153 page-faults:u \n\n 20,634,153 minor-faults:u \n\n 0 major-faults:u \n\n\nAttached is a more complete perf stat output, but the page faults seem \nto be the main issue. My theory is that in the \"fast\" case, the past \nbackend activity puts the glibc memory management into a state that \nprevents page faults in the benchmark.\n\nBut of course, this theory may be incomplete - for example it's not \nclear why running the benchmark repeatedly would not \"condition\" the \nbackend the same way. 
But it doesn't - it's ~150ms even for repeated runs.\n\nSecondly, I'm not sure this explains why some of the timings actually \ngot much slower with the 0003 patch, when the sequence of the steps is \nstill the same. Of course, it's possible 0003 changes the allocation \npattern a bit, interfering with glibc memory management.\n\nThis leads to a couple of interesting questions, I think:\n\n1) I've only tested this on Linux, with glibc. I wonder how it'd behave \non other platforms, or with other allocators.\n\n2) Which cases are more important? When the backend was warmed up, or \nwhen each benchmark runs in a new backend? It seems the \"new backend\" is \nsomething like a \"worst case\" leading to more page faults, so maybe \nthat's the thing to watch. OTOH it's unlikely to have a completely new \nbackend, so maybe not.\n\n3) Can this teach us something about how to allocate stuff, to better \n\"prepare\" the backend for future allocations? For example, it's a bit \nstrange that repeated runs of the same benchmark don't do the trick, for \nsome reason.\n\n\n\nregards\n\n\n[1] \nhttps://www.postgresql.org/message-id/bcdd4e3e-c12d-cd2b-7ead-a91ad416100a%40enterprisedb.com\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 10 Sep 2021 23:06:54 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: slab allocator performance issues"
},
{
"msg_contents": "On Sat, 11 Sept 2021 at 09:07, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> I've been investigating the regressions in some of the benchmark\n> results, together with the generation context benchmarks [1].\n\nI've not looked into the regression you found with this yet, but I did\nrebase the patch. slab.c has seen quite a number of changes recently.\n\nI didn't spend a lot of time checking over the patch. I mainly wanted\nto see what the performance was like before reviewing in too much\ndetail.\n\nTo test the performance, I used [1] and ran:\n\nselect pg_allocate_memory_test(<nbytes>, 1024*1024,\n10::bigint*1024*1024*1024, 'slab');\n\nthat basically allocates chunks of <nbytes> and keeps around 1MB of\nthem at a time and allocates a total of 10GBs of them.\n\nI saw:\n\nMaster:\n16 byte chunk = 8754.678 ms\n32 byte chunk = 4511.725 ms\n64 byte chunk = 2244.885 ms\n128 byte chunk = 1135.349 ms\n256 byte chunk = 548.030 ms\n512 byte chunk = 272.017 ms\n1024 byte chunk = 144.618 ms\n\nMaster + attached patch:\n16 byte chunk = 5255.974 ms\n32 byte chunk = 2640.807 ms\n64 byte chunk = 1328.949 ms\n128 byte chunk = 668.078 ms\n256 byte chunk = 330.564 ms\n512 byte chunk = 166.844 ms\n1024 byte chunk = 85.399 ms\n\nSo patched runs in about 60% of the time that master runs in.\n\nI plan to look at the patch in a bit more detail and see if I can\nrecreate and figure out the regression that Tomas reported. For now, I\njust want to share the rebased patch.\n\nThe only thing I really adjusted from Andres' version is to instead of\nusing pointers for the linked list block freelist, I made it store the\nnumber of bytes into the block that the chunk is. This means we can\nuse 4 bytes instead of 8 bytes for these pointers. The block size is\nlimited to 1GB now anyway, so 32-bit is large enough for these\noffsets.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/attachment/137056/allocate_performance_functions.patch.txt",
"msg_date": "Wed, 12 Oct 2022 22:37:17 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: slab allocator performance issues"
},
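The access pattern exercised by pg_allocate_memory_test above (keep a fixed window of recent allocations, freeing the oldest as each new one arrives) can be sketched with a simple ring buffer. This is a hypothetical stand-alone rendition over plain malloc/free; the real function works against a PostgreSQL memory context:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/*
 * Allocate total_alloc bytes in chunk_size pieces, keeping at most
 * keep_chunks of them live at once and always evicting the oldest.
 * Returns the number of chunks allocated, or -1 on failure.
 */
long
fifo_window_bench(size_t chunk_size, size_t keep_chunks, size_t total_alloc)
{
    void **ring;
    size_t slot = 0;
    long   result = 0;

    if (keep_chunks == 0)
        return -1;
    ring = calloc(keep_chunks, sizeof(void *));
    if (ring == NULL)
        return -1;

    for (size_t done = 0; done < total_alloc; done += chunk_size)
    {
        if (ring[slot] != NULL)
            free(ring[slot]);          /* evict the oldest chunk */
        ring[slot] = malloc(chunk_size);
        if (ring[slot] == NULL)
        {
            result = -1;
            break;
        }
        memset(ring[slot], 0, chunk_size);  /* touch the memory */
        slot = (slot + 1) % keep_chunks;
        result++;
    }

    for (size_t i = 0; i < keep_chunks; i++)    /* release the window */
        free(ring[i]);
    free(ring);
    return result;
}
```

The steady free-oldest/allocate-newest churn is what punishes an allocator that releases a whole block back to malloc every time its last chunk is freed, which is the master-branch behaviour David's figures highlight.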
{
"msg_contents": "On Wed, Oct 12, 2022 at 4:37 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> [v3]\n\n+ /*\n+ * Compute a shift that guarantees that shifting chunksPerBlock with it\n+ * yields is smaller than SLAB_FREELIST_COUNT - 1 (one freelist is used\nfor full blocks).\n+ */\n+ slab->freelist_shift = 0;\n+ while ((slab->chunksPerBlock >> slab->freelist_shift) >=\n(SLAB_FREELIST_COUNT - 1))\n+ slab->freelist_shift++;\n\n+ /*\n+ * Ensure, without a branch, that index 0 is only used for blocks entirely\n+ * without free chunks.\n+ * XXX: There probably is a cheaper way to do this. Needing to shift twice\n+ * by slab->freelist_shift isn't great.\n+ */\n+ index = (freecount + (1 << slab->freelist_shift) - 1) >>\nslab->freelist_shift;\n\nHow about something like\n\n#define SLAB_FREELIST_COUNT ((1<<3) + 1)\nindex = (freecount & (SLAB_FREELIST_COUNT - 2)) + (freecount != 0);\n\nand dispense with both freelist_shift and the loop that computes it?\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 11 Nov 2022 16:20:29 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: slab allocator performance issues"
},
{
"msg_contents": "On Fri, 11 Nov 2022 at 22:20, John Naylor <john.naylor@enterprisedb.com> wrote:\n>\n>\n> On Wed, Oct 12, 2022 at 4:37 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> > [v3]\n>\n> + /*\n> + * Compute a shift that guarantees that shifting chunksPerBlock with it\n> + * yields is smaller than SLAB_FREELIST_COUNT - 1 (one freelist is used for full blocks).\n> + */\n> + slab->freelist_shift = 0;\n> + while ((slab->chunksPerBlock >> slab->freelist_shift) >= (SLAB_FREELIST_COUNT - 1))\n> + slab->freelist_shift++;\n>\n> + /*\n> + * Ensure, without a branch, that index 0 is only used for blocks entirely\n> + * without free chunks.\n> + * XXX: There probably is a cheaper way to do this. Needing to shift twice\n> + * by slab->freelist_shift isn't great.\n> + */\n> + index = (freecount + (1 << slab->freelist_shift) - 1) >> slab->freelist_shift;\n>\n> How about something like\n>\n> #define SLAB_FREELIST_COUNT ((1<<3) + 1)\n> index = (freecount & (SLAB_FREELIST_COUNT - 2)) + (freecount != 0);\n\nDoesn't this create a sort of round-robin use of the free list? What\nwe want is a sort of \"histogram\" bucket set of free lists so we can\ngroup together blocks that have a close-enough free number of chunks.\nUnless I'm mistaken, I think what you have doesn't do that.\n\nI wondered if simply:\n\nindex = -(-freecount >> slab->freelist_shift);\n\nwould be faster than Andres' version. I tried it out and on my AMD\nmachine, it's about the same speed. Same on a Raspberry Pi 4.\n\nGoing by [2], the instructions are very different with each method, so\nother machines with different latencies on those instructions might\nshow something different. I attached what I used to test if anyone\nelse wants a go.\n\nAMD Zen2\n$ ./freecount 2000000000\nTest 'a' in 0.922766 seconds\nTest 'd' in 0.922762 seconds (0.000433% faster)\n\nRPI4\n$ ./freecount 2000000000\nTest 'a' in 3.341350 seconds\nTest 'd' in 3.338690 seconds (0.079672% faster)\n\nThat was gcc. 
Trying it with clang, it went in a little heavy-handed\nand optimized out my loop, so some more trickery might be needed for a\nuseful test on that compiler.\n\nDavid\n\n[2] https://godbolt.org/z/dh95TohEG",
"msg_date": "Mon, 5 Dec 2022 21:02:06 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: slab allocator performance issues"
},
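The two index computations being compared in the exchange above are both a branch-free ceiling division by 2^freelist_shift, which is what groups blocks into coarse "histogram" buckets by free-chunk count. A sketch that makes the equivalence checkable (assuming freecount is non-negative and the platform uses arithmetic right shift on signed values, as gcc and clang do; this is illustrative, not the slab.c source):

```c
#include <assert.h>

/* Andres' version: round freecount up to a multiple of 2^shift, then shift. */
int
index_round_up(int freecount, int shift)
{
    return (freecount + (1 << shift) - 1) >> shift;
}

/*
 * David's version: negate, shift, negate again. Relies on arithmetic
 * right shift of negative values, so -(-x >> s) computes ceil(x / 2^s).
 */
int
index_negate(int freecount, int shift)
{
    return -(-freecount >> shift);
}
```

Both map freecount 0 to index 0 and any nonzero freecount to at least 1, which is the "index 0 only for full blocks" invariant the patch comment insists on; David's benchmark above suggests the two compile to equally cheap code on his hardware.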
{
"msg_contents": "On Mon, Dec 5, 2022 at 3:02 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Fri, 11 Nov 2022 at 22:20, John Naylor <john.naylor@enterprisedb.com>\nwrote:\n\n> > #define SLAB_FREELIST_COUNT ((1<<3) + 1)\n> > index = (freecount & (SLAB_FREELIST_COUNT - 2)) + (freecount != 0);\n>\n> Doesn't this create a sort of round-robin use of the free list? What\n> we want is a sort of \"histogram\" bucket set of free lists so we can\n> group together blocks that have a close-enough free number of chunks.\n> Unless I'm mistaken, I think what you have doesn't do that.\n\nThe intent must have slipped my mind along the way.\n\n> I wondered if simply:\n>\n> index = -(-freecount >> slab->freelist_shift);\n>\n> would be faster than Andres' version. I tried it out and on my AMD\n> machine, it's about the same speed. Same on a Raspberry Pi 4.\n>\n> Going by [2], the instructions are very different with each method, so\n> other machines with different latencies on those instructions might\n> show something different. I attached what I used to test if anyone\n> else wants a go.\n\nI get about 0.1% difference on my machine. Both ways boil down to (on gcc)\n3 instructions with low latency. The later ones need the prior results to\nexecute, which I think is what the XXX comment \"isn't great\" was referring\nto. The new coding is more mysterious (does it do the right thing on all\nplatforms?), so I guess the original is still the way to go unless we get a\nbetter idea.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 5 Dec 2022 17:18:24 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: slab allocator performance issues"
},
{
"msg_contents": "On Fri, Sep 10, 2021 at 5:07 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> Turns out it's pretty difficult to benchmark this, because the results\n> strongly depend on what the backend did before.\n\nWhat you report here seems to be mostly cold-cache effects, with which\nI don't think we need to be overly concerned. We don't want big\nregressions in the cold-cache case, but there is always going to be\nsome overhead when a new backend starts up, because you've got to\nfault some pages into the heap/malloc arena/whatever before they can\nbe efficiently accessed. What would be more concerning is if we found\nout that the performance depended heavily on the internal state of the\nallocator. For example, suppose you have two warmup routines W1 and\nW2, each of which touches the same amount of total memory, but with\ndifferent details. Then you have a benchmark B. If you do W1-B and\nW2-B and the time for B varies dramatically between them, then you've\nmaybe got an issue. For instance, it could indicate that the allocator\nhas issue when the old and new allocations are very different sizes,\nor something like that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 5 Dec 2022 10:31:36 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: slab allocator performance issues"
},
{
"msg_contents": ".On Mon, 5 Dec 2022 at 23:18, John Naylor <john.naylor@enterprisedb.com> wrote:\n>\n>\n> On Mon, Dec 5, 2022 at 3:02 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> > Going by [2], the instructions are very different with each method, so\n> > other machines with different latencies on those instructions might\n> > show something different. I attached what I used to test if anyone\n> > else wants a go.\n>\n> I get about 0.1% difference on my machine. Both ways boil down to (on gcc) 3 instructions with low latency. The later ones need the prior results to execute, which I think is what the XXX comment \"isn't great\" was referring to. The new coding is more mysterious (does it do the right thing on all platforms?), so I guess the original is still the way to go unless we get a better idea.\n\nI don't think it would work well on a one's complement machine, but I\ndon't think we support those going by the comments above RIGHTMOST_ONE\nin bitmapset.c. In anycase, I found that it wasn't any faster than\nwhat Andres wrote. In fact, even changing the code there to \"index =\n(freecount > 0);\" seems to do very little to increase performance. I\ndo see that having 3 freelist items performs a decent amount better\nthan having 9. However, which workload is run may alter the result of\nthat, assuming that keeping new allocations on fuller blocks is a\nwinning strategy for the CPU's caches.\n\nI've now done quite a bit more work on Andres' patch to try and get it\ninto (hopefully) somewhere close to a committable shape.\n\nI'm fairly happy with what's there now. 
It does seem to perform much\nbetter than current master, especially so when the workload would have\ncaused master to continually malloc and free an entire block when\nfreeing and allocating a single chunk.\n\nI've done some basic benchmarking, mostly using the attached alloc_bench patch.\n\nIf I run:\n\nselect *, round(slab_result / aset_result * 100 - 100,1)::text || '%'\nas slab_slower_by\nfrom (\nselect\n chunk_size,\n keep_chunks,\n sum(pg_allocate_memory_test(chunk_size, chunk_size * keep_chunks,\n1024*1024*1024, 'slab'))::numeric(1000,3) as slab_result,\n sum(pg_allocate_memory_test(chunk_size, chunk_size * keep_chunks,\n1024*1024*1024, 'aset'))::numeric(1000,3) as aset_result\nfrom\n (values(1),(10),(50),(100),(200),(300),(400),(500),(1000),(2000),(3000),(4000),(5000),(10000))\nv1(keep_chunks),\n (values(64),(128),(256),(512),(1024)) v2(chunk_size)\ngroup by rollup(1,2)\n);\n\nThe results for the 64-byte chunk are shown in the attached chart.\nIt's not as fast as aset, but much faster than master's slab.c\nThe first blue bar of the chart is well above the vertical axis. It\ntook master 1118958 milliseconds for that test. The attached patch\ntook 194 ms. The rest of the tests seem to put the patched code around\nsomewhere in the middle between the unpatched code and aset.c's\nperformance.\n\nThe benchmark I did was entirely a FIFO workload that keeps around\n\"keep_chunks\" at once before starting to free the oldest chunks. I\nknow Tomas has some LIFO and random benchmarks. I edited that code [1]\na little to add support for other context types so that a comparison\ncould be done more easily, however, I'm getting very weird performance\nresults where sometimes it runs about twice as fast (Tomas mentioned\nhe got this too). 
I'm not seeing that with my own benchmarking\nfunction, so I'm wondering if there's something weird going on with\nthe benchmark itself rather than the slab.c code.\n\nI've likely made much more changes than I can list here, but here are\na few of the more major ones:\n\n1. Make use of dclist for empty blocks\n2. In SlabAlloc() allocate chunks from the freelist before the unused list.\n3. Added output for showing information about empty blocks in the\nSlabStats output.\n4. Renamed the context freelists to blocklist. I found this was\nlikely just confusing things with the block-level freelist. In any\ncase, it seemed weird to have freelist[0] store full blocks. Not much\nfree there! I renamed to blocklist[].\n5. I did a round of changing the order of the fields in SlabBlock.\nThis seems to affect performance quite a bit. Having the context first\nseems to improve performance. Having the blocklist[] node last also\nhelps.\n6. Removed nblocks and headerSize from SlabContext. headerSize is no\nlonger needed. nblocks was only really used for Asserts and\nSlabIsEmpty. I changed the Asserts to use a local count of blocks and\nchanged SlabIsEmpty to look at the context's mem_allocated.\n7. There's now no integer division in any of the alloc and free code.\nThe only time we divide by fullChunkSize is in the check function.\n8. When using a block from the emptyblock list, I changed the code to\nnot re-init the block. It's now used as it was left previously. This\nmeans no longer having to build the freelist again.\n9. Updated all comments to reflect the current state of the code.\n\nSome things I thought about but didn't do:\na. Change the size of SlabContext's chunkSize, fullChunkSize and\nblockSize to be uint32 instead of Size. It might be possible to get\nSlabContext below 128 bytes with a bit more work.\nb. 
I could have done a bit more experimentation with unlikely() and\nlikely() to move less frequently accessed code off into a cold area.\n\nFor #2 above, I didn't really see much change in performance when I\nswapped the order of what we allocate from first. I expected free\nchunks would be better as they've been used and are seemingly more\nlikely to be in some CPU cache than one of the unused chunks. I might\nneed a different allocation pattern than the one I used to highlight\nthat fact though.\n\nDavid\n\n[1] https://github.com/david-rowley/postgres/tree/alloc_bench_contrib",
"msg_date": "Sat, 10 Dec 2022 17:01:56 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: slab allocator performance issues"
},
{
"msg_contents": "On Sat, Dec 10, 2022 at 11:02 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> [v4]\n\nThanks for working on this!\n\nI ran an in-situ benchmark using the v13 radix tree patchset ([1] WIP but\nshould be useful enough for testing allocation speed), only applying the\nfirst five, which are local-memory only. The benchmark is not meant to\nrepresent a realistic workload, and primarily stresses traversal and\nallocation of the smallest node type. Minimum of five, with turbo-boost\noff, on recent Intel laptop hardware:\n\nv13-0001 to 0005:\n\n# select * from bench_load_random_int(500 * 1000);\n mem_allocated | load_ms\n---------------+---------\n 151123432 | 222\n\n47.06% postgres postgres [.] rt_set\n22.89% postgres postgres [.] SlabAlloc\n 9.65% postgres postgres [.] rt_node_insert_inner.isra.0\n 5.94% postgres [unknown] [k] 0xffffffffb5e011b7\n 3.62% postgres postgres [.] MemoryContextAlloc\n 2.70% postgres libc.so.6 [.] __memmove_avx_unaligned_erms\n 2.60% postgres postgres [.] SlabFree\n\n+ v4 slab:\n\n# select * from bench_load_random_int(500 * 1000);\n mem_allocated | load_ms\n---------------+---------\n 152463112 | 213\n\n 52.42% postgres postgres [.] rt_set\n 12.80% postgres postgres [.] SlabAlloc\n 9.38% postgres postgres [.] rt_node_insert_inner.isra.0\n 7.87% postgres [unknown] [k] 0xffffffffb5e011b7\n 4.98% postgres postgres [.] SlabFree\n\nWhile allocation is markedly improved, freeing looks worse here. 
The\nproportion is surprising because only about 2% of nodes are freed during\nthe load, but doing that takes up 10-40% of the time compared to allocating.\n\nnum_keys = 500000, height = 7\nn4 = 2501016, n15 = 56932, n32 = 270, n125 = 0, n256 = 257\n\nSidenote: I don't recall ever seeing vsyscall (I think that's what the\n0xffffffffb5e011b7 address is referring to) in a profile, so not sure what\nis happening there.\n\n[1]\nhttps://www.postgresql.org/message-id/CAFBsxsHNE621mGuPhd7kxaGc22vMkoSu7R4JW9Zan1jjorGy3g%40mail.gmail.com\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 12 Dec 2022 14:13:47 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: slab allocator performance issues"
},
{
"msg_contents": "Thanks for testing the patch.\n\nOn Mon, 12 Dec 2022 at 20:14, John Naylor <john.naylor@enterprisedb.com> wrote:\n> v13-0001 to 0005:\n\n> 2.60% postgres postgres [.] SlabFree\n\n> + v4 slab:\n\n> 4.98% postgres postgres [.] SlabFree\n>\n> While allocation is markedly improved, freeing looks worse here. The proportion is surprising because only about 2% of nodes are freed during the load, but doing that takes up 10-40% of the time compared to allocating.\n\nI've tried to reproduce this with the v13 patches applied and I'm not\nreally getting the same as you are. To run the function 100 times I\nused:\n\nselect x, a.* from generate_series(1,100) x(x), lateral (select * from\nbench_load_random_int(500 * 1000 * (1+x-x))) a;\n\n(I had to add the * (1+x-x) to add a lateral dependency to stop the\nfunction just being executed once)\n\nv13-0001 - 0005 gives me:\n\n 37.71% postgres [.] rt_set\n 19.24% postgres [.] SlabAlloc\n 8.73% [kernel] [k] clear_page_rep\n 5.21% postgres [.] rt_node_insert_inner.isra.0\n 2.63% [kernel] [k] asm_exc_page_fault\n 2.24% postgres [.] SlabFree\n\nand fairly consistently 122 ms runtime per call.\n\nApplying v4 slab patch I get:\n\n 41.06% postgres [.] rt_set\n 10.84% postgres [.] SlabAlloc\n 9.01% [kernel] [k] clear_page_rep\n 6.49% postgres [.] rt_node_insert_inner.isra.0\n 2.76% postgres [.] 
SlabFree\n\nand fairly consistently 112 ms per call.\n\nI wonder if you can consistently get the same result on another\ncompiler or after patching something like master~50 or master~100.\nMaybe it's just a code alignment thing.\n\nLooking at the annotation of perf report for SlabFree with the patched\nversion I see:\n\n │\n │ /* push this chunk onto the head of the free list */\n │ *(MemoryChunk **) pointer = block->freehead;\n 0.09 │ mov 0x10(%r8),%rax\n │ slab = block->slab;\n59.15 │ mov (%r8),%rbp\n │ *(MemoryChunk **) pointer = block->freehead;\n 9.43 │ mov %rax,(%rdi)\n │ block->freehead = chunk;\n │\n │ block->nfree++;\n\nI think what that's telling me is that dereferencing the block's\nmemory is slow, likely due to that particular cache line not being\ncached any longer. I tried running the test with 10,000 ints instead\nof 500,000 so that there would be less CPU cache pressure. I see:\n\n 29.76 │ mov (%r8),%rbp\n │ *(MemoryChunk **) pointer = block->freehead;\n 12.72 │ mov %rax,(%rdi)\n │ block->freehead = chunk;\n │\n │ block->nfree++;\n │ mov 0x8(%r8),%eax\n │ block->freehead = chunk;\n 4.27 │ mov %rdx,0x10(%r8)\n │ SlabBlocklistIndex():\n │ index = (nfree + (1 << blocklist_shift) - 1) >> blocklist_shift;\n │ mov $0x1,%edx\n │ SlabFree():\n │ block->nfree++;\n │ lea 0x1(%rax),%edi\n │ mov %edi,0x8(%r8)\n │ SlabBlocklistIndex():\n │ int32 blocklist_shift = slab->blocklist_shift;\n │ mov 0x70(%rbp),%ecx\n │ index = (nfree + (1 << blocklist_shift) - 1) >> blocklist_shift;\n 8.46 │ shl %cl,%edx\n\nvarious other instructions in SlabFree are proportionally taking\nlonger now. For example the bitshift at the end was insignificant\npreviously. That indicates to me that this is due to caching effects.\nWe must fetch the block in SlabFree() in both versions. It's possible\nthat something is going on in SlabAlloc() that is causing more useful\ncachelines to be evicted, but (I think) one of primary design goals\nAndres was going for was to reduce that. 
For example not having to\nwrite out the freelist for an entire block when the block is first\nallocated means not having to load possibly all cache lines for the\nentire block anymore.\n\nI tried looking at perf stat during the run.\n\nWithout slab changes:\n\ndrowley@amd3990x:~$ sudo perf stat --pid=74922 sleep 2\n Performance counter stats for process id '74922':\n\n 2,000.74 msec task-clock # 1.000 CPUs utilized\n 4 context-switches # 1.999 /sec\n 0 cpu-migrations # 0.000 /sec\n 578,139 page-faults # 288.963 K/sec\n 8,614,687,392 cycles # 4.306 GHz\n (83.21%)\n 682,574,688 stalled-cycles-frontend # 7.92% frontend\ncycles idle (83.33%)\n 4,822,904,271 stalled-cycles-backend # 55.98% backend\ncycles idle (83.41%)\n 11,447,124,105 instructions # 1.33 insn per cycle\n # 0.42 stalled\ncycles per insn (83.41%)\n 1,947,647,575 branches # 973.464 M/sec\n (83.41%)\n 13,914,897 branch-misses # 0.71% of all\nbranches (83.24%)\n\n 2.000924020 seconds time elapsed\n\nWith slab changes:\n\ndrowley@amd3990x:~$ sudo perf stat --pid=75967 sleep 2\n Performance counter stats for process id '75967':\n\n 2,000.89 msec task-clock # 1.000 CPUs utilized\n 1 context-switches # 0.500 /sec\n 0 cpu-migrations # 0.000 /sec\n 607,423 page-faults # 303.576 K/sec\n 8,566,091,176 cycles # 4.281 GHz\n (83.21%)\n 737,839,390 stalled-cycles-frontend # 8.61% frontend\ncycles idle (83.32%)\n 4,454,357,725 stalled-cycles-backend # 52.00% backend\ncycles idle (83.41%)\n 10,760,559,837 instructions # 1.26 insn per cycle\n # 0.41 stalled\ncycles per insn (83.41%)\n 1,872,047,962 branches # 935.606 M/sec\n (83.41%)\n 14,928,953 branch-misses # 0.80% of all\nbranches (83.25%)\n\n 2.000960610 seconds time elapsed\n\nIt would be interesting to see if your perf stat output is showing\nsomething significantly different with and without the slab changes.\n\nIt does not seem impossible that due to the slab changes having to\nlook at less memory in SlabAlloc() that that's moving some additional\nrequirements 
for SlabFree() to fetch cache lines that in the unpatched\nversion would have already been available. If that is the case, then\nI think we shouldn't worry about it unless we can find some workload\nthat demonstrates an overall performance regression with the patch. I\njust don't quite have enough perf experience to know how I might go\nabout proving that.\n\nDavid\n\n\n",
"msg_date": "Tue, 13 Dec 2022 13:49:36 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: slab allocator performance issues"
},
{
"msg_contents": "On Tue, Dec 13, 2022 at 7:50 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> Thanks for testing the patch.\n>\n> On Mon, 12 Dec 2022 at 20:14, John Naylor <john.naylor@enterprisedb.com>\nwrote:\n\n> > While allocation is markedly improved, freeing looks worse here. The\nproportion is surprising because only about 2% of nodes are freed during\nthe load, but doing that takes up 10-40% of the time compared to allocating.\n>\n> I've tried to reproduce this with the v13 patches applied and I'm not\n> really getting the same as you are. To run the function 100 times I\n> used:\n>\n> select x, a.* from generate_series(1,100) x(x), lateral (select * from\n> bench_load_random_int(500 * 1000 * (1+x-x))) a;\n\nSimply running over a longer period of time like this makes the SlabFree\ndifference much closer to your results, so it doesn't seem out of line\nanymore. Here SlabAlloc seems to take maybe 2/3 of the time of current\nslab, with a 5% reduction in total time:\n\n500k ints:\n\nv13-0001-0005\naverage of 30: 217ms\n\n 47.61% postgres postgres [.] rt_set\n 20.99% postgres postgres [.] SlabAlloc\n 10.00% postgres postgres [.] rt_node_insert_inner.isra.0\n 6.87% postgres [unknown] [k] 0xffffffffbce011b7\n 3.53% postgres postgres [.] MemoryContextAlloc\n 2.82% postgres postgres [.] SlabFree\n\n+slab v4\naverage of 30: 206ms\n\n 51.13% postgres postgres [.] rt_set\n 14.08% postgres postgres [.] SlabAlloc\n 11.41% postgres postgres [.] rt_node_insert_inner.isra.0\n 7.44% postgres [unknown] [k] 0xffffffffbce011b7\n 3.89% postgres postgres [.] MemoryContextAlloc\n 3.39% postgres postgres [.] SlabFree\n\nIt doesn't look mysterious anymore, but I went ahead and took some more\nperf measurements, including for cache misses. 
My naive impression is that\nwe're spending a bit more time waiting for data, but having to do less work\nwith it once we get it, which is consistent with your earlier comments:\n\nperf stat -p $pid sleep 2\nv13:\n 2,001.55 msec task-clock:u # 1.000 CPUs\nutilized\n 0 context-switches:u # 0.000 /sec\n\n 0 cpu-migrations:u # 0.000 /sec\n\n 311,690 page-faults:u # 155.724 K/sec\n\n 3,128,740,701 cycles:u # 1.563 GHz\n\n 4,739,333,861 instructions:u # 1.51 insn\nper cycle\n 820,014,588 branches:u # 409.690 M/sec\n\n 7,385,923 branch-misses:u # 0.90% of all\nbranches\n\n+slab v4:\n 2,001.09 msec task-clock:u # 1.000 CPUs\nutilized\n 0 context-switches:u # 0.000 /sec\n\n 0 cpu-migrations:u # 0.000 /sec\n\n 326,017 page-faults:u # 162.920 K/sec\n\n 3,016,668,818 cycles:u # 1.508 GHz\n\n 4,324,863,908 instructions:u # 1.43 insn\nper cycle\n 761,839,927 branches:u # 380.712 M/sec\n\n 7,718,366 branch-misses:u # 1.01% of all\nbranches\n\n\nperf stat -e LLC-loads,LLC-loads-misses -p $pid sleep 2\nmin/max of 3 runs:\nv13: LL cache misses: 25.08% - 25.41%\n+slab v4: LL cache misses: 25.74% - 26.01%\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 14 Dec 2022 17:37:52 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: slab allocator performance issues"
},
{
"msg_contents": "I've spent quite a bit more time on the slab changes now and I've\nattached a v3 patch.\n\nOne of the major things that I've spent time on is benchmarking this.\nI'm aware that Tomas wrote some functions to benchmark. I've taken\nthose and made some modifications to allow the memory context type to\nbe specified as a function parameter. This allows me to easily\ncompare the performance of slab with both aset and generation.\n\nAnother change that I made to Tomas' module was how the random\nordering part works. What I wanted was the ability to specify how\nrandomly to pfree the chunks and test various \"degrees-of-randomness\"\nto see how that affects the performance. What I ended up coming up\nwith was the ability to specify the number of \"random segments\". This\ncontrols how many groups we split all allocated chunks into to\nrandomise. If there is 1 random segment, then that's just randomising\nover all chunks. If there are 10 random segments, then we split the\narray of allocated chunks into 10 portions based on either FIFO or\nLIFO order, then randomise the order of the chunks only within each of\nthose segments. This allows us to test FIFO/LIFO allocation patterns\nwith and without random and any degrees of that in between. If the\nrandom segments is set to 0, then no randomisation is done.\n\nAnother change I made to Tomas' code was, I'm now using palloc0()\ninstead of palloc() and I'm also checking the first byte of the\nallocated chunk is '\\0' before pfreeing it. What I was finding was\nthat pfree was showing as highly dominant in perf output due to it\nhaving to deference the MemoryChunk to find the context-type bits.\npfree had to do this as none of the calling code had touched any of\nthe memory in the chunk. I felt it was unrealistic to be pallocing\nmemory and not doing anything with it and then pfreeing it without\nhaving done anything with it. 
Mostly this just moves the\nresponsibilities around of which function is penalised in having to\nload the cache line. I mostly did this as I was struggling to make any\nsense of perf's output.\n\nI've attached alloc_bench_contrib.patch which I used for testing.\n\nI've also attached a spreadsheet with the benchmark results. The\ngeneral summary from having done those is that slab is now generally\nnow on-par with aset in terms of palloc performance. Previously slab\nwas performing at about half the speed of aset unless CPU cache\npressure became more significant, in which case the performance is\ndominated by fetching cache lines from RAM. However, the new code\nstill makes meaningful improvements even under heavy CPU cache\npressure. When it comes to pfree performance, the updated slab code\nis much faster than it was previously, but not quite on-par with aset\nor generation.\n\nThe attached spreadsheet is broken down into 3 tabs. Each tab is\ntesting a chunk size and a fixed total number of chunks allocated at\nonce. Within each tab, I'm testing FIFO and then LIFO allocation\npatterns each with a different degree of randomness introduced, as I\ndescribed above. In none of the tests was the patched version slower\nthan the unpatched version.\n\nOne pending question I had was about SlabStats where we list free\nchunks. Since we now have a list of emptyblocks, I wasn't too sure if\nthe chunks from those should be included in that total. I currently\nam not including them, but I have added some additional information to\nlist the number of completely empty blocks that we've got in the\nemptyblocks list.\n\nSome follow-up work that I'm thinking is a good idea:\n\n1. Reduce the SlabContext's chunkSize, fullChunkSize and blockSize\nfields from Size down to uint32. These have no need to be 64 bits. We\ndon't allow slab blocks over 1GB since c6e0fe1f2. I thought of doing\nthis separately as we might need to rationalise the equivalent fields\nin aset.c and generation.c. 
Those can have external chunks, so I'm\nnot 100% sure if we should do that there or not yet. I just didn't\nwant to touch those files in this effort.\n2. Slab should probably gain the ability to grow the block size as\naset and generation both do. Since the performance of the slab context\nis good now, we might want to use it for hash join's 32kb chunks, but\nI doubt we can without the block size growth.\n\nI'm planning on pushing the attached v3 patch shortly. I've spent\nseveral days reading over this and testing it in detail along with\nadding additional features to the SlabCheck code to find more\ninconsistencies.\n\nDavid",
"msg_date": "Tue, 20 Dec 2022 16:35:26 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: slab allocator performance issues"
},
{
"msg_contents": "On Tue, Dec 20, 2022 at 10:36 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> I'm planning on pushing the attached v3 patch shortly. I've spent\n> several days reading over this and testing it in detail along with\n> adding additional features to the SlabCheck code to find more\n> inconsistencies.\n\nFWIW, I reran my test from last week and got similar results.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Tue, Dec 20, 2022 at 10:36 AM David Rowley <dgrowleyml@gmail.com> wrote:>> I'm planning on pushing the attached v3 patch shortly. I've spent> several days reading over this and testing it in detail along with> adding additional features to the SlabCheck code to find more> inconsistencies.FWIW, I reran my test from last week and got similar results.--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 20 Dec 2022 15:19:25 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: slab allocator performance issues"
},
{
"msg_contents": "On Tue, 20 Dec 2022 at 21:19, John Naylor <john.naylor@enterprisedb.com> wrote:\n>\n>\n> On Tue, Dec 20, 2022 at 10:36 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> >\n> > I'm planning on pushing the attached v3 patch shortly. I've spent\n> > several days reading over this and testing it in detail along with\n> > adding additional features to the SlabCheck code to find more\n> > inconsistencies.\n>\n> FWIW, I reran my test from last week and got similar results.\n\nThanks a lot for testing that stuff last week. I got a bit engrossed\nin the perf weirdness and forgot to reply. I found they made much\nmore sense after using palloc0 and touching the allocated memory just\nbefore freeing. I think this is a more realistic test.\n\nI've now pushed the patch after making a small adjustment to the\nversion I sent earlier.\n\nDavid\n\n\n",
"msg_date": "Tue, 20 Dec 2022 21:51:37 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: slab allocator performance issues"
}
]
[
{
"msg_contents": "Hello,\n\nI found that any corruption of WAL page header found during recovery is never\nreported in log messages. If wal page header is broken, it is detected in\nXLogReaderValidatePageHeader called from XLogPageRead, but the error messages\nare always reset and never reported.\n\n if (!XLogReaderValidatePageHeader(xlogreader, targetPagePtr, readBuf))\n {\n /* reset any error XLogReaderValidatePageHeader() might have set */\n xlogreader->errormsg_buf[0] = '\\0';\n goto next_record_is_invalid;\n }\n\nSince the commit 06687198018, we call XLogReaderValidatePageHeader here so that\nwe can check a page header and retry immediately if it's invalid, but the error\nmessage is reset immediately and not reported. I guess the reason why the error\nmessage is reset is because we might get the right WAL after some retries.\nHowever, I think it is better to report the error for each check in order to let\nusers know the actual issues founded in the WAL.\n\nI attached a patch to fix it in this way.\n\nOr, if we wouldn't like to report an error for each check and also what we want\nto check here is just about old recycled WAL instead of header corruption itself, \nI wander that we could check just xlp_pageaddr instead of calling\nXLogReaderValidatePageHeader.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>",
"msg_date": "Sun, 18 Jul 2021 04:55:05 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "corruption of WAL page header is never reported"
},
{
"msg_contents": "Em sáb., 17 de jul. de 2021 às 16:57, Yugo NAGATA <nagata@sraoss.co.jp>\nescreveu:\n\n> Hello,\n>\n> I found that any corruption of WAL page header found during recovery is\n> never\n> reported in log messages. If wal page header is broken, it is detected in\n> XLogReaderValidatePageHeader called from XLogPageRead, but the error\n> messages\n> are always reset and never reported.\n>\n> if (!XLogReaderValidatePageHeader(xlogreader, targetPagePtr,\n> readBuf))\n> {\n> /* reset any error XLogReaderValidatePageHeader() might\n> have set */\n> xlogreader->errormsg_buf[0] = '\\0';\n> goto next_record_is_invalid;\n> }\n>\n> Since the commit 06687198018, we call XLogReaderValidatePageHeader here so\n> that\n> we can check a page header and retry immediately if it's invalid, but the\n> error\n> message is reset immediately and not reported. I guess the reason why the\n> error\n> message is reset is because we might get the right WAL after some retries.\n> However, I think it is better to report the error for each check in order\n> to let\n> users know the actual issues founded in the WAL.\n>\n> I attached a patch to fix it in this way.\n>\nI think to keep the same behavior as before, is necessary always run:\n\n/* reset any error XLogReaderValidatePageHeader() might have set */\nxlogreader->errormsg_buf[0] = '\\0';\n\nnot?\n\nregards,\nRanier Vilela",
"msg_date": "Sat, 17 Jul 2021 18:40:02 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: corruption of WAL page header is never reported"
},
{
"msg_contents": "On Sat, 17 Jul 2021 18:40:02 -0300\nRanier Vilela <ranier.vf@gmail.com> wrote:\n\n> Em sáb., 17 de jul. de 2021 às 16:57, Yugo NAGATA <nagata@sraoss.co.jp>\n> escreveu:\n> \n> > Hello,\n> >\n> > I found that any corruption of WAL page header found during recovery is\n> > never\n> > reported in log messages. If wal page header is broken, it is detected in\n> > XLogReaderValidatePageHeader called from XLogPageRead, but the error\n> > messages\n> > are always reset and never reported.\n> >\n> > if (!XLogReaderValidatePageHeader(xlogreader, targetPagePtr,\n> > readBuf))\n> > {\n> > /* reset any error XLogReaderValidatePageHeader() might\n> > have set */\n> > xlogreader->errormsg_buf[0] = '\\0';\n> > goto next_record_is_invalid;\n> > }\n> >\n> > Since the commit 06687198018, we call XLogReaderValidatePageHeader here so\n> > that\n> > we can check a page header and retry immediately if it's invalid, but the\n> > error\n> > message is reset immediately and not reported. I guess the reason why the\n> > error\n> > message is reset is because we might get the right WAL after some retries.\n> > However, I think it is better to report the error for each check in order\n> > to let\n> > users know the actual issues founded in the WAL.\n> >\n> > I attached a patch to fix it in this way.\n> >\n> I think to keep the same behavior as before, is necessary always run:\n> \n> /* reset any error XLogReaderValidatePageHeader() might have set */\n> xlogreader->errormsg_buf[0] = '\\0';\n> \n> not?\n\nIf we are not in StandbyMode, the check is not retried, and an error is returned\nimmediately. So, I think ,we don't have to display an error message in such cases,\nand neither reset it. Instead, it would be better to leave the error message\nhandling to the caller of XLogReadRecord.\n\nRegards,\nYugo Nagat\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Sun, 18 Jul 2021 23:27:16 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: corruption of WAL page header is never reported"
},
{
"msg_contents": "Hello.\n\nAt Sun, 18 Jul 2021 04:55:05 +0900, Yugo NAGATA <nagata@sraoss.co.jp> wrote in \n> Hello,\n> \n> I found that any corruption of WAL page header found during recovery is never\n> reported in log messages. If wal page header is broken, it is detected in\n> XLogReaderValidatePageHeader called from XLogPageRead, but the error messages\n> are always reset and never reported.\n\nGood catch! Currently recovery stops showing no reason if it is\nstopped by page-header errors.\n\n> I attached a patch to fix it in this way.\n\nHowever, it is a kind of a roof-over-a-roof. What we should do is\njust omitting the check in XLogPageRead while in standby mode.\n\n> Or, if we wouldn't like to report an error for each check and also what we want\n> to check here is just about old recycled WAL instead of header corruption itself, \n> I wander that we could check just xlp_pageaddr instead of calling\n> XLogReaderValidatePageHeader.\n\nI'm not sure. But as described in the commit message, the commit\nintended to save a common case and there's no obvious reason to (and\nnot to) restrict the check only to page address. So it uses the\nestablished checking function.\n\nI was tempted to adjust the comment just above by adding \"while in\nstandby mode\", but \"so that we can retry immediately\" is suggesting\nthat so I didn't do that in the attached.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Mon, 19 Jul 2021 15:14:41 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: corruption of WAL page header is never reported"
},
{
"msg_contents": "On Mon, 19 Jul 2021 15:14:41 +0900 (JST)\nKyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n\n> Hello.\n> \n> At Sun, 18 Jul 2021 04:55:05 +0900, Yugo NAGATA <nagata@sraoss.co.jp> wrote in \n> > Hello,\n> > \n> > I found that any corruption of WAL page header found during recovery is never\n> > reported in log messages. If wal page header is broken, it is detected in\n> > XLogReaderValidatePageHeader called from XLogPageRead, but the error messages\n> > are always reset and never reported.\n> \n> Good catch! Currently recovery stops showing no reason if it is\n> stopped by page-header errors.\n> \n> > I attached a patch to fix it in this way.\n> \n> However, it is a kind of a roof-over-a-roof. What we should do is\n> just omitting the check in XLogPageRead while in standby mode.\n\nYour patch doesn't fix the issue that the error message is never reported in\nstandby mode. When a WAL page header is broken, the standby would silently repeat\nretrying forever.\n\nI think we have to let users know the corruption of WAL page header even in\nstandby mode, not? A corruption of WAL record header is always reported,\nby the way. (See that XLogReadRecord is calling ValidXLogRecordHeader.)\n\n\nRegards,\nYugo Nagata\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Mon, 19 Jul 2021 16:00:39 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: corruption of WAL page header is never reported"
},
{
"msg_contents": "At Mon, 19 Jul 2021 16:00:39 +0900, Yugo NAGATA <nagata@sraoss.co.jp> wrote in \n> Your patch doesn't fix the issue that the error message is never reported in\n> standby mode. When a WAL page header is broken, the standby would silently repeat\n> retrying forever.\n\nOk, I see your point and agree to that.\n\n> I think we have to let users know the corruption of WAL page header even in\n> standby mode, not? A corruption of WAL record header is always reported,\n> by the way. (See that XLogReadRecord is calling ValidXLogRecordHeader.)\n\nHoweer, I'm still on the opinion that we don't need to check that\nwhile in standby mode.\n\nHow about the attached?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Mon, 19 Jul 2021 17:47:07 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: corruption of WAL page header is never reported"
},
{
"msg_contents": "me> Howeer, I'm still on the opinion that we don't need to check that\nme> while in standby mode.\n\nOf course it's typo of \"while not in standby mode\".\n\nsorry..\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 19 Jul 2021 17:50:16 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: corruption of WAL page header is never reported"
},
{
"msg_contents": "On Mon, 19 Jul 2021 17:47:07 +0900 (JST)\nKyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n\n> At Mon, 19 Jul 2021 16:00:39 +0900, Yugo NAGATA <nagata@sraoss.co.jp> wrote in \n> > Your patch doesn't fix the issue that the error message is never reported in\n> > standby mode. When a WAL page header is broken, the standby would silently repeat\n> > retrying forever.\n> \n> Ok, I see your point and agree to that.\n> \n> > I think we have to let users know the corruption of WAL page header even in\n> > standby mode, not? A corruption of WAL record header is always reported,\n> > by the way. (See that XLogReadRecord is calling ValidXLogRecordHeader.)\n> \n> Howeer, I'm still on the opinion that we don't need to check that\n> while in standby mode.\n> \n> How about the attached?\n\nOn Mon, 19 Jul 2021 17:50:16 +0900 (JST)\nKyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n\n> me> Howeer, I'm still on the opinion that we don't need to check that\n> me> while in standby mode.\n> \n> Of course it's typo of \"while not in standby mode\".\n\nThanks for updating the patch. I agree with you.\n\nI think it is nice to fix to perform the check only during standby mode\nbecause it make a bit clearer why we check it immediately in XLogPageRead.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Mon, 19 Jul 2021 18:13:37 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: corruption of WAL page header is never reported"
},
{
"msg_contents": "Em seg., 19 de jul. de 2021 às 06:15, Yugo NAGATA <nagata@sraoss.co.jp>\nescreveu:\n\n> On Mon, 19 Jul 2021 17:47:07 +0900 (JST)\n> Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>\n> > At Mon, 19 Jul 2021 16:00:39 +0900, Yugo NAGATA <nagata@sraoss.co.jp>\n> wrote in\n> > > Your patch doesn't fix the issue that the error message is never\n> reported in\n> > > standby mode. When a WAL page header is broken, the standby would\n> silently repeat\n> > > retrying forever.\n> >\n> > Ok, I see your point and agree to that.\n> >\n> > > I think we have to let users know the corruption of WAL page header\n> even in\n> > > standby mode, not? A corruption of WAL record header is always\n> reported,\n> > > by the way. (See that XLogReadRecord is calling ValidXLogRecordHeader.)\n> >\n> > Howeer, I'm still on the opinion that we don't need to check that\n> > while in standby mode.\n> >\n> > How about the attached?\n>\n> On Mon, 19 Jul 2021 17:50:16 +0900 (JST)\n> Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>\n> > me> Howeer, I'm still on the opinion that we don't need to check that\n> > me> while in standby mode.\n> >\n> > Of course it's typo of \"while not in standby mode\".\n>\n> Thanks for updating the patch. I agree with you.\n>\n> I think it is nice to fix to perform the check only during standby mode\n> because it make a bit clearer why we check it immediately in XLogPageRead.\n>\nAnd as I had reviewed, your first patch was wrong and now with the Kyotaro\nversion,\nto keep the same behavior, it is necessary to reset the error, correct?\n\nregards,\nRanier Vilela\n\nEm seg., 19 de jul. de 2021 às 06:15, Yugo NAGATA <nagata@sraoss.co.jp> escreveu:On Mon, 19 Jul 2021 17:47:07 +0900 (JST)\nKyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n\n> At Mon, 19 Jul 2021 16:00:39 +0900, Yugo NAGATA <nagata@sraoss.co.jp> wrote in \n> > Your patch doesn't fix the issue that the error message is never reported in\n> > standby mode. 
When a WAL page header is broken, the standby would silently repeat\n> > retrying forever.\n> \n> Ok, I see your point and agree to that.\n> \n> > I think we have to let users know the corruption of WAL page header even in\n> > standby mode, not? A corruption of WAL record header is always reported,\n> > by the way. (See that XLogReadRecord is calling ValidXLogRecordHeader.)\n> \n> Howeer, I'm still on the opinion that we don't need to check that\n> while in standby mode.\n> \n> How about the attached?\n\nOn Mon, 19 Jul 2021 17:50:16 +0900 (JST)\nKyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n\n> me> Howeer, I'm still on the opinion that we don't need to check that\n> me> while in standby mode.\n> \n> Of course it's typo of \"while not in standby mode\".\n\nThanks for updating the patch. I agree with you.\n\nI think it is nice to fix to perform the check only during standby mode\nbecause it make a bit clearer why we check it immediately in XLogPageRead.And as I had reviewed, your first patch was wrong and now with the Kyotaro version, to keep the same behavior, it is necessary to reset the error, correct?regards,Ranier Vilela",
"msg_date": "Mon, 19 Jul 2021 06:32:28 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: corruption of WAL page header is never reported"
},
{
"msg_contents": "On Mon, 19 Jul 2021 06:32:28 -0300\nRanier Vilela <ranier.vf@gmail.com> wrote:\n\n> Em seg., 19 de jul. de 2021 às 06:15, Yugo NAGATA <nagata@sraoss.co.jp>\n> escreveu:\n> \n> > On Mon, 19 Jul 2021 17:47:07 +0900 (JST)\n> > Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> >\n> > > At Mon, 19 Jul 2021 16:00:39 +0900, Yugo NAGATA <nagata@sraoss.co.jp>\n> > wrote in\n> > > > Your patch doesn't fix the issue that the error message is never\n> > reported in\n> > > > standby mode. When a WAL page header is broken, the standby would\n> > silently repeat\n> > > > retrying forever.\n> > >\n> > > Ok, I see your point and agree to that.\n> > >\n> > > > I think we have to let users know the corruption of WAL page header\n> > even in\n> > > > standby mode, not? A corruption of WAL record header is always\n> > reported,\n> > > > by the way. (See that XLogReadRecord is calling ValidXLogRecordHeader.)\n> > >\n> > > Howeer, I'm still on the opinion that we don't need to check that\n> > > while in standby mode.\n> > >\n> > > How about the attached?\n> >\n> > On Mon, 19 Jul 2021 17:50:16 +0900 (JST)\n> > Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> >\n> > > me> Howeer, I'm still on the opinion that we don't need to check that\n> > > me> while in standby mode.\n> > >\n> > > Of course it's typo of \"while not in standby mode\".\n> >\n> > Thanks for updating the patch. I agree with you.\n> >\n> > I think it is nice to fix to perform the check only during standby mode\n> > because it make a bit clearer why we check it immediately in XLogPageRead.\n> >\n> And as I had reviewed, your first patch was wrong and now with the Kyotaro\n> version,\n> to keep the same behavior, it is necessary to reset the error, correct?\n\nWell, I think my first patch was not wrong. The difference with the latest\npatch is just whether we perform the additional check when we are not in\nstandby mode or not. 
The behavior is basically the same although which function\ndetects and prints the page-header error in cases of crash recovery is different.\nOn the other hand, in your patch, the error message was always omitted in cases\nof crash recovery, and it seemed to me wrong.\n\nRegards,\nYugo Nagata\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Mon, 19 Jul 2021 18:52:46 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: corruption of WAL page header is never reported"
},
{
"msg_contents": "\n\nOn 2021/07/19 18:52, Yugo NAGATA wrote:\n> Well, I think my first patch was not wrong. The difference with the latest\n> patch is just whether we perform the additional check when we are not in\n> standby mode or not. The behavior is basically the same although which function\n> detects and prints the page-header error in cases of crash recovery is different.\n\nYes, so which patch do you think is better? I like your version\nbecause there seems no reason why XLogPageRead() should skip\nXLogReaderValidatePageHeader() when not in standby mode.\n\nAlso I'm tempted to move ereport() and reset of errmsg_buf to\nunder next_record_is_invalid as follows. That is, in standby mode\nwhenever we find an invalid record and retry reading WAL page\nin XLogPageRead(), we report the error message and reset it.\nFor now in XLogPageRead(), only XLogReaderValidatePageHeader()\nsets errmsg_buf, but in the future other code or function doing that\nmay be added. For that case, the following change seems more elegant.\nThought?\n\n\t * shouldn't be a big deal from a performance point of view.\n \t */\n \tif (!XLogReaderValidatePageHeader(xlogreader, targetPagePtr, readBuf))\n-\t{\n-\t\t/* reset any error XLogReaderValidatePageHeader() might have set */\n-\t\txlogreader->errormsg_buf[0] = '\\0';\n \t\tgoto next_record_is_invalid;\n-\t}\n \n \treturn readLen;\n \n@@ -12517,7 +12513,17 @@ next_record_is_invalid:\n \n \t/* In standby-mode, keep trying */\n \tif (StandbyMode)\n+\t{\n+\t\tif (xlogreader->errormsg_buf[0])\n+\t\t{\n+\t\t\tereport(emode_for_corrupt_record(emode, EndRecPtr),\n+\t\t\t\t\t(errmsg_internal(\"%s\", xlogreader->errormsg_buf)));\n+\n+\t\t\t/* reset any error XLogReaderValidatePageHeader() might have set */\n+\t\t\txlogreader->errormsg_buf[0] = '\\0';\n+\t\t}\n \t\tgoto retry;\n+\t}\n \telse\n \t\treturn -1;\n }\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 2 Sep 2021 12:19:25 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: corruption of WAL page header is never reported"
},
{
"msg_contents": "At Thu, 2 Sep 2021 12:19:25 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> On 2021/07/19 18:52, Yugo NAGATA wrote:\n> > Well, I think my first patch was not wrong. The difference with the\n> > latest\n> > patch is just whether we perform the additional check when we are not\n> > in\n> > standby mode or not. The behavior is basically the same although which\n> > function\n> > detects and prints the page-header error in cases of crash recovery is\n> > different.\n> \n> Yes, so which patch do you think is better? I like your version\n> because there seems no reason why XLogPageRead() should skip\n> XLogReaderValidatePageHeader() when not in standby mode.\n\nDid you read the comment just above?\n\nxlog.c:12523\n>\t * Check the page header immediately, so that we can retry immediately if\n>\t * it's not valid. This may seem unnecessary, because XLogReadRecord()\n>\t * validates the page header anyway, and would propagate the failure up to\n>\t * ReadRecord(), which would retry. However, there's a corner case with\n>\t * continuation records, if a record is split across two pages such that\n\nSo when not in standby mode, the same check is performed by xlogreader\nwhich has the responsibility to validate the binary data read by\nXLogPageRead. The page-header validation is a compromise to save a\nspecific case.\n\n> Also I'm tempted to move ereport() and reset of errmsg_buf to\n> under next_record_is_invalid as follows. That is, in standby mode\n> whenever we find an invalid record and retry reading WAL page\n> in XLogPageRead(), we report the error message and reset it.\n> For now in XLogPageRead(), only XLogReaderValidatePageHeader()\n> sets errmsg_buf, but in the future other code or function doing that\n> may be added. 
For that case, the following change seems more elegant.\n> Thought?\n\nI don't think it is good choice to conflate read-failure and header\nvalidation failure from the view of modularity.\n\n> \t * shouldn't be a big deal from a performance point of view.\n> \t */\n> \tif (!XLogReaderValidatePageHeader(xlogreader, targetPagePtr, readBuf))\n> -\t{\n> -\t\t/* reset any error XLogReaderValidatePageHeader() might have set */\n> -\t\txlogreader->errormsg_buf[0] = '\\0';\n> \t\tgoto next_record_is_invalid;\n> -\t}\n> \treturn readLen;\n> @@ -12517,7 +12513,17 @@ next_record_is_invalid:\n> \t/* In standby-mode, keep trying */\n> \tif (StandbyMode)\n> +\t{\n> +\t\tif (xlogreader->errormsg_buf[0])\n> +\t\t{\n> +\t\t\tereport(emode_for_corrupt_record(emode, EndRecPtr),\n> + (errmsg_internal(\"%s\", xlogreader->errormsg_buf)));\n> +\n> + /* reset any error XLogReaderValidatePageHeader() might have set */\n> +\t\t\txlogreader->errormsg_buf[0] = '\\0';\n> +\t\t}\n> \t\tgoto retry;\n> +\t}\n> \telse\n> \t\treturn -1;\n> }\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 02 Sep 2021 13:17:16 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: corruption of WAL page header is never reported"
},
{
"msg_contents": "Sorry, please let me add something.\n\nAt Thu, 02 Sep 2021 13:17:16 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Thu, 2 Sep 2021 12:19:25 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> > Also I'm tempted to move ereport() and reset of errmsg_buf to\n> > under next_record_is_invalid as follows. That is, in standby mode\n> > whenever we find an invalid record and retry reading WAL page\n> > in XLogPageRead(), we report the error message and reset it.\n> > For now in XLogPageRead(), only XLogReaderValidatePageHeader()\n> > sets errmsg_buf, but in the future other code or function doing that\n> > may be added. For that case, the following change seems more elegant.\n> > Thought?\n> \n> I don't think it is good choice to conflate read-failure and header\n> validation failure from the view of modularity.\n\nIn other words, XLogReaderValidatePageHeader is foreign for\nXLogPageRead and the function indeuces the need of extra care for\nerrormsg_buf that is not relevant to the elog-capable module.\n\nregards,\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 02 Sep 2021 13:31:31 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: corruption of WAL page header is never reported"
},
{
"msg_contents": "(.. sorry for the chained-mails)\n\nAt Thu, 02 Sep 2021 13:31:31 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> Sorry, please let me add something.\n> \n> At Thu, 02 Sep 2021 13:17:16 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > At Thu, 2 Sep 2021 12:19:25 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> > > Also I'm tempted to move ereport() and reset of errmsg_buf to\n> > > under next_record_is_invalid as follows. That is, in standby mode\n> > > whenever we find an invalid record and retry reading WAL page\n> > > in XLogPageRead(), we report the error message and reset it.\n> > > For now in XLogPageRead(), only XLogReaderValidatePageHeader()\n> > > sets errmsg_buf, but in the future other code or function doing that\n> > > may be added. For that case, the following change seems more elegant.\n> > > Thought?\n> > \n> > I don't think it is good choice to conflate read-failure and header\n> > validation failure from the view of modularity.\n> \n> In other words, XLogReaderValidatePageHeader is foreign for\n> XLogPageRead and the function indeuces the need of extra care for\n> errormsg_buf that is not relevant to the elog-capable module.\n\nHowever, I can agree that the error handling code can be moved further\nlater. Like this,\n\n> \t * shouldn't be a big deal from a performance point of view.\n> \t */\n-\tif (!XLogReaderValidatePageHeader(xlogreader, targetPagePtr, readBuf))\n-\t\t/* reset any error XLogReaderValidatePageHeader() might have set */\n-\t\txlogreader->errormsg_buf[0] = '\\0';\n-\t\tgoto next_record_is_invalid;\n+ if (... 
&& XLogReaderValidatePageHeader())\n+ goto page_header_is_invalid;\n...\n> return readlen;\n>\n+ page_header_is_invalid:\n+\t/*\n+\t * in this case we consume this error right now then retry immediately,\n+\t * the message is already translated\n+\t */\n+\tif (xlogreader->errormsg_buf[0])\n+\t\tereport(emode_for_corrupt_record(emode, EndRecPtr),\n+\t\t\t\t(errmsg_internal(\"%s\", xlogreader->errormsg_buf)));\n+\n+ \t/* reset any error XLogReaderValidatePageHeader() might have set */\n+ \txlogreader->errormsg_buf[0] = '\\0';\n> \n> next_record_is_invalid:\n> \tlastSourceFailed = true;\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 02 Sep 2021 13:39:58 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: corruption of WAL page header is never reported"
},
{
"msg_contents": "\n\nOn 2021/09/02 13:17, Kyotaro Horiguchi wrote:\n> Did you read the comment just above?\n\nYes!\n\n> \n> xlog.c:12523\n>> \t * Check the page header immediately, so that we can retry immediately if\n>> \t * it's not valid. This may seem unnecessary, because XLogReadRecord()\n>> \t * validates the page header anyway, and would propagate the failure up to\n>> \t * ReadRecord(), which would retry. However, there's a corner case with\n>> \t * continuation records, if a record is split across two pages such that\n> \n> So when not in standby mode, the same check is performed by xlogreader\n> which has the responsibility to validate the binary data read by\n> XLogPageRead. The page-header validation is a compromise to save a\n> specific case.\n\nYes, so XLogPageRead() can skip the validation check of page head if not\nin standby mode. On the other hand, there is no problem if it still performs\nthe validation check as it does for now. No?\n\n> I don't think it is good choice to conflate read-failure and header\n> validation failure from the view of modularity.\n\nI don't think that the proposed change does that. But maybe I failed to get\nyour point yet... Anyway the proposed change just tries to reset\nerrormsg_buf whenever XLogPageRead() retries, whatever error happened\nbefore. Also if errormsg_buf is set at that moment, it's reported.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 2 Sep 2021 14:44:31 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: corruption of WAL page header is never reported"
},
{
"msg_contents": "At Thu, 2 Sep 2021 14:44:31 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> On 2021/09/02 13:17, Kyotaro Horiguchi wrote:\n> > Did you read the comment just above?\n> \n> Yes!\n\nGlad to hear that, or..:p\n\n> > xlog.c:12523\n> >> \t * Check the page header immediately, so that we can retry immediately if\n> >> \t * it's not valid. This may seem unnecessary, because XLogReadRecord()\n> >> \t * validates the page header anyway, and would propagate the failure up to\n> >> \t * ReadRecord(), which would retry. However, there's a corner case with\n> >> \t * continuation records, if a record is split across two pages such that\n> > So when not in standby mode, the same check is performed by xlogreader\n> > which has the responsibility to validate the binary data read by\n> > XLogPageRead. The page-header validation is a compromise to save a\n> > specific case.\n> \n> Yes, so XLogPageRead() can skip the validation check of page head if\n> not\n> in standby mode. On the other hand, there is no problem if it still\n> performs\n> the validation check as it does for now. No?\n\nPractically yes, and it has always been like that as you say.\n\n> > I don't think it is good choice to conflate read-failure and header\n> > validation failure from the view of modularity.\n> \n> I don't think that the proposed change does that. But maybe I failed\n\nIt's about your idea in a recent mail. not about the proposed\npatch(es).\n\n> to get\n> your point yet... Anyway the proposed change just tries to reset\n> errormsg_buf whenever XLogPageRead() retries, whatever error happened\n> before. Also if errormsg_buf is set at that moment, it's reported.\n\nI believe errmsg_buf is an interface to emit error messages dedicated\nto xlogreader that doesn't have an access to elog facility, and\nxlogreader doesn't (or ought not to or expect to) suppose\nnon-xlogreader callback functions set the variable. 
In that sense I\ndon't think the originally proposed patch is proper for the reason that\nthe non-xlogreader callback function may set errmsg_buf. This is what\nI meant by the word \"modularity\".\n\nFor that reason I avoided in my second proposal to call\nXLogReaderValidatePageHeader() at all while not in standby mode,\nbecause calling the validator function while in non-standby mode\nresults in the non-xlogreader function return errmsg_buf. Of course\nwe can instead always consume errmsg_buf in the function but I don't\nlike to shadow the caller's task.\n\nDoes that make sense?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 02 Sep 2021 16:26:13 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: corruption of WAL page header is never reported"
},
{
"msg_contents": "\n\nOn 2021/09/02 16:26, Kyotaro Horiguchi wrote:\n> I believe errmsg_buf is an interface to emit error messages dedicated\n> to xlogreader that doesn't have an access to elog facility, and\n> xlogreader doesn't (or ought not to or expect to) suppose\n> non-xlogreader callback functions set the variable. In that sense I\n> don't think theoriginally proposed patch is proper for the reason that\n> the non-xlogreader callback function may set errmsg_buf. This is what\n> I meant by the word \"modularity\".\n> \n> For that reason I avoided in my second proposal to call\n> XLogReaderValidatePageHeader() at all while not in standby mode,\n> because calling the validator function while in non-standby mode\n> results in the non-xlogreader function return errmsg_buf. Of course\n> we can instead always consume errmsg_buf in the function but I don't\n> like to shadow the caller's task.\n\nUnderstood. Thanks for clarifying this!\n\n> Does that makes sense?\n\nYes, I'm fine with your latest patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 2 Sep 2021 21:52:00 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: corruption of WAL page header is never reported"
},
{
"msg_contents": "At Thu, 2 Sep 2021 21:52:00 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> On 2021/09/02 16:26, Kyotaro Horiguchi wrote:\n> > I believe errmsg_buf is an interface to emit error messages dedicated\n> > to xlogreader that doesn't have an access to elog facility, and\n> > xlogreader doesn't (or ought not to or expect to) suppose\n> > non-xlogreader callback functions set the variable. In that sense I\n> > don't think theoriginally proposed patch is proper for the reason that\n> > the non-xlogreader callback function may set errmsg_buf. This is what\n> > I meant by the word \"modularity\".\n> > For that reason I avoided in my second proposal to call\n> > XLogReaderValidatePageHeader() at all while not in standby mode,\n> > because calling the validator function while in non-standby mode\n> > results in the non-xlogreader function return errmsg_buf. Of course\n> > we can instead always consume errmsg_buf in the function but I don't\n> > like to shadow the caller's task.\n> \n> Understood. Thanks for clarifying this!\n> \n> > Does that makes sense?\n> \n> Yes, I'm fine with your latest patch.\n\nThanks. Maybe some additional comment is needed.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 03 Sep 2021 16:55:36 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: corruption of WAL page header is never reported"
},
{
"msg_contents": "At Fri, 03 Sep 2021 16:55:36 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > Yes, I'm fine with your latest patch.\n> \n> Thanks. Maybe some additional comment is needed.\n\nSomething like this.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Fri, 03 Sep 2021 17:08:15 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: corruption of WAL page header is never reported"
},
{
"msg_contents": "On 2021-Sep-03, Kyotaro Horiguchi wrote:\n\n> diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c\n> index 24165ab03e..b621ad6b0f 100644\n> --- a/src/backend/access/transam/xlog.c\n> +++ b/src/backend/access/transam/xlog.c\n> @@ -12496,9 +12496,21 @@ retry:\n> \t *\n> \t * Validating the page header is cheap enough that doing it twice\n> \t * shouldn't be a big deal from a performance point of view.\n> +\t *\n> +\t * Don't call XLogReaderValidatePageHeader here while not in standby mode\n> +\t * so that this function won't return with a valid errmsg_buf.\n> \t */\n> -\tif (!XLogReaderValidatePageHeader(xlogreader, targetPagePtr, readBuf))\n> +\tif (StandbyMode &&\n> +\t\t!XLogReaderValidatePageHeader(xlogreader, targetPagePtr, readBuf))\n\nOK, but I don't understand why we have a comment that says (referring to\nnon-standby mode) \"doing it twice shouldn't be a big deal\", followed by\n\"Don't do it twice while not in standby mode\" -- that seems quite\ncontradictory. I think the new comment should overwrite the previous\none, something like this:\n\n-\t * Validating the page header is cheap enough that doing it twice\n-\t * shouldn't be a big deal from a performance point of view.\n+\t *\n+\t * We do this in standby mode only,\n+\t * so that this function won't return with a valid errmsg_buf.\n \t */\n-\tif (!XLogReaderValidatePageHeader(xlogreader, targetPagePtr, readBuf))\n+\tif (StandbyMode &&\n+\t\t!XLogReaderValidatePageHeader(xlogreader, targetPagePtr, readBuf))\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Sun, 5 Sep 2021 15:11:13 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: corruption of WAL page header is never reported"
},
{
"msg_contents": "\n\nOn 2021/09/06 3:11, Alvaro Herrera wrote:\n> On 2021-Sep-03, Kyotaro Horiguchi wrote:\n> \n>> diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c\n>> index 24165ab03e..b621ad6b0f 100644\n>> --- a/src/backend/access/transam/xlog.c\n>> +++ b/src/backend/access/transam/xlog.c\n>> @@ -12496,9 +12496,21 @@ retry:\n>> \t *\n>> \t * Validating the page header is cheap enough that doing it twice\n>> \t * shouldn't be a big deal from a performance point of view.\n>> +\t *\n>> +\t * Don't call XLogReaderValidatePageHeader here while not in standby mode\n>> +\t * so that this function won't return with a valid errmsg_buf.\n>> \t */\n>> -\tif (!XLogReaderValidatePageHeader(xlogreader, targetPagePtr, readBuf))\n>> +\tif (StandbyMode &&\n>> +\t\t!XLogReaderValidatePageHeader(xlogreader, targetPagePtr, readBuf))\n> \n> OK, but I don't understand why we have a comment that says (referring to\n> non-standby mode) \"doing it twice shouldn't be a big deal\", followed by\n> \"Don't do it twice while not in standby mode\" -- that seems quite\n> contradictory. I think the new comment should overwrite the previous\n> one, something like this:\n> \n> -\t * Validating the page header is cheap enough that doing it twice\n> -\t * shouldn't be a big deal from a performance point of view.\n> +\t *\n> +\t * We do this in standby mode only,\n> +\t * so that this function won't return with a valid errmsg_buf.\n\nEven if we do this while NOT in standby mode, ISTM that this function doesn't\nreturn with a valid errmsg_buf because it's reset. So probably the comment\nshould be updated as follows?\n\n-------------------------\nWe don't do this while not in standby mode because we don't need to retry\nimmediately if the page header is not valid. 
Instead, XLogReadRecord() is\nresponsible to check the page header.\n-------------------------\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 7 Sep 2021 02:02:38 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: corruption of WAL page header is never reported"
},
{
"msg_contents": "On 2021/09/07 2:02, Fujii Masao wrote:\n> Even if we do this while NOT in standby mode, ISTM that this function doesn't\n> return with a valid errmsg_buf because it's reset. So probably the comment\n> should be updated as follows?\n> \n> -------------------------\n> We don't do this while not in standby mode because we don't need to retry\n> immediately if the page header is not valid. Instead, XLogReadRecord() is\n> responsible to check the page header.\n> -------------------------\n\nI updated the comment as above. Patch attached.\n\n-\t * it's not valid. This may seem unnecessary, because XLogReadRecord()\n+\t * it's not valid. This may seem unnecessary, because ReadPageInternal()\n \t * validates the page header anyway, and would propagate the failure up to\n\nI also applied this change because ReadPageInternal() not XLogReadRecord()\ncalls XLogReaderValidatePageHeader().\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Fri, 10 Sep 2021 10:38:39 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: corruption of WAL page header is never reported"
},
{
"msg_contents": "At Fri, 10 Sep 2021 10:38:39 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> On 2021/09/07 2:02, Fujii Masao wrote:\n> > Even if we do this while NOT in standby mode, ISTM that this function\n> > doesn't\n> > return with a valid errmsg_buf because it's reset. So probably the\n> > comment\n> > should be updated as follows?\n> > -------------------------\n> > We don't do this while not in standby mode because we don't need to\n> > retry\n> > immediately if the page header is not valid. Instead, XLogReadRecord()\n> > is\n> > responsible to check the page header.\n> > -------------------------\n> \n> I updated the comment as above. Patch attached.\n> \n> -\t * it's not valid. This may seem unnecessary, because XLogReadRecord()\n> + * it's not valid. This may seem unnecessary, because\n> ReadPageInternal()\n> \t * validates the page header anyway, and would propagate the failure up to\n> \n> I also applied this change because ReadPageInternal() not\n> XLogReadRecord()\n> calls XLogReaderValidatePageHeader().\n\nYeah, good catch.\n\n\n+\t * Note that we don't do this while not in standby mode because we don't\n+\t * need to retry immediately if the page header is not valid. Instead,\n+\t * ReadPageInternal() is responsible for validating the page header.\n\nThe point here is \"retry this page, not this record\", so \"we don't need\nto retry immediately\" looks a bit ambiguous. So how about something\nlike this?\n\nNote that we don't do this while not in standby mode because we don't\nneed to avoid retrying this entire record even if the page header is\nnot valid. Instead, ReadPageInternal() is responsible for validating\nthe page header in that case.\n\nEverything else looks fine to me.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 13 Sep 2021 11:00:04 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: corruption of WAL page header is never reported"
},
{
"msg_contents": "\n\nOn 2021/09/13 11:00, Kyotaro Horiguchi wrote:\n> The point here is \"retry this page, not this record\", so \"we don't need\n> to retry immediately\" looks a bit ambiguous. So how about something\n> like this?\n> \n> Note that we don't do this while not in standby mode because we don't\n> need to avoid retrying this entire record even if the page header is\n> not valid. Instead, ReadPageInternal() is responsible for validating\n> the page header in that case.\n\nYou mean that, while not in standby mode, we need to retry reading\nthe entire record if the page head is invalid? I was thinking that\nwe basically give up replaying further records in that case becase\nwe're not in standby mode.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 13 Sep 2021 14:56:11 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: corruption of WAL page header is never reported"
},
{
"msg_contents": "At Mon, 13 Sep 2021 14:56:11 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> On 2021/09/13 11:00, Kyotaro Horiguchi wrote:\n> > The point here is \"retry this page, not this record\", so \"we don't\n> > need\n> > to retry immediately\" looks a bit ambiguous. So how about something\n> > like this?\n> > Note that we don't do this while not in standby mode because we don't\n> > need to avoid retrying this entire record even if the page header is\n> > not valid. Instead, ReadPageInternal() is responsible for validating\n> > the page header in that case.\n> \n> You mean that, while not in standby mode, we need to retry reading\n> the entire record if the page head is invalid? I was thinking that\n> we basically give up replaying further records in that case becase\n> we're not in standby mode.\n\nI wrote \"while not in standby mode, we don't need to avoid retry the\nentire record\" but that doesn't mean the inversion \"while in standby\nmode, we need to do avoid that\". In the first place retry doesn't\nhappen while not in standby mode. I don't come up with a nice\nphrasing but something like this works?\n\nNote that we don't do this while not in standby mode because this is\nrequired only to avoid retrying this entire record for an invalid page\nheader while in standby mode. Instead, ReadPageInternal() is\nresponsible for validating the page header in that case.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 13 Sep 2021 17:21:31 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: corruption of WAL page header is never reported"
},
{
"msg_contents": "At Mon, 13 Sep 2021 17:21:31 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Mon, 13 Sep 2021 14:56:11 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> > \n> > \n> > On 2021/09/13 11:00, Kyotaro Horiguchi wrote:\n> > > The point here is \"retry this page, not this record\", so \"we don't\n> > > need\n> > > to retry immediately\" looks a bit ambiguous. So how about something\n> > > like this?\n> > > Note that we don't do this while not in standby mode because we don't\n> > > need to avoid retrying this entire record even if the page header is\n> > > not valid. Instead, ReadPageInternal() is responsible for validating\n> > > the page header in that case.\n> > \n> > You mean that, while not in standby mode, we need to retry reading\n> > the entire record if the page head is invalid? I was thinking that\n> > we basically give up replaying further records in that case becase\n> > we're not in standby mode.\n\nSorry, my brain can be easily twisted..\n\n> I wrote \"while not in standby mode, we don't need to avoid retry the\n> entire record\" but that doesn't mean the inversion \"while in standby\n> mode, we need to do avoid that\". In the first place retry doesn't\n> happen while not in standby mode. I don't come up with a nice\n> phrasing but something like this works?\n\nMmm. Something's got wrong badly...\n\nI wrote \"we don't need to avoid retry the entire record\" but that\ndoesn't mean \"we need to retry the entire record\". That simply means\n\"we don't care if retry happens or not.\"\n\nIn the first place retry doesn't happen while not in standby mode. I\ndon't come up with a nice phrasing but something like this works?\n\n> Note that we don't do this while not in standby mode because this is\n> required only to avoid retrying this entire record for an invalid page\n> header while in standby mode. 
Instead, ReadPageInternal() is\n> responsible for validating the page header in that case.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 13 Sep 2021 17:50:47 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: corruption of WAL page header is never reported"
},
{
"msg_contents": "\n\nOn 2021/09/13 17:21, Kyotaro Horiguchi wrote:\n> I wrote \"while not in standby mode, we don't need to avoid retry the\n> entire record\" but that doesn't mean the inversion \"while in standby\n> mode, we need to do avoid that\". In the first place retry doesn't\n> happen while not in standby mode. I don't come up with a nice\n> phrasing but something like this works?\n> \n> Note that we don't do this while not in standby mode because this is\n> required only to avoid retrying this entire record for an invalid page\n> header while in standby mode. Instead, ReadPageInternal() is\n> responsible for validating the page header in that case.\n\nI think that it's better to comment why \"retry\" is not necessary\nwhen not in standby mode.\n\n-------------------\nWhen not in standby mode, an invalid page header should cause recovery\nto end, not retry reading the page, so we don't need to validate the page\nheader here for the retry. Instead, ReadPageInternal() is responsible for\nthe validation.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 5 Oct 2021 00:59:46 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: corruption of WAL page header is never reported"
},
{
"msg_contents": "At Tue, 5 Oct 2021 00:59:46 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> I think that it's better to comment why \"retry\" is not necessary\n> when not in standby mode.\n> \n> -------------------\n> When not in standby mode, an invalid page header should cause recovery\n> to end, not retry reading the page, so we don't need to validate the\n> page\n> header here for the retry. Instead, ReadPageInternal() is responsible\n> for\n> the validation.\n\nLGTM.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 05 Oct 2021 10:58:39 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: corruption of WAL page header is never reported"
},
{
"msg_contents": "\n\nOn 2021/10/05 10:58, Kyotaro Horiguchi wrote:\n> At Tue, 5 Oct 2021 00:59:46 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>> I think that it's better to comment why \"retry\" is not necessary\n>> when not in standby mode.\n>>\n>> -------------------\n>> When not in standby mode, an invalid page header should cause recovery\n>> to end, not retry reading the page, so we don't need to validate the\n>> page\n>> header here for the retry. Instead, ReadPageInternal() is responsible\n>> for\n>> the validation.\n> \n> LGTM.\n\nThanks for the review! I updated the comment and pushed the patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 6 Oct 2021 00:18:27 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: corruption of WAL page header is never reported"
}
] |
[
{
"msg_contents": "Hi all,\n\nprairiedog has failed in a way that seems a bit obscure to me:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prairiedog&dt=2021-07-18%2000%3A23%3A29\n\nHere are the details of the failure:\nserver signaled to rotate log file\ncould not read\n\"/Users/buildfarm/bf-data/HEAD/pgsql.build/src/bin/pg_ctl/tmp_check/t_004_logrotate_primary_data/pgdata/current_logfiles\":\nNo such file or directory at t/004_logrotate.pl line 78\n### Stopping node \"primary\" using mode immediate\n\nupdate_metainfo_datafile() creates a temporary file renamed to\ncurrent_logfiles with rename(). It should be atomic, though this\nerror points out that this is not the case? The previous steps of\nthis test ensure that current_logfiles should exist.\n\nWe could use some eval blocks in this area, but a non-atomic rename()\nwould cause problems in more areas. Thoughts?\n--\nMichael",
"msg_date": "Sun, 18 Jul 2021 12:32:20 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Failure with 004_logrotate in prairiedog"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> prairiedog has failed in a way that seems a bit obscure to me:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prairiedog&dt=2021-07-18%2000%3A23%3A29\n> ...\n> We could use some eval blocks in this area, but a non-atomic rename()\n> would cause problems in more areas. Thoughts?\n\nAwhile back we discovered that old macOS versions have non-atomic rename\n[1]. I eventually shut down dromedary because that was causing failures\noften enough to be annoying. I'd not seen such a failure before on\nprairiedog, but it sure seems plausible that this is one.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/2636.1569016167@sss.pgh.pa.us\n\n\n",
"msg_date": "Sun, 18 Jul 2021 01:42:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Failure with 004_logrotate in prairiedog"
},
{
"msg_contents": "On Sun, Jul 18, 2021 at 01:42:18AM -0400, Tom Lane wrote:\n> Awhile back we discovered that old macOS versions have non-atomic rename\n> [1]. I eventually shut down dromedary because that was causing failures\n> often enough to be annoying. I'd not seen such a failure before on\n> prairiedog, but it sure seems plausible that this is one.\n\nThanks for the pointer. This indeed looks like the same problem.\n--\nMichael",
"msg_date": "Mon, 19 Jul 2021 07:06:15 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Failure with 004_logrotate in prairiedog"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Sun, Jul 18, 2021 at 01:42:18AM -0400, Tom Lane wrote:\n>> Awhile back we discovered that old macOS versions have non-atomic rename\n>> [1]. I eventually shut down dromedary because that was causing failures\n>> often enough to be annoying. I'd not seen such a failure before on\n>> prairiedog, but it sure seems plausible that this is one.\n\n> Thanks for the pointer. This indeed looks like the same problem.\n\nFor context, dromedary (now florican) is a dual-CPU machine while\nprairiedog has but one CPU. I'd thought that maybe not being\nmulti-CPU insulated prairiedog from the non-atomic-rename problem.\nBut now it looks like it's merely a lot less probable on that hardware.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 18 Jul 2021 18:32:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Failure with 004_logrotate in prairiedog"
},
{
"msg_contents": "At Sun, 18 Jul 2021 12:32:20 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> Hi all,\n> \n> prairiedog has failed in a way that seems a bit obscure to me:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prairiedog&dt=2021-07-18%2000%3A23%3A29\n> \n> Here are the details of the failure:\n> server signaled to rotate log file\n> could not read\n> \"/Users/buildfarm/bf-data/HEAD/pgsql.build/src/bin/pg_ctl/tmp_check/t_004_logrotate_primary_data/pgdata/current_logfiles\":\n> No such file or directory at t/004_logrotate.pl line 78\n> ### Stopping node \"primary\" using mode immediate\n> \n> update_metainfo_datafile() creates a temporary file renamed to\n> current_logfiles with rename(). It should be atomic, though this\n> error points out that this is not the case? The previous steps of\n> this test ensure that current_logfiles should exist.\n> \n> We could use some eval blocks in this area, but a non-atomic rename()\n> would cause problems in more areas. Thoughts?\n\nPostgresNode.logrotate() just invokes pg_ctl logrotate, which ends\nwith triggering log rotation by a signal.\n\nWhen rotation happens, the metainfo file is once removed then\ncreated. If slurp_file in the metafile-checking loop hits the gap, the\nslurp_file fails with ENOENT.\n\nFor non-win32 platforms, the error is identifiable by #!{ENOENT} but\nI'm not sure how we can identify the error for createFile(). $!\ndoesn't work, and $^E returns a human-readable string in the platform\nlanguage..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 19 Jul 2021 16:15:36 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failure with 004_logrotate in prairiedog"
},
{
"msg_contents": "On Mon, Jul 19, 2021 at 04:15:36PM +0900, Kyotaro Horiguchi wrote:\n> When rotation happens, the metainfo file is once removed then\n> created. If slurp_file in the metafile-checking loop hits the gap, the\n> slurp_file fails with ENOENT.\n\nI can read the following code, as of update_metainfo_datafile():\nif (rename(LOG_METAINFO_DATAFILE_TMP, LOG_METAINFO_DATAFILE) != 0)\n ereport(LOG,\n (errcode_for_file_access(),\n errmsg(\"could not rename file \\\"%s\\\" to \\\"%s\\\": %m\",\n LOG_METAINFO_DATAFILE_TMP, LOG_METAINFO_DATAFILE)));\n\nThis creates a temporary file that gets renamed to current_logfiles.\n--\nMichael",
"msg_date": "Mon, 19 Jul 2021 16:52:42 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Failure with 004_logrotate in prairiedog"
},
{
"msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> When rotation happens, the metainfo file is once removed then\n> created. If slurp_file in the metafile-checking loop hits the gap, the\n> slurp_file fails with ENOENT.\n\nOh! Yeah, that's dumb, we should fix it to use rename(). Can't blame\nplatform's rename() if it's not being used.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 19 Jul 2021 10:19:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Failure with 004_logrotate in prairiedog"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Mon, Jul 19, 2021 at 04:15:36PM +0900, Kyotaro Horiguchi wrote:\n>> When rotation happens, the metainfo file is once removed then\n>> created. If slurp_file in the metafile-checking loop hits the gap, the\n>> slurp_file fails with ENOENT.\n\n> I can read the following code, as of update_metainfo_datafile():\n> if (rename(LOG_METAINFO_DATAFILE_TMP, LOG_METAINFO_DATAFILE) != 0)\n\nYeah, ignore my previous message. There is an unlink up at the top\nof the function, which fooled me in my caffeine-deprived state.\nBut that path is only taken when logging was just turned off, so\nwe must remove the now-irrelevant metafile.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 19 Jul 2021 10:23:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Failure with 004_logrotate in prairiedog"
},
{
"msg_contents": "At Mon, 19 Jul 2021 10:23:46 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Michael Paquier <michael@paquier.xyz> writes:\n> > On Mon, Jul 19, 2021 at 04:15:36PM +0900, Kyotaro Horiguchi wrote:\n> >> When rotation happens, the metainfo file is once removed then\n> >> created. If slurp_file in the metafile-checking loop hits the gap, the\n> >> slurp_file fails with ENOENT.\n> \n> > I can read the following code, as of update_metainfo_datafile():\n> > if (rename(LOG_METAINFO_DATAFILE_TMP, LOG_METAINFO_DATAFILE) != 0)\n> \n> Yeah, ignore my previous message. There is an unlink up at the top\n> of the function, which fooled me in my caffeine-deprived state.\n\nYeah, sorry for the stupidity.\n\n> But that path is only taken when logging was just turned off, so\n> we must remove the now-irrelevant metafile.\n\nI'm not sure this is relevant, I found the following article. (as a\ntoken of my apology:p)\n\nhttp://www.weirdnet.nl/apple/rename.html\n\n> There is an easy way to empirically prove that rename() is not\n> atomic on Leopard 10.5.2. All you have to do is create a link to a\n> directory, replace that link with a\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 20 Jul 2021 13:12:21 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failure with 004_logrotate in prairiedog"
}
] |
[
{
"msg_contents": "Hi.\n\nIt seems that only superusers can execute pg_import_system_collations(), \nbut this is not mentioned in the manual.\n\nSince other functions that require superuser privileges describe that in \nthe manual, I think it would be better to do the same for consistency.\n\nThoughts?\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION",
"msg_date": "Mon, 19 Jul 2021 11:45:57 +0900",
"msg_from": "torikoshia <torikoshia@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Doc necessity for superuser privileges to execute\n pg_import_system_collations()"
},
{
"msg_contents": "\n\nOn 2021/07/19 11:45, torikoshia wrote:\n> Hi.\n> \n> It seems that only superusers can execute pg_import_system_collations(), but this is not mentioned in the manual.\n> \n> Since other functions that require superuser privileges describe that in the manual, I think it would be better to do the same for consistency.\n> \n> Thoughts?\n\nLGTM.\n\nIMO it's better to back-patch this to v10\nwhere pg_import_system_collations() was added.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 19 Jul 2021 13:30:59 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Doc necessity for superuser privileges to execute\n pg_import_system_collations()"
},
{
"msg_contents": "\n\nOn 2021/07/19 13:30, Fujii Masao wrote:\n> \n> \n> On 2021/07/19 11:45, torikoshia wrote:\n>> Hi.\n>>\n>> It seems that only superusers can execute pg_import_system_collations(), but this is not mentioned in the manual.\n>>\n>> Since other functions that require superuser privileges describe that in the manual, I think it would be better to do the same for consistency.\n>>\n>> Thoughts?\n> \n> LGTM.\n> \n> IMO it's better to back-patch this to v10\n> where pg_import_system_collations() was added.\n\nPushed. Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 21 Jul 2021 13:58:49 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Doc necessity for superuser privileges to execute\n pg_import_system_collations()"
}
] |
[
{
"msg_contents": "Hello,\n\nI found that the start section of the postgresql.conf file is missing a \ndescription of two units: bytes (appeared in version 11) and \nmicroseconds (in version 12).\n\nThe attached patch makes these changes to the postgresql.conf.sample file.\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Mon, 19 Jul 2021 12:44:50 +0300",
"msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "postgresql.conf.sample missing units"
},
{
"msg_contents": "On Mon, Jul 19, 2021 at 5:44 AM Pavel Luzanov <p.luzanov@postgrespro.ru>\nwrote:\n>\n> Hello,\n>\n> I found that the start section of the postgresql.conf file is missing a\n> description of two units: bytes (appeared in version 11) and\n> microseconds (in version 12).\n>\n> The attached patch makes these changes to the postgresql.conf.sample file.\n\nSeems like an oversight. I'll commit this soon barring objections.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Mon, Jul 19, 2021 at 5:44 AM Pavel Luzanov <p.luzanov@postgrespro.ru> wrote:>> Hello,>> I found that the start section of the postgresql.conf file is missing a> description of two units: bytes (appeared in version 11) and> microseconds (in version 12).>> The attached patch makes these changes to the postgresql.conf.sample file.Seems like an oversight. I'll commit this soon barring objections.--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 19 Jul 2021 10:31:37 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: postgresql.conf.sample missing units"
},
{
"msg_contents": "On Mon, Jul 19, 2021 at 10:31 AM John Naylor <john.naylor@enterprisedb.com>\nwrote:\n>\n> On Mon, Jul 19, 2021 at 5:44 AM Pavel Luzanov <p.luzanov@postgrespro.ru>\nwrote:\n> >\n> > Hello,\n> >\n> > I found that the start section of the postgresql.conf file is missing a\n> > description of two units: bytes (appeared in version 11) and\n> > microseconds (in version 12).\n> >\n> > The attached patch makes these changes to the postgresql.conf.sample\nfile.\n>\n> Seems like an oversight. I'll commit this soon barring objections.\n\nI pushed this and backpatched to v12. I also extracted just the \"bytes\"\npart and applied it to v11.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Mon, Jul 19, 2021 at 10:31 AM John Naylor <john.naylor@enterprisedb.com> wrote:>> On Mon, Jul 19, 2021 at 5:44 AM Pavel Luzanov <p.luzanov@postgrespro.ru> wrote:> >> > Hello,> >> > I found that the start section of the postgresql.conf file is missing a> > description of two units: bytes (appeared in version 11) and> > microseconds (in version 12).> >> > The attached patch makes these changes to the postgresql.conf.sample file.>> Seems like an oversight. I'll commit this soon barring objections.I pushed this and backpatched to v12. I also extracted just the \"bytes\" part and applied it to v11.--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 21 Jul 2021 10:33:16 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: postgresql.conf.sample missing units"
},
{
"msg_contents": "\nOn 21.07.2021 17:33, John Naylor wrote:\n> I pushed this and backpatched to v12. I also extracted just the \n> \"bytes\" part and applied it to v11.\nIt's a little more complicated, but it's the right decision.\nThank you.\n\nPavel Luzanov\nPostgres Professional: https://postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Thu, 22 Jul 2021 13:14:19 +0300",
"msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: postgresql.conf.sample missing units"
}
] |
[
{
"msg_contents": "Noob here.\n\nI'm trying to implement a new type of index access method. I don't think I\ncan use the built-in storage manager and buffer manager efficiently because\nfiles with 8k blocks aren't going to work. I really need to implement a\nlog-structured file.\n\nI'm confused on how to handle transactions and visibility. I don't see\nanything in the index action method functions (am*()) that tell me when to\ncommit or rollback new index entries, or which transaction we're currently\nin so I can know whether recently-added index entries should be visible to\nthe current scan. I'm guessing that all that magically happens in the\nstorage and buffer managers.\n\nSo... how do I handle this? Is there some way for me to implement my own\nstorage manager that manages visibility?\n\nI'd be grateful for any guidance.\n\n\n-- \nChris Cleveland\n312-339-2677 mobile\n\nNoob here.I'm trying to implement a new type of index access method. I don't think I can use the built-in storage manager and buffer manager efficiently because files with 8k blocks aren't going to work. I really need to implement a log-structured file.I'm confused on how to handle transactions and visibility. I don't see anything in the index action method functions (am*()) that tell me when to commit or rollback new index entries, or which transaction we're currently in so I can know whether recently-added index entries should be visible to the current scan. I'm guessing that all that magically happens in the storage and buffer managers.So... how do I handle this? Is there some way for me to implement my own storage manager that manages visibility?I'd be grateful for any guidance.-- Chris Cleveland312-339-2677 mobile",
"msg_date": "Mon, 19 Jul 2021 12:15:28 -0500",
"msg_from": "Chris Cleveland <ccleveland@dieselpoint.com>",
"msg_from_op": true,
"msg_subject": "Transactions and indexes"
},
{
"msg_contents": "On Mon, Jul 19, 2021 at 4:31 PM Chris Cleveland\n<ccleveland@dieselpoint.com> wrote:\n> I'm confused on how to handle transactions and visibility.\n\nIn Postgres, indexes are not considered to be part of the logical\ndatabase. They're just data structures that point to TIDs in the\ntable. To an index, each TID is just another object -- it doesn't\npossess any built-in idea about MVCC.\n\nIn practice the indexes may be able to surmise certain things about\nMVCC and versioning, as an optimization -- but that is all speculative\nand relies on cooperation from the table AM side. Also, the\nimplementation of unique indexes knows more than zero about versions,\nsince that's just necessary. These two cases may or may not be\nconsidered exceptions to the general rule. I suppose that it's a\nmatter of perspective.\n\n> So... how do I handle this? Is there some way for me to implement my own storage manager that manages visibility?\n\nThis is the responsibility of a table AM, not any one index AM. In\ngeneral we assume that each table AM implements something very much\nlike heapam's VACUUM implementation. Index AMs may also have\nopportunistic cleanup of their own, as an optimization (actually this\nis what I was referring to).\n\nTheoretically index AMs and table AMs are orthogonal things. How true\nthat will be in a world with more than one mature table AM remains to\nbe seen.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 19 Jul 2021 16:57:45 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Transactions and indexes"
},
{
"msg_contents": "Thank you. Does this mean I can implement the index AM and return TIDs\nwithout having to worry about transactions at all?\n\nAlso, as far as I can tell, the only way that TIDs are removed from the\nindex is in ambulkdelete(). Is this accurate? Does that mean that my index\nwill be returning TIDs for deleted items and I don't have to worry about\nthat either?\n\nDon't TIDs get reused? What happens when my index returns an old TID which\nis now pointing to a new record?\n\nThis is going to make it really hard to implement Top X queries of the type\nyou get from a search engine. A search engine will normally maintain an\ninternal buffer (usually a priority queue) of a fixed size, X, and add\ntuples to it along with their relevance score. The buffer only remembers\nthe Top X tuples with the highest score. In this way the search engine can\niterate over millions of entries and retain only the best ones without\nhaving an unbounded buffer. For this to work, though, you need to know how\nmany tuples to keep in the buffer in advance. If my index can't know, in\nadvance, which TIDs are invisible or deleted, then it can't keep them out\nof the buffer, and this whole scheme fails.\n\nThis is not going to work unless the system gives the index a clear picture\nof transactions, visibility, and deletes as they happen. Is this\ninformation available?\n\n\n\n\nOn Mon, Jul 19, 2021 at 6:58 PM Peter Geoghegan <pg@bowt.ie> wrote:\n\n> On Mon, Jul 19, 2021 at 4:31 PM Chris Cleveland\n> <ccleveland@dieselpoint.com> wrote:\n> > I'm confused on how to handle transactions and visibility.\n>\n> In Postgres, indexes are not considered to be part of the logical\n> database. They're just data structures that point to TIDs in the\n> table. 
To an index, each TID is just another object -- it doesn't\n> possess any built-in idea about MVCC.\n>\n> In practice the indexes may be able to surmise certain things about\n> MVCC and versioning, as an optimization -- but that is all speculative\n> and relies on cooperation from the table AM side. Also, the\n> implementation of unique indexes knows more than zero about versions,\n> since that's just necessary. These two cases may or may not be\n> considered exceptions to the general rule. I suppose that it's a\n> matter of perspective.\n>\n> > So... how do I handle this? Is there some way for me to implement my own\n> storage manager that manages visibility?\n>\n> This is the responsibility of a table AM, not any one index AM. In\n> general we assume that each table AM implements something very much\n> like heapam's VACUUM implementation. Index AMs may also have\n> opportunistic cleanup of their own, as an optimization (actually this\n> is what I was referring to).\n>\n> Theoretically index AMs and table AMs are orthogonal things. How true\n> that will be in a world with more than one mature table AM remains to\n> be seen.\n>\n> --\n> Peter Geoghegan\n>\n\n\n-- \nChris Cleveland\n312-339-2677 mobile\n\nThank you. Does this mean I can implement the index AM and return TIDs without having to worry about transactions at all?Also, as far as I can tell, the only way that TIDs are removed from the index is in ambulkdelete(). Is this accurate? Does that mean that my index will be returning TIDs for deleted items and I don't have to worry about that either?Don't TIDs get reused? What happens when my index returns an old TID which is now pointing to a new record?This is going to make it really hard to implement Top X queries of the type you get from a search engine. A search engine will normally maintain an internal buffer (usually a priority queue) of a fixed size, X, and add tuples to it along with their relevance score. 
The buffer only remembers the Top X tuples with the highest score. In this way the search engine can iterate over millions of entries and retain only the best ones without having an unbounded buffer. For this to work, though, you need to know how many tuples to keep in the buffer in advance. If my index can't know, in advance, which TIDs are invisible or deleted, then it can't keep them out of the buffer, and this whole scheme fails.This is not going to work unless the system gives the index a clear picture of transactions, visibility, and deletes as they happen. Is this information available?On Mon, Jul 19, 2021 at 6:58 PM Peter Geoghegan <pg@bowt.ie> wrote:On Mon, Jul 19, 2021 at 4:31 PM Chris Cleveland\n<ccleveland@dieselpoint.com> wrote:\n> I'm confused on how to handle transactions and visibility.\n\nIn Postgres, indexes are not considered to be part of the logical\ndatabase. They're just data structures that point to TIDs in the\ntable. To an index, each TID is just another object -- it doesn't\npossess any built-in idea about MVCC.\n\nIn practice the indexes may be able to surmise certain things about\nMVCC and versioning, as an optimization -- but that is all speculative\nand relies on cooperation from the table AM side. Also, the\nimplementation of unique indexes knows more than zero about versions,\nsince that's just necessary. These two cases may or may not be\nconsidered exceptions to the general rule. I suppose that it's a\nmatter of perspective.\n\n> So... how do I handle this? Is there some way for me to implement my own storage manager that manages visibility?\n\nThis is the responsibility of a table AM, not any one index AM. In\ngeneral we assume that each table AM implements something very much\nlike heapam's VACUUM implementation. Index AMs may also have\nopportunistic cleanup of their own, as an optimization (actually this\nis what I was referring to).\n\nTheoretically index AMs and table AMs are orthogonal things. 
How true\nthat will be in a world with more than one mature table AM remains to\nbe seen.\n\n-- \nPeter Geoghegan\n-- Chris Cleveland312-339-2677 mobile",
"msg_date": "Mon, 19 Jul 2021 21:20:33 -0500",
"msg_from": "Chris Cleveland <ccleveland@dieselpoint.com>",
"msg_from_op": true,
"msg_subject": "Re: Transactions and indexes"
},
{
"msg_contents": "On Mon, Jul 19, 2021 at 7:20 PM Chris Cleveland\n<ccleveland@dieselpoint.com> wrote:\n> Thank you. Does this mean I can implement the index AM and return TIDs without having to worry about transactions at all?\n\nYes. That's the upside of the design -- it makes it easy to add new\ntransactional index AMs. Which is one reason why Postgres has so many.\n\n> Also, as far as I can tell, the only way that TIDs are removed from the index is in ambulkdelete(). Is this accurate?\n\nIt doesn't have to be the only way, but in practice it can be. Depends\non the index AM. The core code relies on ambulkdelete() to make sure\nthat all TIDs dead in the table are gone from the index. This allows\nVACUUM to finally physically recycle the previously referenced TIDs in\nthe table structure, without risk of index scans finding the wrong\nthing.\n\n> Does that mean that my index will be returning TIDs for deleted items and I don't have to worry about that either?\n\nIf you assume that you're using heapam (the standard table AM), then\nyes. Otherwise I don't know -- it's ambiguous.\n\n> Don't TIDs get reused? What happens when my index returns an old TID which is now pointing to a new record?\n\nThis can't happen because, as I said, the table cannot recycle\nTIDs/line pointers until it's known that this cannot happen (because\nVACUUM already cleaned out all the garbage index tuples).\n\n> This is going to make it really hard to implement Top X queries of the type you get from a search engine. A search engine will normally maintain an internal buffer (usually a priority queue) of a fixed size, X, and add tuples to it along with their relevance score. The buffer only remembers the Top X tuples with the highest score. In this way the search engine can iterate over millions of entries and retain only the best ones without having an unbounded buffer. For this to work, though, you need to know how many tuples to keep in the buffer in advance. 
If my index can't know, in advance, which TIDs are invisible or deleted, then it can't keep them out of the buffer, and this whole scheme fails.\n>\n> This is not going to work unless the system gives the index a clear picture of transactions, visibility, and deletes as they happen. Is this information available?\n\nAre you implementing a new index AM or a new table AM? Discarding data\nbased on something like a relevance score doesn't seem like something\nthat either API provides for. Indexes in Postgres can be lossy, but\nthat in itself doesn't change the result of queries.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 19 Jul 2021 19:37:46 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Transactions and indexes"
},
{
"msg_contents": ">\n> Are you implementing a new index AM or a new table AM? Discarding data\n> based on something like a relevance score doesn't seem like something\n> that either API provides for. Indexes in Postgres can be lossy, but\n> that in itself doesn't change the result of queries.\n>\n\n(Sorry if this doesn't quote properly, I'm trying to figure out how to do\nthe quote-and-bottom-post thing in gmail).\n\nMy plan was to do an index AM alone, but I'm thinking that isn't going to\nwork. The goal is to do better full-text search in Postgres, fast, over\nreally large datasets.\n\nRelevance scoring is like an ORDER BY score with a LIMIT. The code that\ntraverses the index needs to know both of these things in advance.\n\nThe GIN code doesn't cut it. I'm still trying to understand the code for\nthe RUM index type, but it's slow going.\n\nSuggestions on how to go about this are welcome.\n\nAre you implementing a new index AM or a new table AM? Discarding data\nbased on something like a relevance score doesn't seem like something\nthat either API provides for. Indexes in Postgres can be lossy, but\nthat in itself doesn't change the result of queries.\n(Sorry if this doesn't quote properly, I'm trying to figure out how to do the quote-and-bottom-post thing in gmail).My plan was to do an index AM alone, but I'm thinking that isn't going to work. The goal is to do better full-text search in Postgres, fast, over really large datasets. Relevance scoring is like an ORDER BY score with a LIMIT. The code that traverses the index needs to know both of these things in advance.The GIN code doesn't cut it. I'm still trying to understand the code for the RUM index type, but it's slow going.Suggestions on how to go about this are welcome.",
"msg_date": "Mon, 19 Jul 2021 22:04:27 -0500",
"msg_from": "Chris Cleveland <ccleveland@dieselpoint.com>",
"msg_from_op": true,
"msg_subject": "Re: Transactions and indexes"
}
] |
[
{
"msg_contents": "Hi,\n\nWe have a few routines that are taking up a meaningful share of nearly all\nworkloads. They are worth micro-optimizing, even though they rarely the most\nexpensive parts of a profile. The memory allocation infrastructure is an\nexample of that.\n\nWhen looking at a profile one can often see that a measurable percentage of\nthe time is spent doing stack frame setup in functions like palloc(),\nAllocSetAlloc(). E.g. here's a perf profile annotating palloc(), the first\ncolumn showing the percentage of the time the relevant instruction was\nsampled:\n\n │ void *\n │ palloc(Size size)\n │ {\n 11.62 │ push %rbp\n 5.89 │ mov %rsp,%rbp\n 11.79 │ push %r12\n │ mov %rdi,%r12\n 6.07 │ push %rbx\n │ /* duplicates MemoryContextAlloc to avoid increased overhead */\n │ void *ret;\n │ MemoryContext context = CurrentMemoryContext;\n │ mov CurrentMemoryContext,%rbx\n │\n │ AssertArg(MemoryContextIsValid(context));\n │ AssertNotInCriticalSection(context);\n │\n │ if (!AllocSizeIsValid(size))\n 5.86 │ cmp $0x3fffffff,%rdi\n │ → ja 14fa5b <palloc.cold>\n │ elog(ERROR, \"invalid memory alloc request size %zu\", size);\n │\n │ context->isReset = false;\n 17.71 │ movb $0x0,0x4(%rbx)\n │\n │ ret = context->methods->alloc(context, size);\n 5.98 │ mov 0x10(%rbx),%rax\n │ mov %rdi,%rsi\n │ mov %rbx,%rdi\n 35.08 │ → callq *(%rax)\n\n\nThe stack frame setup bit is the push ... bit.\n\nAt least on x86-64 unixoid systems, that overhead can be avoided in certain\ncircumstances! The simplest case is if the function doesn't do any function\ncalls of its own. If simple enough (i.e. no register spilling), the compiler\nwill just not set up a stack frame - nobody could need it.\n\nThe slightly more complicated case is that of a function that only does a\n\"tail call\", i.e. the only function call is just before returning (there can\nbe multiple such paths though). 
If the function is simple enough, gcc/clang\nwill then not use the \"call\" instruction to call the function (which would\nrequire a proper stack frame being set up), but instead just jump to the other\nfunction. Which ends up reusing the current function's stack frame,\nbasically. When that called function returns using 'ret', it'll reuse the\nlocation pushed onto the call stack by the caller of the \"original\" function,\nand return to its caller. Having optimized away the need to maintain one stack\nframe level, and one indirection when returning from the inner function (which\njust would do its own ret).\n\nFor that to work, there obviously cannot be any instructions in the function\nbefore calling the inner function. Which brings us back to the palloc example\nfrom above.\n\nAs an experiment, if i change the code for palloc() to omit the if (ret == NULL)\ncheck, the assembly (omitting source for brevity) from:\n\n 61c9a0: 55 push %rbp\n 61c9a1: 48 89 e5 mov %rsp,%rbp\n 61c9a4: 41 54 push %r12\n 61c9a6: 49 89 fc mov %rdi,%r12\n 61c9a9: 53 push %rbx\n 61c9aa: 48 8b 1d 2f f2 2a 00 mov 0x2af22f(%rip),%rbx # 8cbbe0 <CurrentMemoryContext>\n 61c9b1: 48 81 ff ff ff ff 3f cmp $0x3fffffff,%rdi\n 61c9b8: 0f 87 9d 30 b3 ff ja 14fa5b <palloc.cold>\n 61c9be: c6 43 04 00 movb $0x0,0x4(%rbx)\n 61c9c2: 48 8b 43 10 mov 0x10(%rbx),%rax\n 61c9c6: 48 89 fe mov %rdi,%rsi\n 61c9c9: 48 89 df mov %rbx,%rdi\n 61c9cc: ff 10 callq *(%rax)\n 61c9ce: 48 85 c0 test %rax,%rax\n 61c9d1: 0f 84 b9 30 b3 ff je 14fa90 <palloc.cold+0x35>\n 61c9d7: 5b pop %rbx\n 61c9d8: 41 5c pop %r12\n 61c9da: 5d pop %rbp\n 61c9db: c3 retq\n\nto\n\n 61c8c0: 48 89 fe mov %rdi,%rsi\n 61c8c3: 48 8b 3d 16 f3 2a 00 mov 0x2af316(%rip),%rdi # 8cbbe0 <CurrentMemoryContext>\n 61c8ca: 48 81 fe ff ff ff 3f cmp $0x3fffffff,%rsi\n 61c8d1: 0f 87 c3 31 b3 ff ja 14fa9a <palloc.cold>\n 61c8d7: c6 47 04 00 movb $0x0,0x4(%rdi)\n 61c8db: 48 8b 47 10 mov 0x10(%rdi),%rax\n 61c8df: ff 20 jmpq *(%rax)\n\nIt's not hard to see why that 
would be faster, I think.\n\n\nOf course, we cannot just omit that check. But I think this is an argument for\nwhy it is not a great idea to have such a check in palloc() - it prevents the\nuse of the above optimization, and it adds a branch to a performance critical\nfunction, though there are already existing branches in aset.c etc that\nspecifically know about this case.\n\nThe code in palloc() does this check after context->methods->alloc() since\n3d6d1b585524: Move out-of-memory error checks from aset.c to mcxt.c\n\nOf course, that commit changed things for a reason: It allows\npalloc_extended() to exist.\n\nHowever, it seems that the above optimization, as well as the desire to avoid\nredundant branches (checking for allocation failures in AllocSetAlloc() and\nthen again in palloc() etc) in critical paths, suggests pushing the handling\nof MCXT_ALLOC_NO_OOM (and perhaps others) a layer down, into the memory\ncontext implementations. Which of course means that we would need to pass down\nMCXT_ALLOC_NO_OOM into at least MemoryContextMethods->{alloc,realloc}. But that\nseems like a good idea to me anyway. That way we could pass down further\ninformation as well, e.g. about required alignment.\n\nOf course it'd make sense to avoid duplicating the same error message across\nall contexts, but that could be addressed using a mcxt.c helper function to\ndeal with the allocation failure case.\n\nE.g. the existing cases like\n\n block = (AllocBlock) malloc(blksize);\n if (block == NULL)\n return NULL;\n\ncould become something like\n block = (AllocBlock) malloc(blksize);\n if (unlikely(block == NULL))\n return MemoryContextAllocationFailure(context, size, flags);\n\n\nThe trick of avoiding stack frame setup does not just apply to wrapper\nfunctions like palloc(). It even can apply to AllocSetAlloc() itself! 
If one\nseparates out the \"slow paths\" from the \"fast paths\" of AllocSetAlloc(), the\nfast path can avoid needing the stack frame, for the price of the slow paths\nbeing a tiny bit slower. Often the generated code turns out to be better,\nbecause the register allocation pressure is lower in the fast path.\n\nFor that to work, the paths of AllocSetAlloc() that call malloc() need to be\nseparated out. As we obviously need to process malloc()'s result, the call to\nmalloc cannot be a tail call. So we need to split out two paths:\n1) handling of large allocations\n2) running out of space in the current block / having no block\n\nTo actually benefit from the optimization, those paths need to actually return\nthe allocated memory. And they need to be marked pg_noinline, otherwise the\ncompiler won't get the message...\n\nI think this actually makes the aset.c code a good bit more readable, and\nhighlights where in AllocSetAlloc() adding instructions hurts, and where its\nfine.\n\nI have *not* carefully benchmarked this, but a quick implementation of this\ndoes seem to increase readonly pgbench tps at a small scale by 2-3% (both\n-Mprepared/simple). Despite not being an all that pgbench bound workload.\n\n\nRough prototype patch for the above attached.\n\nComments?\n\n\nA slightly tangential improvement would be to move the memset() in palloc0()\net al do into a static inline. There's two benefits of that:\n\n1) compilers can generate much better code for memset() if the length is known\n - instead of a function call with length dispatch replace that with a\n handful of instructions doing the zeroing for the precise length.\n\n2) compilers can often optimize away [part of ]the overhead of needing to do\n the memset, as many callers will go on to overwrite a good portion of the\n zeroed data.\n\n\nGreetings,\n\nAndres Freund",
"msg_date": "Mon, 19 Jul 2021 12:59:50 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Avoid stack frame setup in performance critical routines using tail\n calls"
},
{
"msg_contents": "On Tue, 20 Jul 2021 at 08:00, Andres Freund <andres@anarazel.de> wrote:\n> I have *not* carefully benchmarked this, but a quick implementation of this\n> does seem to increase readonly pgbench tps at a small scale by 2-3% (both\n\nInteresting.\n\nI've not taken the time to study the patch but I was running some\nother benchmarks today on a small scale pgbench readonly test and I\ntook this patch for a spin to see if I could see the same performance\ngains.\n\nThis is an AMD 3990x machine that seems to get the most throughput\nfrom pgbench with 132 processes\n\nI did: pgbench -T 240 -P 10 -c 132 -j 132 -S -M prepared\n--random-seed=12345 postgres\n\nmaster = dd498998a\n\nMaster: 3816959.53 tps\nPatched: 3820723.252 tps\n\nI didn't quite get the same 2-3% as you did, but it did come out\nfaster than on master.\n\nDavid",
"msg_date": "Tue, 20 Jul 2021 16:50:09 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid stack frame setup in performance critical routines using\n tail calls"
},
{
"msg_contents": "Hi,\n\nOn 2021-07-20 16:50:09 +1200, David Rowley wrote:\n> I've not taken the time to study the patch but I was running some\n> other benchmarks today on a small scale pgbench readonly test and I\n> took this patch for a spin to see if I could see the same performance\n> gains.\n\nThanks!\n\n\n> This is an AMD 3990x machine that seems to get the most throughput\n> from pgbench with 132 processes\n> \n> I did: pgbench -T 240 -P 10 -c 132 -j 132 -S -M prepared\n> --random-seed=12345 postgres\n> \n> master = dd498998a\n> \n> Master: 3816959.53 tps\n> Patched: 3820723.252 tps\n> \n> I didn't quite get the same 2-3% as you did, but it did come out\n> faster than on master.\n\nIt would not at all be suprising to me if AMD in recent microarchitectures did\na better job at removing stack management overview (e.g. by better register\nrenaming, or by resolving dependencies on %rsp in a smarter way) than Intel\nhas. This was on a Cascade Lake CPU (xeon 5215), which, despite being released\nin 2019, effectively is a moderately polished (or maybe shoehorned)\nmicroarchitecture from 2015 due to all the Intel troubles. Whereas Zen2 is\nfrom 2019.\n\nIt's also possible that my attempts at avoiding the stack management just\ndidn't work on your compiler. Either due to vendor (I know that gcc is better\nat it than clang), version, or compiler flags (e.g. -fno-omit-frame-pointer\ncould make it harder, -fno-optimize-sibling-calls would disable it).\n\nA third plausible explanation for the difference is that at a client count of\n132, the bottlenecks are sufficiently elsewhere to just not show a meaningful\ngain from memory management efficiency improvements.\n\n\nAny chance you could show a `perf annotate AllocSetAlloc` and `perf annotate\npalloc` from a patched run? And perhaps how high their percentages of the\ntotal work are. E.g. 
using something like\nperf report -g none|grep -E 'AllocSetAlloc|palloc|MemoryContextAlloc|pfree'\n\nIt'd be interesting to know where the bottlenecks on a zen2 machine are.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 19 Jul 2021 23:16:57 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Avoid stack frame setup in performance critical routines using\n tail calls"
},
{
"msg_contents": "On Tue, 20 Jul 2021 at 18:17, Andres Freund <andres@anarazel.de> wrote:\n> Any chance you could show a `perf annotate AllocSetAlloc` and `perf annotate\n> palloc` from a patched run? And perhaps how high their percentages of the\n> total work are. E.g. using something like\n> perf report -g none|grep -E 'AllocSetAlloc|palloc|MemoryContextAlloc|pfree'\n\nSure. See attached.\n\nDavid",
"msg_date": "Tue, 20 Jul 2021 18:53:39 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid stack frame setup in performance critical routines using\n tail calls"
},
{
"msg_contents": "Hi,\n\nOn Mon, Jul 19, 2021, at 23:53, David Rowley wrote:\n> On Tue, 20 Jul 2021 at 18:17, Andres Freund <andres@anarazel.de> wrote:\n> > Any chance you could show a `perf annotate AllocSetAlloc` and `perf annotate\n> > palloc` from a patched run? And perhaps how high their percentages of the\n> > total work are. E.g. using something like\n> > perf report -g none|grep -E 'AllocSetAlloc|palloc|MemoryContextAlloc|pfree'\n> \n> Sure. See attached.\n> \n> David\n> \n> Attachments:\n> * AllocateSetAlloc.txt\n> * palloc.txt\n> * percent.txt\n\nHuh, that's interesting. You have some control flow enforcement stuff turned on (the endbr64). And it looks like it has a non zero cost (or maybe it's just skid). Did you enable that intentionally? If not, what compiler/version/distro is it? I think at least on GCC that's -fcf-protection=...\n\nAndres\n\n\n",
"msg_date": "Tue, 20 Jul 2021 00:03:51 -0700",
"msg_from": "\"Andres Freund\" <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "\n =?UTF-8?Q?Re:_Avoid_stack_frame_setup_in_performance_critical_routines_u?=\n =?UTF-8?Q?sing_tail_calls?="
},
{
"msg_contents": "On Tue, 20 Jul 2021 at 19:04, Andres Freund <andres@anarazel.de> wrote:\n> > * AllocateSetAlloc.txt\n> > * palloc.txt\n> > * percent.txt\n>\n> Huh, that's interesting. You have some control flow enforcement stuff turned on (the endbr64). And it looks like it has a non zero cost (or maybe it's just skid). Did you enable that intentionally? If not, what compiler/version/distro is it? I think at least on GCC that's -fcf-protection=...\n\nIt's ubuntu 21.04 with gcc 10.3 (specifically gcc version 10.3.0\n(Ubuntu 10.3.0-1ubuntu1)\n\nI've attached the same results from compiling with clang 12\n(12.0.0-3ubuntu1~21.04.1)\n\nDavid",
"msg_date": "Tue, 20 Jul 2021 19:37:46 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid stack frame setup in performance critical routines using\n tail calls"
},
{
"msg_contents": "Hi,\n\nOn 2021-07-20 19:37:46 +1200, David Rowley wrote:\n> On Tue, 20 Jul 2021 at 19:04, Andres Freund <andres@anarazel.de> wrote:\n> > > * AllocateSetAlloc.txt\n> > > * palloc.txt\n> > > * percent.txt\n> >\n> > Huh, that's interesting. You have some control flow enforcement stuff turned on (the endbr64). And it looks like it has a non zero cost (or maybe it's just skid). Did you enable that intentionally? If not, what compiler/version/distro is it? I think at least on GCC that's -fcf-protection=...\n>\n> It's ubuntu 21.04 with gcc 10.3 (specifically gcc version 10.3.0\n> (Ubuntu 10.3.0-1ubuntu1)\n>\n> I've attached the same results from compiling with clang 12\n> (12.0.0-3ubuntu1~21.04.1)\n\nIt looks like the ubuntu folks have changed the default for CET to on.\n\n\nandres@ubuntu2020:~$ echo 'int foo(void) { return 17;}' > test.c && gcc -O2 -c -o test.o test.c && objdump -S test.o\n\ntest.o: file format elf64-x86-64\n\n\nDisassembly of section .text:\n\n0000000000000000 <foo>:\n 0:\tf3 0f 1e fa \tendbr64\n 4:\tb8 11 00 00 00 \tmov $0x11,%eax\n 9:\tc3 \tretq\nandres@ubuntu2020:~$ echo 'int foo(void) { return 17;}' > test.c && gcc -O2 -fcf-protection=none -c -o test.o test.c && objdump -S test.o\n\ntest.o: file format elf64-x86-64\n\n\nDisassembly of section .text:\n\n0000000000000000 <foo>:\n 0:\tb8 11 00 00 00 \tmov $0x11,%eax\n 5:\tc3 \tretq\n\n\nIndependent of this patch, it might be worth running a benchmark with\nthe default options, and one with -fcf-protection=none. None of my\nmachines support it...\n\n$ cpuid -1|grep CET\n CET_SS: CET shadow stack = false\n CET_IBT: CET indirect branch tracking = false\n XCR0 supported: CET_U state = false\n XCR0 supported: CET_S state = false\n\nHere it adds about 40kB of .text, but I can't measure the CET\noverhead...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 20 Jul 2021 08:57:23 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Avoid stack frame setup in performance critical routines using\n tail calls"
},
{
"msg_contents": "Hi,\n\nDavid and I were chatting about this patch, in the context of his bump\nallocator patch. Attached is a rebased version that is also split up into two\nsteps, and a bit more polished.\n\nI wasn't sure what a good test was. I ended up measuring\n COPY pgbench_accounts TO '/dev/null' WITH (FORMAT 'binary');\nof a scale 1 database with pgbench:\n\nc=1;pgbench -q -i -s1 && pgbench -n -c$c -j$c -t100 -f <(echo \"COPY pgbench_accounts TO '/dev/null' WITH (FORMAT 'binary');\")\n\n\taverage latency\nHEAD: 33.865 ms\n01: 32.820 ms\n02: 29.934 ms\n\nThe server was pinned to the one core, turbo mode disabled. That's a pretty\nnice win, I'd say. And I don't think this is actually the most allocator\nbound workload, I just tried something fairly random...\n\n\nGreetings,\n\nAndres Freund",
"msg_date": "Wed, 19 Jul 2023 01:52:36 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Avoid stack frame setup in performance critical routines using\n tail calls"
},
{
"msg_contents": "On Wed, 19 Jul 2023 at 20:52, Andres Freund <andres@anarazel.de> wrote:\n> David and I were chatting about this patch, in the context of his bump\n> allocator patch. Attached is a rebased version that is also split up into two\n> steps, and a bit more polished.\n\nI've only just briefly read through the updated patch, but I did take\nit for a spin to see what sort of improvements I can get from it.\n\nThe attached graph shows the time in seconds that it took for each\nallocator to allocate 10GBs of memory resetting the context once 1MB\nis allocated. The data point for aset with 32-byte chunks takes\nmaster 1.697 seconds and with both patches, it goes down to 1.264,\nwhich is a 34% increase in performance.\n\nIt's pretty nice that we can hide the AllocSizeIsValid tests inside\nthe allocChunkLimit path and pretty good that we can skip the NULL\nchecks in most cases since we're not having to check for malloc\nfailure unless we malloc a new block.\n\nI'll reply back with a more detailed review next week.\n\nDavid",
"msg_date": "Fri, 21 Jul 2023 14:03:46 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid stack frame setup in performance critical routines using\n tail calls"
},
{
"msg_contents": "On Wed, Jul 19, 2023 at 3:53 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> David and I were chatting about this patch, in the context of his bump\n> allocator patch. Attached is a rebased version that is also split up\ninto two\n> steps, and a bit more polished.\n\nHere is a quick test -- something similar was used to measure the slab\nimprovements last cycle. With radix tree v37 0001-0011 from [1],\n\ncreate extension bench_radix_tree;\nselect avg(load_ms) from generate_series(1,100) x(x), lateral (select *\nfrom bench_load_random_int(100 * 1000 * (1+x-x))) a;\n\nThe backend was pinned and turbo off. Perf runs were separate from timed\nruns. I included 0002 for completeness.\n\nv37\n avg\n---------------------\n 27.0400000000000000\n\n 32.42% postgres bench_radix_tree.so [.] rt_recursive_set\n 21.60% postgres postgres [.] SlabAlloc\n 11.06% postgres [unknown] [k] 0xffffffff930018f7\n 10.49% postgres bench_radix_tree.so [.] rt_extend_down\n 7.07% postgres postgres [.] MemoryContextAlloc\n 4.83% postgres bench_radix_tree.so [.] rt_node_insert_inner\n 2.19% postgres bench_radix_tree.so [.] rt_grow_node_48\n 2.16% postgres bench_radix_tree.so [.] rt_set.isra.0\n 1.50% postgres bench_radix_tree.so [.] MemoryContextAlloc@plt\n\nv37 + palloc sibling calls\n avg\n---------------------\n 26.0700000000000000\n\nv37 + palloc sibling calls + opt aset\n avg\n---------------------\n 26.0900000000000000\n\n 33.78% postgres bench_radix_tree.so [.] rt_recursive_set\n 23.04% postgres postgres [.] SlabAlloc\n 11.43% postgres [unknown] [k] 0xffffffff930018f7\n 11.05% postgres bench_radix_tree.so [.] rt_extend_down\n 5.52% postgres bench_radix_tree.so [.] rt_node_insert_inner\n 2.47% postgres bench_radix_tree.so [.] rt_set.isra.0\n 2.30% postgres bench_radix_tree.so [.] rt_grow_node_48\n 1.88% postgres postgres [.] MemoryContextAlloc\n 1.44% postgres bench_radix_tree.so [.] 
MemoryContextAlloc@plt\n\nIt's nice to see MemoryContextAlloc go down in the profile.\n\n[1]\nhttps://www.postgresql.org/message-id/CAD21AoA3gS45DFMOyTE-Wm4fu+BYzsYPVcSMYggLxwm40cGHZg@mail.gmail.com\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Wed, Jul 19, 2023 at 3:53 PM Andres Freund <andres@anarazel.de> wrote:>> Hi,>> David and I were chatting about this patch, in the context of his bump> allocator patch. Attached is a rebased version that is also split up into two> steps, and a bit more polished.Here is a quick test -- something similar was used to measure the slab improvements last cycle. With radix tree v37 0001-0011 from [1],create extension bench_radix_tree;select avg(load_ms) from generate_series(1,100) x(x), lateral (select * from bench_load_random_int(100 * 1000 * (1+x-x))) a;The backend was pinned and turbo off. Perf runs were separate from timed runs. I included 0002 for completeness.v37 avg --------------------- 27.0400000000000000 32.42% postgres bench_radix_tree.so [.] rt_recursive_set 21.60% postgres postgres [.] SlabAlloc 11.06% postgres [unknown] [k] 0xffffffff930018f7 10.49% postgres bench_radix_tree.so [.] rt_extend_down 7.07% postgres postgres [.] MemoryContextAlloc 4.83% postgres bench_radix_tree.so [.] rt_node_insert_inner 2.19% postgres bench_radix_tree.so [.] rt_grow_node_48 2.16% postgres bench_radix_tree.so [.] rt_set.isra.0 1.50% postgres bench_radix_tree.so [.] MemoryContextAlloc@pltv37 + palloc sibling calls avg --------------------- 26.0700000000000000v37 + palloc sibling calls + opt aset avg --------------------- 26.0900000000000000 33.78% postgres bench_radix_tree.so [.] rt_recursive_set 23.04% postgres postgres [.] SlabAlloc 11.43% postgres [unknown] [k] 0xffffffff930018f7 11.05% postgres bench_radix_tree.so [.] rt_extend_down 5.52% postgres bench_radix_tree.so [.] rt_node_insert_inner 2.47% postgres bench_radix_tree.so [.] rt_set.isra.0 2.30% postgres bench_radix_tree.so [.] 
rt_grow_node_48 1.88% postgres postgres [.] MemoryContextAlloc 1.44% postgres bench_radix_tree.so [.] MemoryContextAlloc@pltIt's nice to see MemoryContextAlloc go down in the profile.[1] https://www.postgresql.org/message-id/CAD21AoA3gS45DFMOyTE-Wm4fu+BYzsYPVcSMYggLxwm40cGHZg@mail.gmail.com--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 8 Aug 2023 18:18:51 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid stack frame setup in performance critical routines using\n tail calls"
},
{
"msg_contents": "On Fri, 21 Jul 2023 at 14:03, David Rowley <dgrowleyml@gmail.com> wrote:\n> I'll reply back with a more detailed review next week.\n\nHere's a review of v2-0001:\n\n1.\n\n/*\n* XXX: Should this also be moved into alloc()? We could possibly avoid\n* zeroing in some cases (e.g. if we used mmap() ourselves.\n*/\nMemSetAligned(ret, 0, size);\n\nMaybe this should be moved to the alloc function. It would allow us\nto get rid of this:\n\n#define palloc0fast(sz) \\\n( MemSetTest(0, sz) ? \\\nMemoryContextAllocZeroAligned(CurrentMemoryContext, sz) : \\\nMemoryContextAllocZero(CurrentMemoryContext, sz) )\n\nIf we do the zeroing inside the alloc function then it can always use\nthe MemoryContextAllocZeroAligned version providing we zero before\nsetting the sentinel byte.\n\nIt would allow the tail call in the palloc0() case, but the drawback\nwould be having to check for the MCXT_ALLOC_ZERO flag in the alloc\nfunction. I wonder if that branch would be predictable in most cases,\ne.g. the parser will be making lots of nodes and want to zero all\nallocations, but the executor won't be doing much of that. There will\nbe a mix of zeroing and not zeroing in the planner, mostly not, I\nthink.\n\n2. Why do you need to add the NULL check here?\n\n #ifdef USE_VALGRIND\n- if (method != MCTX_ALIGNED_REDIRECT_ID)\n+ if (ret != NULL && method != MCTX_ALIGNED_REDIRECT_ID)\n VALGRIND_MEMPOOL_CHANGE(context, pointer, ret, size);\n #endif\n\nI know it's just valgrind code and performance does not matter, but\nthe realloc flags are being passed as 0, so allocation failures won't\nreturn.\n\n3.\n\n/*\n* XXX: Probably no need to check for huge allocations, we only support\n* one size? Which could theoretically be huge, but that'd not make\n* sense...\n*/\n\nThey can't be huge per Assert(fullChunkSize <= MEMORYCHUNK_MAX_VALUE)\nin SlabContextCreate().\n\n4. It would be good to see some API documentation in the\nMemoryContextMethods struct. 
This adds a lot of responsibility onto\nthe context implementation without any extra documentation to explain\nwhat, for example, palloc is responsible for and what the alloc\nfunction needs to do itself.\n\nDavid\n\n\n",
"msg_date": "Wed, 9 Aug 2023 20:44:43 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid stack frame setup in performance critical routines using\n tail calls"
},
{
"msg_contents": "I've rebased the 0001 patch and gone over it again and made a few\nadditional changes besides what I mentioned in my review.\n\nOn Wed, 9 Aug 2023 at 20:44, David Rowley <dgrowleyml@gmail.com> wrote:\n> Here's a review of v2-0001:\n> 2. Why do you need to add the NULL check here?\n>\n> #ifdef USE_VALGRIND\n> - if (method != MCTX_ALIGNED_REDIRECT_ID)\n> + if (ret != NULL && method != MCTX_ALIGNED_REDIRECT_ID)\n> VALGRIND_MEMPOOL_CHANGE(context, pointer, ret, size);\n> #endif\n\nI removed this NULL check as we're calling the realloc function with\nno flags, so it shouldn't return NULL as it'll error out from any OOM\nerrors.\n\n> 3.\n>\n> /*\n> * XXX: Probably no need to check for huge allocations, we only support\n> * one size? Which could theoretically be huge, but that'd not make\n> * sense...\n> */\n>\n> They can't be huge per Assert(fullChunkSize <= MEMORYCHUNK_MAX_VALUE)\n> in SlabContextCreate().\n\nI removed this comment and adjusted the comment just below that which\nchecks the 'size' matches the expected slab chunk size. i.e.\n\n/*\n* Make sure we only allow correct request size. This doubles as the\n* MemoryContextCheckSize check.\n*/\nif (unlikely(size != slab->chunkSize))\n\n\n> 4. It would be good to see some API documentation in the\n> MemoryContextMethods struct. This adds a lot of responsibility onto\n> the context implementation without any extra documentation to explain\n> what, for example, palloc is responsible for and what the alloc\n> function needs to do itself.\n\nI've done that too.\n\nI also added header comments for MemoryContextAllocationFailure and\nMemoryContextSizeFailure and added some comments to explain in places\nlike palloc() to warn people not to add checks after the 'alloc' call.\n\nThe rebased patch is 0001 and all of my changes are in 0002. I will\nrebase your original 0002 patch later. 
I think 0001 is much more\nimportant, as evident by the reported benchmarks on this thread.\n\nIn absence of anyone else looking at this, I think it's ready to go.\nIf anyone is following along and wants to review or test it, please do\nso soon.\n\nDavid",
"msg_date": "Fri, 23 Feb 2024 00:46:26 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid stack frame setup in performance critical routines using\n tail calls"
},
{
"msg_contents": "Hi,\n\nOn 2024-02-23 00:46:26 +1300, David Rowley wrote:\n> I've rebased the 0001 patch and gone over it again and made a few\n> additional changes besides what I mentioned in my review.\n>\n> On Wed, 9 Aug 2023 at 20:44, David Rowley <dgrowleyml@gmail.com> wrote:\n> > Here's a review of v2-0001:\n> > 2. Why do you need to add the NULL check here?\n> >\n> > #ifdef USE_VALGRIND\n> > - if (method != MCTX_ALIGNED_REDIRECT_ID)\n> > + if (ret != NULL && method != MCTX_ALIGNED_REDIRECT_ID)\n> > VALGRIND_MEMPOOL_CHANGE(context, pointer, ret, size);\n> > #endif\n>\n> I removed this NULL check as we're calling the realloc function with\n> no flags, so it shouldn't return NULL as it'll error out from any OOM\n> errors.\n\nThat was probably a copy-paste issue...\n\n\n> > 4. It would be good to see some API documentation in the\n> > MemoryContextMethods struct. This adds a lot of responsibility onto\n> > the context implementation without any extra documentation to explain\n> > what, for example, palloc is responsible for and what the alloc\n> > function needs to do itself.\n>\n> I've done that too.\n>\n> I also added header comments for MemoryContextAllocationFailure and\n> MemoryContextSizeFailure and added some comments to explain in places\n> like palloc() to warn people not to add checks after the 'alloc' call.\n>\n> The rebased patch is 0001 and all of my changes are in 0002. I will\n> rebase your original 0002 patch later.\n\nThanks!\n\n\n> I think 0001 is much more important, as evident by the reported benchmarks\n> on this thread.\n\nI agree that it's good to tackle 0001 first.\n\nI don't understand the benchmark point though. 
Your benchmark seems to suggest\nthat 0002 improves aset performance by *more* than 0001: for 8 byte aset\nallocs:\n\n time\nmaster: 8.86\n0001: 8.12\n0002: 7.02\n\nSo 0001 reduces time by 0.92x and 0002 by 0.86x.\n\n\nJohn's test shows basically no change for 0002 - which is unsurprising, as\n0002 changes aset.c, but the test seems to solely exercise slab, as only\nSlabAlloc() shows up in the profile. As 0002 only touches aset.c it couldn't\nreally have affected that test.\n\n\n> In absence of anyone else looking at this, I think it's ready to go.\n> If anyone is following along and wants to review or test it, please do\n> so soon.\n\nMakes sense!\n\n\n\n> @@ -1061,6 +1072,16 @@ MemoryContextAlloc(MemoryContext context, Size size)\n>\n> \tcontext->isReset = false;\n>\n\nFor a moment this made me wonder if we could move the isReset handling into\nthe allocator slow paths as well - it's annoying to write that bit (and thus\ndirty the cacheline) over and over. But it'd be somewhat awkward due to\npre-allocated blocks. So that'd be a larger change better done separately.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 22 Feb 2024 14:53:43 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Avoid stack frame setup in performance critical routines using\n tail calls"
},
{
"msg_contents": "On Fri, 23 Feb 2024 at 11:53, Andres Freund <andres@anarazel.de> wrote:\n> > @@ -1061,6 +1072,16 @@ MemoryContextAlloc(MemoryContext context, Size size)\n> >\n> > context->isReset = false;\n> >\n>\n> For a moment this made me wonder if we could move the isReset handling into\n> the allocator slow paths as well - it's annoying to write that bit (and thus\n> dirty the cacheline) over and ove. But it'd be somewhat awkward due to\n> pre-allocated blocks. So that'd be a larger change better done separately.\n\nIt makes sense to do this, but on looking closer for aset.c, it seems\nlike the only time we can avoid un-setting the isReset flag is when\nallocating from the freelist. We must unset it for large allocations\nand for allocations that don't fit onto the existing block (the\nexiting block could be the keeper block) and for allocations that\nrequire a new block.\n\nWith the current arrangement of code in generation.c, I didn't see any\npath we could skip doing context->isReset = false.\n\nFor slab.c, it's very easy and we can skip setting the isReset in most cases.\n\nI've attached the patches I benchmarked against 449e798c7 and also the\npatch I used to add a function to exercise palloc.\n\nThe query I ran was:\n\nselect chksz,mtype,pg_allocate_memory_test_reset(chksz, 1024*1024,\n1024*1024*1024, mtype)\nfrom (values(8),(16),(32),(64)) sizes(chksz),\n(values('aset'),('generation'),('slab')) cxt(mtype)\norder by mtype,chksz;\n\nDavid",
"msg_date": "Mon, 26 Feb 2024 20:42:34 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid stack frame setup in performance critical routines using\n tail calls"
},
{
"msg_contents": "On Fri, 23 Feb 2024 at 11:53, Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2024-02-23 00:46:26 +1300, David Rowley wrote:\n> > In absence of anyone else looking at this, I think it's ready to go.\n> > If anyone is following along and wants to review or test it, please do\n> > so soon.\n>\n> Makes sense!\n\nI pushed the 0001 and 0002 patches today.\n\nI switched over to working on doing what you did in 0002 for\ngeneration.c and slab.c.\n\nSee the attached patch which runs the same test as in [1] (aset.c is\njust there for comparisons between slab and generation)\n\nThe attached includes some additional tuning to generation.c:\n\n1) Changed GenerationFree() to not free() the current block when it\nbecomes empty. The code now just marks it as empty and reuses it.\nSaves free()/malloc() cycle. Also means we can get rid of a NULL check\nin GenerationAlloc().\n\n2) Removed code in GenerationAlloc() which I felt was trying too hard\nto fill the keeper, free and current block. The changes I made here\ndo mean that once the keeper block becomes empty, it won't be used\nagain until the context is reset and gets a new allocation. I don't\nsee this as a big issue as the keeper block is small anyway.\n\ngeneration.c is now ~30% faster for the 8-byte test.\n\nDavid\n\n[1] https://postgr.es/m/CAApHDvqss7-a9c51nj+f9xyAr15wjLB6teHsxPe-NwLCNqiJbg@mail.gmail.com",
"msg_date": "Thu, 29 Feb 2024 00:29:17 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid stack frame setup in performance critical routines using\n tail calls"
},
{
"msg_contents": "On Thu, 29 Feb 2024 at 00:29, David Rowley <dgrowleyml@gmail.com> wrote:\n> I switched over to working on doing what you did in 0002 for\n> generation.c and slab.c.\n>\n> See the attached patch which runs the same test as in [1] (aset.c is\n> just there for comparisons between slab and generation)\n>\n> The attached includes some additional tuning to generation.c:\n\nI've now pushed this.\n\nDavid\n\n> [1] https://postgr.es/m/CAApHDvqss7-a9c51nj+f9xyAr15wjLB6teHsxPe-NwLCNqiJbg@mail.gmail.com\n\n\n",
"msg_date": "Mon, 4 Mar 2024 17:43:50 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid stack frame setup in performance critical routines using\n tail calls"
},
{
"msg_contents": "Hi,\n\nOn 2024-03-04 17:43:50 +1300, David Rowley wrote:\n> On Thu, 29 Feb 2024 at 00:29, David Rowley <dgrowleyml@gmail.com> wrote:\n> > I switched over to working on doing what you did in 0002 for\n> > generation.c and slab.c.\n> >\n> > See the attached patch which runs the same test as in [1] (aset.c is\n> > just there for comparisons between slab and generation)\n> >\n> > The attached includes some additional tuning to generation.c:\n> \n> I've now pushed this.\n\nThanks for working on all these, much appreciated!\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 4 Mar 2024 00:20:21 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Avoid stack frame setup in performance critical routines using\n tail calls"
}
] |
[
{
"msg_contents": "Hi,\n\nThere are some places, where strlen can have an overhead.\nThis patch tries to fix this.\n\nPass check-world at linux ubuntu (20.04) 64 bits.\n\nregards,\nRanier Vilela",
"msg_date": "Mon, 19 Jul 2021 19:48:55 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Micro-optimizations to avoid some strlen calls."
},
{
"msg_contents": "On Mon, Jul 19, 2021 at 07:48:55PM -0300, Ranier Vilela wrote:\n> There are some places, where strlen can have an overhead.\n> This patch tries to fix this.\n> \n> Pass check-world at linux ubuntu (20.04) 64 bits.\n\nWhy does it matter? No code paths you are changing here are\nperformance-critical, meaning that such calls won't really show up\nhigh in profiles.\n\nI don't think there is anything to change here.\n--\nMichael",
"msg_date": "Wed, 21 Jul 2021 09:28:09 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Micro-optimizations to avoid some strlen calls."
},
{
"msg_contents": "On Tue, Jul 20, 2021 at 5:28 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Mon, Jul 19, 2021 at 07:48:55PM -0300, Ranier Vilela wrote:\n> > There are some places, where strlen can have an overhead.\n> > This patch tries to fix this.\n> >\n> > Pass check-world at linux ubuntu (20.04) 64 bits.\n>\n> Why does it matter? No code paths you are changing here are\n> performance-critical, meaning that such calls won't really show up\n> high in profiles.\n>\n> I don't think there is anything to change here.\n>\n\nAgreed. To borrow from a nearby email of a similar nature (PGConn\ninformation retrieval IIRC) - it is not generally a benefit to avoid\nfunction call access to data multiple times in a block by substituting in a\nsaved local variable. The function call tends to be more readable than\nhaving yet one more unimportant name to keep in short-term memory. As much\ncode already conforms to that the status quo is a preferred state unless\nthere is a demonstrable performance gain to be had. The readability, and\nlack of churn, is otherwise more important.\n\nDavid J.",
"msg_date": "Tue, 20 Jul 2021 17:48:07 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Micro-optimizations to avoid some strlen calls."
},
{
"msg_contents": "On Tue, 20 Jul 2021 at 10:49, Ranier Vilela <ranier.vf@gmail.com> wrote:\n> There are some places, where strlen can have an overhead.\n> This patch tries to fix this.\n\nI'm with Michael and David on this.\n\nI don't really feel like doing;\n\n- snprintf(buffer, sizeof(buffer), \"E%s%s\\n\",\n+ buflen = snprintf(buffer, sizeof(buffer), \"E%s%s\\n\",\n _(\"could not fork new process for connection: \"),\n\nis a good idea. I'm unsure if you're either not aware of the value\nthat snprintf() returns or just happen to think an overflow is\nunlikely enough because you're convinced that 1000 chars are always\nenough to fit this translatable string. I'd say if we were 100%\ncertain of that then it might as well become sprintf() instead.\nHowever, I imagine you'll struggle to get people to side with you that\ntaking this overflow risk would be worthwhile given your lack of any\nevidence that anything actually has become meaningfully faster as a\nresult of any of these changes.\n\nDavid\n\n\n",
"msg_date": "Wed, 21 Jul 2021 22:44:17 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Micro-optimizations to avoid some strlen calls."
},
{
"msg_contents": "On Wed, Jul 21, 2021 at 7:44 AM David Rowley <dgrowleyml@gmail.com>\nwrote:\n\n> On Tue, 20 Jul 2021 at 10:49, Ranier Vilela <ranier.vf@gmail.com> wrote:\n> > There are some places, where strlen can have an overhead.\n> > This patch tries to fix this.\n>\n> I'm with Michael and David on this.\n>\n> I don't really feel like doing;\n>\n> - snprintf(buffer, sizeof(buffer), \"E%s%s\\n\",\n> + buflen = snprintf(buffer, sizeof(buffer), \"E%s%s\\n\",\n> _(\"could not fork new process for connection: \"),\n>\n> is a good idea. I'm unsure if you're either not aware of the value\n> that snprintf() returns or just happen to think an overflow is\n> unlikely enough because you're convinced that 1000 chars are always\n> enough to fit this translatable string. I'd say if we were 100%\n> certain of that then it might as well become sprintf() instead.\n> However, I imagine you'll struggle to get people to side with you that\n> taking this overflow risk would be worthwhile given your lack of any\n> evidence that anything actually has become meaningfully faster as a\n> result of any of these changes.\n>\nI got your point.\nReally getting only the result of snprintf is a bad idea.\nIn this case, the right way would be:\n\nsnprintf(buffer, sizeof(buffer), \"E%s%s\\n\",\n _(\"could not fork new process for connection: \"),\nbuflen = strlen(buffer);\n\nThus doesn't have to recount buffer over, if rc fails.\nThanks for the tip about snprintf, even though it's not the intention.\nThis is what I call a bad interface.\n\nregards,\nRanier Vilela",
"msg_date": "Wed, 21 Jul 2021 09:28:42 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Micro-optimizations to avoid some strlen calls."
},
{
"msg_contents": "On Wed, Jul 21, 2021 at 9:28 AM Ranier Vilela <ranier.vf@gmail.com>\nwrote:\n\n> On Wed, Jul 21, 2021 at 7:44 AM David Rowley <dgrowleyml@gmail.com>\n> wrote:\n>\n>> On Tue, 20 Jul 2021 at 10:49, Ranier Vilela <ranier.vf@gmail.com> wrote:\n>> > There are some places, where strlen can have an overhead.\n>> > This patch tries to fix this.\n>>\n>> I'm with Michael and David on this.\n>>\n>> I don't really feel like doing;\n>>\n>> - snprintf(buffer, sizeof(buffer), \"E%s%s\\n\",\n>> + buflen = snprintf(buffer, sizeof(buffer), \"E%s%s\\n\",\n>> _(\"could not fork new process for connection: \"),\n>>\n>> is a good idea. I'm unsure if you're either not aware of the value\n>> that snprintf() returns or just happen to think an overflow is\n>> unlikely enough because you're convinced that 1000 chars are always\n>> enough to fit this translatable string. I'd say if we were 100%\n>> certain of that then it might as well become sprintf() instead.\n>> However, I imagine you'll struggle to get people to side with you that\n>> taking this overflow risk would be worthwhile given your lack of any\n>> evidence that anything actually has become meaningfully faster as a\n>> result of any of these changes.\n>>\n> I got your point.\n> Really getting only the result of snprintf is a bad idea.\n> In this case, the right way would be:\n>\n> snprintf(buffer, sizeof(buffer), \"E%s%s\\n\",\n> _(\"could not fork new process for connection: \"),\n> buflen = strlen(buffer);\n>\n> Thus doesn't have to recount buffer over, if rc fails.\n> Thanks for the tip about snprintf, even though it's not the intention.\n> This is what I call a bad interface.\n>\nHere the v1 version, with fix to snprintf trap.\n\nregards,\nRanier Vilela",
"msg_date": "Wed, 21 Jul 2021 21:20:55 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Micro-optimizations to avoid some strlen calls."
}
] |
[
{
"msg_contents": "Hi hackers.\n\nThis is the patch to add kerberos delegation support in libpq, which\nenables postgres_fdw to connect to another server and authenticate\nas the same user to the current login user. This will obsolete my\nprevious patch which requires keytab file to be present on the fdw\nserver host.\n\nAfter the backend accepts the gssapi context, it may also get a\nproxy credential if permitted by policy. I previously made a hack\nto pass the pointer of proxy credential directly into libpq. It turns\nout that the correct way to do this is store/acquire using credential\ncache within local process memory to prevent leak.\n\nBecause no password is needed when querying foreign table via\nkerberos delegation, the \"password_required\" option in user\nmapping must be set to false by a superuser. Other than this, it\nshould work with normal user.\n\nI only tested it manually in a very simple configuration currently.\nI will go on to work with TAP tests for this.\n\nHow do you feel about this patch? Any feature/security concerns\nabout this?\n\nBest regards,\nPeifeng Qiu",
"msg_date": "Tue, 20 Jul 2021 03:05:48 +0000",
"msg_from": "Peifeng Qiu <peifengq@vmware.com>",
"msg_from_op": true,
"msg_subject": "Kerberos delegation support in libpq and postgres_fdw"
},
{
"msg_contents": "Hi all.\n\nI've slightly modified the patch to support \"gssencmode\" and added TAP tests.\n\nBest regards,\nPeifeng Qiu\n\n________________________________\nFrom: Peifeng Qiu\nSent: Tuesday, July 20, 2021 11:05 AM\nTo: pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>; Magnus Hagander <magnus@hagander.net>; Stephen Frost <sfrost@snowman.net>; Tom Lane <tgl@sss.pgh.pa.us>\nSubject: Kerberos delegation support in libpq and postgres_fdw\n\nHi hackers.\n\nThis is the patch to add kerberos delegation support in libpq, which\nenables postgres_fdw to connect to another server and authenticate\nas the same user to the current login user. This will obsolete my\nprevious patch which requires keytab file to be present on the fdw\nserver host.\n\nAfter the backend accepts the gssapi context, it may also get a\nproxy credential if permitted by policy. I previously made a hack\nto pass the pointer of proxy credential directly into libpq. It turns\nout that the correct way to do this is store/acquire using credential\ncache within local process memory to prevent leak.\n\nBecause no password is needed when querying foreign table via\nkerberos delegation, the \"password_required\" option in user\nmapping must be set to false by a superuser. Other than this, it\nshould work with normal user.\n\nI only tested it manually in a very simple configuration currently.\nI will go on to work with TAP tests for this.\n\nHow do you feel about this patch? Any feature/security concerns\nabout this?\n\nBest regards,\nPeifeng Qiu",
"msg_date": "Thu, 22 Jul 2021 08:39:53 +0000",
"msg_from": "Peifeng Qiu <peifengq@vmware.com>",
"msg_from_op": true,
"msg_subject": "Re: Kerberos delegation support in libpq and postgres_fdw"
},
{
"msg_contents": "On 22.07.21 10:39, Peifeng Qiu wrote:\n> I've slightly modified the patch to support \"gssencmode\" and added TAP \n> tests.\n\nFor the TAP tests, please put then under src/test/kerberos/, instead of \ncopying the whole infrastructure to contrib/postgres_fdw/. Just make a \nnew file, for example t/002_postgres_fdw_proxy.pl, and put your tests there.\n\nAlso, you can put code and tests in one patch, no need to separate.\n\nI wonder if this feature would also work in dblink. Since there is no \nsubstantial code changes in postgres_fdw itself as part of this patch, I \nwould suspect yes. Can you check?\n\n\n",
"msg_date": "Wed, 1 Sep 2021 10:57:20 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Kerberos delegation support in libpq and postgres_fdw"
},
{
"msg_contents": "This patch no longer applies following the Perl namespace changes, can you\nplease submit a rebased version? Marking the patch \"Waiting on Author\".\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Wed, 3 Nov 2021 13:41:32 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Kerberos delegation support in libpq and postgres_fdw"
},
{
"msg_contents": "> On 3 Nov 2021, at 13:41, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n> This patch no longer applies following the Perl namespace changes, can you\n> please submit a rebased version? Marking the patch \"Waiting on Author\".\n\nAs the thread has stalled, and the OP email bounces, I'm marking this patch\nReturned with Feedback. Please feel free to resubmit a new entry in case\nanyone picks this up.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Fri, 26 Nov 2021 14:41:55 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Kerberos delegation support in libpq and postgres_fdw"
},
{
"msg_contents": "Greetings,\n\n(Dropping the original poster as their email address apparently no\nlonger works)\n\n* Peter Eisentraut (peter.eisentraut@enterprisedb.com) wrote:\n> On 22.07.21 10:39, Peifeng Qiu wrote:\n> >I've slightly modified the patch to support \"gssencmode\" and added TAP\n> >tests.\n> \n> For the TAP tests, please put them under src/test/kerberos/, instead of\n> copying the whole infrastructure to contrib/postgres_fdw/. Just make a new\n> file, for example t/002_postgres_fdw_proxy.pl, and put your tests there.\n\nI've incorporated the tests into the existing kerberos/001_auth.pl as\nthere didn't seem any need to create another file.\n\n> Also, you can put code and tests in one patch, no need to separate.\n\nDone. Also rebased and updated for the changes in the TAP testing\ninfrastructure and other changes. Also added code to track if\ncredentials were forwarded or not and to log that information.\n\n> I wonder if this feature would also work in dblink. Since there are no\n> substantial code changes in postgres_fdw itself as part of this patch, I\n> would suspect yes. Can you check?\n\nYup, this should work fine. I didn't include any explicit testing of\npostgres_fdw or dblink in this, yet. Instead, for the moment at least,\nI've added to the connection log message an indication of if\ncredentials were passed along with the connection along with tests of\nboth the negative case and the positive case. Not sure if that's useful\ninformation to have in pg_stat_gssapi, but if so, then we could add it\nthere pretty easily.\n\nI'm happy to try and get testing with postgres_fdw and dblink working\nsoon though, assuming there aren't any particular objections to moving\nthis forward.\n\nWill add to the CF for consideration.\n\nThanks,\n\nStephen",
"msg_date": "Mon, 28 Feb 2022 20:28:47 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Kerberos delegation support in libpq and postgres_fdw"
},
{
"msg_contents": "On Mon, 2022-02-28 at 20:28 -0500, Stephen Frost wrote:\r\n> Will add to the CF for consideration.\r\n\r\nGSSAPI newbie here, so caveat lector.\r\n\r\n> diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c\r\n> index efc53f3135..6f820a34f1 100644\r\n> --- a/src/backend/libpq/auth.c\r\n> +++ b/src/backend/libpq/auth.c\r\n> @@ -920,6 +920,7 @@ pg_GSS_recvauth(Port *port)\r\n> \tint\t\t\tmtype;\r\n> \tStringInfoData buf;\r\n> \tgss_buffer_desc gbuf;\r\n> +\tgss_cred_id_t proxy;\r\n> \r\n> \t/*\r\n> \t * Use the configured keytab, if there is one. Unfortunately, Heimdal\r\n> @@ -949,6 +950,9 @@ pg_GSS_recvauth(Port *port)\r\n> \t */\r\n> \tport->gss->ctx = GSS_C_NO_CONTEXT;\r\n> \r\n> +\tproxy = NULL;\r\n> +\tport->gss->proxy_creds = false;\r\n> +\r\n> \t/*\r\n> \t * Loop through GSSAPI message exchange. This exchange can consist of\r\n> \t * multiple messages sent in both directions. First message is always from\r\n> @@ -999,7 +1003,7 @@ pg_GSS_recvauth(Port *port)\r\n> \t\t\t\t\t\t\t\t\t\t &port->gss->outbuf,\r\n> \t\t\t\t\t\t\t\t\t\t &gflags,\r\n> \t\t\t\t\t\t\t\t\t\t NULL,\r\n> -\t\t\t\t\t\t\t\t\t\t NULL);\r\n> +\t\t\t\t\t\t\t\t\t\t &proxy);\r\n> \r\n> \t\t/* gbuf no longer used */\r\n> \t\tpfree(buf.data);\r\n> @@ -1011,6 +1015,12 @@ pg_GSS_recvauth(Port *port)\r\n> \r\n> \t\tCHECK_FOR_INTERRUPTS();\r\n> \r\n> +\t\tif (proxy != NULL)\r\n> +\t\t{\r\n> +\t\t\tpg_store_proxy_credential(proxy);\r\n> +\t\t\tport->gss->proxy_creds = true;\r\n> +\t\t}\r\n> +\r\n\r\nSome implementation docs [1] imply that a delegated_cred_handle is only\r\nvalid if the ret_flags include GSS_C_DELEG_FLAG. 
The C-binding RFC [2],\r\nthough, says that we can rely on it being set to GSS_C_NO_CREDENTIAL if\r\nno handle was sent...\r\n\r\nI don't know if there are any implementation differences here, but in\r\nany case I think it'd be more clear to use the GSS_C_NO_CREDENTIAL\r\nspelling (instead of NULL) here, if we do decide not to check\r\nret_flags.\r\n\r\n[5] says we have to free the proxy credential with GSS_Release_cred();\r\nI don't see that happening anywhere, but I may have missed it.\r\n\r\n> \tmaj_stat = gss_init_sec_context(&min_stat,\r\n> -\t\t\t\t\t\t\t\t\tGSS_C_NO_CREDENTIAL,\r\n> +\t\t\t\t\t\t\t\t\tproxy,\r\n> \t\t\t\t\t\t\t\t\t&conn->gctx,\r\n> \t\t\t\t\t\t\t\t\tconn->gtarg_nam,\r\n> \t\t\t\t\t\t\t\t\tGSS_C_NO_OID,\r\n> -\t\t\t\t\t\t\t\t\tGSS_C_MUTUAL_FLAG,\r\n> +\t\t\t\t\t\t\t\t\tGSS_C_MUTUAL_FLAG | GSS_C_DELEG_FLAG,\r\n> \t\t\t\t\t\t\t\t\t0,\r\n> \t\t\t\t\t\t\t\t\tGSS_C_NO_CHANNEL_BINDINGS,\r\n> \t\t\t\t\t\t\t\t\t(ginbuf.value == NULL) ? GSS_C_NO_BUFFER : &ginbuf,\r\n> diff --git a/src/interfaces/libpq/fe-secure-gssapi.c b/src/interfaces/libpq/fe-secure-gssapi.c\r\n> index 6ea52ed866..566c89f52f 100644\r\n> --- a/src/interfaces/libpq/fe-secure-gssapi.c\r\n> +++ b/src/interfaces/libpq/fe-secure-gssapi.c\r\n> @@ -631,7 +631,7 @@ pqsecure_open_gss(PGconn *conn)\r\n> \t */\r\n> \tmajor = gss_init_sec_context(&minor, conn->gcred, &conn->gctx,\r\n> \t\t\t\t\t\t\t\t conn->gtarg_nam, GSS_C_NO_OID,\r\n> -\t\t\t\t\t\t\t\t GSS_REQUIRED_FLAGS, 0, 0, &input, NULL,\r\n> +\t\t\t\t\t\t\t\t GSS_REQUIRED_FLAGS | GSS_C_DELEG_FLAG, 0, 0, &input, NULL,\r\n\r\nIt seems like there should be significant security implications to\r\nallowing delegation across the board. Especially since one postgres_fdw\r\nmight delegate to another server, and then another... 
Should this be\r\nopt-in, maybe via a connection parameter?\r\n\r\n(It also looks like there are some mechanisms for further constraining\r\ndelegation scope, either by administrator policy or otherwise [3, 4].\r\nMight be a good thing for a v2 of this feature to have.)\r\n\r\nSimilarly, it feels a little strange that the server would allow the\r\nclient to unilaterally force the use of a delegated credential. I think\r\nthat should be opt-in on the server side too, unless there's some\r\ncontext I'm missing around why that's safe.\r\n\r\n> +\t/* Make the proxy credential only available to current process */\r\n> +\tmajor = gss_store_cred_into(&minor,\r\n> +\t\tcred,\r\n> +\t\tGSS_C_INITIATE, /* credential only used for starting libpq connection */\r\n> +\t\tGSS_C_NULL_OID, /* store all */\r\n> +\t\ttrue, /* overwrite */\r\n> +\t\ttrue, /* make default */\r\n> +\t\t&ccset,\r\n> +\t\t&mech,\r\n> +\t\t&usage);\r\n> +\r\n> +\r\n> +\tif (major != GSS_S_COMPLETE)\r\n> +\t{\r\n> +\t\tpg_GSS_error(\"gss_store_cred\", major, minor);\r\n> +\t}\r\n> +\r\n> +\t/* quite strange that gss_store_cred doesn't work with \"KRB5CCNAME=MEMORY:\",\r\n> +\t * we have to use gss_store_cred_into instead and set the env for later\r\n> +\t * gss_acquire_cred calls. */\r\n> +\tsetenv(\"KRB5CCNAME\", GSS_MEMORY_CACHE, 1);\r\n\r\nIf I'm reading it right, we're resetting the default credential in the\r\nMEMORY cache, so if you're a libpq client doing your own GSSAPI work,\r\nI'm guessing you might not be happy with this behavior. Also, we're\r\nglobally ignoring whatever ccache was set by an administrator. Can't\r\ntwo postgres_fdw connections from the same backend process require\r\ndifferent settings?\r\n\r\nI notice that gss_store_cred_into() has a companion,\r\ngss_acquire_cred_from(). 
Is it possible to use that to pull out our\r\ndelegated credential explicitly by name, instead of stomping on the\r\nglobal setup?\r\n\r\nThanks,\r\n--Jacob\r\n\r\n[1] https://docs.oracle.com/cd/E36784_01/html/E36875/gss-accept-sec-context-3gss.html\r\n[2] https://datatracker.ietf.org/doc/html/rfc2744#section-5.1\r\n[3] https://datatracker.ietf.org/doc/html/rfc5896\r\n[4] https://web.mit.edu/kerberos/krb5-latest/doc/appdev/gssapi.html#constrained-delegation-s4u\r\n[5] https://datatracker.ietf.org/doc/html/rfc2743#page-50\r\n",
"msg_date": "Fri, 11 Mar 2022 23:55:16 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: Kerberos delegation support in libpq and postgres_fdw"
},
{
"msg_contents": "Greetings,\n\nOn Fri, Mar 11, 2022 at 18:55 Jacob Champion <pchampion@vmware.com> wrote:\n\n> On Mon, 2022-02-28 at 20:28 -0500, Stephen Frost wrote:\n> > Will add to the CF for consideration.\n>\n> GSSAPI newbie here, so caveat lector.\n\n\nNo worries, thanks for your interest!\n\n> diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c\n> > index efc53f3135..6f820a34f1 100644\n> > --- a/src/backend/libpq/auth.c\n> > +++ b/src/backend/libpq/auth.c\n> > @@ -920,6 +920,7 @@ pg_GSS_recvauth(Port *port)\n> > int mtype;\n> > StringInfoData buf;\n> > gss_buffer_desc gbuf;\n> > + gss_cred_id_t proxy;\n> >\n> > /*\n> > * Use the configured keytab, if there is one. Unfortunately,\n> Heimdal\n> > @@ -949,6 +950,9 @@ pg_GSS_recvauth(Port *port)\n> > */\n> > port->gss->ctx = GSS_C_NO_CONTEXT;\n> >\n> > + proxy = NULL;\n> > + port->gss->proxy_creds = false;\n> > +\n> > /*\n> > * Loop through GSSAPI message exchange. This exchange can consist\n> of\n> > * multiple messages sent in both directions. First message is\n> always from\n> > @@ -999,7 +1003,7 @@ pg_GSS_recvauth(Port *port)\n> >\n> &port->gss->outbuf,\n> >\n> &gflags,\n> >\n> NULL,\n> > -\n> NULL);\n> > +\n> &proxy);\n> >\n> > /* gbuf no longer used */\n> > pfree(buf.data);\n> > @@ -1011,6 +1015,12 @@ pg_GSS_recvauth(Port *port)\n> >\n> > CHECK_FOR_INTERRUPTS();\n> >\n> > + if (proxy != NULL)\n> > + {\n> > + pg_store_proxy_credential(proxy);\n> > + port->gss->proxy_creds = true;\n> > + }\n> > +\n>\n> Some implementation docs [1] imply that a delegated_cred_handle is only\n> valid if the ret_flags include GSS_C_DELEG_FLAG. 
The C-binding RFC [2],\n> though, says that we can rely on it being set to GSS_C_NO_CREDENTIAL if\n> no handle was sent...\n>\n> I don't know if there are any implementation differences here, but in\n> any case I think it'd be more clear to use the GSS_C_NO_CREDENTIAL\n> spelling (instead of NULL) here, if we do decide not to check\n> ret_flags.\n\n\nHmmm, yeah, that seems like it might be better and is something I’ll take a\nlook at.\n\n[5] says we have to free the proxy credential with GSS_Release_cred();\n> I don't see that happening anywhere, but I may have missed it.\n\n\nI’m not sure that it’s really necessary or worthwhile to do that at process\nend since … the process is about to end. I suppose we could provide a\nfunction that a user could call to ask for it to be released sooner if we\nreally wanted..?\n\n> maj_stat = gss_init_sec_context(&min_stat,\n> > -\n> GSS_C_NO_CREDENTIAL,\n> > +\n> proxy,\n> >\n> &conn->gctx,\n> >\n> conn->gtarg_nam,\n> >\n> GSS_C_NO_OID,\n> > -\n> GSS_C_MUTUAL_FLAG,\n> > +\n> GSS_C_MUTUAL_FLAG | GSS_C_DELEG_FLAG,\n> > 0,\n> >\n> GSS_C_NO_CHANNEL_BINDINGS,\n> >\n> (ginbuf.value == NULL) ? GSS_C_NO_BUFFER : &ginbuf,\n> > diff --git a/src/interfaces/libpq/fe-secure-gssapi.c\n> b/src/interfaces/libpq/fe-secure-gssapi.c\n> > index 6ea52ed866..566c89f52f 100644\n> > --- a/src/interfaces/libpq/fe-secure-gssapi.c\n> > +++ b/src/interfaces/libpq/fe-secure-gssapi.c\n> > @@ -631,7 +631,7 @@ pqsecure_open_gss(PGconn *conn)\n> > */\n> > major = gss_init_sec_context(&minor, conn->gcred, &conn->gctx,\n> >\n> conn->gtarg_nam, GSS_C_NO_OID,\n> > -\n> GSS_REQUIRED_FLAGS, 0, 0, &input, NULL,\n> > +\n> GSS_REQUIRED_FLAGS | GSS_C_DELEG_FLAG, 0, 0, &input, NULL,\n>\n> It seems like there should be significant security implications to\n> allowing delegation across the board. Especially since one postgres_fdw\n> might delegate to another server, and then another... 
Should this be\n> opt-in, maybe via a connection parameter?\n\n\nThis is already opt-in- at kinit time a user can decide if they’d like a\nproxy-able ticket or not. I don’t know that we really need to have our own\noption for it … tho I’m not really against adding such an option either.\n\n(It also looks like there are some mechanisms for further constraining\n> delegation scope, either by administrator policy or otherwise [3, 4].\n> Might be a good thing for a v2 of this feature to have.)\n\n\nYes, constrained delegation is a pretty neat extension to Kerberos and one\nI’d like to look at later as a future enhancement but I don’t think it\nneeds to be in the initial version.\n\nSimilarly, it feels a little strange that the server would allow the\n> client to unilaterally force the use of a delegated credential. I think\n> that should be opt-in on the server side too, unless there's some\n> context I'm missing around why that's safe.\n\n\nPerhaps you could explain what isn’t safe about accepting a delegated\ncredential from a client..? I am not aware of a risk to accepting such a\ndelegated credential. Even so, I’m not against adding an option… but\nexactly how would that option be configured? Server level? On the HBA\nline? role level..?\n\n> + /* Make the proxy credential only available to current process */\n> > + major = gss_store_cred_into(&minor,\n> > + cred,\n> > + GSS_C_INITIATE, /* credential only used for starting libpq\n> connection */\n> > + GSS_C_NULL_OID, /* store all */\n> > + true, /* overwrite */\n> > + true, /* make default */\n> > + &ccset,\n> > + &mech,\n> > + &usage);\n> > +\n> > +\n> > + if (major != GSS_S_COMPLETE)\n> > + {\n> > + pg_GSS_error(\"gss_store_cred\", major, minor);\n> > + }\n> > +\n> > + /* quite strange that gss_store_cred doesn't work with\n> \"KRB5CCNAME=MEMORY:\",\n> > + * we have to use gss_store_cred_into instead and set the env for\n> later\n> > + * gss_acquire_cred calls. 
*/\n> > + setenv(\"KRB5CCNAME\", GSS_MEMORY_CACHE, 1);\n>\n> If I'm reading it right, we're resetting the default credential in the\n> MEMORY cache, so if you're a libpq client doing your own GSSAPI work,\n> I'm guessing you might not be happy with this behavior.\n\n\nThis is just done on the server side and not the client side..?\n\nAlso, we're\n> globally ignoring whatever ccache was set by an administrator. Can't\n> two postgres_fdw connections from the same backend process require\n> different settings?\n\n\nSettings..? Perhaps, but delegated credentials aren’t really settings, so\nnot really sure what you’re suggesting here.\n\nI notice that gss_store_cred_into() has a companion,\n> gss_acquire_cred_from(). Is it possible to use that to pull out our\n> delegated credential explicitly by name, instead of stomping on the\n> global setup?\n\n\nNot really sure what is meant here by global setup..? Feeling like this is\na follow on confusion from maybe mixing server vs client libpq?\n\nThanks,\n\nStephen",
"msg_date": "Fri, 11 Mar 2022 19:39:55 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Kerberos delegation support in libpq and postgres_fdw"
},
{
"msg_contents": "On Fri, 2022-03-11 at 19:39 -0500, Stephen Frost wrote:\r\n> \r\n> On Fri, Mar 11, 2022 at 18:55 Jacob Champion <pchampion@vmware.com> wrote:\r\n> > \r\n> > [5] says we have to free the proxy credential with GSS_Release_cred();\r\n> > I don't see that happening anywhere, but I may have missed it.\r\n> \r\n> I’m not sure that it’s really necessary or worthwhile to do that at\r\n> process end since … the process is about to end. I suppose we could\r\n> provide a function that a user could call to ask for it to be\r\n> released sooner if we really wanted..?\r\n\r\nDo we have to keep the credential handle around once we've stored it\r\ninto the MEMORY: cache, though? Just seems like a leak that someone\r\nwill have to plug eventually, even if it doesn't really impact things\r\nnow.\r\n\r\n> > It seems like there should be significant security implications to\r\n> > allowing delegation across the board. Especially since one postgres_fdw\r\n> > might delegate to another server, and then another... Should this be\r\n> > opt-in, maybe via a connection parameter?\r\n> \r\n> This is already opt-in- at kinit time a user can decide if they’d\r\n> like a proxy-able ticket or not. I don’t know that we really need to\r\n> have our own option for it … tho I’m not really against adding such\r\n> an option either.\r\n\r\nI don't really have experience with the use case. Is it normal for\r\nkinit users to have to decide once, globally, whether they want\r\neverything they interact with to be able to proxy their credentials? It\r\njust seems like you'd want more fine-grained control over who gets to\r\nmasquerade as you.\r\n\r\n> > Similarly, it feels a little strange that the server would allow the\r\n> > client to unilaterally force the use of a delegated credential. 
I think\r\n> > that should be opt-in on the server side too, unless there's some\r\n> > context I'm missing around why that's safe.\r\n> \r\n> Perhaps you could explain what isn’t safe about accepting a delegated\r\n> credential from a client..? I am not away of a risk to accepting\r\n> such a delegated credential.\r\n\r\nMy initial impression is that this is effectively modifying the USER\r\nMAPPING that the admin has set up. I'd be worried about an open\r\ncredential proxy being used to bypass firewall or HBA restrictions, for\r\ninstance -- you might not be able to connect as an admin from your\r\nmachine, but you might be able to connect by bouncing through a proxy.\r\n(What damage you can do is going to be limited by what the server\r\nextensions can do, of course.)\r\n\r\nAnother danger might be disclosure/compromise of middlebox secrets? Is\r\nit possible for someone who has one half of the credentials to snoop on\r\na gssenc connection between the proxy Postgres and the backend\r\nPostgres?\r\n\r\n> Even so, I’m not against adding an option… but exactly how would that\r\n> option be configured? Server level? On the HBA line? role level..?\r\n\r\nIn the OPTIONS for CREATE SERVER, maybe? At least for the FDW case.\r\n\r\n> > If I'm reading it right, we're resetting the default credential in the\r\n> > MEMORY cache, so if you're a libpq client doing your own GSSAPI work,\r\n> > I'm guessing you might not be happy with this behavior.\r\n> \r\n> This is just done on the server side and not the client side..?\r\n\r\nYeah, I misread the patch, sorry.\r\n\r\n> > Also, we're\r\n> > globally ignoring whatever ccache was set by an administrator. Can't\r\n> > two postgres_fdw connections from the same backend process require\r\n> > different settings?\r\n> \r\n> Settings..? 
Perhaps, but delegated credentials aren’t really \r\n> settings, so not really sure what you’re suggesting here.\r\n\r\nI mean that one backend server might require delegated credentials, and\r\nanother might require whatever the admin has already set up in the\r\nccache, and the user might want to use tables from both servers in the\r\nsame session.\r\n\r\n> > I notice that gss_store_cred_into() has a companion,\r\n> > gss_acquire_cred_from(). Is it possible to use that to pull out our\r\n> > delegated credential explicitly by name, instead of stomping on the\r\n> > global setup?\r\n> \r\n> Not really sure what is meant here by global setup..? Feeling like\r\n> this is a follow on confusion from maybe mixing server vs client\r\n> libpq?\r\n\r\nBy my reading, the gss_store_cred_into() call followed by\r\nthe setenv(\"KRB5CCNAME\", ...) is effectively performing global\r\nconfiguration for the process. Any KRB5CCNAME already set up by the\r\nserver admin is going to be ignored from that point onward. Is that\r\naccurate?\r\n\r\nThanks,\r\n--Jacob\r\n",
"msg_date": "Tue, 15 Mar 2022 17:59:08 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: Kerberos delegation support in libpq and postgres_fdw"
},
{
"msg_contents": "Greetings,\n\n* Jacob Champion (pchampion@vmware.com) wrote:\n> On Fri, 2022-03-11 at 19:39 -0500, Stephen Frost wrote:\n> > On Fri, Mar 11, 2022 at 18:55 Jacob Champion <pchampion@vmware.com> wrote:\n> > > [5] says we have to free the proxy credential with GSS_Release_cred();\n> > > I don't see that happening anywhere, but I may have missed it.\n> > \n> > I’m not sure that it’s really necessary or worthwhile to do that at\n> > process end since … the process is about to end. I suppose we could\n> > provide a function that a user could call to ask for it to be\n> > released sooner if we really wanted..?\n> \n> Do we have to keep the credential handle around once we've stored it\n> into the MEMORY: cache, though? Just seems like a leak that someone\n> will have to plug eventually, even if it doesn't really impact things\n> now.\n\nWe don't, so I've fixed that in the attached. Not sure it's that big a\ndeal but I don't think it hurts anything either.\n\n> > > It seems like there should be significant security implications to\n> > > allowing delegation across the board. Especially since one postgres_fdw\n> > > might delegate to another server, and then another... Should this be\n> > > opt-in, maybe via a connection parameter?\n> > \n> > This is already opt-in- at kinit time a user can decide if they’d\n> > like a proxy-able ticket or not. I don’t know that we really need to\n> > have our own option for it … tho I’m not really against adding such\n> > an option either.\n> \n> I don't really have experience with the use case. Is it normal for\n> kinit users to have to decide once, globally, whether they want\n> everything they interact with to be able to proxy their credentials? It\n> just seems like you'd want more fine-grained control over who gets to\n> masquerade as you.\n\nYes, that's pretty typical for kinit users- they usually go with\nwhatever the org policy is. 
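(To make that concrete: the forwardable bit is chosen per-ticket at kinit time — a rough sketch using MIT krb5's stock tools, where the principal and realm are placeholders:)

```shell
# Ask the KDC for a forwardable TGT (or set "forwardable = true" under
# [libdefaults] in krb5.conf, which is the usual org-wide policy knob):
kinit -f user@EXAMPLE.COM

# Or explicitly refuse forwardability for this ticket:
kinit -F user@EXAMPLE.COM

# Inspect the cached ticket's flags; an 'F' in the flags field means the
# TGT is forwardable and so could be delegated to a server:
klist -f
```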
Now, you're not wrong about wanting more\nfine-grained control, which is what's known as 'constrained delegation'.\nThat's something that Kerberos in general supports these days though\nit's more complicated and requires additional code to do. That's\nsomething that I think we could certainly add later on.\n\n> > > Similarly, it feels a little strange that the server would allow the\n> > > client to unilaterally force the use of a delegated credential. I think\n> > > that should be opt-in on the server side too, unless there's some\n> > > context I'm missing around why that's safe.\n> > \n> > Perhaps you could explain what isn’t safe about accepting a delegated\n> > credential from a client..? I am not away of a risk to accepting\n> > such a delegated credential.\n> \n> My initial impression is that this is effectively modifying the USER\n> MAPPING that the admin has set up. I'd be worried about an open\n> credential proxy being used to bypass firewall or HBA restrictions, for\n> instance -- you might not be able to connect as an admin from your\n> machine, but you might be able to connect by bouncing through a proxy.\n> (What damage you can do is going to be limited by what the server\n> extensions can do, of course.)\n\nI'm not sure that I really see the concern here. Also, in order for\nthis to work, the user mapping would have to be created with \"password\nrequired = false\". Maybe that's something we revisit later, but it\nseems like a good way to allow an admin to have control over this.\n\n> Another danger might be disclosure/compromise of middlebox secrets? 
Is\n> it possible for someone who has one half of the credentials to snoop on\n> a gssenc connection between the proxy Postgres and the backend\n> Postgres?\n\nA compromised middlebox would, of course, be an issue- for any kind of\ndelegated credentials (which certainly goes for cleartext passwords\nbeing passed along, and that's currently the only thing we support..).\nOne nice thing about GSSAPI is that the client and the server validate\neach other, so it wouldn't just be 'any' middle-box but would have to be\none that was actually a trusted system in the infrastructure which has\nsomehow been compromised and was still trusted.\n\n> > Even so, I’m not against adding an option… but exactly how would that\n> > option be configured? Server level? On the HBA line? role level..?\n> \n> In the OPTIONS for CREATE SERVER, maybe? At least for the FDW case.\n\nI'm a bit confused on this. The option to allow or not allow delegated\ncredentials couldn't be something that's in the CREATE SERVER for FDWs\nas it applies to more than just FDWs but also dblink and anything else\nwhere we reach out from PG to contact some other system.\n\n> > > If I'm reading it right, we're resetting the default credential in the\n> > > MEMORY cache, so if you're a libpq client doing your own GSSAPI work,\n> > > I'm guessing you might not be happy with this behavior.\n> > \n> > This is just done on the server side and not the client side..?\n> \n> Yeah, I misread the patch, sorry.\n\nNo worries.\n\n> > > Also, we're\n> > > globally ignoring whatever ccache was set by an administrator. Can't\n> > > two postgres_fdw connections from the same backend process require\n> > > different settings?\n> > \n> > Settings..? 
Perhaps, but delegated credentials aren’t really \n> > settings, so not really sure what you’re suggesting here.\n> \n> I mean that one backend server might require delegated credentials, and\n> another might require whatever the admin has already set up in the\n> ccache, and the user might want to use tables from both servers in the\n> same session.\n\nThat an admin might have a credential cache that's picked up and used\nfor connections from a regular user backend to another system strikes me\nas an altogether concerning idea. Even so, in such a case, the admin\nwould have had to set up the user mapping with 'password required =\nfalse' or it wouldn't have worked for a non-superuser anyway, so I'm not\nsure that I'm too worried about this case.\n\n> > > I notice that gss_store_cred_into() has a companion,\n> > > gss_acquire_cred_from(). Is it possible to use that to pull out our\n> > > delegated credential explicitly by name, instead of stomping on the\n> > > global setup?\n> > \n> > Not really sure what is meant here by global setup..? Feeling like\n> > this is a follow on confusion from maybe mixing server vs client\n> > libpq?\n> \n> By my reading, the gss_store_cred_into() call followed by\n> the setenv(\"KRB5CCNAME\", ...) is effectively performing global\n> configuration for the process. Any KRB5CCNAME already set up by the\n> server admin is going to be ignored from that point onward. Is that\n> accurate?\n\nThe process, yes, but I guess I disagree on that being 'global'- it's\njust for that PG backend process.\n\nAttached is an updated patch which adds the gss_release_creds call, a\nfunction in libpq to allow checking if the libpq connection was made\nusing GSS, changes to dblink to have it check for password-or-gss when\nconnecting to a remote system, and tests for dblink and postgres_fdw to\nmake sure that this all works correctly.\n\nThoughts?\n\nThanks!\n\nStephen",
"msg_date": "Wed, 6 Apr 2022 15:27:03 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Kerberos delegation support in libpq and postgres_fdw"
},
{
"msg_contents": "Greetings,\n\n* Stephen Frost (sfrost@snowman.net) wrote:\n> * Jacob Champion (pchampion@vmware.com) wrote:\n> > On Fri, 2022-03-11 at 19:39 -0500, Stephen Frost wrote:\n> > > Even so, I’m not against adding an option… but exactly how would that\n> > > option be configured? Server level? On the HBA line? role level..?\n> > \n> > In the OPTIONS for CREATE SERVER, maybe? At least for the FDW case.\n> \n> I'm a bit confused on this. The option to allow or not allow delegated\n> credentials couldn't be something that's in the CREATE SERVER for FDWs\n> as it applies to more than just FDWs but also dblink and anything else\n> where we reach out from PG to contact some other system.\n\nThinking it through further, it seems like the right place to allow an\nadministrator to control if credentials are allowed to be delegated is\nthrough a pg_hba option. Attached patch adds such an option.\n\n> > > > Also, we're\n> > > > globally ignoring whatever ccache was set by an administrator. Can't\n> > > > two postgres_fdw connections from the same backend process require\n> > > > different settings?\n> > > \n> > > Settings..? Perhaps, but delegated credentials aren’t really \n> > > settings, so not really sure what you’re suggesting here.\n> > \n> > I mean that one backend server might require delegated credentials, and\n> > another might require whatever the admin has already set up in the\n> > ccache, and the user might want to use tables from both servers in the\n> > same session.\n> \n> That an admin might have a credential cache that's picked up and used\n> for connections from a regular user backend to another system strikes me\n> as an altogether concerning idea. 
Even so, in such a case, the admin\n> would have had to set up the user mapping with 'password required =\n> false' or it wouldn't have worked for a non-superuser anyway, so I'm not\n> sure that I'm too worried about this case.\n\nTo address this, I also added a new GUC which allows an administrator to\ncontrol what the credential cache is set to for user-authenticated\nbackends, with a default of MEMORY:, which should generally be safe and\nwon't cause a user backend to pick up on a file-based credential cache\nwhich might exist on the server somewhere. This gives the administrator\nthe option to set it to more-or-less whatever they'd like though, so if\nthey want to set it to a file-based credential cache, then they can do\nso (I did put some caveats about doing that into the documentation as I\ndon't think it's generally a good idea to do...).\n\n> > > > I notice that gss_store_cred_into() has a companion,\n> > > > gss_acquire_cred_from(). Is it possible to use that to pull out our\n> > > > delegated credential explicitly by name, instead of stomping on the\n> > > > global setup?\n> > > \n> > > Not really sure what is meant here by global setup..? Feeling like\n> > > this is a follow on confusion from maybe mixing server vs client\n> > > libpq?\n> > \n> > By my reading, the gss_store_cred_into() call followed by\n> > the setenv(\"KRB5CCNAME\", ...) is effectively performing global\n> > configuration for the process. Any KRB5CCNAME already set up by the\n> > server admin is going to be ignored from that point onward. 
Is that\n> > accurate?\n> \n> The process, yes, but I guess I disagree on that being 'global'- it's\n> just for that PG backend process.\n\nThe new krb_user_ccache is a lot closer to 'global', though it's\nspecifically for user-authenticated backends (allowing the postmaster\nand other things like replication connections to use whatever the\ncredential cache is set to by the administrator on startup), but that\nseems like it makes sense to me- generally you're not going to want\nregular user backends to be accessing the credential cache of the\n'postgres' unix account on the server.\n\n> Attached is an updated patch which adds the gss_release_creds call, a\n> function in libpq to allow checking if the libpq connection was made\n> using GSS, changes to dblink to have it check for password-or-gss when\n> connecting to a remote system, and tests for dblink and postgres_fdw to\n> make sure that this all works correctly.\n\nI've added a couple more tests to address the new options too, along\nwith documentation for them. This is starting to feel reasonably decent\nto me, at least as a first pass at supporting kerberos credential\ndelegation, which is definitely a feature I've been hoping we would get\ninto PG for quite a while. Would certainly appreciate some feedback on\nthis (from anyone who'd like to comment), though I know we're getting\ninto the last few hours before feature freeze ends.\n\nUpdated patch attached.\n\nThanks!\n\nStephen",
"msg_date": "Fri, 8 Apr 2022 00:21:26 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Kerberos delegation support in libpq and postgres_fdw"
},
{
"msg_contents": "Greetings,\n\n* Stephen Frost (sfrost@snowman.net) wrote:\n> The new krb_user_ccache is a lot closer to 'global', though it's\n> specifically for user-authenticated backends (allowing the postmaster\n> and other things like replication connections to use whatever the\n> credential cache is set to by the administrator on startup), but that\n> seems like it makes sense to me- generally you're not going to want\n> regular user backends to be accessing the credential cache of the\n> 'postgres' unix account on the server.\n\nAdded an explicit 'environment' option to allow for, basically, existing\nbehavior, where we don't mess with the environment variable at all,\nthough I kept the default as MEMORY since I don't think it's really\ntypical that folks actually want regular user backends to inherit the\ncredential cache of the server.\n\nAdded a few more tests and updated the documentation too. Sadly, seems\nwe've missed the deadline for v15 though for lack of feedback on these.\nWould really like to get some other folks commenting as these are new\npg_hba and postgresql.conf options being added.\n\nThanks!\n\nStephen",
"msg_date": "Fri, 8 Apr 2022 08:21:30 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Kerberos delegation support in libpq and postgres_fdw"
},
{
"msg_contents": "On Fri, Apr 8, 2022 at 8:21 AM Stephen Frost <sfrost@snowman.net> wrote:\n> Added an explicit 'environment' option to allow for, basically, existing\n> behavior, where we don't mess with the environment variable at all,\n> though I kept the default as MEMORY since I don't think it's really\n> typical that folks actually want regular user backends to inherit the\n> credential cache of the server.\n>\n> Added a few more tests and updated the documentation too. Sadly, seems\n> we've missed the deadline for v15 though for lack of feedback on these.\n> Would really like to get some other folks commenting as these are new\n> pg_hba and postgresql.conf options being added.\n\nHi,\n\nI don't think this patch is quite baked enough to go in even if the\ndeadline hadn't formally passed, but I'm happy to offer a few opinions\n... especially if we can also try to sort out a plan for getting that\nwider-checksums thing you mentioned done for v16.\n\n + /* gssencmode is also libpq option, same to above. */\n+ {\"gssencmode\", UserMappingRelationId, true},\n\nI really hate names like this that are just a bunch of stuff strung\ntogether with no punctuation and some arbitrary abbreviations thrown\nin for good measure. But since the libpq parameter already exists it's\nhard to argue we should do anything else here.\n\n+ <term><literal>allow_cred_delegation</literal></term>\n\nFirst, I again recommend not choosing words at random to abbreviate.\n\"delegate_credentials\" would be shorter and clearer. Second, I think\nwe need to decide whether we envision just having one parameter here\nfor every kind of credential delegation that libpq might ever support,\nor whether this is really something specific to GSS. If the latter,\nthe name should mention GSS.\n\nI also suggest that the default value of this option should be false,\nrather than true. 
I would be unhappy if ssh started defaulting to\nForwardAgent=yes, because that's less secure and I don't want my\ncredentials shared with random servers without me making a choice to\ndo that. Similarly here I think we should default to the more secure\noption.\n\n+ <listitem>\n+ <para>\n+ Sets the location of the Kerberos credential cache to be used for\n+ regular user backends which go through authentication. The default is\n+ <filename>MEMORY:</filename>, which is where delegated credentials\n+ are stored (and is otherwise empty). Care should be used when changing\n+ this value- setting it to a file-based credential cache will mean that\n+ user backends could potentially use any credentials stored to access\n+ other systems.\n+ If this parameter is set to an empty string, then the variable will be\n+ explicit un-set and the system-dependent default is used, which may be a\n+ file-based credential cache with the same caveats as previously\n+ mentioned. If the special value 'environment' is used, then the variable\n+ is left untouched and will be whatever was set in the environment at\n+ startup time.\n\n\"MEMORY:\" seems like a pretty weird choice of arbitrary string. Is it\nsupposed to look like a Windows drive letter or pseudo-device, or\nwhat? I'm not sure exactly what's better here, but I just think this\ndoesn't look like anything else we've got today. And then we've got a\nsecond special environment, \"environment\", which looks completely\ndifferent: now it's lower-case and without the colon. And then empty\nstring is special too.\n\nI wonder whether we really need quite this many cases. 
This patch takes the\nless-useful approach of defining two different symbols for the same\nstring in different files. This one has this #ifndef/#endif guard here\nwhich I think it probably shouldn't, since the choice of string\nprobably shouldn't be compile-time configurable, but it also won't\nwork, because there's no similar guard in the other file.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 8 Apr 2022 11:01:58 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Kerberos delegation support in libpq and postgres_fdw"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Fri, Apr 8, 2022 at 8:21 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > Added an explicit 'environment' option to allow for, basically, existing\n> > behavior, where we don't mess with the environment variable at all,\n> > though I kept the default as MEMORY since I don't think it's really\n> > typical that folks actually want regular user backends to inherit the\n> > credential cache of the server.\n> >\n> > Added a few more tests and updated the documentation too. Sadly, seems\n> > we've missed the deadline for v15 though for lack of feedback on these.\n> > Would really like to get some other folks commenting as these are new\n> > pg_hba and postgresql.conf options being added.\n> \n> I don't think this patch is quite baked enough to go in even if the\n> deadline hadn't formally passed, but I'm happy to offer a few opinions\n> ... especially if we can also try to sort out a plan for getting that\n> wider-checksums thing you mentioned done for v16.\n\nSure.\n\n> + /* gssencmode is also libpq option, same to above. */\n> + {\"gssencmode\", UserMappingRelationId, true},\n> \n> I really hate names like this that are just a bunch of stuff strung\n> together with no punctuation and some arbitrary abbreviations thrown\n> in for good measure. But since the libpq parameter already exists it's\n> hard to argue we should do anything else here.\n\nWell, yeah.\n\n> + <term><literal>allow_cred_delegation</literal></term>\n> \n> First, I again recommend not choosing words at random to abbreviate.\n> \"delegate_credentials\" would be shorter and clearer. Second, I think\n> we need to decide whether we envision just having one parameter here\n> for every kind of credential delegation that libpq might ever support,\n> or whether this is really something specific to GSS. 
If the latter,\n> the name should mention GSS.\n\ndelegate_credentials seems to imply that the server has some kind of\ncontrol over the act of delegating credentials, which isn't really the\ncase. The client has to decide to delegate credentials and it does that\nindependent of the server- the server side just gets to either accept\nthose delegated credentials, or ignore them. \n\nIn terms of having a prefix, this is certainly something that I'd like\nto see SSPI support added for as well (perhaps that can be in v16 too)\nand so it's definitely not GSS-exclusive among the authentication\nmethods that we have today. In that sense, this option falls into the\nsame category as 'include_realm' and 'krb_realm' in that it applies to\nmore than one, but not all, of our authentication methods.\n\n> I also suggest that the default value of this option should be false,\n> rather than true. I would be unhappy if ssh started defaulting to\n> ForwardAgent=yes, because that's less secure and I don't want my\n> credentials shared with random servers without me making a choice to\n> do that. Similarly here I think we should default to the more secure\n> option.\n\nThis is a bit backwards from how it works though- this option is about\nif the server will accept delegated credentials, not if the client sends\nthem. If your client was set to ForwardAgent=yes, would you be happy if\nthe server's default was AllowAgentForwarding=no? (At least on the\nsystem I'm looking at, the current default is AllowAgentForwarding=yes\nin sshd_config).\n\nRegarding the client side, it is the case that GSSAPIDelegateCredentials\nin ssh defaults to no, so it seems like the next iteration of the patch\nshould probably include a libpq option similar to that ssh_config\noption. 
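(For illustration only — a sketch of what such a client-side opt-in could look like; the `gssdelegation` parameter and `PGGSSDELEGATION` variable names here are hypothetical, chosen to mirror ssh's GSSAPIDelegateCredentials, not something the current patch provides:)

```shell
# Delegation disabled unless the client asks for it, per connection:
psql "host=head.example.com dbname=app gssencmode=require gssdelegation=1"

# Or via an environment variable, as with other libpq settings:
PGGSSDELEGATION=1 psql "host=head.example.com dbname=app"
```

Either way, the server side would still decide independently whether to accept the delegated credential.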
As I mentioned before, users already can decide if they'd like\nproxyable credentials or not when they kinit, though more generally this\nis set as an environment-wide policy, but we can add an option and\ndisable it by default.\n\n> + <listitem>\n> + <para>\n> + Sets the location of the Kerberos credential cache to be used for\n> + regular user backends which go through authentication. The default is\n> + <filename>MEMORY:</filename>, which is where delegated credentials\n> + are stored (and is otherwise empty). Care should be used when changing\n> + this value- setting it to a file-based credential cache will mean that\n> + user backends could potentially use any credentials stored to access\n> + other systems.\n> + If this parameter is set to an empty string, then the variable will be\n> + explicit un-set and the system-dependent default is used, which may be a\n> + file-based credential cache with the same caveats as previously\n> + mentioned. If the special value 'environment' is used, then the variable\n> + is left untouched and will be whatever was set in the environment at\n> + startup time.\n> \n> \"MEMORY:\" seems like a pretty weird choice of arbitrary string. Is it\n> supposed to look like a Windows drive letter or pseudo-device, or\n> what? I'm not sure exactly what's better here, but I just think this\n> doesn't look like anything else we've got today. And then we've got a\n> second special environment, \"environment\", which looks completely\n> different: now it's lower-case and without the colon. And then empty\n> string is special too.\n\nThis isn't actually something we have a choice in, really, it's from the\nKerberos library. MEMORY is the library's in-memory credential cache.\nOther possible values are FILE:/some/file, DIR:/some/dir, API:, and\nothers. Documentation is available here:\nhttps://web.mit.edu/kerberos/krb5-1.12/doc/basic/ccache_def.html\n\n> I wonder whether we really need quite this many cases. 
But if we do they\n> probably need better and more consistent naming.\n\nI wouldn't want to end up with values that could end up conflicting with\nreal values that a user might want to specify, so the choice of\n'environment' and empty-value were specifically chosen to avoid that\nrisk. If we're worried that doing so isn't sufficient or is too\nconfusing, the better option would likely be to have another GUC that\ncontrols if we unset, ignore, or set the value to what the other GUC\nsays to set it to. I'm fine with that if you agree.\n\n> The formatting here also looks weird.\n> \n> +#ifndef PG_KRB_USER_CCACHE\n> +#define PG_KRB_USER_CCACHE \"MEMORY:\"\n> +#endif\n> \n> At the risk of stating the obvious, the general idea of a #define is\n> that you define things in one place and then use the defined symbol\n> rather than the original value everywhere. This patch takes the\n> less-useful approach of defining two different symbols for the same\n> string in different files. This one has this #ifndef/#endif guard here\n> which I think it probably shouldn't, since the choice of string\n> probably shouldn't be compile-time configurable, but it also won't\n> work, because there's no similar guard in the other file.\n\nYeah, the other #define should have gone away and been changed to use\nthe above. That should be easy enough to fix.\n\nThanks!\n\nStephen",
"msg_date": "Fri, 8 Apr 2022 11:29:38 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Kerberos delegation support in libpq and postgres_fdw"
},
{
"msg_contents": "On Fri, Apr 8, 2022 at 11:29 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > + <term><literal>allow_cred_delegation</literal></term>\n> >\n> > First, I again recommend not choosing words at random to abbreviate.\n> > \"delegate_credentials\" would be shorter and clearer. Second, I think\n> > we need to decide whether we envision just having one parameter here\n> > for every kind of credential delegation that libpq might ever support,\n> > or whether this is really something specific to GSS. If the latter,\n> > the name should mention GSS.\n>\n> delegate_credentials seems to imply that the server has some kind of\n> control over the act of delegating credentials, which isn't really the\n> case. The client has to decide to delegate credentials and it does that\n> independent of the server- the server side just gets to either accept\n> those delegated credentials, or ignore them.\n\nOh ... I thought this was a libpq parameter to control the client\nbehavior. I guess I didn't read it carefully enough.\n\n> Regarding the client side, it is the case that GSSAPIDelegateCredentials\n> in ssh defaults to no, so it seems like the next iteration of the patch\n> should probably include a libpq option similar to that ssh_config\n> option. As I mentioned before, users already can decide if they'd like\n> proxyable credentials or not when they kinit, though more generally this\n> is set as a environment-wide policy, but we can add an option and\n> disable it by default.\n\n+1.\n\n> This isn't actually something we have a choice in, really, it's from the\n> Kerberos library. MEMORY is the library's in-memory credential cache.\n> Other possible values are FILE:/some/file, DIR:/some/dir, API:, and\n> others. 
Documentation is available here:\n> https://web.mit.edu/kerberos/krb5-1.12/doc/basic/ccache_def.html\n\nWell, I was just going by the fact that this string (\"MEMORY:\") seems\nto be being interpreted in our code, not the library.\n\n> > I wonder whether we really need quite this many cases. But if we do they\n> > probably need better and more consistent naming.\n>\n> I wouldn't want to end up with values that could end up conflicting with\n> real values that a user might want to specify, so the choice of\n> 'environment' and empty-value were specifically chosen to avoid that\n> risk. If we're worried that doing so isn't sufficient or is too\n> confusing, the better option would likely be to have another GUC that\n> controls if we unset, ignore, or set the value to what the other GUC\n> says to set it to. I'm fine with that if you agree.\n\nYeah, I thought of that, and it might be the way to go. I wasn't too\nsure we needed the explicit-unset behavior as an option, but I defer\nto you on that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 8 Apr 2022 11:35:13 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Kerberos delegation support in libpq and postgres_fdw"
},
{
"msg_contents": "On 4/8/22 05:21, Stephen Frost wrote:\n> Added a few more tests and updated the documentation too. Sadly, seems\n> we've missed the deadline for v15 though for lack of feedback on these.\n> Would really like to get some other folks commenting as these are new\n> pg_hba and postgresql.conf options being added.\n\nSorry for the incredibly long delay; I lost track of this thread during\nthe email switch. I'm testing the patch with various corner cases to try\nto figure out how it behaves, so this isn't a full review, but I wanted\nto jump through some of the emails I missed and at least give you some\nresponses.\n\nAs an overall note, I think the patch progression, and adding more\nexplicit control over when credentials may be delegated, is very\npositive, and +1 for the proposed libpq connection option elsewhere in\nthe thread.\n\nOn 4/7/22 21:21, Stephen Frost wrote:\n>> That an admin might have a credential cache that's picked up and used\n>> for connections from a regular user backend to another system strikes me\n>> as an altogether concerning idea. Even so, in such a case, the admin\n>> would have had to set up the user mapping with 'password required =\n>> false' or it wouldn't have worked for a non-superuser anyway, so I'm not\n>> sure that I'm too worried about this case.\n>\n> To address this, I also added a new GUC which allows an administrator to\n> control what the credential cache is set to for user-authenticated\n> backends, with a default of MEMORY:, which should generally be safe and\n> won't cause a user backend to pick up on a file-based credential cache\n> which might exist on the server somewhere. 
This gives the administrator\n> the option to set it to more-or-less whatever they'd like though, so if\n> they want to set it to a file-based credential cache, then they can do\n> so (I did put some caveats about doing that into the documentation as I\n> don't think it's generally a good idea to do...).\n\nI'm not clear on how this handles the collision case. My concern was\nwith a case where you have more than one foreign table/server, and they\nneed to use separate credentials. It's not obvious to me how changing\nthe location of a (single, backend-global) cache mitigates that problem.\n\nI'm also missing something about why password_required=false is\nnecessary (as opposed to simply setting a password in the USER MAPPING).\nMy current test case doesn't make use of password_required=false and it\nappears to work just fine.\n\nOn 4/6/22 12:27, Stephen Frost wrote:\n>> Another danger might be disclosure/compromise of middlebox secrets? Is\n>> it possible for someone who has one half of the credentials to snoop on\n>> a gssenc connection between the proxy Postgres and the backend\n>> Postgres?\n>\n> A compromised middlebox would, of course, be an issue- for any kind of\n> delegated credentials (which certainly goes for cleartext passwords\n> being passed along, and that's currently the only thing we support..).\n> One nice thing about GSSAPI is that the client and the server validate\n> each other, so it wouldn't just be 'any' middle-box but would have to be\n> one that was actually a trusted system in the infrastructure which has\n> somehow been compromised and was still trusted.\n\nI wasn't clear enough, sorry -- I mean that we have to prove that\ndefaulting allow_cred_delegation to true doesn't cause the compromise of\nexisting deployments.\n\nAs an example, right now I'm trying to characterize behavior with the\nfollowing pg_hba setup on the foreign server:\n\n hostgssenc all all ... 
password\n\nSo in other words we're using GSS as transport encryption only, not as\nan authentication provider. On the middlebox, we create a FOREIGN\nSERVER/TABLE that points to this, and set up a USER MAPPING (with no\nUSAGE rights) that contains the necessary password. (I'm using a\nplaintext password to make it more obvious what the danger is, not\nsuggesting that this would be good practice.)\n\nAs far as I can tell, to make this work today, a server admin has to set\nup a local credential cache with the keys for some one-off principal. It\ndoesn't have to be an admin principal, because the point is just to\nprovide transport protection for the password, so it's not really\nparticularly scary to make it available to user backends. But this new\nproposed feature lets the client override that credential cache,\nsubstituting their own credentials, for which they have all the Kerberos\nsymmetric key material.\n\nSo my question is this: does substituting my credentials for the admin's\ncredentials let me weaken or break the transport encryption on the\nbackend connection, and grab the password that I'm not supposed to have\naccess to as a front-end client?\n\nI honestly don't know the answer; GSSAPI is a black box that defers to\nKerberos and there's a huge number of specs that I've been slowly making\nmy way through. But in my tests, if I turn on credential forwarding,\nWireshark is suddenly able to use the *client's* keys to decrypt pieces\nof the TGS's conversations with the *middlebox*, including session keys,\nand that doesn't make me feel very good about the strength of the crypto\nwhen the middlebox starts talking to the backend foreign server.\n\nMaybe there's some ephemeral exchange going on that makes it too hard to\nattack in practice, or some other mitigations. But Wireshark doesn't\nunderstand how to dissect the libpq gssenc exchange, and I don't know\nthe specs well enough yet, so I can't really prove it either way. 
Do you\nknow more about the underlying GSS exchange, or else which specs cover\nthe low-level details? I'm trying to avoid writing Wireshark dissector\ncode, but maybe that'd be useful either way...\n\n--Jacob\n\n\n",
"msg_date": "Thu, 7 Jul 2022 16:24:49 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Kerberos delegation support in libpq and postgres_fdw"
},
{
"msg_contents": "On Thu, Jul 7, 2022 at 4:24 PM Jacob Champion <jchampion@timescale.com> wrote:\n> So my question is this: does substituting my credentials for the admin's\n> credentials let me weaken or break the transport encryption on the\n> backend connection, and grab the password that I'm not supposed to have\n> access to as a front-end client?\n\nWith some further research: yes, it does.\n\nIf a DBA is using a GSS encrypted tunnel to communicate to a foreign\nserver, accepting delegation by default means that clients will be\nable to break that backend encryption at will, because the keys in use\nwill be under their control.\n\n> Maybe there's some ephemeral exchange going on that makes it too hard to\n> attack in practice, or some other mitigations.\n\nThere is no forward secrecy, ephemeral exchange, etc. to mitigate this [1]:\n\n The Kerberos protocol in its basic form does not provide perfect\n forward secrecy for communications. If traffic has been recorded by\n an eavesdropper, then messages encrypted using the KRB_PRIV message,\n or messages encrypted using application-specific encryption under\n keys exchanged using Kerberos can be decrypted if the user's,\n application server's, or KDC's key is subsequently discovered.\n\nSo the client can decrypt backend communications that make use of its\ndelegated key material. (This also means that gssencmode is a lot\nweaker than I expected.)\n\n> I'm trying to avoid writing Wireshark dissector\n> code, but maybe that'd be useful either way...\n\nI did end up filling out the existing PGSQL dissector so that it could\ndecrypt GSSAPI exchanges (with the use of a keytab, that is). If you'd\nlike to give it a try, the patch, based on Wireshark 3.7.1, is\nattached. Note the GPLv2 license. 
It isn't correct code yet, because I\ndidn't understand how packet reassembly worked in Wireshark when I\nstarted writing the code, so really large GSSAPI messages that are\nsplit across multiple TCP packets will confuse the dissector. But it's\nenough to prove the concept.\n\nTo see this in action, set up an FDW connection that uses gssencmode\n(so the server in the middle will need its own Kerberos credentials).\nCapture traffic starting from the kinit through the query on the\nforeign table. Export the client's key material into a keytab, and set\nup Wireshark to use that keytab for decryption. When credential\nforwarding is *not* in use, Wireshark will be able to decrypt the\ninitial client connection, but it won't be able to see anything inside\nthe foreign server connection. When credential forwarding is in use,\nWireshark will be able to decrypt both connections.\n\nThanks,\n--Jacob\n\n[1] https://www.rfc-editor.org/rfc/rfc4120",
"msg_date": "Thu, 15 Sep 2022 17:06:05 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Kerberos delegation support in libpq and postgres_fdw"
},
{
"msg_contents": "Greetings,\n\n* Jacob Champion (jchampion@timescale.com) wrote:\n> On Thu, Jul 7, 2022 at 4:24 PM Jacob Champion <jchampion@timescale.com> wrote:\n> > So my question is this: does substituting my credentials for the admin's\n> > credentials let me weaken or break the transport encryption on the\n> > backend connection, and grab the password that I'm not supposed to have\n> > access to as a front-end client?\n> \n> With some further research: yes, it does.\n> \n> If a DBA is using a GSS encrypted tunnel to communicate to a foreign\n> server, accepting delegation by default means that clients will be\n> able to break that backend encryption at will, because the keys in use\n> will be under their control.\n\nThis is coming across as if it's a surprise of some kind when it\ncertainly isn't.. If the delegated credentials are being used to\nauthenticate and establish the connection from that backend to another\nsystem then, yes, naturally that means that the keys provided are coming\nfrom the client and the client knows them. The idea of arranging to\nhave an admin's credentials used to authenticate to another system where\nthe backend is actually controlled by a non-admin user is, in fact, the\nissue in what is being outlined above as that's clearly a situation\nwhere the user's connection is being elevated to an admin level. That's\nalso something that we try to avoid having happen because it's not\nreally a good idea, which is why we require a password today for the\nconnection to be established (postgres_fdw/connection.c:\n\nNon-superuser cannot connect if the server does not request a password.\n\n).\n\nConsider that, in general, the user could also simply directly connect\nto the other system themselves instead of having a PG backend make that\nconnection for them- the point in doing it from PG would be to avoid\nhaving to pass all the data back through the client's system.\n\nConsider SSH instead of PG. 
What you're pointing out, accurately, is\nthat if an admin were to install their keys into a user's .ssh directory\nunencrypted and then the user logged into the system, they'd then be\nable to SSH to another system with the admin's credentials and then\nthey'd need the admin's credentials to decrypt the traffic, but that if,\ninstead, the user brings their own credentials then they could\npotentially decrypt the connection between the systems. Is that really\nthe issue here? Doesn't seem like that's where the concern should be in\nthis scenario.\n\n> > Maybe there's some ephemeral exchange going on that makes it too hard to\n> > attack in practice, or some other mitigations.\n> \n> There is no forward secrecy, ephemeral exchange, etc. to mitigate this [1]:\n> \n> The Kerberos protocol in its basic form does not provide perfect\n> forward secrecy for communications. If traffic has been recorded by\n> an eavesdropper, then messages encrypted using the KRB_PRIV message,\n> or messages encrypted using application-specific encryption under\n> keys exchanged using Kerberos can be decrypted if the user's,\n> application server's, or KDC's key is subsequently discovered.\n> \n> So the client can decrypt backend communications that make use of its\n> delegated key material. (This also means that gssencmode is a lot\n> weaker than I expected.)\n\nThe backend wouldn't be able to establish the connection in the first\nplace without those delegated credentials.\n\n> > I'm trying to avoid writing Wireshark dissector\n> > code, but maybe that'd be useful either way...\n> \n> I did end up filling out the existing PGSQL dissector so that it could\n> decrypt GSSAPI exchanges (with the use of a keytab, that is). If you'd\n> like to give it a try, the patch, based on Wireshark 3.7.1, is\n> attached. Note the GPLv2 license. 
It isn't correct code yet, because I\n> didn't understand how packet reassembly worked in Wireshark when I\n> started writing the code, so really large GSSAPI messages that are\n> split across multiple TCP packets will confuse the dissector. But it's\n> enough to prove the concept.\n> \n> To see this in action, set up an FDW connection that uses gssencmode\n> (so the server in the middle will need its own Kerberos credentials).\n\nThe server in the middle should *not* be using its own Kerberos\ncredentials to establish the connection to the other system- that's\nelevating the credentials used for that connection and is something that\nshould be prevented for non-superusers already (see above). We do allow\nthat when a superuser is involved because they are considered to\nessentially have OS-level privileges and therefore could see those\ncredentials anyway, but that's not the case for non-superusers.\n\nThanks,\n\nStephen",
"msg_date": "Mon, 19 Sep 2022 13:05:17 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Kerberos delegation support in libpq and postgres_fdw"
},
{
"msg_contents": "On 9/19/22 10:05, Stephen Frost wrote:\n> This is coming across as if it's a surprise of some kind when it\n> certainly isn't.. If the delegated credentials are being used to\n> authenticate and establish the connection from that backend to another\n> system then, yes, naturally that means that the keys provided are coming\n> from the client and the client knows them.\n\nI think it may be surprising to end users that credential delegation\nlets them trivially break transport encryption. Like I said before, it\nwas a surprise to me, because the cryptosystems I'm familiar with\nprevent that.\n\nIf it wasn't surprising to you, I could really have used a heads up back\nwhen I asked you about it.\n\n> The idea of arranging to\n> have an admin's credentials used to authenticate to another system where\n> the backend is actually controlled by a non-admin user is, in fact, the\n> issue in what is being outlined above as that's clearly a situation\n> where the user's connection is being elevated to an admin level.\n\nYes, controlled elevation is the goal in the scenario I'm describing.\n\n> That's\n> also something that we try to avoid having happen because it's not\n> really a good idea, which is why we require a password today for the\n> connection to be established (postgres_fdw/connection.c:\n> \n> Non-superuser cannot connect if the server does not request a password.\n\nA password is being used in this scenario. The password is the secret\nbeing stolen.\n\nThe rest of your email describes a scenario different from what I'm\nattacking here. Here's my sample HBA line for the backend again:\n\n hostgssenc all all ... password\n\nI'm using password authentication with a Kerberos-encrypted channel.\nIt's similar to protecting password authentication with TLS and a client\ncert:\n\n hostssl all all ... 
password clientcert=verify-*\n\n> Consider that, in general, the user could also simply directly connect\n> to the other system themselves\n\nNo, because they don't have the password. They don't have USAGE on the\nforeign table, so they can't see the password in the USER MAPPING.\n\nWith the new default introduced in this patch, they can now steal the\npassword by delegating their credentials and cracking the transport\nencryption. This bypasses the protections that are documented for the\npg_user_mappings view.\n\n> Consider SSH instead of PG. What you're pointing out, accurately, is\n> that if an admin were to install their keys into a user's .ssh directory\n> unencrypted and then the user logged into the system, they'd then be\n> able to SSH to another system with the admin's credentials and then\n> they'd need the admin's credentials to decrypt the traffic, but that if,\n> instead, the user brings their own credentials then they could\n> potentially decrypt the connection between the systems. Is that really\n> the issue here?\n\nNo, it's not the issue here. This is more like setting up a restricted\nshell that provides limited access to a resource on another machine\n(analogous to the foreign table). The user SSHs into this restricted\nshell, and then invokes an admin-blessed command whose implementation\nuses some credentials (which they cannot read, analogous to the USER\nMAPPING) over an encrypted channel to access the backend resource. In\nthis situation an admin would want to ensure that the encrypted tunnel\ncouldn't be weakened by the client, so that they can't learn how to\nbypass the blessed command and connect to the backend directly.\n\nUnlike SSH, we've never supported credential delegation, and now we're\nintroducing it. So my claim is, it's possible for someone who was\npreviously in a secure situation to be broken by the new default.\n\n>> So the client can decrypt backend communications that make use of its\n>> delegated key material. 
(This also means that gssencmode is a lot\n>> weaker than I expected.)\n> \n> The backend wouldn't be able to establish the connection in the first\n> place without those delegated credentials.\n\nThat's not true in the situation I'm describing; hopefully my comments\nabove help clarify.\n\n> The server in the middle should *not* be using its own Kerberos\n> credentials to establish the connection to the other system- that's\n> elevating the credentials used for that connection and is something that\n> should be prevented for non-superusers already (see above).\n\nIt's not prevented, because a password is being used. In my tests I'm\nconnecting as an unprivileged user.\n\nYou're claiming that the middlebox shouldn't be doing this. If this new\ndefault behavior were the historical behavior, then I would have agreed.\nBut the cat's already out of the bag on that, right? It's safe today.\nAnd if it's not safe today for some other reason, please share why, and\nmaybe I can work on a patch to try to prevent people from doing it.\n\nThanks,\n--Jacob\n\n\n",
"msg_date": "Mon, 19 Sep 2022 14:05:39 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Kerberos delegation support in libpq and postgres_fdw"
},
{
"msg_contents": "On Mon, Sep 19, 2022 at 02:05:39PM -0700, Jacob Champion wrote:\n> It's not prevented, because a password is being used. In my tests I'm\n> connecting as an unprivileged user.\n> \n> You're claiming that the middlebox shouldn't be doing this. If this new\n> default behavior were the historical behavior, then I would have agreed.\n> But the cat's already out of the bag on that, right? It's safe today.\n> And if it's not safe today for some other reason, please share why, and\n> maybe I can work on a patch to try to prevent people from doing it.\n\nPlease note that this has been marked as returned with feedback in the\ncurrent CF, as this has remained unanswered for a bit more than three\nweeks.\n--\nMichael",
"msg_date": "Wed, 12 Oct 2022 14:32:31 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Kerberos delegation support in libpq and postgres_fdw"
},
{
"msg_contents": "Greetings,\n\n* Michael Paquier (michael@paquier.xyz) wrote:\n> On Mon, Sep 19, 2022 at 02:05:39PM -0700, Jacob Champion wrote:\n> > It's not prevented, because a password is being used. In my tests I'm\n> > connecting as an unprivileged user.\n> > \n> > You're claiming that the middlebox shouldn't be doing this. If this new\n> > default behavior were the historical behavior, then I would have agreed.\n> > But the cat's already out of the bag on that, right? It's safe today.\n> > And if it's not safe today for some other reason, please share why, and\n> > maybe I can work on a patch to try to prevent people from doing it.\n> \n> Please note that this has been marked as returned with feedback in the\n> current CF, as this has remained unanswered for a bit more than three\n> weeks.\n\nThere's some ongoing discussion about how to handle outbound connections\nfrom the server ending up picking up credentials from the server's\nenvironment (that really shouldn't be allowed unless specifically asked\nfor..), that's ultimately an independent change from what this patch is\ndoing.\n\nHere's an updated version which does address Robert's concerns around\nhaving this disabled by default and having options on both the server\nand client side saying if it is to be enabled or not. Also added to\npg_stat_gssapi a field that indicates if credentials were proxied or not\nand made some other improvements and added additional regression tests\nto test out various combinations.\n\nThanks,\n\nStephen",
"msg_date": "Fri, 17 Feb 2023 04:27:28 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Kerberos delegation support in libpq and postgres_fdw"
},
{
"msg_contents": "Greetings,\n\n* Stephen Frost (sfrost@snowman.net) wrote:\n> * Michael Paquier (michael@paquier.xyz) wrote:\n> > On Mon, Sep 19, 2022 at 02:05:39PM -0700, Jacob Champion wrote:\n> > > It's not prevented, because a password is being used. In my tests I'm\n> > > connecting as an unprivileged user.\n> > > \n> > > You're claiming that the middlebox shouldn't be doing this. If this new\n> > > default behavior were the historical behavior, then I would have agreed.\n> > > But the cat's already out of the bag on that, right? It's safe today.\n> > > And if it's not safe today for some other reason, please share why, and\n> > > maybe I can work on a patch to try to prevent people from doing it.\n> > \n> > Please note that this has been marked as returned with feedback in the\n> > current CF, as this has remained unanswered for a bit more than three\n> > weeks.\n> \n> There's some ongoing discussion about how to handle outbound connections\n> from the server ending up picking up credentials from the server's\n> environment (that really shouldn't be allowed unless specifically asked\n> for..), that's ultimately an independent change from what this patch is\n> doing.\n\nThat got committed, which is great, though it didn't go quite as far as\nI had been hoping regarding dealing with outbound connections from the\nserver- perhaps we should make it clear at least for postgres_fdw that\nit might be good for administrators to explicitly say which options are\nallowed for a given user-map when it comes to how authentication is\ndone to the remote server? Seems like mostly a documentation\nimprovement, I think? Or should we have some special handling around\nthat option for postgres_fdw/dblink?\n\n> Here's an updated version which does address Robert's concerns around\n> having this disabled by default and having options on both the server\n> and client side saying if it is to be enabled or not. 
Also added to\n> pg_stat_gssapi a field that indicates if credentials were proxied or not\n> and made some other improvements and added additional regression tests\n> to test out various combinations.\n\nI've done some self-review and also reworked how the security checks are\ndone to be sure that we're not ending up pulling credentials from the\nenvironment (with added regression tests to check for it too). If\nthere's remaining concerns around that, please let me know. Of course,\nother review would be great also. Presently though:\n\n- Rebased up to today\n- Requires explicitly being enabled on client and server\n- Authentication to a remote server via dblink or postgres_fdw with\n GSSAPI requires that credentials were proxied by the client to the\n server, except if the superuser set 'password_required' to false on\n the postgres_fdw (which has lots of caveats around it in the\n documentation because it's inherently un-safe to do).\n- Includes updated documentation\n- Quite a few additional regression tests to check for unrelated\n credentials coming from the environment in either cases where\n credentials have been proxied and in cases where they haven't.\n- Only changes to existing regression tests for dblink/postgres_fdw are\n in the error message wording updates.\n\nThanks!\n\nStephen",
"msg_date": "Mon, 20 Mar 2023 09:30:09 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Kerberos delegation support in libpq and postgres_fdw"
},
{
"msg_contents": "The CFBot says there's a function be_gssapi_get_proxy() which is\nundefined. Presumably this is a missing #ifdef or a definition that\nshould be outside an #ifdef.\n\n[14:05:21.532] dblink.c: In function ‘dblink_security_check’:\n[14:05:21.532] dblink.c:2606:38: error: implicit declaration of\nfunction ‘be_gssapi_get_proxy’ [-Werror=implicit-function-declaration]\n[14:05:21.532] 2606 | if (PQconnectionUsedGSSAPI(conn) &&\nbe_gssapi_get_proxy(MyProcPort))\n[14:05:21.532] | ^~~~~~~~~~~~~~~~~~~\n[14:05:21.532] cc1: all warnings being treated as errors\n\n[13:56:28.789] dblink.c.obj : error LNK2019: unresolved external\nsymbol be_gssapi_get_proxy referenced in function dblink_connstr_check\n[13:56:29.040] contrib\\dblink\\dblink.dll : fatal error LNK1120: 1\nunresolved externals\n\nOn Mon, 20 Mar 2023 at 09:30, Stephen Frost <sfrost@snowman.net> wrote:\n>\n> Greetings,\n>\n> * Stephen Frost (sfrost@snowman.net) wrote:\n> > * Michael Paquier (michael@paquier.xyz) wrote:\n> > > On Mon, Sep 19, 2022 at 02:05:39PM -0700, Jacob Champion wrote:\n> > > > It's not prevented, because a password is being used. In my tests I'm\n> > > > connecting as an unprivileged user.\n> > > >\n> > > > You're claiming that the middlebox shouldn't be doing this. If this new\n> > > > default behavior were the historical behavior, then I would have agreed.\n> > > > But the cat's already out of the bag on that, right? 
It's safe today.\n> > > > And if it's not safe today for some other reason, please share why, and\n> > > > maybe I can work on a patch to try to prevent people from doing it.\n> > >\n> > > Please note that this has been marked as returned with feedback in the\n> > > current CF, as this has remained unanswered for a bit more than three\n> > > weeks.\n> >\n> > There's some ongoing discussion about how to handle outbound connections\n> > from the server ending up picking up credentials from the server's\n> > environment (that really shouldn't be allowed unless specifically asked\n> > for..), that's ultimately an independent change from what this patch is\n> > doing.\n>\n> That got committed, which is great, though it didn't go quite as far as\n> I had been hoping regarding dealing with outbound connections from the\n> server- perhaps we should make it clear at least for postgres_fdw that\n> it might be good for administrators to explicitly say which options are\n> allowed for a given user-map when it comes to how authentication is\n> done to the remote server? Seems like mostly a documentation\n> improvement, I think? Or should we have some special handling around\n> that option for postgres_fdw/dblink?\n>\n> > Here's an updated version which does address Robert's concerns around\n> > having this disabled by default and having options on both the server\n> > and client side saying if it is to be enabled or not. Also added to\n> > pg_stat_gssapi a field that indicates if credentials were proxied or not\n> > and made some other improvements and added additional regression tests\n> > to test out various combinations.\n>\n> I've done some self-review and also reworked how the security checks are\n> done to be sure that we're not ending up pulling credentials from the\n> environment (with added regression tests to check for it too). If\n> there's remaining concerns around that, please let me know. Of course,\n> other review would be great also. 
Presently though:\n>\n> - Rebased up to today\n> - Requires explicitly being enabled on client and server\n> - Authentication to a remote server via dblink or postgres_fdw with\n> GSSAPI requires that credentials were proxied by the client to the\n> server, except if the superuser set 'password_required' to false on\n> the postgres_fdw (which has lots of caveats around it in the\n> documentation because it's inherently un-safe to do).\n> - Includes updated documentation\n> - Quite a few additional regression tests to check for unrelated\n> credentials coming from the environment in either cases where\n> credentials have been proxied and in cases where they haven't.\n> - Only changes to existing regression tests for dblink/postgres_fdw are\n> in the error message wording updates.\n>\n> Thanks!\n>\n> Stephen\n\n\n\n-- \ngreg\n\n\n",
"msg_date": "Tue, 21 Mar 2023 23:38:51 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: Kerberos delegation support in libpq and postgres_fdw"
},
{
"msg_contents": "Greetings,\n\n* Greg Stark (stark@mit.edu) wrote:\n> The CFBot says there's a function be_gssapi_get_proxy() which is\n> undefined. Presumably this is a missing #ifdef or a definition that\n> should be outside an #ifdef.\n\nYup, just a couple of missing #ifdef's.\n\nUpdated and rebased patch attached.\n\nThanks!\n\nStephen",
"msg_date": "Sun, 26 Mar 2023 07:46:32 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Kerberos delegation support in libpq and postgres_fdw"
},
{
"msg_contents": "Greetings,\n\n* Stephen Frost (sfrost@snowman.net) wrote:\n> * Greg Stark (stark@mit.edu) wrote:\n> > The CFBot says there's a function be_gssapi_get_proxy() which is\n> > undefined. Presumably this is a missing #ifdef or a definition that\n> > should be outside an #ifdef.\n> \n> Yup, just a couple of missing #ifdef's.\n> \n> Updated and rebased patch attached.\n\n... and a few more. Apparently hacking on a plane without enough sleep\nleads to changing ... and unchanging configure flags before testing.\n\nThanks,\n\nStephen",
"msg_date": "Tue, 28 Mar 2023 10:30:28 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Kerberos delegation support in libpq and postgres_fdw"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: not tested\n\nDid a code review pass here; here is some feedback.\r\n\r\n\r\n+\t/* If password was used to connect, make sure it was one provided */\r\n+\tif (PQconnectionUsedPassword(conn) && dblink_connstr_has_pw(connstr))\r\n+\t\treturn;\r\n\r\n Do we need to consider whether these passwords are the same? Is there a different vector where a different password could be acquired from a different source (PGPASSWORD, say) while both of these criteria are true? Seems like it probably doesn't matter that much considering we only checked Password alone in previous version of this code.\r\n\r\n---\r\n\r\nLooks like the pg_gssinfo struct hides the `proxy_creds` def behind:\r\n\r\n #if defined(ENABLE_GSS) | defined(ENABLE_SSPI)\r\n typedef struct\r\n {\r\n gss_buffer_desc outbuf;\t\t/* GSSAPI output token buffer */\r\n #ifdef ENABLE_GSS\r\n ...\r\n bool\t\tproxy_creds;\t/* GSSAPI Delegated/proxy credentials */\r\n #endif\r\n } pg_gssinfo;\r\n #endif\r\n\r\nWhich means that the later check in `be_gssapi_get_proxy()` we have:\r\n\r\n /*\r\n * Return if GSSAPI delegated/proxy credentials were included on this\r\n * connection.\r\n */\r\n bool\r\n be_gssapi_get_proxy(Port *port)\r\n {\r\n if (!port || !port->gss)\r\n return NULL;\r\n\r\n return port->gss->proxy_creds;\r\n }\r\n\r\nSo in theory it'd be possible to have SSPI enabled but GSS disabled and we'd fail to compile in that case. (It may be that this routine is never *actually* called in that case, just noting compile-time considerations.) I'm not seeing guards in the actual PQ* routines, but don't think I've done an exhaustive search.\r\n\r\n---\r\n\r\ngss_accept_deleg\r\n\r\n\r\n+ <para>\r\n+ Forward (delegate) GSS credentials to the server. 
The default is\r\n+ <literal>disable</literal> which means credentials will not be forwarded\r\n+ to the server. Set this to <literal>enable</literal> to have\r\n+ credentials forwarded when possible.\r\n\r\nWhen is this not possible? Server policy? External factors?\r\n\r\n---\r\n\r\n </para>\r\n <para>\r\n Only superusers may connect to foreign servers without password\r\n- authentication, so always specify the <literal>password</literal> option\r\n- for user mappings belonging to non-superusers.\r\n+ authentication or using gssapi proxied credentials, so specify the\r\n+ <literal>password</literal> option for user mappings belonging to\r\n+ non-superusers who are not able to proxy GSSAPI credentials.\r\n </para>\r\n <para>\r\n\r\ns/gssapi/GSSAPI/; this is kind of confusing, as this makes it sound like only superuser may use GSSAPI proxied credentials, which I disbelieve to be true. Additionally, it sounds like you're wanting to explicitly maintain a denylist for users to not be allowed proxying; is that correct?\r\n\r\n---\r\n\r\nlibpq/auth.c:\r\n\r\n\t\tif (proxy != NULL)\r\n\t\t{\r\n\t\t\tpg_store_proxy_credential(proxy);\r\n\t\t\tport->gss->proxy_creds = true;\r\n\t\t}\r\n\r\nPer GSS docs, seems like we should be comparing to GSS_C_NO_CREDENTIAL and validating that the gflags has the `deleg_flag` bit set before considering whether there are valid credentials; in practice this might be the same effect (haven't looked at what that symbol actually resolves to, but NULL would be sensible).\r\n\r\nAre there other cases we might need to consider here, like valid credentials, but they are expired? 
(GSS_S_CREDENTIALS_EXPIRED)\r\n\r\n---\r\n\r\n+\t/*\r\n+\t * Set KRB5CCNAME for this backend, so that later calls to gss_acquire_cred\r\n+\t * will find the proxied credentials we stored.\r\n+\t */\r\n\r\nSo I'm not seeing this in other use in the code; I assume this is just used by the krb5 libs?\r\n\r\nSimilar q's for the other places the pg_gss_accept_deleg are used.\r\n\r\n---\r\n\r\n+int\r\n+PQconnectionUsedGSSAPI(const PGconn *conn)\r\n+{\r\n+\tif (!conn)\r\n+\t\treturn false;\r\n+\tif (conn->gssapi_used)\r\n+\t\treturn true;\r\n+\telse\r\n+\t\treturn false;\r\n+}\r\n\r\nMicro-gripe: this routine seems like could be simpler, though the compiler probably has the same thing to say for either, so maybe code clarity is better as written:\r\n\r\n int\r\n PQconnectionUsedGSSAPI(const PGconn *conn)\r\n {\r\n return conn && conn->gssapi_used;\r\n }\r\n\r\n---\r\n\r\nAnything required for adding meson support? I notice src/test/kerberos has Makefile updated, but no meson.build files are changed.\r\n\r\n---\r\n\r\nTwo tests in src/test/kerberos/t/001_auth.pl at :535 and :545 have the same test description:\r\n\r\n+\t'succeeds with GSS-encrypted access required and hostgssenc hba and credentials not forwarded',\r\n\r\nSince the first test has only `gssencmode` defined (so implicit `gssdeleg` value) and the second has `gssdeleg=disable` I'd suggest that the test on :545 should have its description updated to add the word \"explicitly\":\r\n\r\n'succeeds with GSS-encrypted access required and hostgssenc hba and credentials explicitly not forwarded',\r\n\r\n---\r\n\r\nIn the dblink test, this seems like debugging junk:\r\n\r\n+print (\"$psql_out\");\r\n+print (\"$psql_stderr\");\r\n\r\nWhacking those lines and reviewing the surrounding code block: so this is testing that dblink won't use `.pgpass`; so is this a behavior change, and dblink could be previously used w/postgres user's .pgpass file? 
I assume since this patch is forbidding this, we've decided that that is a bad idea--was this updated in the docs to note that this is now forbidden, or is this something that should only apply in some cases (i.e., this is now config-specific)? If config-specific, should we have a test in the non-forwarded version of these tests that exercises that behavior?\r\n\r\n+$psql_rc = $node->psql(\r\n+ 'postgres',\r\n+\t\"SELECT * FROM dblink('user=test2 dbname=$dbname port=$port passfile=$pgpass','select 1') as t1(c1 int);\",\r\n+\tconnstr => \"user=test1 host=$host hostaddr=$hostaddr gssencmode=require gssdeleg=disable\",\r\n+\tstdout => \\$psql_out,\r\n \r\n+is($psql_rc,'3','dblink does not work without proxied credentials and with passfile');\r\n+like($psql_stderr, qr/password or gssapi proxied credentials required/,'dblink does not work without proxied credentials and with passfile');\r\n\r\nSame Q's apply to the postgres_fdw version of these tests.\r\n\r\n---\r\n\r\n:659 and :667, the test description says non-encrypted and the gssencmode=prefer implies encrypted; seems like those descriptions might need to be updated, since it seems like what it's really testing is dblink/postgres_fdw against gssdeleg=enabled. Also looks like later tests are explicitly testing w/gssencmode=require so making me think this is mislabeled further.\r\n\r\n---\r\n\r\nThis is what I noticed on an initial pass-through.\r\n\r\nBest,\r\n\r\nDavid\n\nThe new status of this patch is: Waiting on Author\n",
"msg_date": "Mon, 03 Apr 2023 22:55:30 +0000",
"msg_from": "David Christensen <david+pg@pgguru.net>",
"msg_from_op": false,
"msg_subject": "Re: Kerberos delegation support in libpq and postgres_fdw"
},
{
"msg_contents": "Greetings,\n\n* David Christensen (david+pg@pgguru.net) wrote:\n> Did a code review pass here; here is some feedback.\n\nThanks!\n\n> +\t/* If password was used to connect, make sure it was one provided */\n> +\tif (PQconnectionUsedPassword(conn) && dblink_connstr_has_pw(connstr))\n> +\t\treturn;\n> \n> Do we need to consider whether these passwords are the same? Is there a different vector where a different password could be acquired from a different source (PGPASSWORD, say) while both of these criteria are true? Seems like it probably doesn't matter that much considering we only checked Password alone in previous version of this code.\n\nNote that this patch isn't really changing how these checks are being\ndone but more moving them around and allowing a GSSAPI-based approach\nwith credential delegation to also be allowed.\n\nThat said, as noted in the comments above dblink_connstr_check():\n\n * For non-superusers, insist that the connstr specify a password, except\n * if GSSAPI credentials have been proxied (and we check that they are used\n * for the connection in dblink_security_check later). This prevents a\n * password or GSSAPI credentials from being picked up from .pgpass, a\n * service file, the environment, etc. 
We don't want the postgres user's\n * passwords or Kerberos credentials to be accessible to non-superusers.\n\nThe point of these checks is, indeed, to ensure that environmental\nvalues such as a .pgpass or variable don't end up getting picked up and\nused (or, if they do, we realize it post-connection and then throw away\nthe connection).\n\nlibpq does explicitly prefer to use the password passed in as part of\nthe connection string and won't attempt to look up passwords in a\n.pgpass file or similar if a password has been included in the\nconnection string.\n\n> Looks like the pg_gssinfo struct hides the `proxy_creds` def behind:\n> \n> #if defined(ENABLE_GSS) | defined(ENABLE_SSPI)\n> typedef struct\n> {\n> gss_buffer_desc outbuf;\t\t/* GSSAPI output token buffer */\n> #ifdef ENABLE_GSS\n> ...\n> bool\t\tproxy_creds;\t/* GSSAPI Delegated/proxy credentials */\n> #endif\n> } pg_gssinfo;\n> #endif\n\n... right, proxy_creds only exists (today anyway) if ENABLE_GSS is set.\n\n> Which means that the later check in `be_gssapi_get_proxy()` we have:\n> \n> /*\n> * Return if GSSAPI delegated/proxy credentials were included on this\n> * connection.\n> */\n> bool\n> be_gssapi_get_proxy(Port *port)\n> {\n> if (!port || !port->gss)\n> return NULL;\n> \n> return port->gss->proxy_creds;\n> }\n\nbut we don't build be-secure-gssapi.c, where this function is added,\nunless --with-gssapi is included, from src/backend/libpq/Makefile:\n\nifeq ($(with_gssapi),yes)\nOBJS += be-gssapi-common.o be-secure-gssapi.o\nendif\n\nFurther, src/include/libpq/libpq-be.h has a matching #ifdef ENABLE_GSS\nfor the function declarations:\n\n#ifdef ENABLE_GSS\n/*\n * Return information about the GSSAPI authenticated connection\n */\nextern bool be_gssapi_get_auth(Port *port);\nextern bool be_gssapi_get_enc(Port *port);\nextern const char *be_gssapi_get_princ(Port *port);\nextern bool be_gssapi_get_proxy(Port *port);\n\n> So in theory it'd be possible to have SSPI enabled but GSS disabled and we'd fail 
to compile in that case. (It may be that this routine is never *actually* called in that case, just noting compile-time considerations.) I'm not seeing guards in the actual PQ* routines, but don't think I've done an exhaustive search.\n\nFairly confident the analysis here is wrong, further, the cfbot seems to\nagree that there isn't a compile failure here:\n\nhttps://cirrus-ci.com/task/6589717672624128\n\n[20:19:15.985] gss : NO\n\n(we always build with SSPI on Windows, per\nsrc/include/port/win32_port.h).\n\n> gss_accept_deleg\n> \n> \n> + <para>\n> + Forward (delegate) GSS credentials to the server. The default is\n> + <literal>disable</literal> which means credentials will not be forwarded\n> + to the server. Set this to <literal>enable</literal> to have\n> + credentials forwarded when possible.\n> \n> When is this not possible? Server policy? External factors?\n\nThe Kerberos credentials have to be forwardable for them to be allowed\nto be forwarded and the server has to be configured to accept them.\n\n> </para>\n> <para>\n> Only superusers may connect to foreign servers without password\n> - authentication, so always specify the <literal>password</literal> option\n> - for user mappings belonging to non-superusers.\n> + authentication or using gssapi proxied credentials, so specify the\n> + <literal>password</literal> option for user mappings belonging to\n> + non-superusers who are not able to proxy GSSAPI credentials.\n> </para>\n> <para>\n> \n> s/gssapi/GSSAPI/; this is kind of confusing, as this makes it sound like only superuser may use GSSAPI proxied credentials, which I disbelieve to be true. Additionally, it sounds like you're wanting to explicitly maintain a denylist for users to not be allowed proxying; is that correct?\n\nUpdated to GSSAPI and reworded in the updated patch (attached).\nCertainly open to suggestions on how to improve the documentation here.\nThere is no 'denylist' for users when it comes to GSSAPI proxied\ncredentials. 
If there's a use-case for that then it could be added in\nthe future.\n\n> ---\n> \n> libpq/auth.c:\n> \n> \t\tif (proxy != NULL)\n> \t\t{\n> \t\t\tpg_store_proxy_credential(proxy);\n> \t\t\tport->gss->proxy_creds = true;\n> \t\t}\n> \n> Per GSS docs, seems like we should be comparing to GSS_C_NO_CREDENTIAL and validating that the gflags has the `deleg_flag` bit set before considering whether there are valid credentials; in practice this might be the same effect (haven't looked at what that symbol actually resolves to, but NULL would be sensible).\n\nGSS_C_NO_CREDENTIAL is indeed NULL, but updated to that anyway to be a\nbit cleaner and also added an explicit check that GSS_C_DELEG_FLAG was\nset in gflags.\n\n> Are there other cases we might need to consider here, like valid credentials, but they are expired? (GSS_S_CREDENTIALS_EXPIRED)\n\nShort answer is no, I don't believe we need to. We shouldn't actually\nget any expired credentials but even if we did, worst is that we'd end\nup storing them and they wouldn't be able to be used because they're\nexpired.\n\n> ---\n> \n> +\t/*\n> +\t * Set KRB5CCNAME for this backend, so that later calls to gss_acquire_cred\n> +\t * will find the proxied credentials we stored.\n> +\t */\n> \n> So I'm not seeing this in other use in the code; I assume this is just used by the krb5 libs?\n\nNot sure I'm following. 
gss_acquire_cred() is called in\nsrc/interfaces/libpq/fe-gssapi-common.c.\n\n> Similar q's for the other places the pg_gss_accept_deleg are used.\n\npg_gss_accept_deleg is checked in the two paths where we could have\ncredentials delegated to us- either through the encrypted-GSSAPI\nconnection path in libpq/be-secure-gssapi.c, or the\nnot-using-GSSAPI-encryption path in libpq/auth.c.\n\n> ---\n> \n> +int\n> +PQconnectionUsedGSSAPI(const PGconn *conn)\n> +{\n> +\tif (!conn)\n> +\t\treturn false;\n> +\tif (conn->gssapi_used)\n> +\t\treturn true;\n> +\telse\n> +\t\treturn false;\n> +}\n> \n> Micro-gripe: this routine seems like could be simpler, though the compiler probably has the same thing to say for either, so maybe code clarity is better as written:\n> \n> int\n> PQconnectionUsedGSSAPI(const PGconn *conn)\n> {\n> return conn && conn->gssapi_used;\n> }\n\nI tend to disagree- explicitly returning true/false seems a bit clearer\nto me and is also in-line with what other functions in\nlibpq/fe-connect.c are doing. Having this function be different from,\neg, PQconnectionUsedPassword, would probably end up having more\nquestions about why they're different. Either way, I'd say we change\nboth or neither and that doesn't really need to be part of this patch.\n\n> ---\n> \n> Anything required for adding meson support? I notice src/test/kerberos has Makefile updated, but no meson.build files are changed.\n\nShort answer is- I don't think so (happy to be told I'm wrong though, if\nsomeone wants to tell me what's wrong). 
The other src/test modules that\nhave EXTRA_INSTALL lines don't have anything for those in the\nmeson.build, so I'm guessing the assumption is that everything is built\nwhen using meson.\n\n> ---\n> \n> Two tests in src/test/kerberos/t/001_auth.pl at :535 and :545 have the same test description:\n> \n> +\t'succeeds with GSS-encrypted access required and hostgssenc hba and credentials not forwarded',\n> \n> Since the first test has only `gssencmode` defined (so implicit `gssdeleg` value) and the second has `gssdeleg=disable` I'd suggest that the test on :545 should have its description updated to add the word \"explicitly\":\n> \n> 'succeeds with GSS-encrypted access required and hostgssenc hba and credentials explicitly not forwarded',\n\nSure, updated.\n\n> ---\n> \n> In the dblink test, this seems like debugging junk:\n> \n> +print (\"$psql_out\");\n> +print (\"$psql_stderr\");\n\nAh, yeah, removed.\n\n> Whacking those lines and reviewing the surrounding code block: so this is testing that dblink won't use `.pgpass`; so is this a behavior change, and dblink could be previously used w/postgres user's .pgpass file? I assume since this patch is forbidding this, we've decided that that is a bad idea--was this updated in the docs to note that this is now forbidden, or is this something that should only apply in some cases (i.e., this is now config-specific)? If config-specific, should we have a test in the non-forwarded version of these tests that exercises that behavior?\n\nYes, that's what is being tested, but non-superuser dblink already won't\nuse a .pgpass file if it exists, so it's not a behavior change. 
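
[Editor's note: the "connstr must specify its own password" rule discussed above can be sketched in miniature. This is a toy illustration only, not the actual dblink code — the in-tree check (dblink_connstr_has_pw) parses the string with libpq's PQconninfoParse rather than scanning text, and the function name here is made up.]

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/*
 * Simplified sketch: does a space-separated keyword=value connection
 * string carry an explicit password= option?  If it does not, a
 * non-superuser connection would have to rely on environmental sources
 * (.pgpass, PGPASSWORD, a service file), which is what the dblink
 * security check is designed to forbid.
 */
static bool
connstr_has_password(const char *connstr)
{
	const char *p = connstr;

	while (*p)
	{
		while (*p == ' ')
			p++;				/* skip option separators */
		if (strncmp(p, "password=", 9) == 0)
			return true;
		while (*p && *p != ' ')
			p++;				/* advance to the next option */
	}
	return false;
}
```

(A real implementation must also handle quoted values and empty passwords, which the libpq parser does for free.)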
I added\nexplicit tests here though to make sure that even a dblink connection\ncreated without a password being used in the connection string (because\nGSSAPI credentials were proxied) won't end up using the .pgpass file.\n\nAdditional tests could perhaps be added to dblink itself (don't know\nthat we really need to hide those tests under src/test/kerberos) to make\nsure that it's not going to use the .pgpass file; I'm not sure why that\nwasn't done previously (it was done for postgres_fdw though and the\napproach in each is basically the same...).\n\n> +$psql_rc = $node->psql(\n> + 'postgres',\n> +\t\"SELECT * FROM dblink('user=test2 dbname=$dbname port=$port passfile=$pgpass','select 1') as t1(c1 int);\",\n> +\tconnstr => \"user=test1 host=$host hostaddr=$hostaddr gssencmode=require gssdeleg=disable\",\n> +\tstdout => \\$psql_out,\n> \n> +is($psql_rc,'3','dblink does not work without proxied credentials and with passfile');\n> +like($psql_stderr, qr/password or gssapi proxied credentials required/,'dblink does not work without proxied credentials and with passfile');\n> \n> Same Q's apply to the postgres_fdw version of these tests.\n\nRegarding postgres_fdw, there are already tests in contrib/postgres_fdw\nto make sure that when password_required=true that .pgpass and such\ndon't end up getting used, but when password_required=false (which can\nonly be set by a superuser) then it's allowed to use environmental\nauthentication options such as a .pgpass file.\n\n> ---\n> \n> :659 and :667, the test description says non-encrypted and the gssencmode=prefer implies encrypted; seems like those descriptions might need to be updated, since it seems like what it's really testing is dblink/postgres_fdw against gssdeleg=enabled.\n\nThe server is configured at this point to not accept encrypted\nconnections (the pg_hba.conf has only:\n\nlocal all test2 scram-sha-256\nhostnogssenc all all $hostaddr/32 gss map=mymap\n\nin it).\n\nUpdated the test descriptions.\n\n> Also looks 
like later tests are explicitly testing w/gssencmode=require so making me think this is mislabeled further.\n\nThose are after the pg_hba.conf has been adjusted again to allow\nencrypted connections.\n\n> This is what I noticed on an initial pass-through.\n> The new status of this patch is: Waiting on Author\n\nChanged back to Needs Review.\n\nThanks again!\n\nStephen",
"msg_date": "Wed, 5 Apr 2023 16:30:45 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Kerberos delegation support in libpq and postgres_fdw"
},
{
"msg_contents": "On Wed, Apr 5, 2023 at 3:30 PM Stephen Frost <sfrost@snowman.net> wrote:\n\n> Greetings,\n>\n> * David Christensen (david+pg@pgguru.net) wrote:\n> > Did a code review pass here; here is some feedback.\n>\n> Thanks!\n>\n> > + /* If password was used to connect, make sure it was one provided\n> */\n> > + if (PQconnectionUsedPassword(conn) &&\n> dblink_connstr_has_pw(connstr))\n> > + return;\n> >\n> > Do we need to consider whether these passwords are the same? Is there\n> a different vector where a different password could be acquired from a\n> different source (PGPASSWORD, say) while both of these criteria are true?\n> Seems like it probably doesn't matter that much considering we only checked\n> Password alone in previous version of this code.\n>\n> Note that this patch isn't really changing how these checks are being\n> done but more moving them around and allowing a GSSAPI-based approach\n> with credential delegation to also be allowed.\n>\n> That said, as noted in the comments above dblink_connstr_check():\n>\n> * For non-superusers, insist that the connstr specify a password, except\n> * if GSSAPI credentials have been proxied (and we check that they are used\n> * for the connection in dblink_security_check later). This prevents a\n> * password or GSSAPI credentials from being picked up from .pgpass, a\n> * service file, the environment, etc. 
We don't want the postgres user's\n> * passwords or Kerberos credentials to be accessible to non-superusers.\n>\n> The point of these checks is, indeed, to ensure that environmental\n> values such as a .pgpass or variable don't end up getting picked up and\n> used (or, if they do, we realize it post-connection and then throw away\n> the connection).\n>\n> libpq does explicitly prefer to use the password passed in as part of\n> the connection string and won't attempt to look up passwords in a\n> .pgpass file or similar if a password has been included in the\n> connection string.\n>\n\nThe case I think I was thinking of was (manufactured) when we connected to\na backend with one password but the dblink or postgresql_fdw includes an\nexplicit password to a different server. But now I'm thinking that this\nPQconnectionUsedPassword() is checking the outgoing connection for dblink\nitself, not the connection of the backend that connected to the main\nserver, so I think this objection is moot, like you say.\n\n> Looks like the pg_gssinfo struct hides the `proxy_creds` def behind:\n> >\n> > #if defined(ENABLE_GSS) | defined(ENABLE_SSPI)\n> > typedef struct\n> > {\n> > gss_buffer_desc outbuf; /* GSSAPI output token\n> buffer */\n> > #ifdef ENABLE_GSS\n> > ...\n> > bool proxy_creds; /* GSSAPI Delegated/proxy\n> credentials */\n> > #endif\n> > } pg_gssinfo;\n> > #endif\n>\n> ... 
right, proxy_creds only exists (today anyway) if ENABLE_GSS is set.\n>\n> > Which means that the later check in `be_gssapi_get_proxy()` we have:\n\n\n [analysis snipped]\n\nFairly confident the analysis here is wrong, further, the cfbot seems to\n> agree that there isn't a compile failure here:\n>\n> https://cirrus-ci.com/task/6589717672624128\n>\n> [20:19:15.985] gss : NO\n>\n> (we always build with SSPI on Windows, per\n> src/include/port/win32_port.h).\n>\n\nCool; since we have coverage for that case seems like my concern was\nunwarranted.\n\n[snip]\n\n\n> > </para>\n> > <para>\n> > Only superusers may connect to foreign servers without password\n> > - authentication, so always specify the <literal>password</literal>\n> option\n> > - for user mappings belonging to non-superusers.\n> > + authentication or using gssapi proxied credentials, so specify the\n> > + <literal>password</literal> option for user mappings belonging to\n> > + non-superusers who are not able to proxy GSSAPI credentials.\n> > </para>\n> > <para>\n> >\n> > s/gssapi/GSSAPI/; this is kind of confusing, as this makes it sound like\n> only superuser may use GSSAPI proxied credentials, which I disbelieve to be\n> true. Additionally, it sounds like you're wanting to explicitly maintain a\n> denylist for users to not be allowed proxying; is that correct?\n>\n> Updated to GSSAPI and reworded in the updated patch (attached).\n> Certainly open to suggestions on how to improve the documentation here.\n> There is no 'denylist' for users when it comes to GSSAPI proxied\n> credentials. 
If there's a use-case for that then it could be added in\n> the future.\n>\n\nOkay, I think your revisions here seem more clear, thanks.\n\n\n>\n> > ---\n> >\n> > libpq/auth.c:\n> >\n> > if (proxy != NULL)\n> > {\n> > pg_store_proxy_credential(proxy);\n> > port->gss->proxy_creds = true;\n> > }\n> >\n> > Per GSS docs, seems like we should be comparing to GSS_C_NO_CREDENTIAL\n> and validating that the gflags has the `deleg_flag` bit set before\n> considering whether there are valid credentials; in practice this might be\n> the same effect (haven't looked at what that symbol actually resolves to,\n> but NULL would be sensible).\n>\n> GSS_C_NO_CREDENTIAL is indeed NULL, but updated to that anyway to be a\n> bit cleaner and also added an explicit check that GSS_C_DELEG_FLAG was\n> set in gflags.\n>\n\n+ proxy = NULL;\n[...]\n+ if (proxy != GSS_C_NO_CREDENTIAL && gflags & GSS_C_DELEG_FLAG)\n\nWe should probably also initialize \"proxy\" to GSS_C_NO_CREDENTIAL as well,\nyes?\n\n\n> > Are there other cases we might need to consider here, like valid\n> credentials, but they are expired? (GSS_S_CREDENTIALS_EXPIRED)\n>\n> Short answer is no, I don't believe we need to. We shouldn't actually\n> get any expired credentials but even if we did, worst is that we'd end\n> up storing them and they wouldn't be able to be used because they're\n> expired\n\n\nOkay.\n\n> ---\n>\n>\n> > + /*\n> > + * Set KRB5CCNAME for this backend, so that later calls to\n> gss_acquire_cred\n> > + * will find the proxied credentials we stored.\n> > + */\n> >\n> > So I'm not seeing this in other use in the code; I assume this is just\n> used by the krb5 libs?\n>\n> Not sure I'm following. 
gss_acquire_cred() is called in\n> src/interfaces/libpq/fe-gssapi-common.c.\n>\n\nI just meant the KRB5CCNAME envvar itself; looks like my assumption was\nright.\n\n\n> > Similar q's for the other places the pg_gss_accept_deleg are used.\n>\n> pg_gss_accept_deleg is checked in the two paths where we could have\n> credentials delegated to us- either through the encrypted-GSSAPI\n> connection path in libpq/be-secure-gssapi.c, or the\n> not-using-GSSAPI-encryption path in libpq/auth.c.\n>\n\nSounds good.\n\n\n> > ---\n> >\n> > +int\n> > +PQconnectionUsedGSSAPI(const PGconn *conn)\n> > +{\n> > + if (!conn)\n> > + return false;\n> > + if (conn->gssapi_used)\n> > + return true;\n> > + else\n> > + return false;\n> > +}\n> >\n> > Micro-gripe: this routine seems like could be simpler, though the\n> compiler probably has the same thing to say for either, so maybe code\n> clarity is better as written:\n> >\n> > int\n> > PQconnectionUsedGSSAPI(const PGconn *conn)\n> > {\n> > return conn && conn->gssapi_used;\n> > }\n>\n> I tend to disagree- explicitly returning true/false seems a bit clearer\n> to me and is also in-line with what other functions in\n> libpq/fe-connect.c are doing. Having this function be different from,\n> eg, PQconnectionUsedPassword, would probably end up having more\n> questions about why they're different. Either way, I'd say we change\n> both or neither and that doesn't really need to be part of this patch.\n>\n\nFair points; we should presumably optimize for comprehension.\n\n\n> > ---\n> >\n> > Anything required for adding meson support? I notice src/test/kerberos\n> has Makefile updated, but no meson.build files are changed.\n>\n> Short answer is- I don't think so (happy to be told I'm wrong though, if\n> someone wants to tell me what's wrong). 
The other src/test modules that\n> have EXTRA_INSTALL lines don't have anything for those in the\n> meson.build, so I'm guessing the assumption is that everything is built\n> when using meson.\n>\n\nOkay, just validating.\n\n\n> > ---\n> >\n> > Two tests in src/test/kerberos/t/001_auth.pl at :535 and :545 have the\n> same test description:\n> >\n> > + 'succeeds with GSS-encrypted access required and hostgssenc hba\n> and credentials not forwarded',\n> >\n> > Since the first test has only `gssencmode` defined (so implicit\n> `gssdeleg` value) and the second has `gssdeleg=disable` I'd suggest that\n> the test on :545 should have its description updated to add the word\n> \"explicitly\":\n> >\n> > 'succeeds with GSS-encrypted access required and hostgssenc hba and\n> credentials explicitly not forwarded',\n>\n> Sure, updated.\n>\n\nThanks.\n\n> ---\n> >\n> > In the dblink test, this seems like debugging junk:\n> >\n> > +print (\"$psql_out\");\n> > +print (\"$psql_stderr\");\n>\n> Ah, yeah, removed.\n>\n> > Whacking those lines and reviewing the surrounding code block: so this\n> is testing that dblink won't use `.pgpass`; so is this a behavior change,\n> and dblink could be previously used w/postgres user's .pgpass file? I\n> assume since this patch is forbidding this, we've decided that that is a\n> bad idea--was this updated in the docs to note that this is now forbidden,\n> or is this something that should only apply in some cases (i.e., this is\n> now config-specific)? If config-specific, should we have a test in the\n> non-forwarded version of these tests that exercises that behavior?\n>\n> Yes, that's what is being tested, but non-superuser dblink already won't\n> use a .pgpass file if it exists, so it's not a behavior change. 
I added\n> explicit tests here though to make sure that even a dblink connection\n> created without a password being used in the connection string (because\n> GSSAPI credentials were proxied) won't end up using the .pgpass file.\n>\n> Additional tests could perhaps be added to dblink itself (don't know\n> that we really need to hide those tests under src/test/kerberos) to make\n> sure that it's not going to use the .pgpass file; I'm not sure why that\n> wasn't done previously (it was done for postgres_fdw though and the\n> approach in each is basically the same...).\n>\n\nI'm fine with that being out-of-scope for this patch and agreed there are\nmore appropriate places for this one than the kerberos tests.\n\n[snip]\n\nSo on a re-read of the v7 patch, there seems to be a bit of inconsistent\nusage between delegation and proxying; i.e., the field itself is called\ngss_proxy in the gssstatus struct, authentication messages, etc, but the\nsetting and docs refer to GSS delegation. Are there subtle distinctions\nbetween these? It seems like this patch is using them interchangeably, so\nit might be good to settle on one terminology here unless there are already\nwell-defined categories for where to use one and where to use the other.\n\n Thanks,\n\nDavid",
"msg_date": "Wed, 5 Apr 2023 20:41:25 -0500",
"msg_from": "David Christensen <david@pgguru.net>",
"msg_from_op": false,
"msg_subject": "Re: Kerberos delegation support in libpq and postgres_fdw"
},
{
"msg_contents": "Greetings,\n\n* David Christensen (david@pgguru.net) wrote:\n> On Wed, Apr 5, 2023 at 3:30 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > Per GSS docs, seems like we should be comparing to GSS_C_NO_CREDENTIAL\n> > and validating that the gflags has the `deleg_flag` bit set before\n> > considering whether there are valid credentials; in practice this might be\n> > the same effect (haven't looked at what that symbol actually resolves to,\n> > but NULL would be sensible).\n> >\n> > GSS_C_NO_CREDENTIAL is indeed NULL, but updated to that anyway to be a\n> > bit cleaner and also added an explicit check that GSS_C_DELEG_FLAG was\n> > set in gflags.\n> \n> + proxy = NULL;\n> [...]\n> + if (proxy != GSS_C_NO_CREDENTIAL && gflags & GSS_C_DELEG_FLAG)\n> \n> We should probably also initialize \"proxy\" to GSS_C_NO_CREDENTIAL as well,\n> yes?\n\nSure, done, and updated for both auth.c and be-secure-gssapi.c\n\n> > > + /*\n> > > + * Set KRB5CCNAME for this backend, so that later calls to\n> > gss_acquire_cred\n> > > + * will find the proxied credentials we stored.\n> > > + */\n> > >\n> > > So I'm not seeing this in other use in the code; I assume this is just\n> > used by the krb5 libs?\n> >\n> > Not sure I'm following. gss_acquire_cred() is called in\n> > src/interfaces/libpq/fe-gssapi-common.c.\n> \n> I just meant the KRB5CCNAME envvar itself; looks like my assumption was\n> right.\n\nAh, yes, that's correct.\n\n> So on a re-read of the v7 patch, there seems to be a bit of inconsistent\n> usage between delegation and proxying; i.e., the field itself is called\n> gss_proxy in the gssstatus struct, authentication messages, etc, but the\n> setting and docs refer to GSS delegation. Are there subtle distinctions\n> between these? 
It seems like this patch is using them interchangeably, so\n> it might be good to settle on one terminology here unless there are already\n> well-defined categories for where to use one and where to use the other.\n\nThat's a fair point and so I've updated the patch to consistently use\n'delegated credentials' and similar to match the Kerberos documentation.\nIn Kerberos there is *also* the concept of proxied credentials which are\nvery very similar to delegated credentials (they're actually\n\"constrained delegations\") but they're not exactly the same and that\nisn't what we're doing with this particular patch (though I hope that\nonce we get support for unconstrained delegation, which is what this\npatch is doing, we can then go add support for constrained\ndelegations).\n\nUpdated patch attached.\n\nThanks!\n\nStephen",
"msg_date": "Fri, 7 Apr 2023 17:09:09 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Kerberos delegation support in libpq and postgres_fdw"
},
{
"msg_contents": "Reviewed v8; largely looking good, though I notice this hunk, which may\narguably be a bug fix, but doesn't appear to be relevant to this specific\npatch, so could probably be debated independently (and if a bug, should\nprobably be backpatched):\n\ndiff --git a/contrib/postgres_fdw/option.c b/contrib/postgres_fdw/option.c\nindex 4229d2048c..11d41979c6 100644\n--- a/contrib/postgres_fdw/option.c\n+++ b/contrib/postgres_fdw/option.c\n@@ -288,6 +288,9 @@ InitPgFdwOptions(void)\n {\"sslcert\", UserMappingRelationId, true},\n {\"sslkey\", UserMappingRelationId, true},\n\n+ /* gssencmode is also libpq option, same to above. */\n+ {\"gssencmode\", UserMappingRelationId, true},\n+\n {NULL, InvalidOid, false}\n };\n\nThat said, should \"gssdeleg\" be exposed as a user mapping? (This shows up\nin postgresql_fdw; not sure if there are other places that would be\nrelevant, like in dblink somewhere as well, just a thought.)\n\nBest,\n\nDavid\n\nReviewed v8; largely looking good, though I notice this hunk, which may arguably be a bug fix, but doesn't appear to be relevant to this specific patch, so could probably be debated independently (and if a bug, should probably be backpatched):diff --git a/contrib/postgres_fdw/option.c b/contrib/postgres_fdw/option.cindex 4229d2048c..11d41979c6 100644--- a/contrib/postgres_fdw/option.c+++ b/contrib/postgres_fdw/option.c@@ -288,6 +288,9 @@ InitPgFdwOptions(void) \t\t{\"sslcert\", UserMappingRelationId, true}, \t\t{\"sslkey\", UserMappingRelationId, true}, +\t\t/* gssencmode is also libpq option, same to above. */+\t\t{\"gssencmode\", UserMappingRelationId, true},+ \t\t{NULL, InvalidOid, false} \t}; That said, should \"gssdeleg\" be exposed as a user mapping? (This shows up in postgresql_fdw; not sure if there are other places that would be relevant, like in dblink somewhere as well, just a thought.)Best,David",
"msg_date": "Fri, 7 Apr 2023 16:43:44 -0500",
"msg_from": "David Christensen <david@pgguru.net>",
"msg_from_op": false,
"msg_subject": "Re: Kerberos delegation support in libpq and postgres_fdw"
},
{
"msg_contents": "Greetings,\n\n* David Christensen (david@pgguru.net) wrote:\n> Reviewed v8; largely looking good, though I notice this hunk, which may\n> arguably be a bug fix, but doesn't appear to be relevant to this specific\n> patch, so could probably be debated independently (and if a bug, should\n> probably be backpatched):\n> \n> diff --git a/contrib/postgres_fdw/option.c b/contrib/postgres_fdw/option.c\n> index 4229d2048c..11d41979c6 100644\n> --- a/contrib/postgres_fdw/option.c\n> +++ b/contrib/postgres_fdw/option.c\n> @@ -288,6 +288,9 @@ InitPgFdwOptions(void)\n> {\"sslcert\", UserMappingRelationId, true},\n> {\"sslkey\", UserMappingRelationId, true},\n> \n> + /* gssencmode is also libpq option, same to above. */\n> + {\"gssencmode\", UserMappingRelationId, true},\n> +\n> {NULL, InvalidOid, false}\n> };\n\nHmm, yeah, hard to say if that makes sense at a user-mapping level or\nnot. Agreed that we could have an independent discussion regarding\nthat and if it should be back-patched, so removed it from this patch.\n\n> That said, should \"gssdeleg\" be exposed as a user mapping? (This shows up\n> in postgresql_fdw; not sure if there are other places that would be\n> relevant, like in dblink somewhere as well, just a thought.)\n\nAh, yeah, that certainly makes sense to have as optional for a user\nmapping. dblink doesn't have the distinction between server-level\noptions and user mapping options (as it doesn't have user mappings at\nall really) so it doesn't have something similar.\n\nUpdated patch attached.\n\nThanks!\n\nStephen",
"msg_date": "Fri, 7 Apr 2023 17:48:46 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Kerberos delegation support in libpq and postgres_fdw"
},
{
"msg_contents": "Ok, based on the interdiff there, I'm happy with that last change. Marking\nas Ready For Committer.\n\nBest,\n\nDavid\n\nOk, based on the interdiff there, I'm happy with that last change. Marking as Ready For Committer.Best,David",
"msg_date": "Fri, 7 Apr 2023 16:53:06 -0500",
"msg_from": "David Christensen <david@pgguru.net>",
"msg_from_op": false,
"msg_subject": "Re: Kerberos delegation support in libpq and postgres_fdw"
},
{
"msg_contents": "Greetings,\n\n* David Christensen (david@pgguru.net) wrote:\n> Ok, based on the interdiff there, I'm happy with that last change. Marking\n> as Ready For Committer.\n\nGreat, thanks!\n\nI'm going to go through it again myself but I feel reasonably good about\nit and if nothing else pops and there aren't objections, I'll push it\nbefore feature freeze.\n\nThanks!\n\nStephen",
"msg_date": "Fri, 7 Apr 2023 19:43:14 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Kerberos delegation support in libpq and postgres_fdw"
},
{
"msg_contents": "Greetings,\n\n* Stephen Frost (sfrost@snowman.net) wrote:\n> * David Christensen (david@pgguru.net) wrote:\n> > Ok, based on the interdiff there, I'm happy with that last change. Marking\n> > as Ready For Committer.\n> \n> Great, thanks!\n> \n> I'm going to go through it again myself but I feel reasonably good about\n> it and if nothing else pops and there aren't objections, I'll push it\n> before feature freeze.\n\nAlright, I've cleaned up a few of the error messages to consistently use\nGSSAPI (instead of a mix of GSSAPI and gssapi) and run the changes through\npgindent. Updated patch attached. cfbot looks happy. Feeling like\nthis is looking just about ready to go.\n\nThanks!\n\nStephen",
"msg_date": "Fri, 7 Apr 2023 21:28:08 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Kerberos delegation support in libpq and postgres_fdw"
},
{
"msg_contents": "Greetings,\n\n* Stephen Frost (sfrost@snowman.net) wrote:\n> * Stephen Frost (sfrost@snowman.net) wrote:\n> > * David Christensen (david@pgguru.net) wrote:\n> > > Ok, based on the interdiff there, I'm happy with that last change. Marking\n> > > as Ready For Committer.\n> > \n> > Great, thanks!\n> > \n> > I'm going to go through it again myself but I feel reasonably good about\n> > it and if nothing else pops and there aren't objections, I'll push it\n> > before feature freeze.\n> \n> Alright, I've cleaned up a few of the error messages to consistently use\n> GSSAPI (instead of a mix of GSSAPI and gssapi) and run the changes through\n> pgindent. Updated patch attached. cfbot looks happy. Feeling like\n> this is looking just about ready to go.\n\nAnd pushed, thanks again! Time to watch the buildfarm.. :)\n\nStephen",
"msg_date": "Fri, 7 Apr 2023 22:00:34 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Kerberos delegation support in libpq and postgres_fdw"
}
] |
[
{
"msg_contents": "Hi,\n\nI think something is slightly off with pgbench (or libpq) pipelining. Consider\ne.g. the following pgbench workload:\n\n\\startpipeline\nSELECT 1;\nSELECT 1;\nSELECT 1;\nSELECT 1;\nSELECT 1;\nSELECT 1;\nSELECT 1;\n\\endpipeline\n\nA pgbench run using that results in in endless repetitions of the below:\npgbench -Mprepared -c 1 -T1000 -f ~/tmp/select1_batch.sql\n\nsendto(3, \"B\\0\\0\\0\\22\\0P0_1\\0\\0\\0\\0\\0\\0\\1\\0\\0D\\0\\0\\0\\6P\\0E\\0\\0\\0\\t\\0\"..., 257, MSG_NOSIGNAL, NULL, 0) = 257\nrecvfrom(3, 0x5614032370f0, 16384, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable)\nppoll([{fd=3, events=POLLIN}], 1, NULL, NULL, 8) = 1 ([{fd=3, revents=POLLIN}])\nrecvfrom(3, \"2\\0\\0\\0\\4T\\0\\0\\0!\\0\\1?column?\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\27\\0\"..., 16384, 0, NULL, NULL) = 461\nrecvfrom(3, 0x56140323727c, 15988, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable)\nrecvfrom(3, 0x56140323723b, 16053, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable)\nrecvfrom(3, 0x5614032371fa, 16118, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable)\nrecvfrom(3, 0x5614032371b9, 16183, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable)\nrecvfrom(3, 0x561403237178, 16248, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable)\nrecvfrom(3, 0x561403237137, 16313, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable)\nrecvfrom(3, 0x5614032370f6, 16378, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable)\nsendto(3, \"B\\0\\0\\0\\22\\0P0_1\\0\\0\\0\\0\\0\\0\\1\\0\\0D\\0\\0\\0\\6P\\0E\\0\\0\\0\\t\\0\"..., 257, MSG_NOSIGNAL, NULL, 0) = 257\nrecvfrom(3, 0x5614032370f0, 16384, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable)\nppoll([{fd=3, events=POLLIN}], 1, NULL, NULL, 8) = 1 ([{fd=3, revents=POLLIN}])\nrecvfrom(3, \"2\\0\\0\\0\\4T\\0\\0\\0!\\0\\1?column?\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\27\\0\"..., 16384, 0, NULL, NULL) = 461\nrecvfrom(3, 0x56140323727c, 15988, 0, NULL, 
NULL) = -1 EAGAIN (Resource temporarily unavailable)\nrecvfrom(3, 0x56140323723b, 16053, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable)\nrecvfrom(3, 0x5614032371fa, 16118, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable)\nrecvfrom(3, 0x5614032371b9, 16183, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable)\nrecvfrom(3, 0x561403237178, 16248, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable)\nrecvfrom(3, 0x561403237137, 16313, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable)\nrecvfrom(3, 0x5614032370f6, 16378, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable)\nsendto(3, \"B\\0\\0\\0\\22\\0P0_1\\0\\0\\0\\0\\0\\0\\1\\0\\0D\\0\\0\\0\\6P\\0E\\0\\0\\0\\t\\0\"..., 257, MSG_NOSIGNAL, NULL, 0) = 257\n\nNote how recvfrom() returning EAGAIN is called 7 times in a row? There's also\n7 SQL statements in the workload...\n\nI think what's happening is that the first recvfrom() actually gets all 7\nconnection results. The server doesn't have any queries to process at that\npoint. But we ask the kernel whether there is new network input over and over\nagain, despite having results to process!\n\nWith a short pipeline this doesn't matter much. But if it's longer, adding a\nsyscall for each statement in the pipeline does increase pgbench overhead\nmeasurably. 
An easy way to avoid that is to put a PQisBusy() && before the\nPQconsumeInput().\n\nComparing pgbench of 100 pipelined SELECT 1;'s, under perf stat yields:\n\nperf stat -e task-clock,raw_syscalls:sys_enter,context-switches,cycles:u,cycles:k,instructions:u,instructions:k \\\n schedtool -a 38 -e \\\n /home/andres/build/postgres/dev-optimize/vpath/src/bin/pgbench/pgbench -n -Mprepared -c 1 -j1 -T5 -f ~/tmp/select1_batch.sql\n\ndefault:\n...\ntps = 3617.823383 (without initial connection time)\n...\n 1,339.25 msec task-clock # 0.267 CPUs utilized\n 1,880,855 raw_syscalls:sys_enter # 1.404 M/sec\n 18,084 context-switches # 13.503 K/sec\n 3,128,615,558 cycles:u # 2.336 GHz\n 1,211,509,367 cycles:k # 0.905 GHz\n 8,000,238,738 instructions:u # 2.56 insn per cycle\n 1,720,276,642 instructions:k # 1.42 insn per cycle\n\n 5.007540307 seconds time elapsed\n\n 1.004346000 seconds user\n 0.376209000 seconds sys\n\nwith-isbusy:\n...\ntps = 3990.424742 (without initial connection time)\n...\n 1,013.71 msec task-clock # 0.202 CPUs utilized\n 80,203 raw_syscalls:sys_enter # 79.119 K/sec\n 19,947 context-switches # 19.677 K/sec\n 2,943,676,361 cycles:u # 2.904 GHz\n 346,607,769 cycles:k # 0.342 GHz\n 8,464,188,379 instructions:u # 2.88 insn per cycle\n 226,665,530 instructions:k # 0.65 insn per cycle\n\n 5.007539846 seconds time elapsed\n\n 0.906090000 seconds user\n 0.151015000 seconds sys\n\n\n1.8 million fewer syscalls, reduced overall \"on cpu\" time, and particularly\n0.27x of the system time... The user/kernel cycles/instruction split is also\nilluminating.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 20 Jul 2021 11:00:39 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "something is wonky with pgbench pipelining"
},
{
"msg_contents": "On 2021-Jul-20, Andres Freund wrote:\n\n> I think what's happening is that the first recvfrom() actually gets all 7\n> connection results. The server doesn't have any queries to process at that\n> point. But we ask the kernel whether there is new network input over and over\n> again, despite having results to process!\n\nHmm, yeah, that seems a missed opportunity.\n\n> with-isbusy:\n> ...\n> tps = 3990.424742 (without initial connection time)\n> ...\n> 1,013.71 msec task-clock # 0.202 CPUs utilized\n> 80,203 raw_syscalls:sys_enter # 79.119 K/sec\n> 19,947 context-switches # 19.677 K/sec\n> 2,943,676,361 cycles:u # 2.904 GHz\n> 346,607,769 cycles:k # 0.342 GHz\n> 8,464,188,379 instructions:u # 2.88 insn per cycle\n> 226,665,530 instructions:k # 0.65 insn per cycle\n\nThis is quite compelling.\n\nIf you don't mind I can get this pushed soon in the next couple of days\n-- or do you want to do it yourself?\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"La espina, desde que nace, ya pincha\" (Proverbio africano)\n\n\n",
"msg_date": "Tue, 20 Jul 2021 14:57:15 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: something is wonky with pgbench pipelining"
},
{
"msg_contents": "Hi,\n\nOn 2021-07-20 14:57:15 -0400, Alvaro Herrera wrote:\n> On 2021-Jul-20, Andres Freund wrote:\n> \n> > I think what's happening is that the first recvfrom() actually gets all 7\n> > connection results. The server doesn't have any queries to process at that\n> > point. But we ask the kernel whether there is new network input over and over\n> > again, despite having results to process!\n> \n> Hmm, yeah, that seems a missed opportunity.\n\n> > with-isbusy:\n> > ...\n> > tps = 3990.424742 (without initial connection time)\n> > ...\n> > 1,013.71 msec task-clock # 0.202 CPUs utilized\n> > 80,203 raw_syscalls:sys_enter # 79.119 K/sec\n> > 19,947 context-switches # 19.677 K/sec\n> > 2,943,676,361 cycles:u # 2.904 GHz\n> > 346,607,769 cycles:k # 0.342 GHz\n> > 8,464,188,379 instructions:u # 2.88 insn per cycle\n> > 226,665,530 instructions:k # 0.65 insn per cycle\n> \n> This is quite compelling.\n> \n> If you don't mind I can get this pushed soon in the next couple of days\n> -- or do you want to do it yourself?\n\nI was thinking of pushing the attached, to both 14 and master, thinking\nthat was what you meant, but then I wasn't quite sure: It's a relatively\nminor performance improvement, after all? OTOH, it arguably also just is\na bit of an API misuse...\n\nI'm inclined to push it to 14 and master, but ...\n\nGreetings,\n\nAndres Freund",
"msg_date": "Wed, 21 Jul 2021 16:55:08 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: something is wonky with pgbench pipelining"
},
{
"msg_contents": "Hi,\n\nAdding RMT.\n\nOn 2021-07-21 16:55:08 -0700, Andres Freund wrote:\n> On 2021-07-20 14:57:15 -0400, Alvaro Herrera wrote:\n> > On 2021-Jul-20, Andres Freund wrote:\n> > \n> > > I think what's happening is that the first recvfrom() actually gets all 7\n> > > connection results. The server doesn't have any queries to process at that\n> > > point. But we ask the kernel whether there is new network input over and over\n> > > again, despite having results to process!\n> > \n> > Hmm, yeah, that seems a missed opportunity.\n> \n> > > with-isbusy:\n> > > ...\n> > > tps = 3990.424742 (without initial connection time)\n> > > ...\n> > > 1,013.71 msec task-clock # 0.202 CPUs utilized\n> > > 80,203 raw_syscalls:sys_enter # 79.119 K/sec\n> > > 19,947 context-switches # 19.677 K/sec\n> > > 2,943,676,361 cycles:u # 2.904 GHz\n> > > 346,607,769 cycles:k # 0.342 GHz\n> > > 8,464,188,379 instructions:u # 2.88 insn per cycle\n> > > 226,665,530 instructions:k # 0.65 insn per cycle\n> > \n> > This is quite compelling.\n> > \n> > If you don't mind I can get this pushed soon in the next couple of days\n> > -- or do you want to do it yourself?\n> \n> I was thinking of pushing the attached, to both 14 and master, thinking\n> that was what you meant, but then I wasn't quite sure: It's a relatively\n> minor performance improvement, after all? OTOH, it arguably also just is\n> a bit of an API misuse...\n> \n> I'm inclined to push it to 14 and master, but ...\n\nRMT: ^\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 24 Jul 2021 16:08:33 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: something is wonky with pgbench pipelining"
},
{
"msg_contents": "On Sat, Jul 24, 2021 at 04:08:33PM -0700, Andres Freund wrote:\n> On 2021-07-21 16:55:08 -0700, Andres Freund wrote:\n>> I'm inclined to push it to 14 and master, but ...\n> \n> RMT: ^\n\nIf it were me, I think that I would have back-patched this change even\nif found after the GA release of 14 as there is no advantages in\nkeeping the current behavior either except overloading pgbench with\nunnecessary system calls. No objections from me to change that now\nfor 14~.\n--\nMichael",
"msg_date": "Mon, 26 Jul 2021 16:07:20 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: something is wonky with pgbench pipelining"
}
] |
[
{
"msg_contents": "For some queries PostgreSQL can spend most of its time creating the exact\nsame bitmap over and over. For example, in the below case: (also attached\nas a file because line-wrapping is going to make a mess of it)\n\ndrop table if exists foo;\ncreate table foo (x daterange, i int, t text);\ninsert into foo select daterange(x::date,x::date+3), random()*3000 from\n(select now()-interval '3 years'*random() as x from\ngenerate_series(1,1e6))foo;\nvacuum analyze foo;\ncreate index ON foo using gist ( x);\ncreate index ON foo ( i);\nexplain (analyze, buffers) select * from generate_series(1,20) g(i), foo\nwhere x && '[2019-08-09,2019-08-11)' and g.i=foo.i;\n\n QUERY PLAN\n\n---------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=170.21..3563.24 rows=33 width=54) (actual\ntime=1.295..24.890 rows=28 loops=1)\n Buffers: shared hit=543 read=8\n I/O Timings: read=0.040\n -> Function Scan on generate_series g (cost=0.00..0.20 rows=20\nwidth=4) (actual time=0.007..0.014 rows=20 loops=1)\n -> Bitmap Heap Scan on foo (cost=170.21..178.13 rows=2 width=50)\n(actual time=1.238..1.240 rows=1 loops=20)\n Recheck Cond: ((i = g.i) AND (x &&\n'[2019-08-09,2019-08-11)'::daterange))\n Heap Blocks: exact=28\n Buffers: shared hit=543 read=8\n I/O Timings: read=0.040\n -> BitmapAnd (cost=170.21..170.21 rows=2 width=0) (actual\ntime=1.234..1.234 rows=0 loops=20)\n Buffers: shared hit=515 read=8\n I/O Timings: read=0.040\n -> Bitmap Index Scan on foo_i_idx (cost=0.00..6.92\nrows=333 width=0) (actual time=0.031..0.031 rows=327 loops=20)\n Index Cond: (i = g.i)\n Buffers: shared hit=55 read=8\n I/O Timings: read=0.040\n -> Bitmap Index Scan on foo_x_idx (cost=0.00..161.78\nrows=5000 width=0) (actual time=1.183..1.183 rows=3670 loops=20)\n Index Cond: (x && '[2019-08-09,2019-08-11)'::daterange)\n Buffers: shared hit=460\n\nNote that the fast bitmap index scan is parameterized to the 
other side of\nthe nested loop, so has to be recomputed. While the slow one is\nparameterized to a constant, so it could in principle just be reused.\n\nWhat kind of infrastructure would be needed to detect this case and reuse\nthat bitmap?\n\nCheers,\n\nJeff",
"msg_date": "Tue, 20 Jul 2021 19:10:41 -0400",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": true,
"msg_subject": "Bitmap reuse"
},
{
"msg_contents": "Jeff Janes <jeff.janes@gmail.com> writes:\n> For some queries PostgreSQL can spend most of its time creating the exact\n> same bitmap over and over. For example, in the below case: (also attached\n> as a file because line-wrapping is going to make a mess of it)\n\nUh .... it's not the \"exact same bitmap each time\", because the selected\nrows vary depending on the value of g.i.\n\nIf the output of the subplan was indeed constant, I'd expect the planner\nto stick a Materialize node atop it. That would almost certainly win\nmore than re-using the index output to scan the heap additional times.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 20 Jul 2021 19:25:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bitmap reuse"
},
{
"msg_contents": "On Wed, 21 Jul 2021 at 11:25, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Jeff Janes <jeff.janes@gmail.com> writes:\n> > For some queries PostgreSQL can spend most of its time creating the exact\n> > same bitmap over and over. For example, in the below case: (also attached\n> > as a file because line-wrapping is going to make a mess of it)\n>\n> Uh .... it's not the \"exact same bitmap each time\", because the selected\n> rows vary depending on the value of g.i.\n\nI imagined Jeff was meaning the bitmap from the scan of foo_x_idx, not\nthe combined ANDed bitmap from both indexes.\n\nI didn't look in detail, but I'd think it would just be a matter of\ncaching the bitmap then in ExecReScanBitmapIndexScan(), if the\nPlanState's chgParam indicate a parameter has changed, then throw away\nthe cache. Then if the cache is still valid in\nMultiExecBitmapIndexScan(), return it. Not too sure about the memory\ncontext part for the cache. As I said, I didn't look in detail.\n\nDavid\n\n\n",
"msg_date": "Wed, 21 Jul 2021 12:31:42 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Bitmap reuse"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Wed, 21 Jul 2021 at 11:25, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Uh .... it's not the \"exact same bitmap each time\", because the selected\n>> rows vary depending on the value of g.i.\n\n> I imagined Jeff was meaning the bitmap from the scan of foo_x_idx, not\n> the combined ANDed bitmap from both indexes.\n\nTo re-use that, you'd need a way to prevent the upper node from\ndestructively modifying the tidbitmap.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 20 Jul 2021 21:32:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bitmap reuse"
},
{
"msg_contents": "On Wed, 21 Jul 2021 at 13:32, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > I imagined Jeff was meaning the bitmap from the scan of foo_x_idx, not\n> > the combined ANDed bitmap from both indexes.\n>\n> To re-use that, you'd need a way to prevent the upper node from\n> destructively modifying the tidbitmap.\n\nYeah. And that would slow things down in the case where it was just\nexecuted once as we'd need to make a copy of it to prevent the cached\nversion from being modified regardless if it would ever be used again\nor not.\n\nMaybe the planner would need to be involved in making the decision of\nif the bitmap index scan should tuck away a carbon copy of the\nresulting TIDBitmap after the first scan. That way on rescan we could\njust make a copy of the cached version and return that. That saves\nhaving to modify the callers to tell them not to damage the returned\nTIDBitmap.\n\nDavid\n\n\n",
"msg_date": "Thu, 22 Jul 2021 01:54:50 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Bitmap reuse"
},
{
"msg_contents": "On Thu, 22 Jul 2021 at 01:54, David Rowley <dgrowleyml@gmail.com> wrote:\n> Maybe the planner would need to be involved in making the decision of\n> if the bitmap index scan should tuck away a carbon copy of the\n> resulting TIDBitmap after the first scan. That way on rescan we could\n> just make a copy of the cached version and return that. That saves\n> having to modify the callers to tell them not to damage the returned\n> TIDBitmap.\n\nOh but, meh. Caching could blow out work_mem... We might end up using\nwork_mem * 2.\n\nDavid\n\n\n",
"msg_date": "Thu, 22 Jul 2021 01:58:29 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Bitmap reuse"
}
] |
[
{
"msg_contents": "Hi, Heikki\r\n\r\n'noError' argument was added at commit ea1b99a661,\r\nbut it seems to be neglected in euc_tw_and_big5.c Line 289.\r\nplease see the attachment.\r\n\r\nRegards,\r\nYukun Wang",
"msg_date": "Wed, 21 Jul 2021 02:15:14 +0000",
"msg_from": "\"wangyukun@fujitsu.com\" <wangyukun@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "add 'noError' to euc_tw_and_big5.c"
},
{
"msg_contents": "On Wed, Jul 21, 2021 at 02:15:14AM +0000, wangyukun@fujitsu.com wrote:\n> 'noError' argument was added at commit ea1b99a661,\n> but it seems to be neglected in euc_tw_and_big5.c Line 289.\n> please see the attachment.\n\nThat sounds right to me. Double-checking the area, I am not seeing\nanother portion of the code to fix.\n--\nMichael",
"msg_date": "Wed, 21 Jul 2021 11:35:33 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: add 'noError' to euc_tw_and_big5.c"
},
{
"msg_contents": "On Tue, Jul 20, 2021 at 10:35 PM Michael Paquier <michael@paquier.xyz>\nwrote:\n>\n> On Wed, Jul 21, 2021 at 02:15:14AM +0000, wangyukun@fujitsu.com wrote:\n> > 'noError' argument was added at commit ea1b99a661,\n> > but it seems to be neglected in euc_tw_and_big5.c Line 289.\n> > please see the attachment.\n>\n> That sounds right to me. Double-checking the area, I am not seeing\n> another portion of the code to fix.\n\nAgreed, will push.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 21 Jul 2021 07:44:06 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: add 'noError' to euc_tw_and_big5.c"
},
{
"msg_contents": "On Tue, Jul 20, 2021 at 10:35 PM Michael Paquier <michael@paquier.xyz>\nwrote:\n>\n> On Wed, Jul 21, 2021 at 02:15:14AM +0000, wangyukun@fujitsu.com wrote:\n> > 'noError' argument was added at commit ea1b99a661,\n> > but it seems to be neglected in euc_tw_and_big5.c Line 289.\n> > please see the attachment.\n>\n> That sounds right to me. Double-checking the area, I am not seeing\n> another portion of the code to fix.\n\nPushed, but I forgot to give you review credit, sorry about that. Thanks\nfor taking a look!\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 21 Jul 2021 09:47:31 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: add 'noError' to euc_tw_and_big5.c"
}
] |
[
{
"msg_contents": "I'm working on a patch [1] to get the planner to consider adding\nPathKeys to satisfy ORDER BY / DISTINCT aggregates. I think this has\nled me to discover some problems with postgres_fdw's handling of\npushing down ORDER BY clauses into the foreign server.\n\nThe following test exists in the postgres_fdw module:\n\ncreate operator class my_op_class for type int using btree family\nmy_op_family as\n operator 1 public.<^,\n operator 3 public.=^,\n operator 5 public.>^,\n function 1 my_op_cmp(int, int);\n-- This will not be pushed as user defined sort operator is not part of the\n-- extension yet.\nexplain (verbose, costs off)\nselect array_agg(c1 order by c1 using operator(public.<^)) from ft2\nwhere c2 = 6 and c1 < 100 group by c2;\n QUERY PLAN\n--------------------------------------------------------------------------------------------\n GroupAggregate\n Output: array_agg(c1 ORDER BY c1 USING <^ NULLS LAST), c2\n Group Key: ft2.c2\n -> Foreign Scan on public.ft2\n Output: c1, c2\n Remote SQL: SELECT \"C 1\", c2 FROM \"S 1\".\"T 1\" WHERE ((\"C 1\" <\n100)) AND ((c2 = 6))\n(6 rows)\n\nHere the test claims that it wants to ensure that the order by using\noperator(public.<^) is not pushed down into the foreign scan.\nHowever, unless I'm mistaken, it seems there's a completely wrong\nassumption there that the planner would even attempt that. 
In current\nmaster we don't add PathKeys for ORDER BY aggregates, why would that\nsort get pushed down in the first place?\n\nIf I adjust that query to something that would have the planner set\npathkeys for, it does push the ORDER BY to the foreign server without\nany consideration that the sort operator is not shippable to the\nforeign server.\n\npostgres=# explain verbose select * from ft2 order by c1 using\noperator(public.<^);\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------\n Foreign Scan on public.ft2 (cost=100.28..169.27 rows=1000 width=88)\n Output: c1, c2, c3, c4, c5, c6, c7, c8\n Remote SQL: SELECT \"C 1\", c2, c3, c4, c5, c6, c7, c8 FROM \"S 1\".\"T\n1\" ORDER BY \"C 1\" ASC NULLS LAST\n(3 rows)\n\nAm I missing something here, or is postgres_fdw.c's\nget_useful_pathkeys_for_relation() just broken?\n\nDavid\n\n[1] https://www.postgresql.org/message-id/flat/1882015.KPgzjnsp5C%40aivenronan#159e89188e172ca38cb28ef7c5be9b2c\n\n\n",
"msg_date": "Wed, 21 Jul 2021 14:25:00 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "ORDER BY pushdowns seem broken in postgres_fdw"
},
{
"msg_contents": "Le mercredi 21 juillet 2021, 04:25:00 CEST David Rowley a écrit :\n> Here the test claims that it wants to ensure that the order by using\n> operator(public.<^) is not pushed down into the foreign scan.\n> However, unless I'm mistaken, it seems there's a completely wrong\n> assumption there that the planner would even attempt that. In current\n> master we don't add PathKeys for ORDER BY aggregates, why would that\n> sort get pushed down in the first place?\n\nThe whole aggregate, including it's order by clause, can be pushed down so \nthere is nothing related to pathkeys here.\n\n> \n> If I adjust that query to something that would have the planner set\n> pathkeys for, it does push the ORDER BY to the foreign server without\n> any consideration that the sort operator is not shippable to the\n> foreign server.\n> \n> Am I missing something here, or is postgres_fdw.c's\n> get_useful_pathkeys_for_relation() just broken?\n\nI think you're right, we need to add a check if the opfamily is shippable. \nI'll submit a patch for that including regression tests.\n\nRegards,\n\n-- \nRonan Dunklau\n\n\n\n\n",
"msg_date": "Wed, 21 Jul 2021 11:05:14 +0200",
"msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>",
"msg_from_op": false,
"msg_subject": "Re: ORDER BY pushdowns seem broken in postgres_fdw"
},
{
"msg_contents": "Le mercredi 21 juillet 2021, 11:05:14 CEST Ronan Dunklau a écrit :\n> Le mercredi 21 juillet 2021, 04:25:00 CEST David Rowley a écrit :\n> > Here the test claims that it wants to ensure that the order by using\n> > operator(public.<^) is not pushed down into the foreign scan.\n> > However, unless I'm mistaken, it seems there's a completely wrong\n> > assumption there that the planner would even attempt that. In current\n> > master we don't add PathKeys for ORDER BY aggregates, why would that\n> > sort get pushed down in the first place?\n> \n> The whole aggregate, including it's order by clause, can be pushed down so\n> there is nothing related to pathkeys here.\n> \n> > If I adjust that query to something that would have the planner set\n> > pathkeys for, it does push the ORDER BY to the foreign server without\n> > any consideration that the sort operator is not shippable to the\n> > foreign server.\n> > \n> > Am I missing something here, or is postgres_fdw.c's\n> > get_useful_pathkeys_for_relation() just broken?\n> \n> I think you're right, we need to add a check if the opfamily is shippable.\n> I'll submit a patch for that including regression tests.\n> \n\nIn fact the support for generating the correct USING clause was inexistent \ntoo, so that needed a bit more work.\n\nThe attached patch does the following:\n - verify the opfamily is shippable to keep pathkeys\n - generate a correct order by clause using the actual operator.\n\nThe second part needed a bit of refactoring: the find_em_expr_for_input_target \nand find_em_expr_for_rel need to return the whole EquivalenceMember, because we \ncan't know the type used by the opfamily from the expression (example: the \nexpression could be of type intarray, but the type used by the opfamily could \nbe anyarray).\n\nI also moved the \"USING <operator>\"' string generation to a separate function \nsince it's now used by appendAggOrderBy and appendOrderByClause.\n\nThe find_em_expr_for_rel is exposed 
in optimizer/paths.h, so I kept the \nexisting function which returns the expr directly in case it is used out of \ntree. \n\n\n\n-- \nRonan Dunklau",
"msg_date": "Wed, 21 Jul 2021 14:28:30 +0200",
"msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>",
"msg_from_op": false,
"msg_subject": "Re: ORDER BY pushdowns seem broken in postgres_fdw"
},
{
"msg_contents": "On Thu, 22 Jul 2021 at 00:28, Ronan Dunklau <ronan.dunklau@aiven.io> wrote:\n> The attached patch does the following:\n> - verify the opfamily is shippable to keep pathkeys\n> - generate a correct order by clause using the actual operator.\n\nThanks for writing the patch.\n\nThis is just a very superficial review. I've not spent a great deal\nof time looking at postgres_fdw code, so would rather some eyes that\nwere more familiar with the code looked too.\n\n1. This comment needs to be updated. It still mentions\nis_foreign_expr, which you're no longer calling.\n\n * is_foreign_expr would detect volatile expressions as well, but\n * checking ec_has_volatile here saves some cycles.\n */\n- if (pathkey_ec->ec_has_volatile ||\n- !(em_expr = find_em_expr_for_rel(pathkey_ec, rel)) ||\n- !is_foreign_expr(root, rel, em_expr))\n+ if (!is_foreign_pathkey(root, rel, pathkey))\n\n2. This is not a very easy return condition to read:\n\n+ return (!pathkey_ec->ec_has_volatile &&\n+ (em = find_em_for_rel(pathkey_ec, baserel)) &&\n+ is_foreign_expr(root, baserel, em->em_expr) &&\n+ is_shippable(pathkey->pk_opfamily, OperatorFamilyRelationId, fpinfo));\n\nI think it would be nicer to break that down into something easier on\nthe eyes that could be commented a little more.\n\n3. This comment is no longer true:\n\n * Find an equivalence class member expression, all of whose Vars, come from\n * the indicated relation.\n */\n-Expr *\n-find_em_expr_for_rel(EquivalenceClass *ec, RelOptInfo *rel)\n+EquivalenceMember*\n+find_em_for_rel(EquivalenceClass *ec, RelOptInfo *rel)\n\nAlso, missing space after EquivalenceMember.\n\nThe comment can just be moved down to:\n\n+Expr *\n+find_em_expr_for_rel(EquivalenceClass *ec, RelOptInfo *rel)\n+{\n+ EquivalenceMember *em = find_em_for_rel(ec, rel);\n+ return em ? em->em_expr : NULL;\n+}\n\nand you can rewrite the one for find_em_for_rel.\n\nDavid\n\n\n",
"msg_date": "Thu, 22 Jul 2021 01:45:15 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: ORDER BY pushdowns seem broken in postgres_fdw"
},
{
"msg_contents": "Le mercredi 21 juillet 2021 15:45:15 CEST, vous avez écrit :\n> On Thu, 22 Jul 2021 at 00:28, Ronan Dunklau <ronan.dunklau@aiven.io> wrote:\n> > The attached patch does the following:\n> > - verify the opfamily is shippable to keep pathkeys\n> > - generate a correct order by clause using the actual operator.\n> \n> Thanks for writing the patch.\n> \n> This is just a very superficial review. I've not spent a great deal\n> of time looking at postgres_fdw code, so would rather some eyes that\n> were more familiar with the code looked too.\n\nThank you for the review.\n\n> \n> 1. This comment needs to be updated. It still mentions\n> is_foreign_expr, which you're no longer calling.\n> \n> * is_foreign_expr would detect volatile expressions as well, but\n> * checking ec_has_volatile here saves some cycles.\n> */\n> - if (pathkey_ec->ec_has_volatile ||\n> - !(em_expr = find_em_expr_for_rel(pathkey_ec, rel)) ||\n> - !is_foreign_expr(root, rel, em_expr))\n> + if (!is_foreign_pathkey(root, rel, pathkey))\n> \nDone. By the way, the comment just above mentions we don't have a way to use a \nprefix pathkey, but I suppose we should revisit that now that we have \nIncrementalSort. I'll mark it in my todo list for another patch.\n\n> 2. This is not a very easy return condition to read:\n> \n> + return (!pathkey_ec->ec_has_volatile &&\n> + (em = find_em_for_rel(pathkey_ec, baserel)) &&\n> + is_foreign_expr(root, baserel, em->em_expr) &&\n> + is_shippable(pathkey->pk_opfamily, OperatorFamilyRelationId, fpinfo));\n> \n> I think it would be nicer to break that down into something easier on\n> the eyes that could be commented a little more.\n\nDone, let me know what you think about it.\n\n> \n> 3. 
This comment is no longer true:\n> \n> * Find an equivalence class member expression, all of whose Vars, come\n> from * the indicated relation.\n> */\n> -Expr *\n> -find_em_expr_for_rel(EquivalenceClass *ec, RelOptInfo *rel)\n> +EquivalenceMember*\n> +find_em_for_rel(EquivalenceClass *ec, RelOptInfo *rel)\n> \n> Also, missing space after EquivalenceMember.\n> \n> The comment can just be moved down to:\n> \n> +Expr *\n> +find_em_expr_for_rel(EquivalenceClass *ec, RelOptInfo *rel)\n> +{\n> + EquivalenceMember *em = find_em_for_rel(ec, rel);\n> + return em ? em->em_expr : NULL;\n> +}\n> \n> and you can rewrite the one for find_em_for_rel.\n\nI have done it the other way around. I'm not sure we really need to keep the \nfind_em_expr_for_rel function on HEAD. If we decide to backpatch, it would need \nto be kept though. \n\n-- \nRonan Dunklau",
"msg_date": "Wed, 21 Jul 2021 16:33:10 +0200",
"msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>",
"msg_from_op": false,
"msg_subject": "Re: ORDER BY pushdowns seem broken in postgres_fdw"
},
{
"msg_contents": "Em qua., 21 de jul. de 2021 às 11:33, Ronan Dunklau <ronan.dunklau@aiven.io>\nescreveu:\n\n> Le mercredi 21 juillet 2021 15:45:15 CEST, vous avez écrit :\n> > On Thu, 22 Jul 2021 at 00:28, Ronan Dunklau <ronan.dunklau@aiven.io>\n> wrote:\n> > > The attached patch does the following:\n> > > - verify the opfamily is shippable to keep pathkeys\n> > > - generate a correct order by clause using the actual operator.\n> >\n> > Thanks for writing the patch.\n> >\n> > This is just a very superficial review. I've not spent a great deal\n> > of time looking at postgres_fdw code, so would rather some eyes that\n> > were more familiar with the code looked too.\n>\n> Thank you for the review.\n>\n> >\n> > 1. This comment needs to be updated. It still mentions\n> > is_foreign_expr, which you're no longer calling.\n> >\n> > * is_foreign_expr would detect volatile expressions as well, but\n> > * checking ec_has_volatile here saves some cycles.\n> > */\n> > - if (pathkey_ec->ec_has_volatile ||\n> > - !(em_expr = find_em_expr_for_rel(pathkey_ec, rel)) ||\n> > - !is_foreign_expr(root, rel, em_expr))\n> > + if (!is_foreign_pathkey(root, rel, pathkey))\n> >\n> Done. By the way, the comment just above mentions we don't have a way to\n> use a\n> prefix pathkey, but I suppose we should revisit that now that we have\n> IncrementalSort. I'll mark it in my todo list for another patch.\n>\n> > 2. This is not a very easy return condition to read:\n> >\n> > + return (!pathkey_ec->ec_has_volatile &&\n> > + (em = find_em_for_rel(pathkey_ec, baserel)) &&\n> > + is_foreign_expr(root, baserel, em->em_expr) &&\n> > + is_shippable(pathkey->pk_opfamily, OperatorFamilyRelationId, fpinfo));\n> >\n> > I think it would be nicer to break that down into something easier on\n> > the eyes that could be commented a little more.\n>\n> Done, let me know what you think about it.\n>\n> >\n> > 3. 
This comment is no longer true:\n> >\n> > * Find an equivalence class member expression, all of whose Vars, come\n> > from * the indicated relation.\n> > */\n> > -Expr *\n> > -find_em_expr_for_rel(EquivalenceClass *ec, RelOptInfo *rel)\n> > +EquivalenceMember*\n> > +find_em_for_rel(EquivalenceClass *ec, RelOptInfo *rel)\n> >\n> > Also, missing space after EquivalenceMember.\n> >\n> > The comment can just be moved down to:\n> >\n> > +Expr *\n> > +find_em_expr_for_rel(EquivalenceClass *ec, RelOptInfo *rel)\n> > +{\n> > + EquivalenceMember *em = find_em_for_rel(ec, rel);\n> > + return em ? em->em_expr : NULL;\n> > +}\n> >\n> > and you can rewrite the one for find_em_for_rel.\n>\n> I have done it the other way around. I'm not sure we really need to keep\n> the\n> find_em_expr_for_rel function on HEAD. If we decide to backpatch, it would\n> need\n> to be kept though.\n>\nUnfortunately your patch does not apply clear into the head.\nSo I have a few suggestions on v2, attached with the .txt extension to\navoid cf bot.\nPlease, if ok, make the v3.\n\n1. new version is_foreign_pathke?\n+bool\n+is_foreign_pathkey(PlannerInfo *root,\n+ RelOptInfo *baserel,\n+ PathKey *pathkey)\n+{\n+ EquivalenceClass *pathkey_ec = pathkey->pk_eclass;\n+ EquivalenceMember *em;\n+\n+ /*\n+ * is_foreign_expr would detect volatile expressions as well, but\n+ * checking ec_has_volatile here saves some cycles.\n+ */\n+ if (pathkey_ec->ec_has_volatile)\n+ return false;\n+\n+ /*\n+ * Found member's expression is foreign?\n+ */\n+ em = find_em_for_rel(pathkey_ec, baserel);\n+ if (em != NULL && is_foreign_expr(root, baserel, em->em_expr))\n+ {\n+ PgFdwRelationInfo *fpinfo = (PgFdwRelationInfo *) baserel->fdw_private;\n+\n+ /*\n+ * Operator family is shippable?\n+ */\n+ return is_shippable(pathkey->pk_opfamily, OperatorFamilyRelationId,\nfpinfo);\n+ }\n+\n+ return false;\n+}\n\n2. appendOrderbyUsingClause function\nPut the buffer actions together?\n\n3. 
Apply style Postgres?\n+ if (!HeapTupleIsValid(tuple))\n+ {\n+ elog(ERROR, \"cache lookup failed for operator family %u\",\npathkey->pk_opfamily);\n+ }\n\n4. Assertion not ok here?\n+ em = find_em_for_rel(pathkey->pk_eclass, baserel);\n+ em_expr = em->em_expr;\n Assert(em_expr != NULL);\n\nfind_em_for_rel function can returns NULL.\nI think that is need deal with em_expr == NULL at runtime.\n\n5. More readable version?\n+find_em_expr_for_rel(EquivalenceClass *ec, RelOptInfo *rel)\n+{\n+ EquivalenceMember *em = find_em_for_rel(ec, rel);\n+\n+ if (em != NULL)\n+ return em->em_expr;\n+\n+ return NULL;\n+}\n\nregards,\nRanier Vilela",
"msg_date": "Wed, 21 Jul 2021 21:16:52 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ORDER BY pushdowns seem broken in postgres_fdw"
},
{
"msg_contents": "Le jeudi 22 juillet 2021, 02:16:52 CEST Ranier Vilela a écrit :\n> Unfortunately your patch does not apply clear into the head.\n> So I have a few suggestions on v2, attached with the .txt extension to\n> avoid cf bot.\n> Please, if ok, make the v3.\n\nHum weird, it applied cleanly for me, and was formatted using git show which I \nadmit is not ideal. Please find it reattached. \n\n\n> \n> 2. appendOrderbyUsingClause function\n> Put the buffer actions together?\n> \nNot sure what you mean here ?\n\n> 3. Apply style Postgres?\n> + if (!HeapTupleIsValid(tuple))\n> + {\n> + elog(ERROR, \"cache lookup failed for operator family %u\",\n> pathkey->pk_opfamily);\n> + }\n> \n\nGood catch ! \n\n\n> 4. Assertion not ok here?\n> + em = find_em_for_rel(pathkey->pk_eclass, baserel);\n> + em_expr = em->em_expr;\n> Assert(em_expr != NULL);\n> \n\nIf we are here there should never be a case where the em can't be found. I \nmoved the assertion where it makes sense though.\n\n\n\nRegards,\n\n\n-- \nRonan Dunklau",
"msg_date": "Thu, 22 Jul 2021 09:00:20 +0200",
"msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>",
"msg_from_op": false,
"msg_subject": "Re: ORDER BY pushdowns seem broken in postgres_fdw"
},
{
"msg_contents": "On Thu, 22 Jul 2021 at 19:00, Ronan Dunklau <ronan.dunklau@aiven.io> wrote:\n> Please find it reattached.\n\n+-- This will not be pushed either\n+explain verbose select * from ft2 order by c1 using operator(public.<^);\n+ QUERY PLAN\n+-------------------------------------------------------------------------------\n+ Sort (cost=190.83..193.33 rows=1000 width=142)\n\n\nCan you also use explain (verbose, costs off) the same as the other\ntests in that area. Having the costs there would never survive a run\nof the buildfarm. Different hardware will produce different costs, e.g\n32-bit hardware might cost cheaper due to narrower widths.\n\nHistory lesson: costs off was added so we could test plans. Before\nthat, I don't think that the regression tests had any coverage for\nplans. Older test files still likely lack much testing with EXPLAIN.\n\nDavid\n\n\n",
"msg_date": "Thu, 22 Jul 2021 19:44:54 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: ORDER BY pushdowns seem broken in postgres_fdw"
},
{
"msg_contents": "Le jeudi 22 juillet 2021, 09:44:54 CEST David Rowley a écrit :\n> +-- This will not be pushed either\n> +explain verbose select * from ft2 order by c1 using operator(public.<^);\n> + QUERY PLAN\n> +---------------------------------------------------------------------------\n> ---- + Sort (cost=190.83..193.33 rows=1000 width=142)\n> \n> \n> Can you also use explain (verbose, costs off) the same as the other\n> tests in that area. Having the costs there would never survive a run\n> of the buildfarm. Different hardware will produce different costs, e.g\n> 32-bit hardware might cost cheaper due to narrower widths.\n> \n\nSorry about that. Here it is. \n\n\nRegards,\n\n-- \nRonan Dunklau",
"msg_date": "Thu, 22 Jul 2021 10:49:14 +0200",
"msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>",
"msg_from_op": false,
"msg_subject": "Re: ORDER BY pushdowns seem broken in postgres_fdw"
},
{
"msg_contents": "Em qui., 22 de jul. de 2021 às 04:00, Ronan Dunklau <ronan.dunklau@aiven.io>\nescreveu:\n\n> Le jeudi 22 juillet 2021, 02:16:52 CEST Ranier Vilela a écrit :\n> > Unfortunately your patch does not apply clear into the head.\n> > So I have a few suggestions on v2, attached with the .txt extension to\n> > avoid cf bot.\n> > Please, if ok, make the v3.\n>\n> Hum weird, it applied cleanly for me, and was formatted using git show\n> which I\n> admit is not ideal. Please find it reattached.\n>\nranier@notebook2:/usr/src/postgres$ git apply <\nv2_fix_postgresfdw_orderby_handling.patch\nerror: falha no patch: contrib/postgres_fdw/deparse.c:37\nerror: contrib/postgres_fdw/deparse.c: patch does not apply\nerror: falha no patch: contrib/postgres_fdw/expected/postgres_fdw.out:3168\nerror: contrib/postgres_fdw/expected/postgres_fdw.out: patch does not apply\nerror: falha no patch: contrib/postgres_fdw/postgres_fdw.c:916\nerror: contrib/postgres_fdw/postgres_fdw.c: patch does not apply\nerror: falha no patch: contrib/postgres_fdw/postgres_fdw.h:165\nerror: contrib/postgres_fdw/postgres_fdw.h: patch does not apply\nerror: falha no patch: contrib/postgres_fdw/sql/postgres_fdw.sql:873\nerror: contrib/postgres_fdw/sql/postgres_fdw.sql: patch does not apply\nerror: falha no patch: src/backend/optimizer/path/equivclass.c:932\nerror: src/backend/optimizer/path/equivclass.c: patch does not apply\nerror: falha no patch: src/include/optimizer/paths.h:144\nerror: src/include/optimizer/paths.h: patch does not apply\n\n\n>\n>\n> >\n> > 2. appendOrderbyUsingClause function\n> > Put the buffer actions together?\n> >\n> Not sure what you mean here ?\n>\n+ appendStringInfoString(buf, \" USING \");\n+ deparseOperatorName(buf, operform);\n\n\n>\n> > 3. Apply style Postgres?\n> > + if (!HeapTupleIsValid(tuple))\n> > + {\n> > + elog(ERROR, \"cache lookup failed for operator family %u\",\n> > pathkey->pk_opfamily);\n> > + }\n> >\n>\n> Good catch !\n>\n>\n> > 4. 
Assertion not ok here?\n> > + em = find_em_for_rel(pathkey->pk_eclass, baserel);\n> > + em_expr = em->em_expr;\n> > Assert(em_expr != NULL);\n> >\n>\n> If we are here there should never be a case where the em can't be found. I\n> moved the assertion where it makes sense though.\n>\n> Your version of function is_foreign_pathkey (v4),\nnot reduce scope the variable PgFdwRelationInfo *fpinfo.\nI still prefer the v3 version.\n\nThe C ternary operator ? : ;\nIt's nothing more than a disguised if else\n\nregards,\nRanier Vilela",
"msg_date": "Thu, 22 Jul 2021 06:00:15 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ORDER BY pushdowns seem broken in postgres_fdw"
},
{
"msg_contents": "On Thu, 22 Jul 2021 at 20:49, Ronan Dunklau <ronan.dunklau@aiven.io> wrote:\n>\n> Le jeudi 22 juillet 2021, 09:44:54 CEST David Rowley a écrit :\n> > Can you also use explain (verbose, costs off) the same as the other\n> > tests in that area. Having the costs there would never survive a run\n> > of the buildfarm. Different hardware will produce different costs, e.g\n> > 32-bit hardware might cost cheaper due to narrower widths.\n> >\n>\n> Sorry about that. Here it is.\n\nI had a look over the v5 patch and noticed a few issues and a few\nthings that could be improved.\n\nThis is not ok:\n\n+ tuple = SearchSysCache4(AMOPSTRATEGY,\n+ ObjectIdGetDatum(pathkey->pk_opfamily),\n+ em->em_datatype,\n+ em->em_datatype,\n+ pathkey->pk_strategy);\n\nSearchSysCache* expects Datum inputs, so you must use the *GetDatum()\nmacro for each input that isn't already a Datum.\n\nI also:\n1. Changed the error message for when that lookup fails so that it's\nthe same as the others that perform a lookup with AMOPSTRATEGY.\n2. Put back the comment in equivclass.c for find_em_expr_for_rel. I\nsaw no reason that comment should be changed when the function does\nexactly what it did before.\n3. Renamed appendOrderbyUsingClause to appendOrderBySuffix. I wasn't\nhappy that the name indicated it was only handling USING clauses when\nit also handled ASC/DESC. I also moved in the NULLS FIRST/LAST stuff\nin there\n4. Adjusted is_foreign_pathkey() to make it easier to read and do\nis_shippable() before calling find_em_expr_for_rel(). I didn't see\nthe need to call find_em_expr_for_rel() when is_shippable() returned\nfalse.\n5. 
Adjusted find_em_expr_for_rel() to remove the ternary operator.\n\nI've attached what I ended up with.\n\nIt seems that it was the following commit that introduced the ability\nfor sorts to be pushed down to the foreign server, so it would be good\nif the authors of that patch could look over this.\n\ncommit f18c944b6137329ac4a6b2dce5745c5dc21a8578\nAuthor: Robert Haas <rhaas@postgresql.org>\nDate: Tue Nov 3 12:46:06 2015 -0500\n\n postgres_fdw: Add ORDER BY to some remote SQL queries.\n\nDavid",
"msg_date": "Tue, 27 Jul 2021 13:19:18 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: ORDER BY pushdowns seem broken in postgres_fdw"
},
{
"msg_contents": "Le mardi 27 juillet 2021, 03:19:18 CEST David Rowley a écrit :\n> On Thu, 22 Jul 2021 at 20:49, Ronan Dunklau <ronan.dunklau@aiven.io> wrote:\n> > Le jeudi 22 juillet 2021, 09:44:54 CEST David Rowley a écrit :\n> > > Can you also use explain (verbose, costs off) the same as the other\n> > > tests in that area. Having the costs there would never survive a run\n> > > of the buildfarm. Different hardware will produce different costs, e.g\n> > > 32-bit hardware might cost cheaper due to narrower widths.\n> > \n> > Sorry about that. Here it is.\n> \n> I had a look over the v5 patch and noticed a few issues and a few\n> things that could be improved.\n\nThank you.\n\n> \n> This is not ok:\n> \n> + tuple = SearchSysCache4(AMOPSTRATEGY,\n> + ObjectIdGetDatum(pathkey->pk_opfamily),\n> + em->em_datatype,\n> + em->em_datatype,\n> + pathkey->pk_strategy);\n> \n> SearchSysCache* expects Datum inputs, so you must use the *GetDatum()\n> macro for each input that isn't already a Datum.\n\nNoted.\n\n> \n> I also:\n> 1. Changed the error message for when that lookup fails so that it's\n> the same as the others that perform a lookup with AMOPSTRATEGY.\n> 2. Put back the comment in equivclass.c for find_em_expr_for_rel. I\n> saw no reason that comment should be changed when the function does\n> exactly what it did before.\n> 3. Renamed appendOrderbyUsingClause to appendOrderBySuffix. I wasn't\n> happy that the name indicated it was only handling USING clauses when\n> it also handled ASC/DESC. I also moved in the NULLS FIRST/LAST stuff\n> in there\n\nI agree that name is better.\n\n\n> 4. Adjusted is_foreign_pathkey() to make it easier to read and do\n> is_shippable() before calling find_em_expr_for_rel(). I didn't see\n> the need to call find_em_expr_for_rel() when is_shippable() returned\n> false.\n> 5. 
Adjusted find_em_expr_for_rel() to remove the ternary operator.\n> \n> I've attached what I ended up with.\n\nLooks good to me.\n\n> \n> It seems that it was the following commit that introduced the ability\n> for sorts to be pushed down to the foreign server, so it would be good\n> if the authors of that patch could look over this.\n\nOne thing in particular I was not sure about was how to fetch the operator \nassociated with the path key ordering. I chose to go through the opfamily \nrecorded on the member, but maybe locating the original SortGroupClause by its \nref and getting the operator number here woud have worked. It seems more \nstraightforward like this though.\n\n\n-- \nRonan Dunklau\n\n\n\n\n",
"msg_date": "Tue, 27 Jul 2021 07:20:17 +0200",
"msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>",
"msg_from_op": false,
"msg_subject": "Re: ORDER BY pushdowns seem broken in postgres_fdw"
},
{
"msg_contents": "On Tue, 27 Jul 2021 at 17:20, Ronan Dunklau <ronan.dunklau@aiven.io> wrote:\n> One thing in particular I was not sure about was how to fetch the operator\n> associated with the path key ordering. I chose to go through the opfamily\n> recorded on the member, but maybe locating the original SortGroupClause by its\n> ref and getting the operator number here woud have worked. It seems more\n> straightforward like this though.\n\nI spent a bit of time trying to find a less invasive way of doing that\nand didn't manage to come up with anything. I'm interested to hear if\nanyone else has any better ideas.\n\nDavid\n\n\n",
"msg_date": "Tue, 27 Jul 2021 18:05:19 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: ORDER BY pushdowns seem broken in postgres_fdw"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, failed\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: not tested\n\nApplied the v6 patch to master branch and ran regression test for contrib, the result was \"All tests successful.\"",
"msg_date": "Fri, 03 Sep 2021 20:54:25 +0000",
"msg_from": "David Zhang <david.zhang@highgo.ca>",
"msg_from_op": false,
"msg_subject": "Re: ORDER BY pushdowns seem broken in postgres_fdw"
},
{
"msg_contents": "Le vendredi 3 septembre 2021, 22:54:25 CEST David Zhang a écrit :\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, failed\n> Implements feature: tested, passed\n> Spec compliant: not tested\n> Documentation: not tested\n> \n> Applied the v6 patch to master branch and ran regression test for contrib,\n> the result was \"All tests successful.\"\n\nWhat kind of error did you get running make installcheck-world ? If it passed \nthe make check for contrib, I can't see why it would fail running make \ninstallcheck-world. \n\nIn any case, I just checked and running make installcheck-world doesn't \nproduce any error.\n\nSince HEAD had moved a bit since the last version, I rebased the patch, \nresulting in the attached v7.\n\nBest regards,\n\n--\nRonan Dunklau",
"msg_date": "Mon, 06 Sep 2021 10:16:21 +0200",
"msg_from": "Ronan Dunklau <ronan@dunklau.fr>",
"msg_from_op": false,
"msg_subject": "Re: ORDER BY pushdowns seem broken in postgres_fdw"
},
{
"msg_contents": "On Mon, Sep 6, 2021 at 1:17 AM Ronan Dunklau <ronan@dunklau.fr> wrote:\n\n> Le vendredi 3 septembre 2021, 22:54:25 CEST David Zhang a écrit :\n> > The following review has been posted through the commitfest application:\n> > make installcheck-world: tested, failed\n> > Implements feature: tested, passed\n> > Spec compliant: not tested\n> > Documentation: not tested\n> >\n> > Applied the v6 patch to master branch and ran regression test for\n> contrib,\n> > the result was \"All tests successful.\"\n>\n> What kind of error did you get running make installcheck-world ? If it\n> passed\n> the make check for contrib, I can't see why it would fail running make\n> installcheck-world.\n>\n> In any case, I just checked and running make installcheck-world doesn't\n> produce any error.\n>\n> Since HEAD had moved a bit since the last version, I rebased the patch,\n> resulting in the attached v7.\n>\n> Best regards,\n>\n> --\n> Ronan Dunklau\n>\nHi,\nbq. a pushed-down order by could return wrong results.\n\nCan you briefly summarize the approach for fixing the bug in the\ndescription ?\n\n+ * Returns true if it's safe to push down a sort as described by 'pathkey'\nto\n+ * the foreign server\n+ */\n+bool\n+is_foreign_pathkey(PlannerInfo *root,\n\nIt would be better to name the method which reflects whether pushdown is\nsafe. e.g. is_pathkey_safe_for_pushdown.\n\nCheers",
"msg_date": "Mon, 6 Sep 2021 02:25:39 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: ORDER BY pushdowns seem broken in postgres_fdw"
},
{
"msg_contents": "Le lundi 6 septembre 2021, 11:25:39 CEST Zhihong Yu a écrit :\n> On Mon, Sep 6, 2021 at 1:17 AM Ronan Dunklau <ronan@dunklau.fr> wrote:\n> > Le vendredi 3 septembre 2021, 22:54:25 CEST David Zhang a écrit :\n> > > The following review has been posted through the commitfest application:\n> > > make installcheck-world: tested, failed\n> > > Implements feature: tested, passed\n> > > Spec compliant: not tested\n> > > Documentation: not tested\n> > > \n> > > Applied the v6 patch to master branch and ran regression test for\n> > \n> > contrib,\n> > \n> > > the result was \"All tests successful.\"\n> > \n> > What kind of error did you get running make installcheck-world ? If it\n> > passed\n> > the make check for contrib, I can't see why it would fail running make\n> > installcheck-world.\n> > \n> > In any case, I just checked and running make installcheck-world doesn't\n> > produce any error.\n> > \n> > Since HEAD had moved a bit since the last version, I rebased the patch,\n> > resulting in the attached v7.\n> > \n> > Best regards,\n> > \n> > --\n> > Ronan Dunklau\n> \n> Hi,\n> bq. a pushed-down order by could return wrong results.\n> \n> Can you briefly summarize the approach for fixing the bug in the\n> description ?\n\nDone, let me know what you think about it.\n\n> \n> + * Returns true if it's safe to push down a sort as described by 'pathkey'\n> to\n> + * the foreign server\n> + */\n> +bool\n> +is_foreign_pathkey(PlannerInfo *root,\n> \n> It would be better to name the method which reflects whether pushdown is\n> safe. e.g. is_pathkey_safe_for_pushdown.\n\nThe convention used here is the same one as in is_foreign_expr and \nis_foreign_param, which are also related to pushdown-safety. \n\n-- \nRonan Dunklau",
"msg_date": "Mon, 06 Sep 2021 11:41:26 +0200",
"msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>",
"msg_from_op": false,
"msg_subject": "Re: ORDER BY pushdowns seem broken in postgres_fdw"
},
{
"msg_contents": "On Mon, Sep 6, 2021 at 2:41 AM Ronan Dunklau <ronan.dunklau@aiven.io> wrote:\n\n> Le lundi 6 septembre 2021, 11:25:39 CEST Zhihong Yu a écrit :\n> > On Mon, Sep 6, 2021 at 1:17 AM Ronan Dunklau <ronan@dunklau.fr> wrote:\n> > > Le vendredi 3 septembre 2021, 22:54:25 CEST David Zhang a écrit :\n> > > > The following review has been posted through the commitfest\n> application:\n> > > > make installcheck-world: tested, failed\n> > > > Implements feature: tested, passed\n> > > > Spec compliant: not tested\n> > > > Documentation: not tested\n> > > >\n> > > > Applied the v6 patch to master branch and ran regression test for\n> > >\n> > > contrib,\n> > >\n> > > > the result was \"All tests successful.\"\n> > >\n> > > What kind of error did you get running make installcheck-world ? If it\n> > > passed\n> > > the make check for contrib, I can't see why it would fail running make\n> > > installcheck-world.\n> > >\n> > > In any case, I just checked and running make installcheck-world doesn't\n> > > produce any error.\n> > >\n> > > Since HEAD had moved a bit since the last version, I rebased the patch,\n> > > resulting in the attached v7.\n> > >\n> > > Best regards,\n> > >\n> > > --\n> > > Ronan Dunklau\n> >\n> > Hi,\n> > bq. a pushed-down order by could return wrong results.\n> >\n> > Can you briefly summarize the approach for fixing the bug in the\n> > description ?\n>\n> Done, let me know what you think about it.\n>\n> >\n> > + * Returns true if it's safe to push down a sort as described by\n> 'pathkey'\n> > to\n> > + * the foreign server\n> > + */\n> > +bool\n> > +is_foreign_pathkey(PlannerInfo *root,\n> >\n> > It would be better to name the method which reflects whether pushdown is\n> > safe. e.g. is_pathkey_safe_for_pushdown.\n>\n> The convention used here is the same one as in is_foreign_expr and\n> is_foreign_param, which are also related to pushdown-safety.\n>\n> --\n> Ronan Dunklau\n\nHi,\nw.r.t. description:\nbq. 
original operator associated to the pathkey\n\n associated to -> associated with\n\nw.r.t. method name, it is fine to use the current name, considering the\nfunctions it calls don't have pushdown in their names.\n\nCheers",
"msg_date": "Mon, 6 Sep 2021 09:46:37 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: ORDER BY pushdowns seem broken in postgres_fdw"
},
{
"msg_contents": "On 2021-09-06 1:16 a.m., Ronan Dunklau wrote:\n> Le vendredi 3 septembre 2021, 22:54:25 CEST David Zhang a écrit :\n>> The following review has been posted through the commitfest application:\n>> make installcheck-world: tested, failed\n>> Implements feature: tested, passed\n>> Spec compliant: not tested\n>> Documentation: not tested\n>>\n>> Applied the v6 patch to master branch and ran regression test for contrib,\n>> the result was \"All tests successful.\"\n> What kind of error did you get running make installcheck-world ? If it passed\n> the make check for contrib, I can't see why it would fail running make\n> installcheck-world.\nJust to clarify, the error I encountered was not related with patch v6, \nit was related with other extensions.\n> In any case, I just checked and running make installcheck-world doesn't\n> produce any error.\n>\n> Since HEAD had moved a bit since the last version, I rebased the patch,\n> resulting in the attached v7.\n>\n> Best regards,\n>\n> --\n> Ronan Dunklau\n-- \nDavid\n\nSoftware Engineer\nHighgo Software Inc. (Canada)\nwww.highgo.ca\n\n\n",
"msg_date": "Fri, 10 Sep 2021 14:23:48 -0700",
"msg_from": "David Zhang <david.zhang@highgo.ca>",
"msg_from_op": false,
"msg_subject": "Re: ORDER BY pushdowns seem broken in postgres_fdw"
},
{
"msg_contents": "Ronan Dunklau <ronan.dunklau@aiven.io> writes:\n> [ v8-0001-Fix-orderby-handling-in-postgres_fdw.patch ]\n\nI looked through this patch. It's going in the right direction,\nbut I have a couple of nitpicks:\n\n1. There are still some more places that aren't checking shippability\nof the relevant opfamily.\n\n2. The existing usage of find_em_expr_for_rel is fundamentally broken,\nbecause that function will seize on the first EC member that is from the\ngiven rel, whether it's shippable or not. There might be another one\nlater that is shippable, so this is just the wrong API. It's not like\nthis function gives us any useful isolation from the details of ECs,\nbecause postgres_fdw is already looking into those elsewhere, notably\nin find_em_expr_for_input_target --- which has the same order-sensitivity\nbug.\n\nI think that instead of doubling down on a wrong API, we should just\ntake that out and move the logic into postgres_fdw.c. This also has\nthe advantage of producing a patch that's much safer to backpatch,\nbecause it doesn't rely on the core backend getting updated before\npostgres_fdw.so is.\n\nSo hacking on those two points, and doing some additional cleanup,\nled me to the attached v9. (In this patch, the removal of code\nfrom equivclass.c is only meant to be applied to HEAD; we have to\nleave the function in place in the back branches for API stability.)\n\nIf no objections, I think this is committable.\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 30 Mar 2022 19:41:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ORDER BY pushdowns seem broken in postgres_fdw"
},
{
"msg_contents": "Le jeudi 31 mars 2022, 01:41:37 CEST Tom Lane a écrit :\n> I looked through this patch. It's going in the right direction,\n> but I have a couple of nitpicks:\n\nThank you Tom for taking a look at this.\n\n> I think that instead of doubling down on a wrong API, we should just\n> take that out and move the logic into postgres_fdw.c. This also has\n> the advantage of producing a patch that's much safer to backpatch,\n> because it doesn't rely on the core backend getting updated before\n> postgres_fdw.so is.\n\nIt makes total sense.\n\n> If no objections, I think this is committable.\n\nNo objections on my end.\n\n-- \nRonan Dunklau\n\n\n\n\n",
"msg_date": "Thu, 31 Mar 2022 10:02:54 +0200",
"msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>",
"msg_from_op": false,
"msg_subject": "Re: ORDER BY pushdowns seem broken in postgres_fdw"
},
{
"msg_contents": "Ronan Dunklau <ronan.dunklau@aiven.io> writes:\n> Le jeudi 31 mars 2022, 01:41:37 CEST Tom Lane a écrit :\n>> If no objections, I think this is committable.\n\n> No objections on my end.\n\nPushed.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 31 Mar 2022 14:51:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ORDER BY pushdowns seem broken in postgres_fdw"
}
] |
[
{
"msg_contents": "Hi,\n\nthis is a followup to a performance optimization during the conversion of a\ncolumn from a timestamp column to a \"timestamp with tz\" column. The initial\npatch I am referring to is this one:\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git&a=commitdiff&h=3c59263#patch4\n\nand the previous discussion on this list is this one:\n\nhttps://www.postgresql.org/message-id/CANsFX06xN-vPYxM+YXyfLezK9twjtK3dFJcOHoubTXng40muoQ@mail.gmail.com\n\nThe problem is that I have a 60TB+ PG installation for which we need to\nmodify all of the timestamp columns to timestamp with tz. The data in the\ncolumns are already in UTC so we can benefit from the patch listed above.\nYet there are 2 cases in which we are having an issue.\n\n1) Index rebuilds: The patch is only avoiding a rewrite of the table data\nbut is not avoiding a rebuild of the indexes. Following the logic in the\npatch above this should also be avoidable under the same condition\n\n2) Partitioned tables with the timestamp as partition column: In this case\nthe current version does not allow a modification of the column data type\nat all. Yet also following the logic in the patch this can also be allowed\nunder the side condition if no table rewrite is required.\n\nQuestion: What chances to we have to get the optimisations from the patch\nabove also \"promoted\" to the other 2 cases I listed?\n\nCheers,\nPeter",
"msg_date": "Wed, 21 Jul 2021 07:48:28 +0200",
"msg_from": "Peter Volk <peterb.volk@gmx.net>",
"msg_from_op": true,
"msg_subject": "Followup Timestamp to timestamp with TZ conversion"
},
{
"msg_contents": "Peter Volk <peterb.volk@gmx.net> writes:\n> The problem is that I have a 60TB+ PG installation for which we need to\n> modify all of the timestamp columns to timestamp with tz. The data in the\n> columns are already in UTC so we can benefit from the patch listed above.\n> Yet there are 2 cases in which we are having an issue.\n\n> 1) Index rebuilds: The patch is only avoiding a rewrite of the table data\n> but is not avoiding a rebuild of the indexes. Following the logic in the\n> patch above this should also be avoidable under the same condition\n\nI don't think that follows. What we are concerned about when determining\nwhether a heap rewrite can be skipped is whether the stored heap entries\nare bit-compatible between the two data types. To decide that an index\nrebuild is not needed, you'd need to further determine that their sort\norders are equivalent (for a btree index, or who-knows-what semantic\ncondition for other types of indexes). We don't attempt to do that,\nso index rebuilds are always needed.\n\nAs a thought experiment to prove that this is an issue, suppose that\nsomebody invented an unsigned integer type, and made the cast from\nregular int4 follow the rules of a C cast, so that e.g. -1 becomes\n2^32-1. Given that, an ALTER TYPE from int4 to the unsigned type\ncould skip the heap rewrite. But we absolutely would have to rebuild\nany btree index on the column, because the sort ordering of the two\ntypes is different. OTOH, it's quite likely that a hash index would\nnot really need to be rebuilt. So this is a real can of worms and\nwe've not cared to open it.\n\n> 2) Partitioned tables with the timestamp as partition column: In this case\n> the current version does not allow a modification of the column data type\n> at all.\n\nPG's partitioning features are still being built out, but I would not\nhold my breath for that specific thing to change. 
Again, the issue\nis that bit-compatibility of individual values doesn't prove much\nabout comparison semantics, so it's not clear that a change of\ndata type still allows the value-to-partition assignment to be\nidentical. (This is clearly an issue for RANGE partitioning. Maybe\nit could be finessed for HASH or LIST, but you'd still be needing\nsemantic assumptions that go beyond mere bit-compatibility of values.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 22 Jul 2021 11:28:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Followup Timestamp to timestamp with TZ conversion"
},
{
"msg_contents": "Hi Tom,\n\nthanks for the reply, I do understand that if a rewrite of the table\nneeds to be avoided the binary image needs to be the same. Since PG 12\nthere is an optimisation to avoid a rewrite of timestamp columns if\nthey are converted to timestamp with tz and the target tz offset is 0\n\nI am referring to the function\n\nATColumnChangeRequiresRewrite(Node *expr, AttrNumber varattno)\n\nin which the following is checked:\n\n(b/src/backend/commands/tablecmds.c)\n\nelse if (IsA(expr, FuncExpr))\n {\n FuncExpr *f = (FuncExpr *) expr;\n\n switch (f->funcid)\n {\n case F_TIMESTAMPTZ_TIMESTAMP:\n case F_TIMESTAMP_TIMESTAMPTZ:\n if (TimestampTimestampTzRequiresRewrite())\n return true;\n else\n expr = linitial(f->args);\n break;\n default:\n return true;\n }\n\n\nand TimestampTimestampTzRequiresRewrite checks if the offset is 0:\n\n(b/src/backend/utils/adt/timestamp.c)\n\n TimestampTimestampTzRequiresRewrite()\n *\n * Returns false if the TimeZone GUC setting causes timestamp_timestamptz and\n * timestamptz_timestamp to be no-ops, where the return value has the same\n * bits as the argument. Since project convention is to assume a GUC changes\n * no more often than STABLE functions change, the answer is valid that long.\n */\nbool\nTimestampTimestampTzRequiresRewrite(void)\n{\n long offset;\n\n if (pg_get_timezone_offset(session_timezone, &offset) && offset == 0)\n PG_RETURN_BOOL(false);\n PG_RETURN_BOOL(true);\n}\n\nSo in this case it is already proven that there is a binary equality\nbetween the data types timestamp and timestamp with tz if the offset\nis considered with 0. 
Hence this type of optimisation should / could\nalso apply to indexes as well as the columns used in partitions\n\nThanks,\nPeter\n\n\nOn Thu, Jul 22, 2021 at 5:29 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Peter Volk <peterb.volk@gmx.net> writes:\n> > The problem is that I have a 60TB+ PG installation for which we need to\n> > modify all of the timestamp columns to timestamp with tz. The data in the\n> > columns are already in UTC so we can benefit from the patch listed above.\n> > Yet there are 2 cases in which we are having an issue.\n>\n> > 1) Index rebuilds: The patch is only avoiding a rewrite of the table data\n> > but is not avoiding a rebuild of the indexes. Following the logic in the\n> > patch above this should also be avoidable under the same condition\n>\n> I don't think that follows. What we are concerned about when determining\n> whether a heap rewrite can be skipped is whether the stored heap entries\n> are bit-compatible between the two data types. To decide that an index\n> rebuild is not needed, you'd need to further determine that their sort\n> orders are equivalent (for a btree index, or who-knows-what semantic\n> condition for other types of indexes). We don't attempt to do that,\n> so index rebuilds are always needed.\n>\n> As a thought experiment to prove that this is an issue, suppose that\n> somebody invented an unsigned integer type, and made the cast from\n> regular int4 follow the rules of a C cast, so that e.g. -1 becomes\n> 2^32-1. Given that, an ALTER TYPE from int4 to the unsigned type\n> could skip the heap rewrite. But we absolutely would have to rebuild\n> any btree index on the column, because the sort ordering of the two\n> types is different. OTOH, it's quite likely that a hash index would\n> not really need to be rebuilt. 
So this is a real can of worms and\n> we've not cared to open it.\n>\n> > 2) Partitioned tables with the timestamp as partition column: In this case\n> > the current version does not allow a modification of the column data type\n> > at all.\n>\n> PG's partitioning features are still being built out, but I would not\n> hold my breath for that specific thing to change. Again, the issue\n> is that bit-compatibility of individual values doesn't prove much\n> about comparison semantics, so it's not clear that a change of\n> data type still allows the value-to-partition assignment to be\n> identical. (This is clearly an issue for RANGE partitioning. Maybe\n> it could be finessed for HASH or LIST, but you'd still be needing\n> semantic assumptions that go beyond mere bit-compatibility of values.)\n>\n> regards, tom lane\n>\n>\n\n\n",
"msg_date": "Thu, 22 Jul 2021 18:36:41 +0200",
"msg_from": "Peter Volk <peterb.volk@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: Followup Timestamp to timestamp with TZ conversion"
},
{
"msg_contents": "On Thu, Jul 22, 2021 at 11:29 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> As a thought experiment to prove that this is an issue, suppose that\n> somebody invented an unsigned integer type, and made the cast from\n> regular int4 follow the rules of a C cast, so that e.g. -1 becomes\n> 2^32-1. Given that, an ALTER TYPE from int4 to the unsigned type\n> could skip the heap rewrite. But we absolutely would have to rebuild\n> any btree index on the column, because the sort ordering of the two\n> types is different. OTOH, it's quite likely that a hash index would\n> not really need to be rebuilt. So this is a real can of worms and\n> we've not cared to open it.\n\nI agree that it doesn't follow in general. I think it does in the case\nof timestamp and timestamptz, because I don't think either the choice\nof time zone or the fact that we're reckoning relative to a time zone\ncan change which of two timestamps is considered earlier. However, I\nthink the only infrastructure we have for proving that is to look to\nsee whether it's the same operator family in both cases. Because\ntimestamp_ops and timestamptz_ops are separate, that doesn't help\nhere.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 22 Jul 2021 14:48:44 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Followup Timestamp to timestamp with TZ conversion"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I agree that it doesn't follow in general. I think it does in the case\n> of timestamp and timestamptz, because I don't think either the choice\n> of time zone or the fact that we're reckoning relative to a time zone\n> can change which of two timestamps is considered earlier. However, I\n> think the only infrastructure we have for proving that is to look to\n> see whether it's the same operator family in both cases. Because\n> timestamp_ops and timestamptz_ops are separate, that doesn't help\n> here.\n\nRight. It would in fact work for these two types, but we do not have\ninfrastructure that would allow us to know that. I'm not sure about\nyour idea that \"same operator family\" is enough.\n\n(Even for these two types, while a plain btree index should be fine,\nI think it wouldn't be hard to construct expression indexes that\nwould not be compatible. So there's a lot of worms in that can.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 22 Jul 2021 15:09:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Followup Timestamp to timestamp with TZ conversion"
},
{
"msg_contents": "Peter Volk <peterb.volk@gmx.net> writes:\n> thanks for the reply, I do understand that if a rewrite of the table\n> needs to be avoided the binary image needs to be the same. Since PG 12\n> there is an optimisation to avoid a rewrite of timestamp columns if\n> they are converted to timestamp with tz and the target tz offset is 0\n\nYes, I'm very well aware of that optimization. While it's certainly\na hack, it fits within a design that isn't a hack, ie that there are\ncommon, well-defined cases where we can skip the table rewrite.\nHowever, for the reasons I explained before, there are no general-purpose\ncases where we can skip an index build on a type-changed column, so\nthere is no place to insert a similar hack for the timestamp[tz] case.\nI'm unwilling to kluge up ALTER TYPE to the extent that would be needed\nif the result would only be to handle this one case.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 23 Jul 2021 14:07:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Followup Timestamp to timestamp with TZ conversion"
},
{
"msg_contents": "On Fri, Jul 23, 2021 at 2:07 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Yes, I'm very well aware of that optimization. While it's certainly\n> a hack, it fits within a design that isn't a hack, ie that there are\n> common, well-defined cases where we can skip the table rewrite.\n> However, for the reasons I explained before, there are no general-purpose\n> cases where we can skip an index build on a type-changed column, so\n> there is no place to insert a similar hack for the timestamp[tz] case.\n\nWouldn't the hack just go into CheckIndexCompatible()?\n\nYou seemed to think my previous comments about comparing opfamilies\nwere hypothetical but I think we actually already have the\noptimization Peter wants, and it just doesn't apply in this case for\nlack of hacks.\n\nMaybe I am missing something.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 23 Jul 2021 16:49:37 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Followup Timestamp to timestamp with TZ conversion"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Jul 23, 2021 at 2:07 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> However, for the reasons I explained before, there are no general-purpose\n>> cases where we can skip an index build on a type-changed column, so\n>> there is no place to insert a similar hack for the timestamp[tz] case.\n\n> Wouldn't the hack just go into CheckIndexCompatible()?\n\nOh! I went looking for code to skip rebuilding indexes during ALTER TYPE,\nbut I guess I looked in the wrong place, because I missed that somehow.\n\n> You seemed to think my previous comments about comparing opfamilies\n> were hypothetical but I think we actually already have the\n> optimization Peter wants, and it just doesn't apply in this case for\n> lack of hacks.\n\nHmm. Note that what this is checking for is same operator *class* not\nsame operator family (if it were doing the latter, Peter's case would\nalready work). I think it has to do that. Extending my previous\nthought experiment about an unsigned integer type, if someone were to\ninvent one, it would make a lot of sense to include it in integer_ops,\nand then the logic you suggest is toast. (Obviously, the cross-type\ncomparison operators you'd need to write would have to be careful,\nbut you'd almost certainly wish to write them anyway.)\n\nGiven that we require the non-cross-type members of an opclass to be\nimmutable, what this is actually doing may be safe. At least I can't\nconstruct a counterexample after five minutes' thought. On the other\nhand, I'm also a bit confused about how it ever succeeds at all.\nIf we've changed the heap column's type, it should not be using the\nsame opclass anymore (unless the opclass is polymorphic, but that\ncase is rejected too). I'm suspicious that this is just an expensive\nway of writing \"we can only preserve indexes that aren't on the\ncolumn that changed type\".\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 23 Jul 2021 17:47:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Followup Timestamp to timestamp with TZ conversion"
},
{
    "msg_contents": "On Fri, Jul 23, 2021 at 5:47 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > You seemed to think my previous comments about comparing opfamilies\n> > were hypothetical but I think we actually already have the\n> > optimization Peter wants, and it just doesn't apply in this case for\n> > lack of hacks.\n>\n> Hmm. Note that what this is checking for is same operator *class* not\n> same operator family (if it were doing the latter, Peter's case would\n> already work). I think it has to do that. Extending my previous\n> thought experiment about an unsigned integer type, if someone were to\n> invent one, it would make a lot of sense to include it in integer_ops,\n> and then the logic you suggest is toast. (Obviously, the cross-type\n> comparison operators you'd need to write would have to be careful,\n> but you'd almost certainly wish to write them anyway.)\n\nMumble. I hadn't considered that sort of thing. I assumed that when\nthe documentation and/or code comments talked about a compatible\nnotion of equality, it was a strong enough notion of \"compatible\" to\npreclude this sort of case. I'm not really sure why we shouldn't think\nof it that way; the example you give here is reasonable, but\nartificial.\n\n> Given that we require the non-cross-type members of an opclass to be\n> immutable, what this is actually doing may be safe. At least I can't\n> construct a counterexample after five minutes' thought. On the other\n> hand, I'm also a bit confused about how it ever succeeds at all.\n> If we've changed the heap column's type, it should not be using the\n> same opclass anymore (unless the opclass is polymorphic, but that\n> case is rejected too). I'm suspicious that this is just an expensive\n> way of writing \"we can only preserve indexes that aren't on the\n> column that changed type\".\n\nWell, you can change just the typemod, for example, which was a case\nthat motivated this work originally. People wanted to be able to make\nvarchar(10) into varchar(20) without doing a ton of work, and I think\nthis lets that work. I seem to recall that there are at least a few\ncases that involve actually changing the type as well, but at 6pm on a\nFriday evening when I haven't looked at this in years, I can't tell\nyou what they are off the top of my head.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 23 Jul 2021 18:03:42 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Followup Timestamp to timestamp with TZ conversion"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Jul 23, 2021 at 5:47 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Hmm. Note that what this is checking for is same operator *class* not\n>> same operator family (if it were doing the latter, Peter's case would\n>> already work). I think it has to do that. Extending my previous\n>> thought experiment about an unsigned integer type, if someone were to\n>> invent one, it would make a lot of sense to include it in integer_ops,\n>> and then the logic you suggest is toast.\n\n> Mumble. I hadn't considered that sort of thing. I assumed that when\n> the documentation and/or code comments talked about a compatible\n> notion of equality, it was a strong enough notion of \"compatible\" to\n> preclude this sort of case.\n\nFor btree indexes, you need a compatible notion of ordering, not only\nequality. That's really what's breaking my hypothetical case of a uint\ntype. But as long as you implement operators that behave in a consistent\nfashion, whether they interpret the same heap bitpattern the same is not\nsomething that matters for constructing a consistent operator family.\ndatetime_ops (which includes timestamp and timestamptz) is already a\ncounterexample, since unless the timezone is UTC, its operators *don't*\nall agree on what a particular bitpattern means.\n\n>> ... I'm also a bit confused about how it ever succeeds at all.\n\n> Well, you can change just the typemod, for example, which was a case\n> that motivated this work originally.\n\nAh, right. I guess binary-compatible cases such as text and varchar\nwould also fit into that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 23 Jul 2021 18:18:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Followup Timestamp to timestamp with TZ conversion"
},
{
    "msg_contents": "On Fri, Jul 23, 2021 at 6:18 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> For btree indexes, you need a compatible notion of ordering, not only\n> equality. That's really what's breaking my hypothetical case of a uint\n> type. But as long as you implement operators that behave in a consistent\n> fashion, whether they interpret the same heap bitpattern the same is not\n> something that matters for constructing a consistent operator family.\n> datetime_ops (which includes timestamp and timestamptz) is already a\n> counterexample, since unless the timezone is UTC, its operators *don't*\n> all agree on what a particular bitpattern means.\n\nWell, that depends a bit on what \"means\" means. I would argue that the\nmeaning does not depend on the timezone setting, and that the timezone\nmerely controls the way that values are printed. That is, I would say\nthat the meaning is the point in time which the timestamp represents,\nconsidered as an abstract concept, and timezone is merely the user's\nway of asking that point in time to be expressed in a way that's easy\nfor them to understand. Regardless of that philosophical point, I\nthink it must be true that if a and b are timestamps, a < b implies\na::timestamptz < b::timestamptz, and a > b implies a::timestamptz >\nb::timestamptz, and the other way around, which is surely good enough,\nbut wasn't the case in your example. So the question I suppose is\nwhether in your example it's really legitimate to include your uint\ntype in the integer_ops opfamily. I said above that I didn't think so,\nbut I'm less sure now, because I realize that I hadn't read what you wrote\ncarefully enough, and it's not as artificial as I first thought.\n\n> >> ... I'm also a bit confused about how it ever succeeds at all.\n>\n> > Well, you can change just the typemod, for example, which was a case\n> > that motivated this work originally.\n>\n> Ah, right. I guess binary-compatible cases such as text and varchar\n> would also fit into that.\n\nOh, right. That was another one of the cases that motivated that work.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 23 Jul 2021 21:02:29 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Followup Timestamp to timestamp with TZ conversion"
}
] |
[
{
    "msg_contents": "Hi\n\nI found a problem when using tab-completion as follows:\n\nCREATE SUBSCRIPTION my_subscription \nCONNECTION 'host=localhost port=5432 dbname=postgres' [TAB]\n\nThe word 'PUBLICATION' couldn't be auto-completed as expected.\n\nThe reason is that an equal sign inside a single-quoted string is taken as one of the WORD_BREAKS characters, which makes the word count incorrect for the input command string.\n\nThe attached patch fixes the problem: with it, the characters in \"\\t\\n@$><=;|&{() \" are no longer taken as WORD_BREAKS inside a single-quoted input string.\nPlease take a look at the fix and let me know if you find any scenario affected by it.\n\nRegards,\nTang",
"msg_date": "Thu, 22 Jul 2021 04:04:46 +0000",
"msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] support tab-completion for single quote input with equal sign"
},
{
"msg_contents": "On Thursday, July 22, 2021 1:05 PM, tanghy.fnst@fujitsu.com <tanghy.fnst@fujitsu.com> wrote\n>I found a problem when using tab-completion as follows:\n>\n>CREATE SUBSCRIPTION my_subscription \n>CONNECTION 'host=localhost port=5432 dbname=postgres' [TAB]\n>\n>The word 'PUBLICATION' couldn't be auto completed as expected.\n\nAdded above patch in commit fest as follows:\n\nhttps://commitfest.postgresql.org/34/3267/\n\nRegards,\nTang\n\n\n",
"msg_date": "Fri, 23 Jul 2021 05:34:26 +0000",
"msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: [PATCH] support tab-completion for single quote input with equal\n sign"
},
{
"msg_contents": "On Fri, 2021-07-23 at 05:34 +0000, tanghy.fnst@fujitsu.com wrote:\r\n> On Thursday, July 22, 2021 1:05 PM, tanghy.fnst@fujitsu.com <tanghy.fnst@fujitsu.com> wrote\r\n> > I found a problem when using tab-completion as follows:\r\n> > \r\n> > CREATE SUBSCRIPTION my_subscription \r\n> > CONNECTION 'host=localhost port=5432 dbname=postgres' [TAB]\r\n> > \r\n> > The word 'PUBLICATION' couldn't be auto completed as expected.\r\n\r\nHello,\r\n\r\nI applied your patch against HEAD (and did a clean build for good\r\nmeasure) but couldn't get the tab-completion you described -- on my\r\nmachine, `PUBLICATION` still fails to complete. Tab completion is\r\nworking in general, for example with the `SUBSCRIPTION` and\r\n`CONNECTION` keywords.\r\n\r\nIs there additional setup that I need to do?\r\n\r\n--Jacob\r\n",
"msg_date": "Thu, 2 Sep 2021 17:13:35 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] support tab-completion for single quote input with equal\n sign"
},
{
    "msg_contents": "On Friday, September 3, 2021 2:14 AM, Jacob Champion <pchampion@vmware.com> wrote\r\n>I applied your patch against HEAD (and did a clean build for good\r\n>measure) but couldn't get the tab-completion you described -- on my\r\n>machine, `PUBLICATION` still fails to complete. Tab completion is\r\n>working in general, for example with the `SUBSCRIPTION` and\r\n>`CONNECTION` keywords.\r\n>\r\n>Is there additional setup that I need to do?\r\n\r\nThanks for checking.\r\n\r\nI applied the 0001 patch to HEAD (c95ede41) just now and it worked as I expected.\r\nDid you leave a space between \"dbname=postgres'\" and \"[TAB]\"?\r\n\r\nIn the 0002 patch I added a TAP test for the scenario of single-quoted input with an equal sign. \r\nThe test result is 'All tests successful.' on my machine. \r\nYou can run the TAP tests as follows:\r\n1. Apply the attached patch\r\n2. Build the source with the option '--enable-tap-tests' (./configure --enable-tap-tests)\r\n3. Move to the subdirectory 'src/bin/psql' \r\n4. Run 'make check' \r\n\r\nI'd appreciate it if you could share your test results with me.\r\n\r\nRegards,\r\nTang",
"msg_date": "Fri, 3 Sep 2021 04:32:49 +0000",
"msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: [PATCH] support tab-completion for single quote input with equal\n sign"
},
{
"msg_contents": "Hi Tang,\r\n\r\nOn Fri, 2021-09-03 at 04:32 +0000, tanghy.fnst@fujitsu.com wrote:\r\n> I'd appreciate it if you can share your test results with me.\r\n\r\nSure! Here's my output (after a `make clean && make`):\r\n\r\n cd . && TESTDIR='/home/pchampion/workspace/postgres/src/bin/psql' PATH=\"/home/pchampion/workspace/postgres/tmp_install/usr/local/pgsql-master/bin:$PATH\" LD_LIBRARY_PATH=\"/home/pchampion/workspace/postgres/tmp_install/usr/local/pgsql-master/lib\" PGPORT='65432' PG_REGRESS='/home/pchampion/workspace/postgres/src/bin/psql/../../../src/test/regress/pg_regress' /usr/bin/prove -I ../../../src/test/perl/ -I . t/*.pl\r\n t/010_tab_completion.pl .. 17/? \r\n # Failed test 'tab-completion after single quoted text input with equal sign'\r\n # at t/010_tab_completion.pl line 198.\r\n # Actual output was \"CREATE SUBSCRIPTION my_sub CONNECTION 'host=localhost port=5432 dbname=postgres' \\a\"\r\n # Did not match \"(?^:PUBLICATION)\"\r\n # Looks like you failed 1 test of 23.\r\n t/010_tab_completion.pl .. Dubious, test returned 1 (wstat 256, 0x100)\r\n Failed 1/23 subtests \r\n t/020_cancel.pl .......... ok \r\n\r\n Test Summary Report\r\n -------------------\r\n t/010_tab_completion.pl (Wstat: 256 Tests: 23 Failed: 1)\r\n Failed test: 17\r\n Non-zero exit status: 1\r\n Files=2, Tests=25, 8 wallclock secs ( 0.02 usr 0.01 sys + 1.48 cusr 0.37 csys = 1.88 CPU)\r\n Result: FAIL\r\n make: *** [Makefile:87: check] Error 1\r\n\r\nThanks,\r\n--Jacob\r\n",
"msg_date": "Fri, 3 Sep 2021 23:54:01 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] support tab-completion for single quote input with equal\n sign"
},
{
"msg_contents": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com> writes:\n> [ v2-0001-support-tab-completion-for-single-quote-input-wit.patch ]\n\nSurely this patch is completely wrong? It needs more thought about\nthe interaction with the existing logic for double quotes, ie single\nquote inside double quotes is not special, nor the reverse; nor should\nparentheses inside quotes be counted. It also needs to be aware of\nbackslashes in escape-style strings.\n\nI kind of doubt that it's actually possible to parse string literals\ncorrectly when working backward, as this function does. For starters,\nyou won't know whether the string starts with \"E\", so you won't know\nwhether backslashes are special. We've got away with backwards\nparsing so far because the syntax rules for double-quoted strings are\nso much simpler. But if you want to handle single quotes, I think\nyou'll have to start by rearranging the code to parse forward. That's\nlikely to be fairly ticklish, see the comment about\n\n\t * backwards scan has some interesting but intentional properties\n\t * concerning parenthesis handling.\n\nI wish that whoever wrote that (which I think was me :-() had been\nmore explicit. But I think that the point is that we look for a place\nthat's at the same parenthesis nesting level as the completion point,\nnot necessarily one that's globally outside of any parens. That will\nbe messy to handle if we want to convert to scanning forwards from\nthe start of the string.\n\nI kind of wonder if it isn't time to enlist the help of psqlscan.l\ninstead of doubling down on the idea that tab-complete.c should have\nits own half-baked SQL lexer.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 04 Sep 2021 10:18:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] support tab-completion for single quote input with equal\n sign"
},
{
"msg_contents": "I wrote:\n> Surely this patch is completely wrong? It needs more thought about\n> the interaction with the existing logic for double quotes, ie single\n> quote inside double quotes is not special, nor the reverse; nor should\n> parentheses inside quotes be counted. It also needs to be aware of\n> backslashes in escape-style strings.\n\nActually ... those are just implementation details, and now that\nI've thought about it a little more, I question the entire concept\nof making single-quoted strings be single words in tab-complete's\nview. I think it's quite intentional that we don't do that;\nif we did, it'd forever foreclose the possibility of tab-completing\n*within* strings. You don't have to look any further than CREATE\nSUBSCRIPTION itself to see possible applications of that: someone\ncould wish that\n\nCREATE SUBSCRIPTION my_sub CONNECTION 'db<TAB>\n\nwould complete with \"name=\", or that <TAB> right after the quote\nwould offer a list of connection keywords.\n\n(More generally, I'm afraid that people are already relying on this\nbehavior in other contexts, and thus that the proposed patch could\nbreak more use-cases than it fixes.)\n\nSo now I think that this approach should be rejected, and that the\nright thing is to fix the CREATE SUBSCRIPTION completion rules\nto allow more than one \"word\" between CONNECTION and PUBLICATION.\n\nAnother idea that might be useful is to treat the opening and\nclosing quotes themselves as separate \"words\", which'd give\nthe CREATE SUBSCRIPTION rules a bit more to go on about when to\noffer PUBLICATION.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 04 Sep 2021 10:57:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] support tab-completion for single quote input with equal\n sign"
},
{
"msg_contents": "Jacob Champion <pchampion@vmware.com> writes:\n> t/010_tab_completion.pl .. 17/? \n> # Failed test 'tab-completion after single quoted text input with equal sign'\n> # at t/010_tab_completion.pl line 198.\n> # Actual output was \"CREATE SUBSCRIPTION my_sub CONNECTION 'host=localhost port=5432 dbname=postgres' \\a\"\n> # Did not match \"(?^:PUBLICATION)\"\n> # Looks like you failed 1 test of 23.\n\nIndependently of the concerns I raised, I'm wondering how come you\nare getting different results. Which readline or libedit version\nare you using, on what platform?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 04 Sep 2021 11:32:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] support tab-completion for single quote input with equal\n sign"
},
{
"msg_contents": "On Sat, 2021-09-04 at 11:32 -0400, Tom Lane wrote:\r\n> Jacob Champion <pchampion@vmware.com> writes:\r\n> > t/010_tab_completion.pl .. 17/? \r\n> > # Failed test 'tab-completion after single quoted text input with equal sign'\r\n> > # at t/010_tab_completion.pl line 198.\r\n> > # Actual output was \"CREATE SUBSCRIPTION my_sub CONNECTION 'host=localhost port=5432 dbname=postgres' \\a\"\r\n> > # Did not match \"(?^:PUBLICATION)\"\r\n> > # Looks like you failed 1 test of 23.\r\n> \r\n> Independently of the concerns I raised, I'm wondering how come you\r\n> are getting different results. Which readline or libedit version\r\n> are you using, on what platform?\r\n\r\nNow you have me worried...\r\n\r\n- Ubuntu 20.04\r\n- libedit, version 3.1-20191231-1\r\n\r\n--Jacob\r\n",
"msg_date": "Wed, 8 Sep 2021 18:22:30 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] support tab-completion for single quote input with equal\n sign"
},
{
"msg_contents": "Jacob Champion <pchampion@vmware.com> writes:\n> On Sat, 2021-09-04 at 11:32 -0400, Tom Lane wrote:\n>> Independently of the concerns I raised, I'm wondering how come you\n>> are getting different results. Which readline or libedit version\n>> are you using, on what platform?\n\n> Now you have me worried...\n> - Ubuntu 20.04\n> - libedit, version 3.1-20191231-1\n\nWe've had troubles with libedit before :-(, and that particular release\nis known to be a bit buggy [1].\n\nI can reproduce a problem using HEAD (no patch needed) and Fedora 32's\nlibedit-3.1-32.20191231cvs.fc32.x86_64: if I type\n\n\tcreate subscription s connection foo <TAB>\n\nthen it happily completes \"PUBLICATION\", but if I type\n\n\tcreate subscription s connection 'foo' <TAB>\n\nI just get beeps.\n\nI do *not* see this misbehavior on a nearby FreeBSD 13.0 machine\nwith libedit 3.1.20210216,1 (which isn't even the latest version).\nSo it's a fixed-some-time-ago bug. Given the symptoms, I wonder\nif it isn't closely related to the original complaint at [1].\n\nAnyway, the bottom line from our standpoint is going to be that\nwe can't put a test case like this one into the TAP test. I recall\nthat getting 010_tab_completion.pl to pass everywhere was a huge\nheadache at the outset, so this conclusion doesn't surprise me\nmuch.\n\n\t\t\tregards, tom lane\n\n[1] http://gnats.netbsd.org/cgi-bin/query-pr-single.pl?number=54510\n\n\n",
"msg_date": "Wed, 08 Sep 2021 17:08:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] support tab-completion for single quote input with equal\n sign"
},
{
"msg_contents": "On Wed, 2021-09-08 at 17:08 -0400, Tom Lane wrote:\r\n> Jacob Champion <pchampion@vmware.com> writes:\r\n> > On Sat, 2021-09-04 at 11:32 -0400, Tom Lane wrote:\r\n> > > Independently of the concerns I raised, I'm wondering how come you\r\n> > > are getting different results. Which readline or libedit version\r\n> > > are you using, on what platform?\r\n> > Now you have me worried...\r\n> > - Ubuntu 20.04\r\n> > - libedit, version 3.1-20191231-1\r\n> \r\n> We've had troubles with libedit before :-(, and that particular release\r\n> is known to be a bit buggy [1].\r\n\r\nThat's... unfortunate. Thanks for the info; I wonder what other tab-\r\ncompletion bugs I've just accepted as \"standard behavior\".\r\n\r\n--Jacob\r\n",
"msg_date": "Wed, 8 Sep 2021 21:54:27 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] support tab-completion for single quote input with equal\n sign"
},
{
    "msg_contents": "On Saturday, September 4, 2021 11:58 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>Actually ... those are just implementation details, and now that\n>I've thought about it a little more, I question the entire concept\n>of making single-quoted strings be single words in tab-complete's\n>view. I think it's quite intentional that we don't do that;\n>if we did, it'd forever foreclose the possibility of tab-completing\n>*within* strings. You don't have to look any further than CREATE\n>SUBSCRIPTION itself to see possible applications of that: someone\n>could wish that\n>\n>CREATE SUBSCRIPTION my_sub CONNECTION 'db<TAB>\n>\n>would complete with \"name=\", or that <TAB> right after the quote\n>would offer a list of connection keywords.\n>\n>(More generally, I'm afraid that people are already relying on this\n>behavior in other contexts, and thus that the proposed patch could\n>break more use-cases than it fixes.)\n\nAgreed. Thanks for your comments.\n\n>So now I think that this approach should be rejected, and that the\n>right thing is to fix the CREATE SUBSCRIPTION completion rules\n>to allow more than one \"word\" between CONNECTION and PUBLICATION.\n>\n>Another idea that might be useful is to treat the opening and\n>closing quotes themselves as separate \"words\", which'd give\n>the CREATE SUBSCRIPTION rules a bit more to go on about when to\n>offer PUBLICATION.\n\nTreating the opening and closing quotes themselves as separate \"words\" may affect some current tab completion.\nSo I tried to fix the CREATE SUBSCRIPTION completion rules in the V3 patch instead.\nThe basic idea is to check that the head words of the input text match \"CREATE SUBSCRIPTION subname CONNECTION anystring\", \nthen check whether anystring ends with a single quote. If all checks pass, PUBLICATION will be auto-completed.\n\nThe TAP tests (including the one added in V3) have passed.\n\nRegards,\nTang",
"msg_date": "Wed, 15 Sep 2021 14:41:15 +0000",
"msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: [PATCH] support tab-completion for single quote input with equal\n sign"
},
{
    "msg_contents": "At Sat, 04 Sep 2021 10:18:24 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> I kind of wonder if it isn't time to enlist the help of psqlscan.l\n> instead of doubling down on the idea that tab-complete.c should have\n> its own half-baked SQL lexer.\n\nSo, I played with this idea and came up with the attached WIP. The\ntest added by the original patch succeeds with it without tweaking the\n\"Matches\" part in psql_completion.\n\nWhile checking this I found several dubious parts in the TAP test.\n\n=== 010_tab_completion.pl\n # COPY requires quoting\n # note: broken versions of libedit want to backslash the closing quote;\n # not much we can do about that\n check_completion(\n-\t\"COPY foo FROM tmp_check/some\\t\",\n+\t\"COPY foo FROM \\'tmp_check/some\\t\",\n\nThe original command syntax is just wrong, and this patch makes the\ncompletion code treat the command line correctly (it breaks the\n\"filename\" into \"tmp_check\" \"/\" \"some\"), so the test item fails.\n\ncheck_completion(\n-\t\"COPY foo FROM tmp_check/af\\t\",\n-\tqr|'tmp_check/afile|,\n+\t\"COPY foo FROM \\'tmp_check/af\\t\",\n+\tqr|'tmp_check/af\\a?ile|,\t# \\a is BEL\n\nThis test fails for the same reason, but after fixing it the result\ncontains \\a (BEL) in the output on my CentOS8. I'm not sure what is\nhappening here...\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Fri, 17 Sep 2021 02:45:57 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] support tab-completion for single quote input with\n equal sign"
},
{
"msg_contents": "On Fri, Sep 17, 2021 at 02:45:57AM +0900, Kyotaro Horiguchi wrote:\n> This test fails for the same reason, but after fixing it the result\n> contains \\a (BEL) in the output on my CentOS8. I'm not sure what is\n> happening here..\n\nThe patch is still failing under the CF bot, and this last update was\ntwo months ago. I am marking it as returned with feedback.\n--\nMichael",
"msg_date": "Fri, 3 Dec 2021 15:16:55 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] support tab-completion for single quote input with equal\n sign"
},
{
    "msg_contents": "At Fri, 3 Dec 2021 15:16:55 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Fri, Sep 17, 2021 at 02:45:57AM +0900, Kyotaro Horiguchi wrote:\n> > This test fails for the same reason, but after fixing it the result\n> > contains \\a (BEL) in the output on my CentOS8. I'm not sure what is\n> > happening here..\n> \n> The patch is still failing under the CF bot, and this last update was\n> two months ago. I am marking it as returned with feedback.\n\nI regarded the last *WIP* patch as just a proposal of another direction\nafter the first attempt failed.\n\nI'll start another thread named something like 'let's make psql-completion\nshare the psql command line lexer', after rebasing and some polishing.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Mon, 06 Dec 2021 14:16:29 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] support tab-completion for single quote input with\n equal sign"
},
{
"msg_contents": "On Thursday, July 22, 2021 1:05 PM, tanghy(dot)fnst(at)fujitsu(dot)com \n<tanghy(dot)fnst(at)fujitsu(dot)com> wrote\n> I found a problem when using tab-completion as follows:\n> \n> CREATE SUBSCRIPTION my_subscription\n> CONNECTION 'host=localhost port=5432 dbname=postgres' [TAB]\n> \n> The word 'PUBLICATION' couldn't be auto completed as expected.\n\nI too wondered about this behavior.\n\n> v3-0001-support-tab-completion-for-CONNECTION-string-with.patch\n\nI applied the patch and succeeded in the above case, but failed in the \nbelow case.\n\n =# CREATE SUBSCRIPTION s1 CONNECTION 'a=' PUBLICATION p1 <tab>\n\nBefore applying the patch, 'WITH (' was completed, but now it completes \nnothing since it matches the below condition:\n\n> 18 + else if ((HeadMatches(\"CREATE\", \"SUBSCRIPTION\", MatchAny, \n> \"CONNECTION\", MatchAny)))\n> 19 + {\n\nI updated the patch going along with the v3 direction.\n\nWhat do you think?\n\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION",
"msg_date": "Tue, 10 Jan 2023 22:01:01 +0900",
"msg_from": "torikoshia <torikoshia@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] support tab-completion for single quote input with equal\n sign"
},
{
"msg_contents": "torikoshia <torikoshia@oss.nttdata.com> writes:\n> I updated the patch going along with the v3 direction.\n\nI think this adds about as many failure modes as it removes,\nif not more.\n\n* The connection string doesn't necessarily end with \"'\"; it could\nbe a dollar-quoted string.\n\n* If it is a dollar-quoted string, there could be \"'\" marks internal\nto it, allowing PUBLICATION to be falsely offered when we're really\nstill within the connection string.\n\n* The following WITH options could contain \"'\", allowing PUBLICATION\nto be falsely offered within that clause.\n\nI've spent some effort previously on getting tab-completion to deal\nsanely with single-quoted strings, but everything I've tried has\ncrashed and burned :-(, mainly because it's not clear when to take\nthe whole literal as one \"word\" and when not. A relevant example\nhere is that somebody might wish that we could tab-complete within\nthe connection string, e.g. that\n\nCREATE SUBSCRIPTION sub CONNECTION 'db<TAB>\n\nwould complete with \"name=\". We have the info available from libpq\nto do that, if only we could figure out when to apply it. I think\nwe need some pretty fundamental design work to figure out what we\nwant to do in this area, and that in the meantime putting band-aids\non specific cases is probably not very productive.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 10 Jan 2023 19:56:28 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] support tab-completion for single quote input with equal\n sign"
},
{
"msg_contents": "I wrote:\n> I've spent some effort previously on getting tab-completion to deal\n> sanely with single-quoted strings, but everything I've tried has\n> crashed and burned :-(, mainly because it's not clear when to take\n> the whole literal as one \"word\" and when not.\n\nAfter a little further thought, a new idea occurred to me: maybe\nwe could push some of the problem down into the Matches functions.\nConsider inventing a couple of new match primitives:\n\n* MatchLiteral matches one or more parse tokens that form a single\ncomplete, valid SQL literal string (either single-quoted or dollar\nstyle). Use it like\n\n else if (Matches(\"CREATE\", \"SUBSCRIPTION\", MatchAny, \"CONNECTION\", MatchLiteral))\n COMPLETE_WITH(\"PUBLICATION\");\n\nI think there's no question that most Matches calls that might subsume\na quoted literal would prefer to treat the literal as a single word,\nand this'd let them do that correctly.\n\n* MatchLiteralBegin matches the opening of a literal string (either '\nor $...$). Handwaving freely, we might do\n\n else if (Matches(\"CREATE\", \"SUBSCRIPTION\", MatchAny, \"CONNECTION\", MatchLiteralBegin))\n COMPLETE_WITH(List_of_connection_keywords);\n\nThis part of the idea still needs some thought, because it remains\nunclear how we might offer completion for connection keywords\nafter the first one.\n\nImplementing these primitives might be a little tricky too.\nIf memory serves, readline and libedit have different behaviors\naround quote marks. But at least it seems like a framework\nthat could solve a number of problems, if we can make it go.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 10 Jan 2023 21:28:56 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] support tab-completion for single quote input with equal\n sign"
},
{
"msg_contents": "On 2023-01-11 11:28, Tom Lane wrote:\n> I wrote:\n>> I've spent some effort previously on getting tab-completion to deal\n>> sanely with single-quoted strings, but everything I've tried has\n>> crashed and burned :-(, mainly because it's not clear when to take\n>> the whole literal as one \"word\" and when not.\n> \n> After a little further thought, a new idea occurred to me: maybe\n> we could push some of the problem down into the Matches functions.\n> Consider inventing a couple of new match primitives:\n\nThanks for the idea!\nI'm going to try it.\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 11 Jan 2023 21:55:30 +0900",
"msg_from": "torikoshia <torikoshia@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] support tab-completion for single quote input with equal\n sign"
}
] |
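Editor's note: the core difficulty Tom Lane describes in this thread — deciding whether the input line ends inside an unterminated literal, exactly at the close of one, or outside any literal — can be sketched as a small scanner. The sketch below is a hedged Python illustration of the idea behind a hypothetical MatchLiteral primitive, not psql's actual C tab-completion code; the function name and return values are invented here. It handles both single-quoted strings (where '' is an escaped quote) and dollar-quoted strings:

```python
import re

# A dollar-quote opener: $tag$ with an optional identifier tag, or bare $$.
DOLLAR_TAG = re.compile(r"\$[A-Za-z_][A-Za-z_0-9]*\$|\$\$")

def literal_state(text):
    """Classify the end of *text* with respect to SQL string literals.

    Returns "open" if the text ends inside an unterminated literal,
    "complete" if it ends at the close of a literal, and "none" if it
    ends outside any literal.
    """
    i, n, state = 0, len(text), "none"
    while i < n:
        ch = text[i]
        if ch == "'":
            # Single-quoted literal; '' is an escaped quote inside it.
            state, j = "open", i + 1
            while j < n:
                if text[j] == "'":
                    if j + 1 < n and text[j + 1] == "'":
                        j += 2          # escaped quote, still inside
                        continue
                    state, j = "complete", j + 1
                    break
                j += 1
            i = j
        elif ch == "$":
            m = DOLLAR_TAG.match(text, i)
            if m:
                tag = m.group(0)
                end = text.find(tag, m.end())
                if end == -1:
                    state, i = "open", n            # unterminated dollar quote
                else:
                    state, i = "complete", end + len(tag)
            else:
                state, i = "none", i + 1            # lone $, not a quote
        else:
            if not ch.isspace():
                state = "none"          # ordinary token after a literal
            i += 1
    return state
```

Under this model, "CONNECTION 'a=b'" ends "complete" (so PUBLICATION could be offered), while "CONNECTION 'a=b" ends "open" (the cursor is still inside the connection string) — exactly the distinction the proposed MatchLiteral/MatchLiteralBegin primitives would encode.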
[
{
"msg_contents": "Hi\n\nI tried to write test for plpgsql debug API, where I need to access to\nplpgsql.h\n\nI have line\n\nPG_CPPFLAGS = -I$(top_srcdir)/src/pl/plpgsql/src\n\nthat is working well on unix, but it do nothing on windows\n\n[00:05:14] Project \"C:\\projects\\postgresql\\pgsql.sln\" (1) is building\n\"C:\\projects\\postgresql\\test_dbgapi.vcxproj\" (87) on node 1 (default\ntargets).\n[00:05:14] PrepareForBuild:\n[00:05:14] Creating directory \".\\Release\\test_dbgapi\\\".\n[00:05:14] Creating directory \".\\Release\\test_dbgapi\\test_dbgapi.tlog\\\".\n[00:05:14] InitializeBuildStatus:\n[00:05:14] Creating\n\".\\Release\\test_dbgapi\\test_dbgapi.tlog\\unsuccessfulbuild\" because\n\"AlwaysCreate\" was specified.\n[00:05:14] ClCompile:\n[00:05:14] C:\\Program Files (x86)\\Microsoft Visual Studio\n12.0\\VC\\bin\\x86_amd64\\CL.exe /c /Isrc/include /Isrc/include/port/win32\n/Isrc/include/port/win32_msvc /Zi /nologo /W3 /WX- /Ox /D WIN32 /D _WINDOWS\n/D __WINDOWS__ /D __WIN32__ /D WIN32_STACK_RLIMIT=4194304 /D\n_CRT_SECURE_NO_DEPRECATE /D _CRT_NONSTDC_NO_DEPRECATE /D _WINDLL /D _MBCS\n/GF /Gm- /EHsc /MD /GS /fp:precise /Zc:wchar_t /Zc:forScope\n/Fo\".\\Release\\test_dbgapi\\\\\" /Fd\".\\Release\\test_dbgapi\\vc120.pdb\" /Gd /TC\n/wd4018 /wd4244 /wd4273 /wd4102 /wd4090 /wd4267 /errorReport:queue /MP\nsrc/test/modules/test_dbgapi/test_dbgapi.c\n[00:05:14] test_dbgapi.c\n[00:05:16] src/test/modules/test_dbgapi/test_dbgapi.c(17): fatal error\nC1083: Cannot open include file: 'plpgsql.h': No such file or directory\n[C:\\projects\\postgresql\\test_dbgapi.vcxproj]\n[00:05:16] Done Building Project\n\"C:\\projects\\postgresql\\test_dbgapi.vcxproj\" (default targets) -- FAILED.\n[00:05:16] Project \"C:\\projects\\postgresql\\pgsql.sln\" (1) is building\n\"C:\\projects\\postgresql\\test_ddl_deparse.vcxproj\" (88) on node 1 (default\ntargets).\n\nlooks so PG_CPPFLAGS is not propagated to CPPFLAGS there.\n\nRegards\n\nPavel",
"msg_date": "Thu, 22 Jul 2021 06:06:21 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "window build doesn't apply PG_CPPFLAGS correctly"
},
{
"msg_contents": "\nOn 7/22/21 12:06 AM, Pavel Stehule wrote:\n> Hi\n>\n> I tried to write test for plpgsql debug API, where I need to access to\n> plpgsql.h\n>\n> I have line\n>\n> PG_CPPFLAGS = -I$(top_srcdir)/src/pl/plpgsql/src\n>\n> that is working well on unix, but it do nothing on windows\n>\n> [00:05:14] Project \"C:\\projects\\postgresql\\pgsql.sln\" (1) is building\n> \"C:\\projects\\postgresql\\test_dbgapi.vcxproj\" (87) on node 1 (default\n> targets).\n> [00:05:14] PrepareForBuild:\n> [00:05:14] Creating directory \".\\Release\\test_dbgapi\\\".\n> [00:05:14] Creating directory \".\\Release\\test_dbgapi\\test_dbgapi.tlog\\\".\n> [00:05:14] InitializeBuildStatus:\n> [00:05:14] Creating\n> \".\\Release\\test_dbgapi\\test_dbgapi.tlog\\unsuccessfulbuild\" because\n> \"AlwaysCreate\" was specified.\n> [00:05:14] ClCompile:\n> [00:05:14] C:\\Program Files (x86)\\Microsoft Visual Studio\n> 12.0\\VC\\bin\\x86_amd64\\CL.exe /c /Isrc/include /Isrc/include/port/win32\n> /Isrc/include/port/win32_msvc /Zi /nologo /W3 /WX- /Ox /D WIN32 /D\n> _WINDOWS /D __WINDOWS__ /D __WIN32__ /D WIN32_STACK_RLIMIT=4194304 /D\n> _CRT_SECURE_NO_DEPRECATE /D _CRT_NONSTDC_NO_DEPRECATE /D _WINDLL /D\n> _MBCS /GF /Gm- /EHsc /MD /GS /fp:precise /Zc:wchar_t /Zc:forScope\n> /Fo\".\\Release\\test_dbgapi\\\\\" /Fd\".\\Release\\test_dbgapi\\vc120.pdb\" /Gd\n> /TC /wd4018 /wd4244 /wd4273 /wd4102 /wd4090 /wd4267 /errorReport:queue\n> /MP src/test/modules/test_dbgapi/test_dbgapi.c\n> [00:05:14] test_dbgapi.c\n> [00:05:16] src/test/modules/test_dbgapi/test_dbgapi.c(17): fatal error\n> C1083: Cannot open include file: 'plpgsql.h': No such file or\n> directory [C:\\projects\\postgresql\\test_dbgapi.vcxproj]\n> [00:05:16] Done Building Project\n> \"C:\\projects\\postgresql\\test_dbgapi.vcxproj\" (default targets) -- FAILED.\n> [00:05:16] Project \"C:\\projects\\postgresql\\pgsql.sln\" (1) is building\n> \"C:\\projects\\postgresql\\test_ddl_deparse.vcxproj\" (88) on node 1\n> (default targets).\n>\n> 
looks so PG_CPPFLAGS is not propagated to CPPFLAGS there.\n>\n>\n\nAlmost everything in the Makefiles is not used by the MSVC buid system.\nUsing this one seems likely to be quite difficult, since the syntax for\nthe MSVC compiler command line is very different, and furthermore the\nMSVC build system doesn't know anything about how to use this setting.\n\nAFAICT PG_CPPFLAGS is only used by pgxs.\n\nYou would need to tell us more about how your build process is working.\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 22 Jul 2021 08:04:34 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: window build doesn't apply PG_CPPFLAGS correctly"
},
{
"msg_contents": "čt 22. 7. 2021 v 14:04 odesílatel Andrew Dunstan <andrew@dunslane.net>\nnapsal:\n\n>\n> On 7/22/21 12:06 AM, Pavel Stehule wrote:\n> > Hi\n> >\n> > I tried to write test for plpgsql debug API, where I need to access to\n> > plpgsql.h\n> >\n> > I have line\n> >\n> > PG_CPPFLAGS = -I$(top_srcdir)/src/pl/plpgsql/src\n> >\n> > that is working well on unix, but it do nothing on windows\n> >\n> > [00:05:14] Project \"C:\\projects\\postgresql\\pgsql.sln\" (1) is building\n> > \"C:\\projects\\postgresql\\test_dbgapi.vcxproj\" (87) on node 1 (default\n> > targets).\n> > [00:05:14] PrepareForBuild:\n> > [00:05:14] Creating directory \".\\Release\\test_dbgapi\\\".\n> > [00:05:14] Creating directory\n> \".\\Release\\test_dbgapi\\test_dbgapi.tlog\\\".\n> > [00:05:14] InitializeBuildStatus:\n> > [00:05:14] Creating\n> > \".\\Release\\test_dbgapi\\test_dbgapi.tlog\\unsuccessfulbuild\" because\n> > \"AlwaysCreate\" was specified.\n> > [00:05:14] ClCompile:\n> > [00:05:14] C:\\Program Files (x86)\\Microsoft Visual Studio\n> > 12.0\\VC\\bin\\x86_amd64\\CL.exe /c /Isrc/include /Isrc/include/port/win32\n> > /Isrc/include/port/win32_msvc /Zi /nologo /W3 /WX- /Ox /D WIN32 /D\n> > _WINDOWS /D __WINDOWS__ /D __WIN32__ /D WIN32_STACK_RLIMIT=4194304 /D\n> > _CRT_SECURE_NO_DEPRECATE /D _CRT_NONSTDC_NO_DEPRECATE /D _WINDLL /D\n> > _MBCS /GF /Gm- /EHsc /MD /GS /fp:precise /Zc:wchar_t /Zc:forScope\n> > /Fo\".\\Release\\test_dbgapi\\\\\" /Fd\".\\Release\\test_dbgapi\\vc120.pdb\" /Gd\n> > /TC /wd4018 /wd4244 /wd4273 /wd4102 /wd4090 /wd4267 /errorReport:queue\n> > /MP src/test/modules/test_dbgapi/test_dbgapi.c\n> > [00:05:14] test_dbgapi.c\n> > [00:05:16] src/test/modules/test_dbgapi/test_dbgapi.c(17): fatal error\n> > C1083: Cannot open include file: 'plpgsql.h': No such file or\n> > directory [C:\\projects\\postgresql\\test_dbgapi.vcxproj]\n> > [00:05:16] Done Building Project\n> > \"C:\\projects\\postgresql\\test_dbgapi.vcxproj\" (default targets) -- FAILED.\n> > [00:05:16] 
Project \"C:\\projects\\postgresql\\pgsql.sln\" (1) is building\n> > \"C:\\projects\\postgresql\\test_ddl_deparse.vcxproj\" (88) on node 1\n> > (default targets).\n> >\n> > looks so PG_CPPFLAGS is not propagated to CPPFLAGS there.\n> >\n> >\n>\n> Almost everything in the Makefiles is not used by the MSVC buid system.\n> Using this one seems likely to be quite difficult, since the syntax for\n> the MSVC compiler command line is very different, and furthermore the\n> MSVC build system doesn't know anything about how to use this setting.\n>\n> AFAICT PG_CPPFLAGS is only used by pgxs.\n>\n> You would need to tell us more about how your build process is working.\n>\n\nI need access to plpgsql.h in build time. This is only one dependency. When\nI build an extension, then plpgsql.h is in a shared directory. But when I\nbuild a module for a test, the header files are not installed yet. For\nbuild it requires an include dir -I$(top_srcdir)/src/pl/plpgsql/src\n\nRegards\n\nPavel\n\n\n\n>\n> cheers\n>\n>\n> andrew\n>\n>\n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n>\n>\n",
"msg_date": "Thu, 22 Jul 2021 14:11:24 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: window build doesn't apply PG_CPPFLAGS correctly"
},
{
"msg_contents": "\nOn 7/22/21 8:11 AM, Pavel Stehule wrote:\n>\n>\n> čt 22. 7. 2021 v 14:04 odesílatel Andrew Dunstan <andrew@dunslane.net\n> <mailto:andrew@dunslane.net>> napsal:\n>\n>\n> On 7/22/21 12:06 AM, Pavel Stehule wrote:\n> > Hi\n> >\n> > I tried to write test for plpgsql debug API, where I need to\n> access to\n> > plpgsql.h\n> >\n> > I have line\n> >\n> > PG_CPPFLAGS = -I$(top_srcdir)/src/pl/plpgsql/src\n> >\n> > that is working well on unix, but it do nothing on windows\n> >\n> > [00:05:14] Project \"C:\\projects\\postgresql\\pgsql.sln\" (1) is\n> building\n> > \"C:\\projects\\postgresql\\test_dbgapi.vcxproj\" (87) on node 1 (default\n> > targets).\n> > [00:05:14] PrepareForBuild:\n> > [00:05:14] Creating directory \".\\Release\\test_dbgapi\\\".\n> > [00:05:14] Creating directory\n> \".\\Release\\test_dbgapi\\test_dbgapi.tlog\\\".\n> > [00:05:14] InitializeBuildStatus:\n> > [00:05:14] Creating\n> > \".\\Release\\test_dbgapi\\test_dbgapi.tlog\\unsuccessfulbuild\" because\n> > \"AlwaysCreate\" was specified.\n> > [00:05:14] ClCompile:\n> > [00:05:14] C:\\Program Files (x86)\\Microsoft Visual Studio\n> > 12.0\\VC\\bin\\x86_amd64\\CL.exe /c /Isrc/include\n> /Isrc/include/port/win32\n> > /Isrc/include/port/win32_msvc /Zi /nologo /W3 /WX- /Ox /D WIN32 /D\n> > _WINDOWS /D __WINDOWS__ /D __WIN32__ /D\n> WIN32_STACK_RLIMIT=4194304 /D\n> > _CRT_SECURE_NO_DEPRECATE /D _CRT_NONSTDC_NO_DEPRECATE /D _WINDLL /D\n> > _MBCS /GF /Gm- /EHsc /MD /GS /fp:precise /Zc:wchar_t /Zc:forScope\n> > /Fo\".\\Release\\test_dbgapi\\\\\"\n> /Fd\".\\Release\\test_dbgapi\\vc120.pdb\" /Gd\n> > /TC /wd4018 /wd4244 /wd4273 /wd4102 /wd4090 /wd4267\n> /errorReport:queue\n> > /MP src/test/modules/test_dbgapi/test_dbgapi.c\n> > [00:05:14] test_dbgapi.c\n> > [00:05:16] src/test/modules/test_dbgapi/test_dbgapi.c(17): fatal\n> error\n> > C1083: Cannot open include file: 'plpgsql.h': No such file or\n> > directory [C:\\projects\\postgresql\\test_dbgapi.vcxproj]\n> > [00:05:16] Done Building 
Project\n> > \"C:\\projects\\postgresql\\test_dbgapi.vcxproj\" (default targets)\n> -- FAILED.\n> > [00:05:16] Project \"C:\\projects\\postgresql\\pgsql.sln\" (1) is\n> building\n> > \"C:\\projects\\postgresql\\test_ddl_deparse.vcxproj\" (88) on node 1\n> > (default targets).\n> >\n> > looks so PG_CPPFLAGS is not propagated to CPPFLAGS there.\n> >\n> >\n>\n> Almost everything in the Makefiles is not used by the MSVC buid\n> system.\n> Using this one seems likely to be quite difficult, since the\n> syntax for\n> the MSVC compiler command line is very different, and furthermore the\n> MSVC build system doesn't know anything about how to use this setting.\n>\n> AFAICT PG_CPPFLAGS is only used by pgxs.\n>\n> You would need to tell us more about how your build process is\n> working.\n>\n>\n> I need access to plpgsql.h in build time. This is only one dependency.\n> When I build an extension, then plpgsql.h is in a shared directory.\n> But when I build a module for a test, the header files are not\n> installed yet. For build it requires an include dir\n> -I$(top_srcdir)/src/pl/plpgsql/src\n>\n>\n\nIf I understand correctly what you're doing, you probably need to add an\nentry for your module to $contrib_extraincludes in\nsrc/tools/msvc/Mkvcbuild.pm, e.g.\n\n\nmy $contrib_extraincludes = { 'dblink' => ['src/backend'] ,\n\n 'test_dbgapi' => [ 'src/pl/plpgsql/src'\n] };\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 22 Jul 2021 09:38:40 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: window build doesn't apply PG_CPPFLAGS correctly"
},
{
"msg_contents": "On 2021-Jul-22, Pavel Stehule wrote:\n\n> čt 22. 7. 2021 v 14:04 odesílatel Andrew Dunstan <andrew@dunslane.net>\n> napsal:\n\n> > Almost everything in the Makefiles is not used by the MSVC buid system.\n> > Using this one seems likely to be quite difficult, since the syntax for\n> > the MSVC compiler command line is very different, and furthermore the\n> > MSVC build system doesn't know anything about how to use this setting.\n> >\n> > AFAICT PG_CPPFLAGS is only used by pgxs.\n> >\n> > You would need to tell us more about how your build process is working.\n> \n> I need access to plpgsql.h in build time. This is only one dependency. When\n> I build an extension, then plpgsql.h is in a shared directory. But when I\n> build a module for a test, the header files are not installed yet. For\n> build it requires an include dir -I$(top_srcdir)/src/pl/plpgsql/src\n\nBut Project.pm parses Makefiles and puts stuff into the MSVC buildsystem\nfile format; note David Rowley's patch that (among other things) removes\na bunch of ->AddIncludeDir calls by parsing PG_CPPFLAGS\nhttps://postgr.es/m/CAApHDvpXoav0aZnsji-ZNdo=9TXqAwnwmSh44gyn8K7i2PRwJg@mail.gmail.com\nwhich is probably apropos.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"I must say, I am absolutely impressed with what pgsql's implementation of\nVALUES allows me to do. It's kind of ridiculous how much \"work\" goes away in\nmy code. Too bad I can't do this at work (Oracle 8/9).\" (Tom Allison)\n http://archives.postgresql.org/pgsql-general/2007-06/msg00016.php\n\n\n",
"msg_date": "Thu, 22 Jul 2021 09:41:53 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: window build doesn't apply PG_CPPFLAGS correctly"
},
{
"msg_contents": "\nOn 7/22/21 9:41 AM, Alvaro Herrera wrote:\n> On 2021-Jul-22, Pavel Stehule wrote:\n>\n>> čt 22. 7. 2021 v 14:04 odesílatel Andrew Dunstan <andrew@dunslane.net>\n>> napsal:\n>>> Almost everything in the Makefiles is not used by the MSVC buid system.\n>>> Using this one seems likely to be quite difficult, since the syntax for\n>>> the MSVC compiler command line is very different, and furthermore the\n>>> MSVC build system doesn't know anything about how to use this setting.\n>>>\n>>> AFAICT PG_CPPFLAGS is only used by pgxs.\n>>>\n>>> You would need to tell us more about how your build process is working.\n>> I need access to plpgsql.h in build time. This is only one dependency. When\n>> I build an extension, then plpgsql.h is in a shared directory. But when I\n>> build a module for a test, the header files are not installed yet. For\n>> build it requires an include dir -I$(top_srcdir)/src/pl/plpgsql/src\n> But Project.pm parses Makefiles and puts stuff into the MSVC buildsystem\n> file format; note David Rowley's patch that (among other things) removes\n> a bunch of ->AddIncludeDir calls by parsing PG_CPPFLAGS\n> https://postgr.es/m/CAApHDvpXoav0aZnsji-ZNdo=9TXqAwnwmSh44gyn8K7i2PRwJg@mail.gmail.com\n> which is probably apropos.\n>\n\n\nYeah, but that hasn't been applied yet. Pavel should be able to use what\nI gave him today, I think.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 22 Jul 2021 14:52:48 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: window build doesn't apply PG_CPPFLAGS correctly"
},
{
"msg_contents": "čt 22. 7. 2021 v 20:52 odesílatel Andrew Dunstan <andrew@dunslane.net>\nnapsal:\n\n>\n> On 7/22/21 9:41 AM, Alvaro Herrera wrote:\n> > On 2021-Jul-22, Pavel Stehule wrote:\n> >\n> >> čt 22. 7. 2021 v 14:04 odesílatel Andrew Dunstan <andrew@dunslane.net>\n> >> napsal:\n> >>> Almost everything in the Makefiles is not used by the MSVC buid system.\n> >>> Using this one seems likely to be quite difficult, since the syntax for\n> >>> the MSVC compiler command line is very different, and furthermore the\n> >>> MSVC build system doesn't know anything about how to use this setting.\n> >>>\n> >>> AFAICT PG_CPPFLAGS is only used by pgxs.\n> >>>\n> >>> You would need to tell us more about how your build process is working.\n> >> I need access to plpgsql.h in build time. This is only one dependency.\n> When\n> >> I build an extension, then plpgsql.h is in a shared directory. But when\n> I\n> >> build a module for a test, the header files are not installed yet. For\n> >> build it requires an include dir -I$(top_srcdir)/src/pl/plpgsql/src\n> > But Project.pm parses Makefiles and puts stuff into the MSVC buildsystem\n> > file format; note David Rowley's patch that (among other things) removes\n> > a bunch of ->AddIncludeDir calls by parsing PG_CPPFLAGS\n> >\n> https://postgr.es/m/CAApHDvpXoav0aZnsji-ZNdo=9TXqAwnwmSh44gyn8K7i2PRwJg@mail.gmail.com\n> > which is probably apropos.\n> >\n>\n>\n> Yeah, but that hasn't been applied yet. Pavel should be able to use what\n> I gave him today, I think.\n>\n\nyes, and it is working\n\nThank you very much\n\nPavel\n\n\n>\n> cheers\n>\n>\n> andrew\n>\n>\n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n>\n>\n",
"msg_date": "Thu, 22 Jul 2021 20:53:56 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: window build doesn't apply PG_CPPFLAGS correctly"
}
] |
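Editor's note: as Álvaro points out, the MSVC scripts obtain this information by parsing the Makefiles rather than executing them, which is what David Rowley's referenced patch does for PG_CPPFLAGS. The following is a rough Python sketch of that extraction step only — the real code lives in Perl in src/tools/msvc, and the function name here is invented for illustration:

```python
import re

def pg_cppflags_include_dirs(makefile_text):
    """Collect the -I directories from PG_CPPFLAGS lines of a Makefile.

    Only simple one-line assignments (=, :=, +=) are handled; make
    variable references such as $(top_srcdir) pass through untouched.
    """
    dirs = []
    for line in makefile_text.splitlines():
        m = re.match(r"\s*PG_CPPFLAGS\s*[:+]?=\s*(.*)", line)
        if not m:
            continue
        for flag in m.group(1).split():
            # Keep only include flags; skip -D and other switches.
            if flag.startswith("-I") and len(flag) > 2:
                dirs.append(flag[2:])
    return dirs
```

For Pavel's module, feeding in the line `PG_CPPFLAGS = -I$(top_srcdir)/src/pl/plpgsql/src` would yield the plpgsql source directory, which the build scripts could then translate into an AddIncludeDir-style entry — the same effect as the manual $contrib_extraincludes workaround Andrew gave.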
[
{
"msg_contents": "Should Arthur patch be included in PostgreSQL 14 - Beta 4?",
"msg_date": "Thu, 22 Jul 2021 11:23:34 +0200",
"msg_from": "Davide Fasolo <faze79@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: BUG #15293: Stored Procedure Triggered by Logical Replication is\n Unable to use Notification Events"
}
] |
[
{
"msg_contents": "Hi,\n\nWhen I was writing an extension which need to get the median of an array, I\ntried to find if postgres provide some api that can do that. I found all the\nplaces in postgres invoke qsort() and then get the median. I was thinking can\nwe do better by using \"quick select\" and is it worth it.\n\nCurrently, there are some places[1] in the code that need the median and can\nuse \"quick select\" instead. And some of them(spg_box_quad_picksplit /\nspg_range_quad_picksplit) are invoked frequently when INSERT or CREATE INDEX.\nSo, Peronally, It's acceptable to introduce a quick select api to improve these\nplaces.\n\nSince most of the logic of \"quick select\" is similar to quick sort, I think\nwe can reuse the code in sort_template.h. We only need to let the sort stop\nwhen find the target top Kth element.\n\nAttach a POC patch about this idea. I did some simple performance tests, I can\nsee about 10% performance gain in this test[2].\n\nThoughts ?\n\n[1]\n1.\nentry_dealloc\n\t...\n\t/* Record the (approximate) median usage */\n\tif (i > 0)\n\t\tpgss->cur_median_usage = entries[i / 2]->counters.usage;\n2.\nspg_box_quad_picksplit\n\tqsort(lowXs, in->nTuples, sizeof(float8), compareDoubles);\n\t...\n\tcentroid->low.x = lowXs[median];\n\n3.\nspg_range_quad_picksplit\n\tqsort(lowerBounds, nonEmptyCount, sizeof(RangeBound),\n\t...\n\tcentroid = range_serialize(typcache, &lowerBounds[median],\n\n4.\nspg_quad_picksplit\n\tqsort(sorted, in->nTuples, sizeof(*sorted), x_cmp);\n\t...\n\tcentroid->x = sorted[median]->x;\n\n\n\n[2]\ndrop table quad_box_tbl;\nCREATE unlogged TABLE quad_box_tbl (id int, b box);\ntruncate quad_box_tbl ;\nexplain (verbose, analyze)INSERT INTO quad_box_tbl\n SELECT (x - 1) * 10 + x, box(point(x * 10, x * 20), point(x * 10, x * 20 + 5))\n FROM generate_series(1, 1000000) x order by random();\n\n-----test create index\ndrop index quad_box_tbl_idx;\nCREATE INDEX quad_box_tbl_idx ON quad_box_tbl USING spgist(b);\n\n-------test 
results\nPATCH:\nTime: 2609.664 ms (00:02.610)\n\nHEAD:\nTime: 2903.765 ms (00:02.944)\n\nBest regards,\nHouzj",
"msg_date": "Thu, 22 Jul 2021 12:07:06 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Use quick select instead of qsort to get median"
},
{
"msg_contents": "On Thu, Jul 22, 2021 at 8:07 AM houzj.fnst@fujitsu.com <\nhouzj.fnst@fujitsu.com> wrote:\n>\n> Hi,\n>\n> When I was writing an extension which need to get the median of an array,\nI\n> tried to find if postgres provide some api that can do that. I found all\nthe\n> places in postgres invoke qsort() and then get the median. I was thinking\ncan\n> we do better by using \"quick select\" and is it worth it.\n\n> Attach a POC patch about this idea. I did some simple performance tests,\nI can\n> see about 10% performance gain in this test[2].\n>\n> Thoughts ?\n\n> 1.\n> entry_dealloc\n> ...\n> /* Record the (approximate) median usage */\n> if (i > 0)\n> pgss->cur_median_usage = entries[i / 2]->counters.usage;\n\nIt might be useful to be more precise here, but it seems it would be\nslower, too?\n\n> -----test create index\n> drop index quad_box_tbl_idx;\n> CREATE INDEX quad_box_tbl_idx ON quad_box_tbl USING spgist(b);\n>\n> -------test results\n> PATCH:\n> Time: 2609.664 ms (00:02.610)\n>\n> HEAD:\n> Time: 2903.765 ms (00:02.944)\n\nThat index type is pretty rare, as far as I know. That doesn't seem to be\nquite enough motivation to change the qsort template. If the goal was to\nimprove the speed of \"create spgist index\", would this still be the best\napproach? Also, there are other things under consideration that would add\ncomplexity to the qsort template [1], and this would add even more.\n\nLooking in the docs [2], we don't have a MEDIAN aggregate, but we do have\npercentile_disc(), and quick select might help there, but I haven't looked.\n\n[1] https://commitfest.postgresql.org/33/3038/\n[2] https://www.postgresql.org/docs/14/functions-aggregate.html\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 22 Jul 2021 10:02:28 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Use quick select instead of qsort to get median"
}
] |
[
{
"msg_contents": "I noticed that get_agg_clause_costs still claims that it recursively\nfinds Aggrefs in the expression tree, but I don't think that's been\ntrue since 0a2bc5d61.\n\nI've attached a patch that adjusts the comment so it's more aligned to\nwhat it now does.\n\nDavid",
"msg_date": "Fri, 23 Jul 2021 02:29:50 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Outdated comment in get_agg_clause_costs"
},
{
"msg_contents": "On Fri, 23 Jul 2021 at 02:29, David Rowley <dgrowleyml@gmail.com> wrote:\n> I've attached a patch that adjusts the comment so it's more aligned to\n> what it now does.\n\nThis was a bit more outdated than I first thought. I also removed the\nmention of the function setting the aggtranstype and what it mentions\nabout also gathering up \"counts\". I assume that was related to\nnumOrderedAggs which is now done in preprocess_aggref().\n\nI ended up pushing to master and PG14. The code was new to PG14 so I\nthought it made sense to keep master and 14 the same since 14 is not\nyet out the door.\n\nDavid\n\n\n",
"msg_date": "Mon, 26 Jul 2021 14:58:39 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Outdated comment in get_agg_clause_costs"
}
] |
[
{
"msg_contents": "It seems to me that when using the pg_amcheck --startblock and \n--endblock options on platforms where sizeof(long) == 4, you cannot \nspecify higher block numbers (unless you do tricks with negative \nnumbers). The attached patch should fix this by using strtoul() instead \nof strtol(). I also tightened up the number scanning a bit in other \nways, similar to the code in other frontend utilities. I know some \npeople have been working on tightening all this up. Please check that \nit's up to speed.",
"msg_date": "Thu, 22 Jul 2021 16:56:56 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "pg_amcheck: Fix block number parsing on command line"
},
{
"msg_contents": "\n\n> On Jul 22, 2021, at 7:56 AM, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n> Please check that it's up to speed.\n> <0001-pg_amcheck-Fix-block-number-parsing-on-command-line.patch>\n\nThis looks correct to me. Thanks for the fix.\n\nYour use of strtoul compares favorably to that in pg_resetwal in that you are checking errno and it is not. The consequence is:\n\nbin % ./pg_resetwal/pg_resetwal -e 1111111111111111111111111111111111111111111111111111\npg_resetwal: error: transaction ID epoch (-e) must not be -1\nbin % ./pg_resetwal/pg_resetwal -e junkstring \npg_resetwal: error: invalid argument for option -e\nTry \"pg_resetwal --help\" for more information.\n\nUnless people are relying on this behavior, I would think pg_resetwal should complain of an invalid argument for both of those, rather than complaining about -1. That's not to do with this patch, but if we're tightening up the use of strtol in frontend tools, maybe we can use the identical logic in both places.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Thu, 22 Jul 2021 09:18:51 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck: Fix block number parsing on command line"
},
{
"msg_contents": "On Thu, Jul 22, 2021 at 04:56:56PM +0200, Peter Eisentraut wrote:\n> It seems to me that when using the pg_amcheck --startblock and --endblock\n> options on platforms where sizeof(long) == 4, you cannot specify higher\n> block numbers (unless you do tricks with negative numbers). The attached\n> patch should fix this by using strtoul() instead of strtol(). I also\n> tightened up the number scanning a bit in other ways, similar to the code in\n> other frontend utilities. I know some people have been working on\n> tightening all this up. Please check that it's up to speed.\n\nYeah, some work is happening to tighten all that. Saying that, the\nfirst round of review I did for the option parsing is not involving\noption types other than int32, so the block options of pg_amcheck are\nnot changing for now, and what you are suggesting here is fine for the\nmoment for 14~. Still, note that I am planning to change that on HEAD\nwith an option parsing API for int64 that could be used for block\nnumbers, eliminating for example the 4 translatable strings for the\ncode being changed here.\n--\nMichael",
"msg_date": "Fri, 23 Jul 2021 14:11:49 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck: Fix block number parsing on command line"
},
{
"msg_contents": "On 22.07.21 18:18, Mark Dilger wrote:\n>> On Jul 22, 2021, at 7:56 AM, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n>>\n>> Please check that it's up to speed.\n>> <0001-pg_amcheck-Fix-block-number-parsing-on-command-line.patch>\n> \n> This looks correct to me. Thanks for the fix.\n\nCommitted this to 14 and master.\n\n> Your use of strtoul compares favorably to that in pg_resetwal in that you are checking errno and it is not. The consequence is:\n> \n> bin % ./pg_resetwal/pg_resetwal -e 1111111111111111111111111111111111111111111111111111\n> pg_resetwal: error: transaction ID epoch (-e) must not be -1\n> bin % ./pg_resetwal/pg_resetwal -e junkstring\n> pg_resetwal: error: invalid argument for option -e\n> Try \"pg_resetwal --help\" for more information.\n> \n> Unless people are relying on this behavior, I would think pg_resetwal should complain of an invalid argument for both of those, rather than complaining about -1. That's not to do with this patch, but if we're tightening up the use of strtol in frontend tools, maybe we can use the identical logic in both places.\n\nCommitted a fix for this to master.\n\n\n",
"msg_date": "Fri, 20 Aug 2021 11:08:26 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck: Fix block number parsing on command line"
}
] |
[
{
"msg_contents": "Hi hackers,\r\n\r\nAs previously discussed [0], canceling synchronous replication waits\r\ncan have the unfortunate side effect of making transactions visible on\r\na primary server before they are replicated. A failover at this time\r\nwould cause such transactions to be lost. The proposed solution in\r\nthe previous thread [0] involved blocking such cancellations, but many\r\nhad concerns about that approach (e.g., backends could be\r\nunresponsive, server restarts were still affected by this problem). I\r\nwould like to propose something more like what Fujii-san suggested [1]\r\nthat would avoid blocking cancellations while still preventing data\r\nloss. I believe this is a key missing piece of the synchronous\r\nreplication functionality in PostgreSQL.\r\n\r\nAFAICT there are a variety of ways that the aforementioned problem may\r\noccur:\r\n 1. Server restarts: As noted in the docs [2], \"waiting transactions\r\n will be marked fully committed once the primary database\r\n recovers.\" I think there are a few options for handling this,\r\n but the simplest would be to simply failover anytime the primary\r\n server shut down. My proposal may offer other ways of helping\r\n with this.\r\n 2. Backend crashes: If a backend crashes, the postmaster process\r\n will restart everything, leading to the same problem described in\r\n 1. However, this behavior can be prevented with the\r\n restart_after_crash parameter [3].\r\n 3. Client disconnections: During waits for synchronous replication,\r\n interrupt processing is turned off, so disconnected clients\r\n actually don't seem to cause a problem. The server will still\r\n wait for synchronous replication to complete prior to making the\r\n transaction visible on the primary.\r\n 4. 
Query cancellations and backend terminations: This appears to be\r\n the only gap where there is no way to avoid potential data loss,\r\n and it is the main target of my proposal.\r\n\r\nInstead of blocking query cancellations and backend terminations, I\r\nthink we should allow them to proceed, but we should keep the\r\ntransactions marked in-progress so they do not yet become visible to\r\nsessions on the primary. Once replication has caught up to the\r\nthe necessary point, the transactions can be marked completed, and\r\nthey would finally become visible.\r\n\r\nThe main advantages of this approach are 1) it still allows for\r\ncanceling waits for synchronous replication and 2) it provides an\r\nopportunity to view and manage waits for synchronous replication\r\noutside of the standard cancellation/termination functionality. The\r\ntooling for 2 could even allow a session to begin waiting for\r\nsynchronous replication again if it \"inadvertently interrupted a\r\nreplication wait...\" [4]. I think the main disadvantage of this\r\napproach is that transactions committed by a session may not be\r\nimmediately visible to the session when the command returns after\r\ncanceling the wait for synchronous replication. Instead, the\r\ntransactions would become visible in the future once the change is\r\nreplicated. This may cause problems for an application if it doesn't\r\nhandle this scenario carefully.\r\n\r\nWhat are folks' opinions on this idea? 
Is this something that is\r\nworth prototyping?\r\n\r\nNathan\r\n\r\n[0] https://www.postgresql.org/message-id/flat/C1F7905E-5DB2-497D-ABCC-E14D4DEE506C@yandex-team.ru\r\n[1] https://www.postgresql.org/message-id/4f8d54c9-6f18-23d5-c4de-9d6656d3a408%40oss.nttdata.com\r\n[2] https://www.postgresql.org/docs/current/warm-standby.html#SYNCHRONOUS-REPLICATION-HA\r\n[3] https://www.postgresql.org/docs/devel/runtime-config-error-handling.html#GUC-RESTART-AFTER-CRASH\r\n[4] https://www.postgresql.org/message-id/CA%2BTgmoZpwBEyPDZixeHN9ZeNJJjd3EBEQ8nJPaRAsVexhssfNg%40mail.gmail.com\r\n\r\n",
"msg_date": "Thu, 22 Jul 2021 21:17:56 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Avoiding data loss with synchronous replication"
},
{
"msg_contents": "On Fri, Jul 23, 2021 at 2:48 AM Bossart, Nathan <bossartn@amazon.com> wrote:\n>\n> Hi hackers,\n>\n> As previously discussed [0], canceling synchronous replication waits\n> can have the unfortunate side effect of making transactions visible on\n> a primary server before they are replicated. A failover at this time\n> would cause such transactions to be lost. The proposed solution in\n> the previous thread [0] involved blocking such cancellations, but many\n> had concerns about that approach (e.g., backends could be\n> unresponsive, server restarts were still affected by this problem). I\n> would like to propose something more like what Fujii-san suggested [1]\n> that would avoid blocking cancellations while still preventing data\n> loss. I believe this is a key missing piece of the synchronous\n> replication functionality in PostgreSQL.\n>\n> AFAICT there are a variety of ways that the aforementioned problem may\n> occur:\n> 1. Server restarts: As noted in the docs [2], \"waiting transactions\n> will be marked fully committed once the primary database\n> recovers.\" I think there are a few options for handling this,\n> but the simplest would be to simply failover anytime the primary\n> server shut down. My proposal may offer other ways of helping\n> with this.\n> 2. Backend crashes: If a backend crashes, the postmaster process\n> will restart everything, leading to the same problem described in\n> 1. However, this behavior can be prevented with the\n> restart_after_crash parameter [3].\n> 3. Client disconnections: During waits for synchronous replication,\n> interrupt processing is turned off, so disconnected clients\n> actually don't seem to cause a problem. The server will still\n> wait for synchronous replication to complete prior to making the\n> transaction visible on the primary.\n> 4. 
Query cancellations and backend terminations: This appears to be\n> the only gap where there is no way to avoid potential data loss,\n> and it is the main target of my proposal.\n>\n> Instead of blocking query cancellations and backend terminations, I\n> think we should allow them to proceed, but we should keep the\n> transactions marked in-progress so they do not yet become visible to\n> sessions on the primary.\n>\n\nOne naive question, what if the primary gets some error while changing\nthe status from in-progress to committed? Won't in such a case the\ntransaction will be visible on standby but not on the primary?\n\n> Once replication has caught up to the\n> the necessary point, the transactions can be marked completed, and\n> they would finally become visible.\n>\n\nIf the session issued the commit is terminated, will this work be done\nby some background process?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 23 Jul 2021 16:28:03 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding data loss with synchronous replication"
},
{
"msg_contents": "On Thu, 2021-07-22 at 21:17 +0000, Bossart, Nathan wrote:\n> As previously discussed [0], canceling synchronous replication waits\n> can have the unfortunate side effect of making transactions visible on\n> a primary server before they are replicated. A failover at this time\n> would cause such transactions to be lost.\n> \n> AFAICT there are a variety of ways that the aforementioned problem may\n> occur:\n> 4. Query cancellations and backend terminations: This appears to be\n> the only gap where there is no way to avoid potential data loss,\n> and it is the main target of my proposal.\n> \n> Instead of blocking query cancellations and backend terminations, I\n> think we should allow them to proceed, but we should keep the\n> transactions marked in-progress so they do not yet become visible to\n> sessions on the primary. Once replication has caught up to the\n> the necessary point, the transactions can be marked completed, and\n> they would finally become visible.\n> \n> The main advantages of this approach are 1) it still allows for\n> canceling waits for synchronous replication and 2) it provides an\n> opportunity to view and manage waits for synchronous replication\n> outside of the standard cancellation/termination functionality. The\n> tooling for 2 could even allow a session to begin waiting for\n> synchronous replication again if it \"inadvertently interrupted a\n> replication wait...\" [4]. I think the main disadvantage of this\n> approach is that transactions committed by a session may not be\n> immediately visible to the session when the command returns after\n> canceling the wait for synchronous replication. Instead, the\n> transactions would become visible in the future once the change is\n> replicated. This may cause problems for an application if it doesn't\n> handle this scenario carefully.\n> \n> What are folks' opinions on this idea? 
Is this something that is\n> worth prototyping?\n\nBut that would mean that changes ostensibly rolled back (because the\ncancel request succeeded) will later turn out to be committed after all,\njust like it is now (only later). Where is the advantage?\n\nBesides, there is no room for another transaction status in the\ncommit log.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Fri, 23 Jul 2021 13:22:42 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding data loss with synchronous replication"
},
{
"msg_contents": "Hi Nathan!\n\nThanks for you interest in the topic. I think in the thread [0] we almost agreed on general design.\nThe only left question is that we want to threat pg_ctl stop and kill SIGTERM differently to pg_terminate_backend().\n\n> 23 июля 2021 г., в 02:17, Bossart, Nathan <bossartn@amazon.com> написал(а):\n> \n> Hi hackers,\n> \n> As previously discussed [0], canceling synchronous replication waits\n> can have the unfortunate side effect of making transactions visible on\n> a primary server before they are replicated. A failover at this time\n> would cause such transactions to be lost. The proposed solution in\n> the previous thread [0] involved blocking such cancellations, but many\n> had concerns about that approach (e.g., backends could be\n> unresponsive, server restarts were still affected by this problem). I\n> would like to propose something more like what Fujii-san suggested [1]\n> that would avoid blocking cancellations while still preventing data\n> loss. I believe this is a key missing piece of the synchronous\n> replication functionality in PostgreSQL.\n> \n> AFAICT there are a variety of ways that the aforementioned problem may\n> occur:\n> 1. Server restarts: As noted in the docs [2], \"waiting transactions\n> will be marked fully committed once the primary database\n> recovers.\" I think there are a few options for handling this,\n> but the simplest would be to simply failover anytime the primary\n> server shut down. My proposal may offer other ways of helping\n> with this.\nI think simple check that no other primary exists would suffice.\nCurrently this is totally concern of HA-tool.\n\n> 2. Backend crashes: If a backend crashes, the postmaster process\n> will restart everything, leading to the same problem described in\n> 1. However, this behavior can be prevented with the\n> restart_after_crash parameter [3].\n> 3. 
Client disconnections: During waits for synchronous replication,\n> interrupt processing is turned off, so disconnected clients\n> actually don't seem to cause a problem. The server will still\n> wait for synchronous replication to complete prior to making the\n> transaction visible on the primary.\n+1.\n\n> 4. Query cancellations and backend terminations: This appears to be\n> the only gap where there is no way to avoid potential data loss,\n> and it is the main target of my proposal.\n> \n> Instead of blocking query cancellations and backend terminations, I\n> think we should allow them to proceed, but we should keep the\n> transactions marked in-progress so they do not yet become visible to\n> sessions on the primary. Once replication has caught up to the\n> the necessary point, the transactions can be marked completed, and\n> they would finally become visible.\n> \n> The main advantages of this approach are 1) it still allows for\n> canceling waits for synchronous replication\nYou can cancel synchronous replication by \nALTER SYSTEM SET synchnorou_standby_names to 'new quorum';\nSELECT pg_reload_conf();\n\nAll backends waiting for sync rep will proceed with new quorum.\n\n> and 2) it provides an\n> opportunity to view and manage waits for synchronous replication\n> outside of the standard cancellation/termination functionality. The\n> tooling for 2 could even allow a session to begin waiting for\n> synchronous replication again if it \"inadvertently interrupted a\n> replication wait...\" [4]. I think the main disadvantage of this\n> approach is that transactions committed by a session may not be\n> immediately visible to the session when the command returns after\n> canceling the wait for synchronous replication. Instead, the\n> transactions would become visible in the future once the change is\n> replicated. This may cause problems for an application if it doesn't\n> handle this scenario carefully.\n> \n> What are folks' opinions on this idea? 
Is this something that is\n> worth prototyping?\n\nIn fact you propose converting transaction to 2PC if we get CANCEL during sync rep wait.\nTransferring locks and other stuff somewhere, acquiring new VXid to our backend, sending CommandComplete while it's not in fact complete etc.\nI think it's kind of overly complex for provided reasons.\n\nThe ultimate reason of synchronous replication is to make a client wait when it's necessary to wait. If the client wish to execute more commands they can open new connection or set synchronous_commit to desired level in first place. Canceling committed locally transaction will not be possible anyway.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n[0] https://www.postgresql.org/message-id/flat/6a052e81060824a8286148b1165bafedbd7c86cd.camel%40j-davis.com#415dc2f7d41b8a251b419256407bb64d\n\n",
"msg_date": "Fri, 23 Jul 2021 16:32:08 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding data loss with synchronous replication"
},
{
"msg_contents": "On 7/23/21, 3:58 AM, \"Amit Kapila\" <amit.kapila16@gmail.com> wrote:\r\n> On Fri, Jul 23, 2021 at 2:48 AM Bossart, Nathan <bossartn@amazon.com> wrote:\r\n>> Instead of blocking query cancellations and backend terminations, I\r\n>> think we should allow them to proceed, but we should keep the\r\n>> transactions marked in-progress so they do not yet become visible to\r\n>> sessions on the primary.\r\n>>\r\n>\r\n> One naive question, what if the primary gets some error while changing\r\n> the status from in-progress to committed? Won't in such a case the\r\n> transaction will be visible on standby but not on the primary?\r\n\r\nYes. In this case, the transaction would remain in-progress on the\r\nprimary until it can be marked committed.\r\n\r\n>> Once replication has caught up to the\r\n>> the necessary point, the transactions can be marked completed, and\r\n>> they would finally become visible.\r\n>>\r\n>\r\n> If the session issued the commit is terminated, will this work be done\r\n> by some background process?\r\n\r\nI think the way I'm imagining it is that a background process would be\r\nresponsible for handling all of the \"offloaded\" transactions. I'm not\r\nwedded to any particular design at this point, though.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Fri, 23 Jul 2021 17:53:21 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoiding data loss with synchronous replication"
},
{
"msg_contents": "On 7/23/21, 4:23 AM, \"Laurenz Albe\" <laurenz.albe@cybertec.at> wrote:\r\n> But that would mean that changes ostensibly rolled back (because the\r\n> cancel request succeeded) will later turn out to be committed after all,\r\n> just like it is now (only later). Where is the advantage?\r\n\r\nThe advantage is that I can cancel waits for synchronous replication\r\nwithout risking data loss. The transactions would still be marked in-\r\nprogress until we get the proper acknowledgement from the standbys.\r\n\r\n> Besides, there is no room for another transaction status in the\r\n> commit log.\r\n\r\nRight. Like the existing synchronous replication functionality, the\r\ncommit log would be updated, but the transactions would still appear\r\nto be in-progress. Today, this is done via the procarray.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Fri, 23 Jul 2021 17:53:47 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoiding data loss with synchronous replication"
},
{
"msg_contents": "On 7/23/21, 4:33 AM, \"Andrey Borodin\" <x4mmm@yandex-team.ru> wrote:\r\n> Thanks for you interest in the topic. I think in the thread [0] we almost agreed on general design.\r\n> The only left question is that we want to threat pg_ctl stop and kill SIGTERM differently to pg_terminate_backend().\r\n\r\nI didn't get the idea that there was a tremendous amount of support\r\nfor the approach to block canceling waits for synchronous replication.\r\nFWIW this was my initial approach as well, but I've been trying to\r\nthink of alternatives.\r\n\r\nIf we can gather support for some variation of the block-cancels\r\napproach, I think that would be preferred over my proposal from a\r\ncomplexity standpoint. Robert's idea to provide a way to understand\r\nthe intent of the cancellation/termination request [0] could improve\r\nmatters. Perhaps adding an argument to pg_cancel/terminate_backend()\r\nand using different signals to indicate that we want to cancel the\r\nwait would be something that folks could get on board with.\r\n\r\nNathan\r\n\r\n[0] https://www.postgresql.org/message-id/CA%2BTgmoaW8syC_wqQcsJ%3DsQ0gTbFVC6MqYmxbwNHk5w%3DxJ-McOQ%40mail.gmail.com\r\n\r\n",
"msg_date": "Fri, 23 Jul 2021 17:54:20 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoiding data loss with synchronous replication"
},
{
"msg_contents": "\n\n> 23 июля 2021 г., в 22:54, Bossart, Nathan <bossartn@amazon.com> написал(а):\n> \n> On 7/23/21, 4:33 AM, \"Andrey Borodin\" <x4mmm@yandex-team.ru> wrote:\n>> Thanks for you interest in the topic. I think in the thread [0] we almost agreed on general design.\n>> The only left question is that we want to threat pg_ctl stop and kill SIGTERM differently to pg_terminate_backend().\n> \n> I didn't get the idea that there was a tremendous amount of support\n> for the approach to block canceling waits for synchronous replication.\n> FWIW this was my initial approach as well, but I've been trying to\n> think of alternatives.\n> \n> If we can gather support for some variation of the block-cancels\n> approach, I think that would be preferred over my proposal from a\n> complexity standpoint. \nLet's clearly enumerate problems of blocking.\nIt's been mentioned that backend is not responsive when cancelation is blocked. But on the contrary, it's very responsive.\n\npostgres=# alter system set synchronous_standby_names to 'bogus';\nALTER SYSTEM\npostgres=# alter system set synchronous_commit_cancelation TO off ;\nALTER SYSTEM\npostgres=# select pg_reload_conf();\n2021-07-24 15:35:03.054 +05 [10452] LOG: received SIGHUP, reloading configuration files\nl \n---\nt\n(1 row)\npostgres=# begin;\nBEGIN\npostgres=*# insert into t1 values(0);\nINSERT 0 1\npostgres=*# commit ;\n^CCancel request sent\nWARNING: canceling wait for synchronous replication requested, but cancelation is not allowed\nDETAIL: The COMMIT record has already flushed to WAL locally and might not have been replicated to the standby. We must wait here.\n^CCancel request sent\nWARNING: canceling wait for synchronous replication requested, but cancelation is not allowed\nDETAIL: The COMMIT record has already flushed to WAL locally and might not have been replicated to the standby. We must wait here.\n\nIt tells clearly what's wrong. 
If it's still not enough, let's add hint about synchronous standby names.\n\nAre there any other problems with blocking cancels?\n\n\n> Robert's idea to provide a way to understand\n> the intent of the cancellation/termination request [0] could improve\n> matters. Perhaps adding an argument to pg_cancel/terminate_backend()\n> and using different signals to indicate that we want to cancel the\n> wait would be something that folks could get on board with.\n\nSemantics of cancelation assumes correct query interruption. This is not possible already when we committed locally. There cannot be any correct cancelation. And I don't think it worth to add incorrect cancelation.\n\n\nInterestingly, converting transaction to 2PC is a neat idea when the backend is terminated. It provides more guaranties that transaction will commit correctly even after restart. But we may be short of max_prepared_xacts slots...\nAnyway backend termination bothers me a lot less than cancelation - drivers do not terminate queries on their own. But they cancel queries by default.\n\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Sat, 24 Jul 2021 15:53:15 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding data loss with synchronous replication"
},
{
"msg_contents": "Hi,\n\nOn 2021-07-22 21:17:56 +0000, Bossart, Nathan wrote:\n> AFAICT there are a variety of ways that the aforementioned problem may\n> occur:\n> 1. Server restarts: As noted in the docs [2], \"waiting transactions\n> will be marked fully committed once the primary database\n> recovers.\" I think there are a few options for handling this,\n> but the simplest would be to simply failover anytime the primary\n> server shut down. My proposal may offer other ways of helping\n> with this.\n> 2. Backend crashes: If a backend crashes, the postmaster process\n> will restart everything, leading to the same problem described in\n> 1. However, this behavior can be prevented with the\n> restart_after_crash parameter [3].\n> 3. Client disconnections: During waits for synchronous replication,\n> interrupt processing is turned off, so disconnected clients\n> actually don't seem to cause a problem. The server will still\n> wait for synchronous replication to complete prior to making the\n> transaction visible on the primary.\n> 4. Query cancellations and backend terminations: This appears to be\n> the only gap where there is no way to avoid potential data loss,\n> and it is the main target of my proposal.\n> \n> Instead of blocking query cancellations and backend terminations, I\n> think we should allow them to proceed, but we should keep the\n> transactions marked in-progress so they do not yet become visible to\n> sessions on the primary. Once replication has caught up to the\n> the necessary point, the transactions can be marked completed, and\n> they would finally become visible.\n\nI think there's two aspects making this proposal problematic:\n\nFirst, from the user experience side of things, the issue is that this seems\nto propose violating read-your-own-writes. Within a single connection to a\nsingle node. 
Which imo is *far* worse than seeing writes that haven't yet been\nacknowledged as replicated after a query cancel.\n\nSecond, on the implementation side, I think this proposal practically amounts\nto internally converting plain transaction commits into 2PC\nprepare/commit. With all the associated overhead (two WAL entries/flushes per\ncommit, needing a separate set of procarray entries to hold the resources for\nthe prepared-but-not-committed transactions, potential for running out of\nthe extra procarray slots). What if a user rapidly commits-cancels in a loop?\nYou'll almost immediately run out of procarray slots to represent all those\n\"not really committed\" transactions.\n\nI think there's benefit in optionally turning all transactions into 2PC ones,\nbut I don't see it being fast enough to be the only option.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 24 Jul 2021 17:24:32 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding data loss with synchronous replication"
},
{
"msg_contents": "Hi,\n\nOn 2021-07-24 15:53:15 +0500, Andrey Borodin wrote:\n> Are there any other problems with blocking cancels?\n\nUnless you have commandline access to the server, it's not hard to get\ninto a situation where you can't change the configuration setting\nbecause all connections are hanging, and you can't even log in to do an\nALTER SERVER etc. You can't kill applications to kill the connection,\nbecause they will just continue to hang.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 24 Jul 2021 17:29:06 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding data loss with synchronous replication"
},
{
"msg_contents": "> 25 июля 2021 г., в 05:29, Andres Freund <andres@anarazel.de> написал(а):\n> \n> Hi,\n> \n> On 2021-07-24 15:53:15 +0500, Andrey Borodin wrote:\n>> Are there any other problems with blocking cancels?\n> \n> Unless you have commandline access to the server, it's not hard to get\n> into a situation where you can't change the configuration setting\n> because all connections are hanging, and you can't even log in to do an\n> ALTER SERVER etc. You can't kill applications to kill the connection,\n> because they will just continue to hang.\n\nHmm, yes, it's not hard to get to this situation. Intentionally. But what would be setup to get into such troubles? Setting sync rep, but not configuring HA tool?\n\nIn normal circumstances HA cluster is not configured to allow this. Normally hanging commits are part of the failover. Somewhere new primary server is operating. You have to find commandline access to the server to execute pg_rewind, and join this node to cluster again as a standby.\n\nAnyway it's a good idea to set up superuser_reserved_connections for administrative intervention [0].\n\nI like the idea of transferring transaction locks somewhere until synchronous_commit requirements are satisfied. It makes us closer to making this locks durable to survive restart. But, IMO, the complexity and potentially dangerous conditions outweigh the benefits of this approach easily. \n\nThanks!\n\nBest regards, Andrey Borodin.\n\n[0]",
"msg_date": "Sun, 25 Jul 2021 14:13:18 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding data loss with synchronous replication"
},
{
"msg_contents": "On 7/24/21, 3:54 AM, \"Andrey Borodin\" <x4mmm@yandex-team.ru> wrote:\r\n> Let's clearly enumerate problems of blocking.\r\n> It's been mentioned that backend is not responsive when cancelation is blocked. But on the contrary, it's very responsive.\r\n\r\nIt is responsive in the sense that it emits a WARNING to the client\r\nwhose backend received the request. However, it is still not\r\nresponsive in other aspects. The backend won't take the requested\r\naction, and if the action was requested via\r\npg_cancel/terminate_backend(), no useful feedback is provided to the\r\nuser to explain why it is blocked.\r\n\r\n> Semantics of cancelation assumes correct query interruption. This is not possible already when we committed locally. There cannot be any correct cancelation. And I don't think it worth to add incorrect cancelation.\r\n\r\nThe latest version of the block-cancels patch that I've seen still\r\nallows you to cancel things if you really want to. For example, you\r\ncan completely turn off synchronous replication by unsetting\r\nsynchronous_standby_names. Alternatively, you could store the value\r\nof the new block-cancels parameter in shared memory and simply turn\r\nthat off to allow cancellations to proceed. In either case, a user is\r\nforced to change the settings for the whole server. I think allowing\r\nusers to target a specific synchronous replication wait is useful.\r\nEven if I want to block canceling waits for most queries, perhaps I am\r\nokay with unblocking an administrative session that is stuck trying to\r\nupdate a password (although that could also be achieved by remembering\r\nto set synchronous_commit = local).\r\n\r\nWhat do you think about allowing multiple sets of behavior with the\r\nnew parameter? The \"always allow\" value would make things work just\r\nlike they do today. 
The \"when specifically requested\" value would\r\nallow users to use a new mechanism (perhaps new signals) to\r\nintentionally cancel synchronous replication waits. And the \"always\r\nblock\" value would disallow blocking such waits without altering the\r\nserver-wide settings.\r\n\r\nNathan",
"msg_date": "Mon, 26 Jul 2021 17:08:52 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoiding data loss with synchronous replication"
},
{
"msg_contents": "On 7/24/21, 5:25 PM, \"Andres Freund\" <andres@anarazel.de> wrote:\r\n> First, from the user experience side of things, the issue is that this seems\r\n> to propose violating read-your-own-writes. Within a single connection to a\r\n> single node. Which imo is *far* worse than seeing writes that haven't yet been\r\n> acknowledged as replicated after a query cancel.\r\n\r\nRight. I suspect others will have a similar opinion.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Mon, 26 Jul 2021 17:14:47 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoiding data loss with synchronous replication"
}
] |
[
{
"msg_contents": "Hi\r\n\r\n\r\n\r\nWhen reading PG DOC, found some example code not correct as it said.\r\n\r\nhttps://www.postgresql.org/docs/devel/regress-run.html\r\n\r\n\r\n\r\nHere's a tiny fix in regress.sgml.\r\n\r\n\r\n\r\n-make check PGOPTIONS=\"-c log_checkpoints=on -c work_mem=50MB\"\r\n\r\n+make check PGOPTIONS=\"-c geqo=off -c work_mem=50MB\"\r\n\r\n\r\n\r\nlog_checkpoints couldn't be set in PGOPTIONS.\r\n\r\nReplace log_checkpoints with geqo in the example code.\r\n\r\n\r\n\r\n-make check EXTRA_REGRESS_OPTS=\"--temp-config=test_postgresql.conf\"\r\n\r\n+make check EXTRA_REGRESS_OPTS=\"--temp-config=$(pwd)/test_postgresql.conf\"\r\n\r\n\r\n\r\nUser needs to specify $(pwd) to let the command execute as expected.\r\n\r\n\r\n\r\nThe above example code is added by Peter in PG14. So I think we need to apply this fix at PG14/master.\r\n\r\nI proposed this fix at pgsql-docs@lists.postgresql.org<mailto:pgsql-docs@lists.postgresql.org> at [1]. But no reply except Craig. So I please allow me to post the patch at Hackers’ mail list again in case the fix is missed.\r\n\r\n\r\n\r\n[1] https://www.postgresql.org/message-id/OS0PR01MB6113FA937648B8F7A372359BFB009%40OS0PR01MB6113.jpnprd01.prod.outlook.com\r\n\r\n\r\n\r\nRegards,\r\n\r\nTang",
"msg_date": "Fri, 23 Jul 2021 06:12:02 +0000",
"msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "[Doc] Tiny fix for regression tests example"
},
{
"msg_contents": "On Fri, Jul 23, 2021 at 06:12:02AM +0000, tanghy.fnst@fujitsu.com wrote:\n> Here's a tiny fix in regress.sgml.\n> \n> -make check PGOPTIONS=\"-c log_checkpoints=on -c work_mem=50MB\"\n> +make check PGOPTIONS=\"-c geqo=off -c work_mem=50MB\"\n> \n> log_checkpoints couldn't be set in PGOPTIONS.\n> \n> Replace log_checkpoints with geqo in the example code.\n\nRight, that won't work. What about using something more\ndeveloper-oriented here, say force_parallel_mode=regress?\n\n> -make check EXTRA_REGRESS_OPTS=\"--temp-config=test_postgresql.conf\"\n> +make check EXTRA_REGRESS_OPTS=\"--temp-config=$(pwd)/test_postgresql.conf\"\n> \n> User needs to specify $(pwd) to let the command execute as expected.\n\nThis works as-is.\n--\nMichael",
"msg_date": "Mon, 26 Jul 2021 13:04:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [Doc] Tiny fix for regression tests example"
},
{
"msg_contents": "On Monday, July 26, 2021 1:04 PM, Michael Paquier <michael@paquier.xyz> wrote:\n>> -make check PGOPTIONS=\"-c log_checkpoints=on -c work_mem=50MB\"\n>> +make check PGOPTIONS=\"-c geqo=off -c work_mem=50MB\"\n>> \n>> log_checkpoints couldn't be set in PGOPTIONS.\n>> \n>> Replace log_checkpoints with geqo in the example code\n>>\n>Right, that won't work. What about using something more\n>developer-oriented here, say force_parallel_mode=regress?\n\nThanks for your comment. Agree with your suggestion.\nModified it in the attachment patch.\n\nRegards,\nTang",
"msg_date": "Mon, 26 Jul 2021 05:50:28 +0000",
"msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: [Doc] Tiny fix for regression tests example"
},
{
"msg_contents": "On Mon, Jul 26, 2021 at 05:50:28AM +0000, tanghy.fnst@fujitsu.com wrote:\n> Thanks for your comment. Agree with your suggestion.\n> Modified it in the attachment patch.\n\nOkay, applied, but without the pwd part.\n--\nMichael",
"msg_date": "Mon, 26 Jul 2021 16:28:04 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [Doc] Tiny fix for regression tests example"
}
] |
[
{
"msg_contents": "Hi!\n\nFrom time to time I observe $subj on clusters using logical replication.\nI most of cases there are a lot of other errors. Probably $subj condition should be kind of impossible without other problems.\nI propose to enhance error logging of XLogReadRecord() in ReadPageInternal().\n\nThank you!\n\nBest regards, Andrey Borodin.",
"msg_date": "Fri, 23 Jul 2021 14:07:27 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Logical replication error \"no record found\" /* shouldn't happen */"
},
{
"msg_contents": "Hello.\n\nI saw this error multiple times trying to replicate the 2-3 TB server\n(version 11 to version 12). I was unable to find any explanation for\nthis error.\n\nThanks,\nMichail.\n\n\n",
"msg_date": "Sat, 24 Jul 2021 13:26:55 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication error \"no record found\" /* shouldn't happen\n */"
},
{
"msg_contents": "On 2021-Jul-23, Andrey Borodin wrote:\n\n> Hi!\n> \n> From time to time I observe $subj on clusters using logical replication.\n> I most of cases there are a lot of other errors. Probably $subj condition should be kind of impossible without other problems.\n> I propose to enhance error logging of XLogReadRecord() in ReadPageInternal().\n\nHmm.\n\nA small problem in this patch is that XLogReaderValidatePageHeader\nalready sets errormsg_buf; you're overwriting that. I suggest to leave\nthat untouched. There are other two cases where the problem occurs in\npage_read() callback; ReadPageInternal explicitly documents that it\ndoesn't set the error in that case. We have two options to deal with\nthat:\n\n1. change all existing callbacks to set the errormsg_buf depending on\nwhat actually fails, and then if they return failure without an error\nmessage, add something like your proposed message.\n2. throw error directly in the callback rather than returning. I don't\nthink this strategy actually works\n\nI attach a cut-down patch that doesn't deal with the page_read callbacks\nissue, just added stub comments in xlog.c where something should be\ndone.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"I am amazed at [the pgsql-sql] mailing list for the wonderful support, and\nlack of hesitasion in answering a lost soul's question, I just wished the rest\nof the mailing list could be like this.\" (Fotis)\n (http://archives.postgresql.org/pgsql-sql/2006-06/msg00265.php)",
"msg_date": "Sun, 12 Dec 2021 20:57:00 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication error \"no record found\" /* shouldn't happen\n */"
}
] |
[
{
"msg_contents": "Hi,\n\nThis is a minor leak, oversight in\nhttps://github.com/postgres/postgres/commit/4526951d564a7eed512b4a0ac3b5893e0a115690#diff-e399f5c029192320f310a79f18c20fb18c8e916fee993237f6f82f05dad851c5\n\nExplainPropertyText does not save the *relations->data* pointer and\nvar relations goes out of scope.\n\nNo need to backpatch.\n\nregards,\nRanier Vilela",
"msg_date": "Fri, 23 Jul 2021 11:22:37 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fix memory leak when output postgres_fdw's \"Relations\""
},
{
"msg_contents": "Ranier Vilela <ranier.vf@gmail.com> writes:\n> This is a minor leak, oversight in\n> https://github.com/postgres/postgres/commit/4526951d564a7eed512b4a0ac3b5893e0a115690#diff-e399f5c029192320f310a79f18c20fb18c8e916fee993237f6f82f05dad851c5\n\nI don't think you understand how Postgres memory management works.\nThere's no permanent leak here, just till the end of the command;\nso it's pretty doubtful that there's any need to expend cycles on\nan explicit pfree.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 23 Jul 2021 10:32:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fix memory leak when output postgres_fdw's \"Relations\""
},
{
"msg_contents": "Em sex., 23 de jul. de 2021 às 11:32, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n\n> Ranier Vilela <ranier.vf@gmail.com> writes:\n> > This is a minor leak, oversight in\n> >\n> https://github.com/postgres/postgres/commit/4526951d564a7eed512b4a0ac3b5893e0a115690#diff-e399f5c029192320f310a79f18c20fb18c8e916fee993237f6f82f05dad851c5\n>\n> I don't think you understand how Postgres memory management works.\n>\nMaybe not yet. Valgrind may also don't understand yet.\n\n\n> There's no permanent leak here, just till the end of the command;\n> so it's pretty doubtful that there's any need to expend cycles on\n> an explicit pfree.\n>\nMaybe.\n\n==30691== 24 bytes in 1 blocks are definitely lost in loss record 123 of 469\n==30691== at 0x8991F0: MemoryContextAlloc (mcxt.c:893)\n==30691== by 0x899F29: MemoryContextStrdup (mcxt.c:1291)\n==30691== by 0x864E09: RelationInitIndexAccessInfo (relcache.c:1419)\n==30691== by 0x865F81: RelationBuildDesc (relcache.c:1175)\n==30691== by 0x868575: load_critical_index (relcache.c:4168)\n==30691== by 0x8684A0: RelationCacheInitializePhase3 (relcache.c:3980)\n==30691== by 0x88047A: InitPostgres (postinit.c:1031)\n==30691== by 0x773F12: PostgresMain (postgres.c:4081)\n==30691== by 0x6F9C33: BackendRun (postmaster.c:4506)\n==30691== by 0x6F96D8: BackendStartup (postmaster.c:4228)\n==30691== by 0x6F8C08: ServerLoop (postmaster.c:1745)\n==30691== by 0x6F747B: PostmasterMain (postmaster.c:1417)\n\nIs this is a false-positive?\n\nregards,\nRanier Vilela\n\nEm sex., 23 de jul. de 2021 às 11:32, Tom Lane <tgl@sss.pgh.pa.us> escreveu:Ranier Vilela <ranier.vf@gmail.com> writes:\n> This is a minor leak, oversight in\n> https://github.com/postgres/postgres/commit/4526951d564a7eed512b4a0ac3b5893e0a115690#diff-e399f5c029192320f310a79f18c20fb18c8e916fee993237f6f82f05dad851c5\n\nI don't think you understand how Postgres memory management works.Maybe not yet. Valgrind may also don't understand yet. 
\nThere's no permanent leak here, just till the end of the command;\nso it's pretty doubtful that there's any need to expend cycles on\nan explicit pfree.Maybe.==30691== 24 bytes in 1 blocks are definitely lost in loss record 123 of 469==30691== at 0x8991F0: MemoryContextAlloc (mcxt.c:893)==30691== by 0x899F29: MemoryContextStrdup (mcxt.c:1291)==30691== by 0x864E09: RelationInitIndexAccessInfo (relcache.c:1419)==30691== by 0x865F81: RelationBuildDesc (relcache.c:1175)==30691== by 0x868575: load_critical_index (relcache.c:4168)==30691== by 0x8684A0: RelationCacheInitializePhase3 (relcache.c:3980)==30691== by 0x88047A: InitPostgres (postinit.c:1031)==30691== by 0x773F12: PostgresMain (postgres.c:4081)==30691== by 0x6F9C33: BackendRun (postmaster.c:4506)==30691== by 0x6F96D8: BackendStartup (postmaster.c:4228)==30691== by 0x6F8C08: ServerLoop (postmaster.c:1745)==30691== by 0x6F747B: PostmasterMain (postmaster.c:1417) Is this is a false-positive?regards,Ranier Vilela",
"msg_date": "Fri, 23 Jul 2021 16:20:37 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix memory leak when output postgres_fdw's \"Relations\""
},
{
"msg_contents": "On Fri, Jul 23, 2021 at 04:20:37PM -0300, Ranier Vilela wrote:\n> Maybe not yet. Valgrind may also don't understand yet.\n\nI think that you should do things the opposite way. In short, instead\nof attempting to understand first Valgrind or Coverity and then\nPostgres, try to understand the internals of Postgres first and then\ninterpret what Valgrind or even Coverity tell you.\n\nTom is right. There is no point in caring about the addition some\npfree()'s in the backend code as long as they don't prove to be an\nactual leak in the context where a code path is used, and nobody will\nfollow you on that. Some examples where this would be worth caring\nabout are things like tight loops leaking a bit of memory each time\nthese are taken, with leaks that can be easily triggered by the user\nwith some specific SQL commands, or even memory contexts not cleaned\nup where they should, impacting some parts of the system (like the\nexecutor, or even the planner) for a long-running analytical query.\n--\nMichael",
"msg_date": "Mon, 26 Jul 2021 11:13:14 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fix memory leak when output postgres_fdw's \"Relations\""
}
] |
[
{
"msg_contents": "This is a reply to an old thread with the same name:\nhttps://www.postgresql.org/message-id/1513971675.5870501.1439797066345.JavaMail.yahoo@mail.yahoo.com\nI was not able to do a proper reply since I cannot download the raw\nmessage: https://postgrespro.com/list/thread-id/2483942\n\nAnyway, the problem with thread sanitizer is still present. If I try to\nbuild libpq v13.3 with thread sanitizer, I get a configuration error like\nthis:\n\nconfigure:18852: checking thread safety of required library functions\nconfigure:18875: /usr/bin/clang-12 -o conftest -Wall -Wmissing-prototypes\n-Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels\n-Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv\n-Wno-unused-command-line-argument -m64 -O3 -fsanitize=thread -pthread\n-D_REENTRANT -D_THREAD_SAFE -D_POSIX_PTHREAD_SEMANTICS -DIN_CONFIGURE\n-I/home/mmatrosov/.conan/data/zlib/1.2.11/_/_/package/98c3afaf7dd035538c92258b227714d9d4a19852/include\n-DNDEBUG -D_GNU_SOURCE -m64\n-L/home/mmatrosov/.conan/data/zlib/1.2.11/_/_/package/98c3afaf7dd035538c92258b227714d9d4a19852/lib\n conftest.c -lz -lm -lz >&5\nconfigure:18875: $? = 0\nconfigure:18875: ./conftest\n==================\nWARNING: ThreadSanitizer: data race (pid=3413987)\n Write of size 4 at 0x000000f1744c by thread T2:\n #0 func_call_2 <null> (conftest+0x4b5e51)\n\n Previous read of size 4 at 0x000000f1744c by thread T1:\n #0 func_call_1 <null> (conftest+0x4b5d12)\n\n Location is global 'errno2_set' of size 4 at 0x000000f1744c\n(conftest+0x000000f1744c)\n\n Thread T2 (tid=3413990, running) created by main thread at:\n #0 pthread_create <null> (conftest+0x424b3b)\n #1 main <null> (conftest+0x4b5b49)\n\n Thread T1 (tid=3413989, running) created by main thread at:\n #0 pthread_create <null> (conftest+0x424b3b)\n #1 main <null> (conftest+0x4b5b2e)\n\n...\n\nconfigure:18879: result: no\nconfigure:18881: error: thread test program failed\nThis platform is not thread-safe. 
Check the file 'config.log' or compile\nand run src/test/thread/thread_test for the exact reason.\nUse --disable-thread-safety to disable thread safety.\n\n\nI am not sure what is the proper fix for the issue, but I at least suggest\nto disable this test when building with thread sanitizer:\nhttps://github.com/conan-io/conan-center-index/pull/6472/files#diff-b8639f81e30f36c8ba29a0878f1ef4d9f1552293bc9098ebb9b429ddb1f0935f\n\n-----\nBest regards, Mikhail Matrosov",
"msg_date": "Fri, 23 Jul 2021 17:27:36 +0300",
"msg_from": "Mikhail Matrosov <mikhail.matrosov@gmail.com>",
"msg_from_op": true,
"msg_subject": "Configure with thread sanitizer fails the thread test"
},
{
"msg_contents": "On 2021-Jul-23, Mikhail Matrosov wrote:\n\n> I am not sure what is the proper fix for the issue, but I at least suggest\n> to disable this test when building with thread sanitizer:\n> https://github.com/conan-io/conan-center-index/pull/6472/files#diff-b8639f81e30f36c8ba29a0878f1ef4d9f1552293bc9098ebb9b429ddb1f0935f\n\nHere's the proposed patch. Patches posted to the mailing list by their\nauthors proposed for inclusion are considered to be under the PostgreSQL\nlicense, but this patch hasn't been posted by the author so let's assume\nthey're not authorizing them to use it. (Otherwise, why wouldn't they\njust post it here instead of doing the absurdly convoluted dance of a\ngithub PR?)\n\ndiff --git a/src/test/thread/thread_test.c b/src/test/thread/thread_test.c\nindex e1bec01..e4ffd78 100644\n--- a/src/test/thread/thread_test.c\n+++ b/src/test/thread/thread_test.c\n@@ -61,6 +61,14 @@ main(int argc, char *argv[])\n \tfprintf(stderr, \"Perhaps rerun 'configure' using '--enable-thread-safety'.\\n\");\n \treturn 1;\n }\n+\n+#elif __has_feature(thread_sanitizer)\n+int\n+main(int argc, char *argv[])\n+{\n+\tprintf(\"Thread safety check is skipped since it does not work with thread sanitizer.\\n\");\n+\treturn 0;\n+}\n #else\n \n /* This must be down here because this is the code that uses threads. */\n\n-- \nÁlvaro Herrera\n\n\n",
"msg_date": "Fri, 23 Jul 2021 13:35:22 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Configure with thread sanitizer fails the thread test"
},
{
"msg_contents": "On 2021-Jul-23, Alvaro Herrera wrote:\n\n> On 2021-Jul-23, Mikhail Matrosov wrote:\n> \n> > I am not sure what is the proper fix for the issue, but I at least suggest\n> > to disable this test when building with thread sanitizer:\n> > https://github.com/conan-io/conan-center-index/pull/6472/files#diff-b8639f81e30f36c8ba29a0878f1ef4d9f1552293bc9098ebb9b429ddb1f0935f\n> \n> Here's the proposed patch. Patches posted to the mailing list by their\n> authors proposed for inclusion are considered to be under the PostgreSQL\n> license, but this patch hasn't been posted by the author so let's assume\n> they're not authorizing them to use it. (Otherwise, why wouldn't they\n> just post it here instead of doing the absurdly convoluted dance of a\n> github PR?)\n\n... that said, I wonder why would we do this in the thread_test program\nrather than in configure itself. Wouldn't it make more sense for the\nconfigure test to be skipped altogether (marking the result as\nthread-safe) when running under thread sanitizer, if there's a way to\ndetect that?\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\"No me acuerdo, pero no es cierto. No es cierto, y si fuera cierto,\n no me acuerdo.\" (Augusto Pinochet a una corte de justicia)\n\n\n",
"msg_date": "Fri, 23 Jul 2021 13:36:54 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Configure with thread sanitizer fails the thread test"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> ... that said, I wonder why would we do this in the thread_test program\n> rather than in configure itself. Wouldn't it make more sense for the\n> configure test to be skipped altogether (marking the result as\n> thread-safe) when running under thread sanitizer, if there's a way to\n> detect that?\n\nTBH, I wonder why we don't just nuke thread_test.c altogether.\nIs it still useful in 2021? Machines that still need\n--disable-thread-safety can doubtless be counted without running\nout of fingers, and I think their owners can be expected to know\nthat they need that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 23 Jul 2021 13:42:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Configure with thread sanitizer fails the thread test"
},
{
"msg_contents": "On Fri, Jul 23, 2021 at 01:42:41PM -0400, Tom Lane wrote:\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > ... that said, I wonder why would we do this in the thread_test program\n> > rather than in configure itself. Wouldn't it make more sense for the\n> > configure test to be skipped altogether (marking the result as\n> > thread-safe) when running under thread sanitizer, if there's a way to\n> > detect that?\n> \n> TBH, I wonder why we don't just nuke thread_test.c altogether.\n> Is it still useful in 2021? Machines that still need\n> --disable-thread-safety can doubtless be counted without running\n> out of fingers, and I think their owners can be expected to know\n> that they need that.\n\nI think it is fine to remove it. It was really designed to just try a\nlot of flags to see what the compiler required, but things have become\nmuch more stable since it was written.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Fri, 23 Jul 2021 14:01:24 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Configure with thread sanitizer fails the thread test"
},
{
"msg_contents": "Hi,\n\nOn 2021-07-23 13:42:41 -0400, Tom Lane wrote:\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > ... that said, I wonder why would we do this in the thread_test program\n> > rather than in configure itself. Wouldn't it make more sense for the\n> > configure test to be skipped altogether (marking the result as\n> > thread-safe) when running under thread sanitizer, if there's a way to\n> > detect that?\n> \n> TBH, I wonder why we don't just nuke thread_test.c altogether.\n> Is it still useful in 2021? Machines that still need\n> --disable-thread-safety can doubtless be counted without running\n> out of fingers, and I think their owners can be expected to know\n> that they need that.\n\n+1. And before long it might be time to remove support for systems\nwithout threads...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 23 Jul 2021 12:57:41 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Configure with thread sanitizer fails the thread test"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-07-23 13:42:41 -0400, Tom Lane wrote:\n>> TBH, I wonder why we don't just nuke thread_test.c altogether.\n\n> +1. And before long it might be time to remove support for systems\n> without threads...\n\nI'm not prepared to go that far just yet; but certainly we can stop\nexpending configure cycles on the case.\n\nShould we back-patch this, or just do it in HEAD? Maybe HEAD+v14?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 23 Jul 2021 17:18:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Configure with thread sanitizer fails the thread test"
},
{
"msg_contents": "Hi, Alvaro and all,\n\n> this patch hasn't been posted by the author so let's assume\n> they're not authorizing them to use it.\n\nNot sure what you mean. I am the author and I authorize anyone to do\nwhatever they want with it.\n\n> Otherwise, why wouldn't they just post it here instead of doing the\nabsurdly convoluted dance of a github PR?\n\nWell, for me submitting a github PR is a well-established and easy-to-do\nprocedure which is the same for all libraries. Posting to a new mailing\nlist definitely is not. I've spent around 15 minutes trying to do a proper\nreply to the thread and did not succeed.\n\nAnother reason to post a PR is that we consume libpq via conan, and\nreleasing a new revision of the conan recipe for the same library version\nis a relatively fast and well-defined process. While waiting for a new\nversion of a library with a patch depends heavily on a particular library.\nI am not aware of the release cadence of libpq.\n\nAnyway, I am very glad there is swift feedback from you guys and I am\nlooking forward to your comments on the proper way to fix the issue!\n\n-----\nBest regards, Mikhail Matrosov\n\nHi, Alvaro and all,> this patch hasn't been posted by the author so let's assume> they're not authorizing them to use it. Not sure what you mean. I am the author and I authorize anyone to do whatever they want with it.> Otherwise, why wouldn't they just post it here instead of doing the absurdly convoluted dance of a github PR?Well, for me submitting a github PR is a well-established and easy-to-do procedure which is the same for all libraries. Posting to a new mailing list definitely is not. I've spent around 15 minutes trying to do a proper reply to the thread and did not succeed.Another reason to post a PR is that we consume libpq via conan, and releasing a new revision of the conan recipe for the same library version is a relatively fast and well-defined process. 
While waiting for a new version of a library with a patch depends heavily on a particular library. I am not aware of the release cadence of libpq.Anyway, I am very glad there is swift feedback from you guys and I am looking forward to your comments on the proper way to fix the issue!-----Best regards, Mikhail Matrosov",
"msg_date": "Sat, 24 Jul 2021 00:43:32 +0300",
"msg_from": "Mikhail Matrosov <mikhail.matrosov@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Configure with thread sanitizer fails the thread test"
},
{
"msg_contents": "Hi,\n\nOn 2021-07-23 17:18:37 -0400, Tom Lane wrote:\n> I'm not prepared to go that far just yet; but certainly we can stop\n> expending configure cycles on the case.\n> \n> Should we back-patch this, or just do it in HEAD? Maybe HEAD+v14?\n\nI'm ok with all, with a preference for HEAD+v14, followed by HEAD, and\nall branches.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 23 Jul 2021 14:51:42 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Configure with thread sanitizer fails the thread test"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-07-23 17:18:37 -0400, Tom Lane wrote:\n>> Should we back-patch this, or just do it in HEAD? Maybe HEAD+v14?\n\n> I'm ok with all, with a preference for HEAD+v14, followed by HEAD, and\n> all branches.\n\nAfter a bit more thought, HEAD+v14 is also my preference. I'm not\neager to change this in stable branches, but it doesn't seem too\nlate for v14.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 23 Jul 2021 18:05:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Configure with thread sanitizer fails the thread test"
},
{
"msg_contents": "I wrote:\n> After a bit more thought, HEAD+v14 is also my preference. I'm not\n> eager to change this in stable branches, but it doesn't seem too\n> late for v14.\n\nPerusing the commit log, I realized that there's another reason why\nv14 is a good cutoff: thread_test.c was in a different place with\nan allegedly different raison d'etre before 8a2121185. So done\nthat way.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 24 Jul 2021 12:20:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Configure with thread sanitizer fails the thread test"
}
] |
[
{
"msg_contents": "Per the discussion at [1], users on Windows are seeing nasty performance\nlosses in v13/v14 (compared to prior releases) for hash aggregations that\nrequired somewhat more than 2GB in the prior releases. That's because\nthey spill to disk where they did not before. The easy answer of \"raise\nhash_mem_multiplier\" doesn't help, because on Windows the product of\nwork_mem and hash_mem_multiplier is clamped to 2GB, thanks to the ancient\ndecision to do a lot of memory-space-related calculations in \"long int\",\nwhich is only 32 bits on Win64.\n\nWhile I don't personally have the interest to fix that altogether,\nit does seem like we've got a performance regression that we ought\nto do something about immediately. So I took a look at getting rid of\nthis restriction for calculations associated with hash_mem_multiplier,\nand it doesn't seem to be too bad. I propose the attached patch.\n(This is against HEAD; there are minor conflicts in v13 and v14.)\n\nA couple of notes:\n\n* I did not change most of the comments referring to \"hash_mem\",\neven though that's not really a thing anymore. They seem readable\nenough anyway, and I failed to think of a reasonably-short substitute.\n\n* We should drop get_hash_mem() altogether in HEAD and maybe v14.\nI figure we'd better leave it available in v13, though, in case\nany outside code is using it.\n\nComments?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/MN2PR15MB25601E80A9B6D1BA6F592B1985E39%40MN2PR15MB2560.namprd15.prod.outlook.com",
"msg_date": "Fri, 23 Jul 2021 17:15:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Removing \"long int\"-related limit on hash table sizes"
},
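The 2GB clamp Tom describes can be reproduced off-Windows by emulating a 32-bit long. A minimal sketch (the helper names are invented for illustration and are not PostgreSQL code):

```c
#include <stddef.h>
#include <stdint.h>

/* work_mem is expressed in kilobytes.  On Win64, "long" is 32 bits, so the
 * old "(long) work_mem * 1024L" style of arithmetic cannot represent more
 * than 2GB-1 bytes; int32_t stands in for that narrow long here. */
static int32_t hash_bytes_long32(int work_mem_kb, double multiplier)
{
    return (int32_t) ((int64_t) ((double) work_mem_kb * multiplier) * 1024);
}

/* The overflow-free form: compute in double, then clamp into size_t. */
static size_t hash_bytes_sizet(int work_mem_kb, double multiplier)
{
    double bytes = (double) work_mem_kb * multiplier * 1024.0;

    if (bytes >= (double) SIZE_MAX)
        return SIZE_MAX;
    return (size_t) bytes;
}
```

With work_mem set to 4GB worth of kilobytes and a multiplier of 1.0, the 32-bit emulation wraps to zero while the size_t form (on a 64-bit build) returns the full 4GB.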
{
"msg_contents": "Hi,\n\nOn 2021-07-23 17:15:24 -0400, Tom Lane wrote:\n> Per the discussion at [1], users on Windows are seeing nasty performance\n> losses in v13/v14 (compared to prior releases) for hash aggregations that\n> required somewhat more than 2GB in the prior releases.\n\nUgh :(.\n\n\n> That's because they spill to disk where they did not before. The easy\n> answer of \"raise hash_mem_multiplier\" doesn't help, because on Windows\n> the product of work_mem and hash_mem_multiplier is clamped to 2GB,\n> thanks to the ancient decision to do a lot of memory-space-related\n> calculations in \"long int\", which is only 32 bits on Win64.\n\nWe really ought to just remove every single use of long. As Thomas\nquipped on twitter at some point, \"long is the asbestos of C\". I think\nwe've incurred far more cost due to weird workarounds to deal with the\ndifference in long width between windows and everything else, than just\nremoving all use of it outright would incur.\n\nAnd perhaps once we've done that, we shoulde experiment with putting\n__attribute__((deprecated)) on long, but conditionalize it so it's only\nused for building PG internal stuff, and doesn't leak into pg_config\noutput. Perhaps it'll be to painful due to external headers, but it\nseems worth trying.\n\nBut obviously that doesn't help with the issue in the release branches.\n\n\n> While I don't personally have the interest to fix that altogether,\n> it does seem like we've got a performance regression that we ought\n> to do something about immediately. So I took a look at getting rid of\n> this restriction for calculations associated with hash_mem_multiplier,\n> and it doesn't seem to be too bad. I propose the attached patch.\n> (This is against HEAD; there are minor conflicts in v13 and v14.)\n\nHm. I wonder if we would avoid some overflow dangers on 32bit systems if\nwe made get_hash_memory_limit() and the relevant variables 64 bit,\nrather than 32bit / size_t. 
E.g.\n\n> @@ -700,9 +697,9 @@ ExecChooseHashTableSize(double ntuples, int tupwidth, bool useskew,\n> \tinner_rel_bytes = ntuples * tupsize;\n>\n> \t/*\n> -\t * Target in-memory hashtable size is hash_mem kilobytes.\n> +\t * Compute in-memory hashtable size limit from GUCs.\n> \t */\n> -\thash_table_bytes = hash_mem * 1024L;\n> +\thash_table_bytes = get_hash_memory_limit();\n>\n> \t/*\n> \t * Parallel Hash tries to use the combined hash_mem of all workers to\n> @@ -710,7 +707,14 @@ ExecChooseHashTableSize(double ntuples, int tupwidth, bool useskew,\n> \t * per worker and tries to process batches in parallel.\n> \t */\n> \tif (try_combined_hash_mem)\n> -\t\thash_table_bytes += hash_table_bytes * parallel_workers;\n> +\t{\n> +\t\t/* Careful, this could overflow size_t */\n> +\t\tdouble\t\tnewlimit;\n> +\n> +\t\tnewlimit = (double) hash_table_bytes * (double) (parallel_workers + 1);\n> +\t\tnewlimit = Min(newlimit, (double) SIZE_MAX);\n> +\t\thash_table_bytes = (size_t) newlimit;\n> +\t}\n\nWouldn't need to be as carful, I think?\n\n\n\n> @@ -740,12 +747,26 @@ ExecChooseHashTableSize(double ntuples, int tupwidth, bool useskew,\n> \t\t * size of skew bucket struct itself\n> \t\t *----------\n> \t\t */\n> -\t\t*num_skew_mcvs = skew_table_bytes / (tupsize +\n> -\t\t\t\t\t\t\t\t\t\t\t (8 * sizeof(HashSkewBucket *)) +\n> -\t\t\t\t\t\t\t\t\t\t\t sizeof(int) +\n> -\t\t\t\t\t\t\t\t\t\t\t SKEW_BUCKET_OVERHEAD);\n> -\t\tif (*num_skew_mcvs > 0)\n> -\t\t\thash_table_bytes -= skew_table_bytes;\n> +\t\tbytes_per_mcv = tupsize +\n> +\t\t\t(8 * sizeof(HashSkewBucket *)) +\n> +\t\t\tsizeof(int) +\n> +\t\t\tSKEW_BUCKET_OVERHEAD;\n> +\t\tskew_mcvs = hash_table_bytes / bytes_per_mcv;\n> +\n> +\t\t/*\n> +\t\t * Now scale by SKEW_HASH_MEM_PERCENT (we do it in this order so as\n> +\t\t * not to worry about size_t overflow in the multiplication)\n> +\t\t */\n> +\t\tskew_mcvs = skew_mcvs * SKEW_HASH_MEM_PERCENT / 100;\n\nI always have to think about the evaluation order of things like 
this\n(it's left to right for these), so I'd prefer parens around the\nmultiplication. I did wonder briefly whether the SKEW_HASH_MEM_PERCENT /\n100 just evaluates to 0...\n\n\nLooks like a good idea to me.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 24 Jul 2021 18:25:53 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Removing \"long int\"-related limit on hash table sizes"
},
{
"msg_contents": "On Sat, Jul 24, 2021 at 06:25:53PM -0700, Andres Freund wrote:\n> > That's because they spill to disk where they did not before. The easy\n> > answer of \"raise hash_mem_multiplier\" doesn't help, because on Windows\n> > the product of work_mem and hash_mem_multiplier is clamped to 2GB,\n> > thanks to the ancient decision to do a lot of memory-space-related\n> > calculations in \"long int\", which is only 32 bits on Win64.\n> \n> We really ought to just remove every single use of long. As Thomas\n> quipped on twitter at some point, \"long is the asbestos of C\". I think\n> we've incurred far more cost due to weird workarounds to deal with the\n> difference in long width between windows and everything else, than just\n> removing all use of it outright would incur.\n\n+1\n\nAs I understand it, making long of undermined length was to allow\nsomeone to choose a data type that _might_ be longer than int if the\ncompiler/OS/CPU was optimized for that, but at this point, such\noptimizations just don't seem to make sense, and we know every(?) CPU\nsupports long-long, so why not go for something concrete? Do we really\nwant our feature limits to be determined by whether we have an optimized\ntype longer than int?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Sat, 24 Jul 2021 21:39:34 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Removing \"long int\"-related limit on hash table sizes"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-07-23 17:15:24 -0400, Tom Lane wrote:\n>> That's because they spill to disk where they did not before. The easy\n>> answer of \"raise hash_mem_multiplier\" doesn't help, because on Windows\n>> the product of work_mem and hash_mem_multiplier is clamped to 2GB,\n>> thanks to the ancient decision to do a lot of memory-space-related\n>> calculations in \"long int\", which is only 32 bits on Win64.\n\n> We really ought to just remove every single use of long.\n\nI have no objection to that as a long-term goal. But I'm not volunteering\nto do all the work, and in any case it wouldn't be a back-patchable fix.\nI feel that we do need to do something about this performance regression\nin v13.\n\n> Hm. I wonder if we would avoid some overflow dangers on 32bit systems if\n> we made get_hash_memory_limit() and the relevant variables 64 bit,\n> rather than 32bit / size_t. E.g.\n\nNo, I don't like that. Using size_t for memory-size variables is good\ndiscipline. Moreover, I'm not convinced that even with 64-bit ints,\noverflow would be impossible in all the places I fixed here. They're\nmultiplying several potentially very large values (one of which\nis a float). I think this is just plain sloppy coding, independently\nof which bit-width you choose to be sloppy in.\n\n>> +\t\tskew_mcvs = skew_mcvs * SKEW_HASH_MEM_PERCENT / 100;\n\n> I always have to think about the evaluation order of things like this\n> (it's left to right for these), so I'd prefer parens around the\n> multiplication. I did wonder briefly whether the SKEW_HASH_MEM_PERCENT /\n> 100 just evaluates to 0...\n\nOK, will do. I see your point, because I'd sort of instinctively\nwanted to write that as\n\t\tskew_mcvs *= SKEW_HASH_MEM_PERCENT / 100;\nwhich of course would not work.\n\nThanks for looking at the code.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 25 Jul 2021 12:28:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Removing \"long int\"-related limit on hash table sizes"
},
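Andres's two review points above -- overflow headroom and left-to-right evaluation -- can be seen in a small sketch (a hypothetical helper, not the actual patch; SKEW_HASH_MEM_PERCENT is assumed to be 2, its in-tree value):

```c
#include <stddef.h>

#define SKEW_HASH_MEM_PERCENT 2     /* assumed value, per hashjoin.h */

/* Dividing the byte budget by the per-MCV cost first leaves a small count,
 * so the later percentage multiplication is nowhere near SIZE_MAX.  The
 * multiplication must also precede the division by 100: the parenthesized
 * form n * (SKEW_HASH_MEM_PERCENT / 100) is an integer division that
 * evaluates to zero. */
static size_t skew_mcv_count(size_t hash_table_bytes, size_t bytes_per_mcv)
{
    size_t n = hash_table_bytes / bytes_per_mcv;

    return n * SKEW_HASH_MEM_PERCENT / 100;     /* (n * 2) / 100 */
}
```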
{
"msg_contents": "Em dom., 25 de jul. de 2021 às 13:28, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2021-07-23 17:15:24 -0400, Tom Lane wrote:\n> >> That's because they spill to disk where they did not before. The easy\n> >> answer of \"raise hash_mem_multiplier\" doesn't help, because on Windows\n> >> the product of work_mem and hash_mem_multiplier is clamped to 2GB,\n> >> thanks to the ancient decision to do a lot of memory-space-related\n> >> calculations in \"long int\", which is only 32 bits on Win64.\n>\n> > We really ought to just remove every single use of long.\n>\n> I have no objection to that as a long-term goal. But I'm not volunteering\n> to do all the work, and in any case it wouldn't be a back-patchable fix.\n>\nI'm a volunteer, if you want to work together.\nI think int64 is in most cases the counterpart of *long* on Windows.\n\nregards,\nRanier Vilela\n\nEm dom., 25 de jul. de 2021 às 13:28, Tom Lane <tgl@sss.pgh.pa.us> escreveu:Andres Freund <andres@anarazel.de> writes:\n> On 2021-07-23 17:15:24 -0400, Tom Lane wrote:\n>> That's because they spill to disk where they did not before. The easy\n>> answer of \"raise hash_mem_multiplier\" doesn't help, because on Windows\n>> the product of work_mem and hash_mem_multiplier is clamped to 2GB,\n>> thanks to the ancient decision to do a lot of memory-space-related\n>> calculations in \"long int\", which is only 32 bits on Win64.\n\n> We really ought to just remove every single use of long.\n\nI have no objection to that as a long-term goal. But I'm not volunteering\nto do all the work, and in any case it wouldn't be a back-patchable fix.I'm a volunteer, if you want to work together.I think int64 is in most cases the counterpart of *long* on Windows.regards,Ranier Vilela",
"msg_date": "Sun, 25 Jul 2021 14:19:26 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Removing \"long int\"-related limit on hash table sizes"
},
{
"msg_contents": "Ranier Vilela <ranier.vf@gmail.com> writes:\n> I think int64 is in most cases the counterpart of *long* on Windows.\n\nI'm not particularly on board with s/long/int64/g as a universal\nsolution. I think that most of these usages are concerned with\nmemory sizes and would be better off as \"size_t\". We might need\nint64 in places where we're concerned with sums of memory usage\nacross processes, or where the value needs to be allowed to be\nnegative. So it'll take case-by-case analysis to do it right.\n\nBTW, one aspect of this that I'm unsure how to tackle is the\ncommon usage of \"L\" constants; in particular, \"work_mem * 1024L\"\nis a really common idiom that we'll need to get rid of. Not sure\nthat grep will be a useful aid for finding those.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 25 Jul 2021 14:53:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Removing \"long int\"-related limit on hash table sizes"
},
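Tom's case-by-case point -- size_t for byte counts, int64 where a value may legitimately be negative -- in a two-helper sketch (illustrative names, not code from the patch):

```c
#include <stddef.h>
#include <stdint.h>

/* size_t is the right discipline for a byte count that is known
 * non-negative: widen before multiplying, no 2GB clamp. */
static size_t bytes_from_kb(int kb)
{
    return (size_t) kb * 1024;
}

/* But a negative sentinel (say, "no limit") funneled through unsigned
 * arithmetic silently becomes a huge value, which is where int64 is the
 * better choice. */
static int64_t signed_bytes_from_kb(int kb)
{
    return (int64_t) kb * 1024;
}
```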
{
"msg_contents": "Em dom., 25 de jul. de 2021 às 15:53, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n\n> Ranier Vilela <ranier.vf@gmail.com> writes:\n> > I think int64 is in most cases the counterpart of *long* on Windows.\n>\n> I'm not particularly on board with s/long/int64/g as a universal\n> solution.\n\nSure, not a universal solution, I mean a start point.\nWhen I look for a type that is signed and size 8 bytes in Windows, I only\nsee int64.\n\nI think that most of these usages are concerned with\n> memory sizes and would be better off as \"size_t\".\n\nOk, but let's not forget that size_t is unsigned.\n\n We might need\n> int64 in places where we're concerned with sums of memory usage\n> across processes, or where the value needs to be allowed to be\n> negative. So it'll take case-by-case analysis to do it right.\n>\nSure.\n\n\n> BTW, one aspect of this that I'm unsure how to tackle is the\n> common usage of \"L\" constants; in particular, \"work_mem * 1024L\"\n> is a really common idiom that we'll need to get rid of. Not sure\n> that grep will be a useful aid for finding those.\n>\nI can see 30 matches in the head tree. (grep -d \"1024L\" *.c)\n\nFile backend\\access\\gin\\ginfast.c:\n if (metadata->nPendingPages * GIN_PAGE_FREESIZE > cleanupSize *\n1024L)\n (accum.allocatedMemory >= workMemory * 1024L)))\nIs it a good point to start?\n\nor one more simple?\n(src/backend/access/hash/hash.c) has one *long*.\n\nregards,\nRanier Vilela\n\nEm dom., 25 de jul. de 2021 às 15:53, Tom Lane <tgl@sss.pgh.pa.us> escreveu:Ranier Vilela <ranier.vf@gmail.com> writes:\n> I think int64 is in most cases the counterpart of *long* on Windows.\n\nI'm not particularly on board with s/long/int64/g as a universal\nsolution. Sure, not a universal solution, I mean a start point.When I look for a type that is signed and size 8 bytes in Windows, I only see int64. 
I think that most of these usages are concerned with\nmemory sizes and would be better off as \"size_t\".Ok, but let's not forget that size_t is unsigned. We might need\nint64 in places where we're concerned with sums of memory usage\nacross processes, or where the value needs to be allowed to be\nnegative. So it'll take case-by-case analysis to do it right.Sure. \n\nBTW, one aspect of this that I'm unsure how to tackle is the\ncommon usage of \"L\" constants; in particular, \"work_mem * 1024L\"\nis a really common idiom that we'll need to get rid of. Not sure\nthat grep will be a useful aid for finding those.I can see 30 matches in the head tree. (grep -d \"1024L\" *.c)File backend\\access\\gin\\ginfast.c: if (metadata->nPendingPages * GIN_PAGE_FREESIZE > cleanupSize * 1024L) (accum.allocatedMemory >= workMemory * 1024L)))Is it a good point to start?or one more simple?(src/backend/access/hash/hash.c) has one *long*.regards,Ranier Vilela",
"msg_date": "Sun, 25 Jul 2021 16:57:07 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Removing \"long int\"-related limit on hash table sizes"
},
{
"msg_contents": "On Sun, Jul 25, 2021 at 12:28:04PM -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> We really ought to just remove every single use of long.\n> \n> I have no objection to that as a long-term goal. But I'm not volunteering\n> to do all the work, and in any case it wouldn't be a back-patchable fix.\n> I feel that we do need to do something about this performance regression\n> in v13.\n\nAnother idea may be to be more aggressive in c.h? A tweak there would\nbe dirtier than marking long as deprecated, but that would be less\ninvasive. Any of that is not backpatchable, of course..\n\n> No, I don't like that. Using size_t for memory-size variables is good\n> discipline. Moreover, I'm not convinced that even with 64-bit ints,\n> overflow would be impossible in all the places I fixed here. They're\n> multiplying several potentially very large values (one of which\n> is a float). I think this is just plain sloppy coding, independently\n> of which bit-width you choose to be sloppy in.\n\nYeah, using size_t where adapted is usually a good idea.\n--\nMichael",
"msg_date": "Mon, 26 Jul 2021 11:38:41 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Removing \"long int\"-related limit on hash table sizes"
},
{
"msg_contents": "On 2021-Jul-25, Ranier Vilela wrote:\n\n> > BTW, one aspect of this that I'm unsure how to tackle is the\n> > common usage of \"L\" constants; in particular, \"work_mem * 1024L\"\n> > is a really common idiom that we'll need to get rid of. Not sure\n> > that grep will be a useful aid for finding those.\n> >\n> I can see 30 matches in the head tree. (grep -d \"1024L\" *.c)\n\ngrep grep '[0-9]L\\>' -- *.[chyl]\nshows some more constants.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 26 Jul 2021 15:20:40 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Removing \"long int\"-related limit on hash table sizes"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n\n> On 2021-Jul-25, Ranier Vilela wrote:\n>\n>> > BTW, one aspect of this that I'm unsure how to tackle is the\n>> > common usage of \"L\" constants; in particular, \"work_mem * 1024L\"\n>> > is a really common idiom that we'll need to get rid of. Not sure\n>> > that grep will be a useful aid for finding those.\n>> >\n>> I can see 30 matches in the head tree. (grep -d \"1024L\" *.c)\n>\n> grep grep '[0-9]L\\>' -- *.[chyl]\n> shows some more constants.\n\ngit grep -Eiw '(0x[0-9a-f]+|[0-9]+)U?LL?' -- *.[chyl]\n\ngives about a hundred more hits.\n\nWe also have the (U)INT64CONST() macros, which are about about two\nthirds as common as the U?LL? suffixes.\n\n- ilmari\n\n\n",
"msg_date": "Mon, 26 Jul 2021 21:21:58 +0100",
"msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)",
"msg_from_op": false,
"msg_subject": "Re: Removing \"long int\"-related limit on hash table sizes"
},
{
"msg_contents": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=) writes:\n> We also have the (U)INT64CONST() macros, which are about about two\n> thirds as common as the U?LL? suffixes.\n\nYeah. Ideally we'd forbid direct use of the suffixes and insist\nyou go through those macros, but I don't know of any way that\nwe could enforce such a coding rule, short of grepping the tree\nperiodically.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 26 Jul 2021 16:27:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Removing \"long int\"-related limit on hash table sizes"
},
{
"msg_contents": "On 2021-Jul-26, Tom Lane wrote:\n\n> ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=) writes:\n> > We also have the (U)INT64CONST() macros, which are about about two\n> > thirds as common as the U?LL? suffixes.\n> \n> Yeah. Ideally we'd forbid direct use of the suffixes and insist\n> you go through those macros, but I don't know of any way that\n> we could enforce such a coding rule, short of grepping the tree\n> periodically.\n\nIIRC we have one buildfarm member that warns us about perlcritic; maybe\nthis is just another setup of that sort.\n\n(Personally I run the perlcritic check in my local commit-verifying\nscript before pushing.)\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"XML!\" Exclaimed C++. \"What are you doing here? You're not a programming\nlanguage.\"\n\"Tell that to the people who use me,\" said XML.\nhttps://burningbird.net/the-parable-of-the-languages/\n\n\n",
"msg_date": "Mon, 26 Jul 2021 16:39:23 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Removing \"long int\"-related limit on hash table sizes"
},
{
"msg_contents": "Hi,\n\nOn 2021-07-26 11:38:41 +0900, Michael Paquier wrote:\n> On Sun, Jul 25, 2021 at 12:28:04PM -0400, Tom Lane wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> >> We really ought to just remove every single use of long.\n> > \n> > I have no objection to that as a long-term goal. But I'm not volunteering\n> > to do all the work, and in any case it wouldn't be a back-patchable fix.\n> > I feel that we do need to do something about this performance regression\n> > in v13.\n> \n> Another idea may be to be more aggressive in c.h? A tweak there would\n> be dirtier than marking long as deprecated, but that would be less\n> invasive. Any of that is not backpatchable, of course..\n\nHard to see how that could work - plenty system headers use long...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 27 Jul 2021 09:52:33 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Removing \"long int\"-related limit on hash table sizes"
}
] |
[
{
"msg_contents": "Hi,\n\nAs an author of plpgsql_check, I permanently have to try to solve genetic\nproblems with the necessity of external information for successful static\nvalidation.\n\ncreate or replace function foo(tablename text)\nreturns void as $$\ndeclare r record;\nbegin\n execute format('select * from %I', tablename) into r;\n raise notice '%', r.x;\nend;\n$$ language plpgsql;\n\nOn this very simple code it is not possible to make static validation.\nCurrently, there is no way to push some extra information to source code,\nthat helps with static analysis, but doesn't break evaluation without\nplpgsql_check, and doesn't do some significant slowdown.\n\nI proposed Ada language pragmas, but it was rejected.\n\nI implemented fake pragmas in plpgsql_check via arguments of special\nfunction\n\nPERFORM plpgsql_check_pragma('plpgsql_check: off');\n\nIt is working, but it looks bizarre, and it requires a fake function\nplpgsql_check_pragma on production, so it is a little bit a dirty solution.\n\nNow, I have another proposal(s).\n\na) Can we invite new syntax for comments, that will be stored in AST like a\nnon executing statement?\n\nsome like:\n\n//* plpgsql_check: OFF *//\nRAISE NOTICE '%', r.x\n\nor second design\n\nb) can we introduce some flag for plpgsql_parser, that allows storing\ncomments in AST (it will not be default).\n\nso \"-- plpgsql_check: OFF \" will work for my purpose, and I don't need any\nspecial syntax.\n\nComments, notes?\n\nPavel\n\nhttps://github.com/okbob/plpgsql_check\n\nHi,As an author of plpgsql_check, I permanently have to try to solve genetic problems with the necessity of external information for successful static validation. create or replace function foo(tablename text)returns void as $$declare r record;begin execute format('select * from %I', tablename) into r; raise notice '%', r.x;end;$$ language plpgsql;On this very simple code it is not possible to make static validation. 
Currently, there is no way to push some extra information to source code, that helps with static analysis, but doesn't break evaluation without plpgsql_check, and doesn't do some significant slowdown.I proposed Ada language pragmas, but it was rejected.I implemented fake pragmas in plpgsql_check via arguments of special functionPERFORM plpgsql_check_pragma('plpgsql_check: off');It is working, but it looks bizarre, and it requires a fake function plpgsql_check_pragma on production, so it is a little bit a dirty solution.Now, I have another proposal(s). a) Can we invite new syntax for comments, that will be stored in AST like a non executing statement?some like://* plpgsql_check: OFF *//RAISE NOTICE '%', r.xor second designb) can we introduce some flag for plpgsql_parser, that allows storing comments in AST (it will not be default).so \"-- plpgsql_check: OFF \" will work for my purpose, and I don't need any special syntax.Comments, notes?Pavelhttps://github.com/okbob/plpgsql_check",
"msg_date": "Sat, 24 Jul 2021 07:39:53 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "proposal: plpgsql: special comments that will be part of AST"
},
{
"msg_contents": "Hi\n\n\n>\n> some like:\n>\n> //* plpgsql_check: OFF *//\n> RAISE NOTICE '%', r.x\n>\n> or second design\n>\n> b) can we introduce some flag for plpgsql_parser, that allows storing\n> comments in AST (it will not be default).\n>\n> so \"-- plpgsql_check: OFF \" will work for my purpose, and I don't need\n> any special syntax.\n>\n> Comments, notes?\n>\n\nWhen I started work on PoC I found that it was not a good idea. Comments\ncan be everywhere, but it is not possible to enhance plpgsql's gram in this\nway. So this is not an way\n\nRegards\n\nPavel\n\n\n> Pavel\n>\n> https://github.com/okbob/plpgsql_check\n>\n\nHisome like://* plpgsql_check: OFF *//RAISE NOTICE '%', r.xor second designb) can we introduce some flag for plpgsql_parser, that allows storing comments in AST (it will not be default).so \"-- plpgsql_check: OFF \" will work for my purpose, and I don't need any special syntax.Comments, notes?When I started work on PoC I found that it was not a good idea. Comments can be everywhere, but it is not possible to enhance plpgsql's gram in this way. So this is not an wayRegardsPavelPavelhttps://github.com/okbob/plpgsql_check",
"msg_date": "Sun, 25 Jul 2021 15:34:15 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: proposal: plpgsql: special comments that will be part of AST"
}
] |
[
{
"msg_contents": "When I am working on the UnqiueKey stuff, I find the following cases.\n\nSELECT * FROM (SELECT * FROM t offset 0) v ORDER BY a;\n// root->query_keys = A. root->order_pathkeys = A\n// Current: subroot->query_pathkeys = NIL.\n// Expected: subroot->xxxx_pathkeys = [A].\n\nSELECT * FROM (SELECT * FROM t offset 0) v, t2 WHERE t2.a = t.a;\n// root->query_keys = NIL\n// Current: subroot->query_keys = NIL\n// Expected: subroot->xxx_pathkeys = A\n\nTo resolve this issue, I want to add a root->outer_pathkeys which means the\ninteresting order from the outer side for a subquery. To cover the\ncases like below\n\n// root->query_keys = root->order_keys = b.\n// Expected: subroot->xxx_pathkeys = (a)? (b)?\nSELECT * FROM (SELECT * FROM t offset 0) v, t2\nWHERE t2.a = t.a order by v.b;\n\nthe root->outer_pathkeys should be a list of lists. in above case\nsubroot->outer_pathkeys should be [ [a], [b] ], this value may be\nchecked at many\nplaces, like pathkeys_useful_for_ordering, get_useful_pathkeys_for_relation,\nbuild_index_paths and more. My list might be incomplete, but once we\nhave a new place to check and the data is maintained already, it would\nbe easy to\nimprove. My thinking is we maintain the root->outer_pathkeys first, and then\nimprove the well known function as the first step. What do you think?\n\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)\n\n\n",
"msg_date": "Sat, 24 Jul 2021 21:14:42 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Maintain the pathkesy for subquery from outer side information"
},
{
"msg_contents": "Andy Fan <zhihui.fan1213@gmail.com> writes:\n> When I am working on the UnqiueKey stuff, I find the following cases.\n> SELECT * FROM (SELECT * FROM t offset 0) v ORDER BY a;\n> // root->query_keys = A. root->order_pathkeys = A\n> // Current: subroot->query_pathkeys = NIL.\n> // Expected: subroot->xxxx_pathkeys = [A].\n\nWhy do you \"expect\" that? I think pushing the outer ORDER BY past a\nLIMIT is an unacceptable semantics change.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 24 Jul 2021 10:14:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Maintain the pathkesy for subquery from outer side information"
},
{
"msg_contents": "On Sat, Jul 24, 2021 at 10:14 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Andy Fan <zhihui.fan1213@gmail.com> writes:\n> > When I am working on the UnqiueKey stuff, I find the following cases.\n> > SELECT * FROM (SELECT * FROM t offset 0) v ORDER BY a;\n> > // root->query_keys = A. root->order_pathkeys = A\n> > // Current: subroot->query_pathkeys = NIL.\n> > // Expected: subroot->xxxx_pathkeys = [A].\n>\n> Why do you \"expect\" that? I think pushing the outer ORDER BY past a\n> LIMIT is an unacceptable semantics change.\n>\n> regards, tom lane\n\nI don't mean push down a \"ORDER BY\" clause to subquery, I mean push\ndown an \"interesting order\" to subquery. for example we have index t(a);\nthen SELECT * FROM (SELECT a FROM t OFFSET 0) v ORDER BY a;\nIn the current implementation, when we are planning the subuqery, planners\nthink the \"pathkey a\" is not interesting, but it should be IIUC.\n\n\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)\n\n\n",
"msg_date": "Sat, 24 Jul 2021 22:19:32 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Maintain the pathkesy for subquery from outer side information"
},
{
"msg_contents": "Andy Fan <zhihui.fan1213@gmail.com> writes:\n> On Sat, Jul 24, 2021 at 10:14 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Why do you \"expect\" that? I think pushing the outer ORDER BY past a\n>> LIMIT is an unacceptable semantics change.\n\n> I don't mean push down a \"ORDER BY\" clause to subquery, I mean push\n> down an \"interesting order\" to subquery. for example we have index t(a);\n> then SELECT * FROM (SELECT a FROM t OFFSET 0) v ORDER BY a;\n> In the current implementation, when we are planning the subuqery, planners\n> think the \"pathkey a\" is not interesting, but it should be IIUC.\n\nNo, it should not be.\n\n(1) We have long treated \"OFFSET 0\" as an optimization fence. That means\nthat the outer query shouldn't affect what plan you get for the subquery.\n\n(2) If you ignore point (1), you could argue that choosing a different\nresult order doesn't matter for this subquery. However, it potentially\n*does* matter for a large fraction of the cases in which we'd not have\nflattened the subquery into the outer query. In subqueries involving\nthings like volatile functions, aggregates, window functions, etc,\nencouraging the sub-planner to pick a different result ordering could\nlead to visibly different output.\n\nI think that in cases where there's not a semantic hazard involved,\nwe'd usually have pulled up the subquery so that this is all moot\nanyway.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 24 Jul 2021 10:34:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Maintain the pathkesy for subquery from outer side information"
},
{
"msg_contents": "> I think that in cases where there's not a semantic hazard involved,\n> we'd usually have pulled up the subquery so that this is all moot\n> anyway.\n>\n\nI get your point with this statement. Things driven by this idea look\npractical and lucky. But for the UniqueKey stuff, we are not\nthat lucky.\n\nSELECT pk FROM t; -- Maintain the UniqueKey would be not necessary.\n\nHowever\n\nSELECT DISTINCT pk FROM (SELECT volatile_f(a), pk from t) WHERE ..;\n\nMaintaining the UniqueKey in subquery is necessary since it is useful outside.\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)\n\n\n",
"msg_date": "Sun, 25 Jul 2021 16:16:06 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Maintain the pathkesy for subquery from outer side information"
}
] |
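The "interesting order" matching discussed in the thread above ultimately reduces to prefix matching of pathkey lists, in the spirit of PostgreSQL's pathkeys_contained_in(). The following is an editorial toy sketch, not the planner's actual code: integers stand in for PathKey pointers, and the function name mirrors the real one only for readability.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/*
 * Toy sketch of pathkey prefix matching: an ordering keys1 is "contained
 * in" keys2 when keys1 is a prefix of keys2, meaning a path sorted by
 * keys2 also satisfies keys1.  Illustrative stand-in, not planner code.
 */
static bool
pathkeys_contained_in(const int *keys1, size_t n1,
                      const int *keys2, size_t n2)
{
    if (n1 > n2)
        return false;           /* a longer list cannot be a prefix */

    for (size_t i = 0; i < n1; i++)
        if (keys1[i] != keys2[i])
            return false;       /* mismatch at position i */

    return true;                /* keys1 is a prefix of keys2 */
}
```

With such a helper, a hypothetical subroot->outer_pathkeys (a list of lists, as proposed above) would be consulted by checking whether any of its entries is contained in a candidate path's ordering.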
[
{
"msg_contents": "Hi,\n\nI've been repeatedly confused by the the number of WAL files supposedly\nadded. Even when 100s of new WAL files are created the relevant portion\nof log_checkpoints will only ever list zero or one added WAL file.\n\nThe reason for that is that CheckpointStats.ckpt_segs_added is only\nincremented in PreallocXlogFiles(). Which has the following comment:\n * XXX this is currently extremely conservative, since it forces only one\n * future log segment to exist, and even that only if we are 75% done with\n * the current one. This is only appropriate for very low-WAL-volume systems.\n\nWhereas in real workloads WAL files are almost exclusively created via\nXLogWrite()->XLogFileInit().\n\nI think we should consider just removing that field. Or, even better, show\nsomething accurate instead.\n\nAs an example, here's the log output of a workload that has a replication slot\npreventing WAL files from being recycled (and too small max_wal_size):\n\n2021-07-24 15:47:42.524 PDT [2251649][checkpointer][:0] LOG: checkpoint complete: wrote 55767 buffers (42.5%); 0 WAL file(s) added, 0 removed, 0 recycled; write=3.914 s, sync=0.041 s, total=3.972 s; sync files=10, longest=0.010 s, average=0.005 s; distance=540578 kB, estimate=540905 kB\n2021-07-24 15:47:46.721 PDT [2251649][checkpointer][:0] LOG: checkpoint complete: wrote 55806 buffers (42.6%); 1 WAL file(s) added, 0 removed, 0 recycled; write=3.855 s, sync=0.028 s, total=3.928 s; sync files=8, longest=0.008 s, average=0.004 s; distance=540708 kB, estimate=540886 kB\n2021-07-24 15:47:51.004 PDT [2251649][checkpointer][:0] LOG: checkpoint complete: wrote 55850 buffers (42.6%); 1 WAL file(s) added, 0 removed, 0 recycled; write=3.895 s, sync=0.034 s, total=3.974 s; sync files=9, longest=0.009 s, average=0.004 s; distance=540657 kB, estimate=540863 kB\n2021-07-24 15:47:55.231 PDT [2251649][checkpointer][:0] LOG: checkpoint complete: wrote 55879 buffers (42.6%); 0 WAL file(s) added, 0 removed, 0 recycled; 
write=3.878 s, sync=0.026 s, total=3.944 s; sync files=9, longest=0.007 s, average=0.003 s; distance=540733 kB, estimate=540850 kB\n2021-07-24 15:47:59.462 PDT [2251649][checkpointer][:0] LOG: checkpoint complete: wrote 55847 buffers (42.6%); 1 WAL file(s) added, 0 removed, 0 recycled; write=3.882 s, sync=0.027 s, total=3.952 s; sync files=9, longest=0.008 s, average=0.003 s; distance=540640 kB, estimate=540829 kB\n\nSo it's 3 new WAL files in that timeframe, one might think?\n\nA probe instrumenting xlog file creation shows something very different:\nperf probe -x /home/andres/build/postgres/dev-assert/vpath/src/backend/postgres -a XLogFileInitInternal:39\n(39 is the O_CREAT BasicOpenFile(), not the recycle path).\n\nperf stat -a -e probe_postgres:XLogFileInitInternal_L39 -I 1000\n 1.001030943 9 probe_postgres:XLogFileInitInternal_L39\n 2.002998896 8 probe_postgres:XLogFileInitInternal_L39\n 3.005098857 8 probe_postgres:XLogFileInitInternal_L39\n 4.007000426 6 probe_postgres:XLogFileInitInternal_L39\n 5.008423119 9 probe_postgres:XLogFileInitInternal_L39\n 6.013427568 8 probe_postgres:XLogFileInitInternal_L39\n 7.015087613 8 probe_postgres:XLogFileInitInternal_L39\n 8.017393339 8 probe_postgres:XLogFileInitInternal_L39\n 9.018425526 7 probe_postgres:XLogFileInitInternal_L39\n 10.020398520 5 probe_postgres:XLogFileInitInternal_L39\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 24 Jul 2021 15:50:36 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "log_checkpoint's \"WAL file(s) added\" is misleading to the point of\n uselessness"
},
{
"msg_contents": "Hi,\n\nOn 2021-07-24 15:50:36 -0700, Andres Freund wrote:\n> As an example, here's the log output of a workload that has a replication slot\n> preventing WAL files from being recycled (and too small max_wal_size):\n>\n> 2021-07-24 15:47:42.524 PDT [2251649][checkpointer][:0] LOG: checkpoint complete: wrote 55767 buffers (42.5%); 0 WAL file(s) added, 0 removed, 0 recycled; write=3.914 s, sync=0.041 s, total=3.972 s; sync files=10, longest=0.010 s, average=0.005 s; distance=540578 kB, estimate=540905 kB\n> 2021-07-24 15:47:46.721 PDT [2251649][checkpointer][:0] LOG: checkpoint complete: wrote 55806 buffers (42.6%); 1 WAL file(s) added, 0 removed, 0 recycled; write=3.855 s, sync=0.028 s, total=3.928 s; sync files=8, longest=0.008 s, average=0.004 s; distance=540708 kB, estimate=540886 kB\n> 2021-07-24 15:47:51.004 PDT [2251649][checkpointer][:0] LOG: checkpoint complete: wrote 55850 buffers (42.6%); 1 WAL file(s) added, 0 removed, 0 recycled; write=3.895 s, sync=0.034 s, total=3.974 s; sync files=9, longest=0.009 s, average=0.004 s; distance=540657 kB, estimate=540863 kB\n> 2021-07-24 15:47:55.231 PDT [2251649][checkpointer][:0] LOG: checkpoint complete: wrote 55879 buffers (42.6%); 0 WAL file(s) added, 0 removed, 0 recycled; write=3.878 s, sync=0.026 s, total=3.944 s; sync files=9, longest=0.007 s, average=0.003 s; distance=540733 kB, estimate=540850 kB\n> 2021-07-24 15:47:59.462 PDT [2251649][checkpointer][:0] LOG: checkpoint complete: wrote 55847 buffers (42.6%); 1 WAL file(s) added, 0 removed, 0 recycled; write=3.882 s, sync=0.027 s, total=3.952 s; sync files=9, longest=0.008 s, average=0.003 s; distance=540640 kB, estimate=540829 kB\n>\n> So it's 3 new WAL files in that timeframe, one might think?\n>\n> A probe instrumenting xlog file creation shows something very different:\n> perf probe -x /home/andres/build/postgres/dev-assert/vpath/src/backend/postgres -a XLogFileInitInternal:39\n> (39 is the O_CREAT BasicOpenFile(), not the recycle 
path).\n>\n> perf stat -a -e probe_postgres:XLogFileInitInternal_L39 -I 1000\n> 1.001030943 9 probe_postgres:XLogFileInitInternal_L39\n> 2.002998896 8 probe_postgres:XLogFileInitInternal_L39\n> 3.005098857 8 probe_postgres:XLogFileInitInternal_L39\n> 4.007000426 6 probe_postgres:XLogFileInitInternal_L39\n> 5.008423119 9 probe_postgres:XLogFileInitInternal_L39\n> 6.013427568 8 probe_postgres:XLogFileInitInternal_L39\n> 7.015087613 8 probe_postgres:XLogFileInitInternal_L39\n> 8.017393339 8 probe_postgres:XLogFileInitInternal_L39\n> 9.018425526 7 probe_postgres:XLogFileInitInternal_L39\n> 10.020398520 5 probe_postgres:XLogFileInitInternal_L39\n\nAnd even more extreme, the logs can end up suggesting pg_wal is shrinking,\nwhen the opposite is the case. Here's the checkpoint output from a parallel\ncopy data load (without a replication slot holding things back):\n\n2021-07-24 15:59:03.215 PDT [2253324][checkpointer][:0] LOG: checkpoint complete: wrote 22291 buffers (17.0%); 0 WAL file(s) added, 27 removed, 141 recycled; write=9.737 s, sync=4.153 s, total=14.884 s; sync files=108, longest=0.116 s, average=0.039 s; distance=2756904 kB, estimate=2756904 kB\n2021-07-24 15:59:12.978 PDT [2253324][checkpointer][:0] LOG: checkpoint complete: wrote 21840 buffers (16.7%); 0 WAL file(s) added, 53 removed, 149 recycled; write=5.531 s, sync=3.008 s, total=9.763 s; sync files=81, longest=0.201 s, average=0.037 s; distance=3313885 kB, estimate=3313885 kB\n2021-07-24 15:59:23.421 PDT [2253324][checkpointer][:0] LOG: checkpoint complete: wrote 22326 buffers (17.0%); 0 WAL file(s) added, 56 removed, 149 recycled; write=5.787 s, sync=3.230 s, total=10.436 s; sync files=81, longest=0.099 s, average=0.040 s; distance=3346125 kB, estimate=3346125 kB\n2021-07-24 15:59:34.424 PDT [2253324][checkpointer][:0] LOG: checkpoint complete: wrote 22155 buffers (16.9%); 0 WAL file(s) added, 60 removed, 148 recycled; write=6.096 s, sync=3.432 s, total=10.995 s; sync files=81, longest=0.101 s, 
average=0.043 s; distance=3409281 kB, estimate=3409281 kB\n\nIt does look like WAL space usage is shrinking, but the opposite is true -\nwe're creating so much WAL that the checkpointer can't checkpoint fast enough\nto keep WAL usage below max_wal_size. So WAL files are constantly created that\nthen need to be removed (hence the non-zero removed counts).\n\n# time counts unit events\n 277.087990275 15 probe_postgres:XLogFileInitInternal_L39\n 278.098549960 15 probe_postgres:XLogFileInitInternal_L39\n 279.104787575 6 probe_postgres:XLogFileInitInternal_L39\n 280.108980690 5 probe_postgres:XLogFileInitInternal_L39\n 281.111781617 6 probe_postgres:XLogFileInitInternal_L39\n 282.113601958 2 probe_postgres:XLogFileInitInternal_L39\n 283.115711683 0 probe_postgres:XLogFileInitInternal_L39\n 284.121508636 0 probe_postgres:XLogFileInitInternal_L39\n 285.124865325 0 probe_postgres:XLogFileInitInternal_L39\n 286.126932016 0 probe_postgres:XLogFileInitInternal_L39\n 287.129874993 11 probe_postgres:XLogFileInitInternal_L39\n 288.131838429 15 probe_postgres:XLogFileInitInternal_L39\n 289.133609021 13 probe_postgres:XLogFileInitInternal_L39\n 290.136254341 6 probe_postgres:XLogFileInitInternal_L39\n 291.139368485 5 probe_postgres:XLogFileInitInternal_L39\n 292.142728293 6 probe_postgres:XLogFileInitInternal_L39\n 293.148078766 2 probe_postgres:XLogFileInitInternal_L39\n 294.150258476 0 probe_postgres:XLogFileInitInternal_L39\n 295.172398897 0 probe_postgres:XLogFileInitInternal_L39\n 296.174658196 0 probe_postgres:XLogFileInitInternal_L39\n 297.176818943 0 probe_postgres:XLogFileInitInternal_L39\n 298.179003473 14 probe_postgres:XLogFileInitInternal_L39\n 299.181597777 14 probe_postgres:XLogFileInitInternal_L39\n 300.184711566 14 probe_postgres:XLogFileInitInternal_L39\n 301.188919194 6 probe_postgres:XLogFileInitInternal_L39\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 24 Jul 2021 16:02:07 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: log_checkpoint's \"WAL file(s) added\" is misleading to the point\n of uselessness"
},
{
"msg_contents": "\n\nOn 2021/07/25 7:50, Andres Freund wrote:\n> Hi,\n> \n> I've been repeatedly confused by the the number of WAL files supposedly\n> added. Even when 100s of new WAL files are created the relevant portion\n> of log_checkpoints will only ever list zero or one added WAL file.\n> \n> The reason for that is that CheckpointStats.ckpt_segs_added is only\n> incremented in PreallocXlogFiles(). Which has the following comment:\n> * XXX this is currently extremely conservative, since it forces only one\n> * future log segment to exist, and even that only if we are 75% done with\n> * the current one. This is only appropriate for very low-WAL-volume systems.\n> \n> Whereas in real workloads WAL files are almost exclusively created via\n> XLogWrite()->XLogFileInit().\n> \n> I think we should consider just removing that field. Or, even better, show\n> something accurate instead.\n\n+1 to show something accurate instead.\n\nIt's also worth showing them in monitoring stats view like pg_stat_wal?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Sun, 25 Jul 2021 12:10:07 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: log_checkpoint's \"WAL file(s) added\" is misleading to the point\n of uselessness"
},
{
"msg_contents": "On 7/24/21, 8:10 PM, \"Fujii Masao\" <masao.fujii@oss.nttdata.com> wrote:\r\n> On 2021/07/25 7:50, Andres Freund wrote:\r\n>> Hi,\r\n>>\r\n>> I've been repeatedly confused by the the number of WAL files supposedly\r\n>> added. Even when 100s of new WAL files are created the relevant portion\r\n>> of log_checkpoints will only ever list zero or one added WAL file.\r\n>>\r\n>> The reason for that is that CheckpointStats.ckpt_segs_added is only\r\n>> incremented in PreallocXlogFiles(). Which has the following comment:\r\n>> * XXX this is currently extremely conservative, since it forces only one\r\n>> * future log segment to exist, and even that only if we are 75% done with\r\n>> * the current one. This is only appropriate for very low-WAL-volume systems.\r\n>>\r\n>> Whereas in real workloads WAL files are almost exclusively created via\r\n>> XLogWrite()->XLogFileInit().\r\n>>\r\n>> I think we should consider just removing that field. Or, even better, show\r\n>> something accurate instead.\r\n>\r\n> +1 to show something accurate instead.\r\n>\r\n> It's also worth showing them in monitoring stats view like pg_stat_wal?\r\n\r\n+1. I was confused by this when working on a WAL pre-allocation\r\npatch [0]. Perhaps it could be replaced by a new parameter and a new\r\nfield in pg_stat_wal. How about something like log_wal_init_interval,\r\nwhere the value is the minimum amount of time between reporting the\r\nnumber of WAL segments created since the last report?\r\n\r\nNathan\r\n\r\n[0] https://www.postgresql.org/message-id/flat/20201225200953.jjkrytlrzojbndh5@alap3.anarazel.de\r\n\r\n",
"msg_date": "Mon, 26 Jul 2021 20:27:21 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: log_checkpoint's \"WAL file(s) added\" is misleading to the point\n of\n uselessness"
},
{
"msg_contents": "\n\nOn 2021/07/27 5:27, Bossart, Nathan wrote:\n> +1. I was confused by this when working on a WAL pre-allocation\n> patch [0]. Perhaps it could be replaced by a new parameter and a new\n> field in pg_stat_wal. How about something like log_wal_init_interval,\n> where the value is the minimum amount of time between reporting the\n> number of WAL segments created since the last report?\n\nYou mean to introduce new GUC like log_wal_init_interval and that\nthe number of WAL files created since the last report will be logged\nevery that interval? But isn't it better and simpler to just expose\nthe accumulated number of WAL files created, in pg_stat_wal view\nor elsewhere? If so, we can easily get to know the number of WAL files\ncreated in every interval by checking the view and calculating the diff.\n\nRegards,\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 27 Jul 2021 09:22:30 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: log_checkpoint's \"WAL file(s) added\" is misleading to the point\n of uselessness"
},
{
"msg_contents": "On 7/26/21, 5:23 PM, \"Fujii Masao\" <masao.fujii@oss.nttdata.com> wrote:\r\n> On 2021/07/27 5:27, Bossart, Nathan wrote:\r\n>> +1. I was confused by this when working on a WAL pre-allocation\r\n>> patch [0]. Perhaps it could be replaced by a new parameter and a new\r\n>> field in pg_stat_wal. How about something like log_wal_init_interval,\r\n>> where the value is the minimum amount of time between reporting the\r\n>> number of WAL segments created since the last report?\r\n>\r\n> You mean to introduce new GUC like log_wal_init_interval and that\r\n> the number of WAL files created since the last report will be logged\r\n> every that interval? But isn't it better and simpler to just expose\r\n> the accumulated number of WAL files created, in pg_stat_wal view\r\n> or elsewhere? If so, we can easily get to know the number of WAL files\r\n> created in every interval by checking the view and calculating the diff.\r\n\r\nI agree with you about adding a new field to pg_stat_wal. The\r\nparameter would just be a convenient way of logging this information\r\nfor future reference. I don't feel strongly about the parameter if\r\nyou think the pg_stat_wal addition is enough.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Tue, 27 Jul 2021 00:40:11 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: log_checkpoint's \"WAL file(s) added\" is misleading to the point\n of\n uselessness"
},
{
"msg_contents": "Hi,\n\nOn 2021-07-26 20:27:21 +0000, Bossart, Nathan wrote:\n> +1. I was confused by this when working on a WAL pre-allocation\n> patch [0]. Perhaps it could be replaced by a new parameter and a new\n> field in pg_stat_wal. How about something like log_wal_init_interval,\n> where the value is the minimum amount of time between reporting the\n> number of WAL segments created since the last report?\n\nWhy not just make the number in log_checkpoints accurate? There's no\npoint in the current number, so we don't need to preserve it...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 26 Jul 2021 17:48:00 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: log_checkpoint's \"WAL file(s) added\" is misleading to the point\n of uselessness"
},
{
"msg_contents": "Hi,\n\nOn 2021-07-25 12:10:07 +0900, Fujii Masao wrote:\n> It's also worth showing them in monitoring stats view like pg_stat_wal?\n\nI'm not convinced that's all that meaningful. It makes sense to include\nit as part of the checkpoint output, because checkpoints determine when\nWAL can be recycled etc. It's not that clear to me how to represent that\nas part of pg_stat_wal?\n\nI guess we could add columns for the amount of WAL has been a) newly\ncreated b) recycled c) removed. In combination that *does* seem\nuseful. But also a mostly independent change...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 26 Jul 2021 17:50:59 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: log_checkpoint's \"WAL file(s) added\" is misleading to the point\n of uselessness"
},
{
"msg_contents": "On 7/26/21, 5:48 PM, \"Andres Freund\" <andres@anarazel.de> wrote:\r\n> On 2021-07-26 20:27:21 +0000, Bossart, Nathan wrote:\r\n>> +1. I was confused by this when working on a WAL pre-allocation\r\n>> patch [0]. Perhaps it could be replaced by a new parameter and a new\r\n>> field in pg_stat_wal. How about something like log_wal_init_interval,\r\n>> where the value is the minimum amount of time between reporting the\r\n>> number of WAL segments created since the last report?\r\n>\r\n> Why not just make the number in log_checkpoints accurate? There's no\r\n> point in the current number, so we don't need to preserve it...\r\n\r\nMy understanding is that the statistics logged for log_checkpoints are\r\njust for that specific checkpoint. From that angle, the value for the\r\nnumber of WAL files added is technically correct. Checkpoints will\r\nonly ever create zero or one new files via PreallocXlogFiles(). If we\r\nalso added all the segments created outside of the checkpoint, the\r\nvalue for \"added\" would go from meaning \"WAL files created by this\r\ncheckpoint\" to \"WAL files creates since the last checkpoint.\" That's\r\nprobably less confusing than what's there today, but it's still\r\nslightly different than all the other log_checkpoints stats.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Tue, 27 Jul 2021 03:39:38 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: log_checkpoint's \"WAL file(s) added\" is misleading to the point\n of\n uselessness"
}
] |
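The inaccuracy discussed in the thread above comes from counting "WAL file(s) added" only in PreallocXlogFiles() instead of at the point where segment files are actually created. A minimal editorial sketch of the proposed fix is below; the struct and function names echo the real xlog.c identifiers, but the bodies are simplified stand-ins, not the actual PostgreSQL code.

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for the real CheckpointStatsData counters in xlog.c. */
typedef struct CheckpointStatsData
{
    int ckpt_segs_added;     /* # of brand-new WAL segment files created */
    int ckpt_segs_removed;   /* # of old segment files unlinked */
    int ckpt_segs_recycled;  /* # of old segment files renamed for reuse */
} CheckpointStatsData;

static CheckpointStatsData CheckpointStats;

/*
 * Hypothetical sketch of XLogFileInit(): when no existing segment can be
 * recycled, a new file is created with O_CREAT -- and that is where the
 * "added" counter should be incremented, so the log_checkpoints line
 * reflects every segment created since the previous checkpoint.
 */
static void
XLogFileInit(bool recycled_existing_segment)
{
    if (recycled_existing_segment)
        CheckpointStats.ckpt_segs_recycled++;
    else
        CheckpointStats.ckpt_segs_added++;  /* accurate "added" count */
}
```

Counting at the creation site means bursts of WAL generation between checkpoints (as in the perf-probe output above) would show up in the "added" figure instead of being invisible.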
[
{
"msg_contents": "Deduplicate choice of horizon for a relation procarray.c.\n\n5a1e1d83022 was a minimal bug fix for dc7420c2c92. To avoid future bugs of\nthat kind, deduplicate the choice of a relation's horizon into a new helper,\nGlobalVisHorizonKindForRel().\n\nAs the code in question was only introduced in dc7420c2c92 it seems worth\nbackpatching this change as well, otherwise 14 will look different from all\nother branches.\n\nA different approach to this was suggested by Matthias van de Meent.\n\nAuthor: Andres Freund\nDiscussion: https://postgr.es/m/20210621122919.2qhu3pfugxxp3cji@alap3.anarazel.de\nBackpatch: 14, like 5a1e1d83022\n\nBranch\n------\nREL_14_STABLE\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/3d0a4636aa4c976e971c05c77e162fc70c61f40b\n\nModified Files\n--------------\nsrc/backend/storage/ipc/procarray.c | 98 ++++++++++++++++++++++++-------------\n1 file changed, 64 insertions(+), 34 deletions(-)",
"msg_date": "Sun, 25 Jul 2021 03:34:22 +0000",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "pgsql: Deduplicate choice of horizon for a relation procarray.c."
},
{
"msg_contents": "Greetings,\n\n* Andres Freund (andres@anarazel.de) wrote:\n> As the code in question was only introduced in dc7420c2c92 it seems worth\n> backpatching this change as well, otherwise 14 will look different from all\n> other branches.\n\nInterestingly, these patches ended up actually introducing a difference\nbetween 14 and master in the form of:\n\nGlobalVisTestFor(Relation rel)\n\n- GlobalVisState *state;\n+ GlobalVisState *state = NULL;\n\nbeing done on master but not in the 14 stable branch, leading to, at\nleast for me:\n\n.../src/backend/storage/ipc/procarray.c: In function ‘GlobalVisTestFor’:\n.../src/backend/storage/ipc/procarray.c:4054:9: warning: ‘state’ may be used uninitialized in this function [-Wmaybe-uninitialized]\n 4054 | return state;\n | ^~~~~\n\nSeems like we should include that change in 14 too, to get rid of the\nabove warning and to make that bit of code the same too..?\n\nThanks!\n\nStephen",
"msg_date": "Fri, 27 Aug 2021 18:46:39 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Deduplicate choice of horizon for a relation procarray.c."
},
{
"msg_contents": "Hi,\n\nOn 2021-08-27 18:46:39 -0400, Stephen Frost wrote:\n> * Andres Freund (andres@anarazel.de) wrote:\n> > As the code in question was only introduced in dc7420c2c92 it seems worth\n> > backpatching this change as well, otherwise 14 will look different from all\n> > other branches.\n> \n> Interestingly, these patches ended up actually introducing a difference\n> between 14 and master in the form of:\n> \n> GlobalVisTestFor(Relation rel)\n> \n> - GlobalVisState *state;\n> + GlobalVisState *state = NULL;\n> \n> being done on master but not in the 14 stable branch, leading to, at\n> least for me:\n> \n> .../src/backend/storage/ipc/procarray.c: In function ‘GlobalVisTestFor’:\n> .../src/backend/storage/ipc/procarray.c:4054:9: warning: ‘state’ may be used uninitialized in this function [-Wmaybe-uninitialized]\n> 4054 | return state;\n> | ^~~~~\n> \n> Seems like we should include that change in 14 too, to get rid of the\n> above warning and to make that bit of code the same too..?\n\nDone! Stupid oversight :(\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 13 Sep 2021 17:01:53 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Deduplicate choice of horizon for a relation procarray.c."
}
] |
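The one-line backpatch discussed above, initializing the pointer to NULL, addresses a common -Wmaybe-uninitialized pattern: every switch arm assigns the variable, but the compiler cannot always prove that. The sketch below illustrates the pattern with stand-in types and names; it is not the actual GlobalVisTestFor() code from procarray.c.

```c
#include <assert.h>
#include <stddef.h>

/* Stand-ins for the real horizon kinds and GlobalVisState in procarray.c. */
typedef enum
{
    VISHORIZON_SHARED,
    VISHORIZON_CATALOG,
    VISHORIZON_DATA
} GlobalVisHorizonKind;

typedef struct GlobalVisState { int dummy; } GlobalVisState;

static GlobalVisState shared_vis, catalog_vis, data_vis;

/*
 * Sketch of the GlobalVisTestFor() shape: every case assigns "state",
 * yet without the "= NULL" initializer some compilers warn that the
 * return value may be uninitialized.  Initializing to NULL silences the
 * warning and lets an assertion catch a genuinely unhandled case.
 */
static GlobalVisState *
vis_test_for(GlobalVisHorizonKind kind)
{
    GlobalVisState *state = NULL;   /* avoids -Wmaybe-uninitialized */

    switch (kind)
    {
        case VISHORIZON_SHARED:  state = &shared_vis;  break;
        case VISHORIZON_CATALOG: state = &catalog_vis; break;
        case VISHORIZON_DATA:    state = &data_vis;    break;
    }

    assert(state != NULL);
    return state;
}
```

Keeping the initializer identical on all branches also avoids exactly the cross-branch divergence Stephen reported between 14 and master.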
[
{
"msg_contents": "Hi all,\n\nWhile looking at the code areas of $subject, I got surprised about a\ncouple of things:\n- pgbench has its own parsing routines for int64 and double, with\nan option to skip errors. That's not surprising in itself, but, for\nstrtodouble(), errorOK is always true, meaning that the error strings\nare dead. For strtoint64(), errorOK is false only when parsing a\nVariable, where a second error string is generated. I don't really\nthink that we need to be that pedantic about the type of errors\ngenerated in those code paths when failing to parse a variable, so I'd\nlike to propose a simplification of the code where we reuse the same\nerror message as for double, cutting a bit the number of translatable\nstrings.\n- pgbench and pg_verifybackup make use of pg_log_fatal() to report\nsome failures in code paths dedicated to command-line options, which\nis inconsistent with all the other tools that use pg_log_error().\n\nPlease find attached a patch to clean up all those inconsistencies.\n\nThoughts?\n--\nMichael",
"msg_date": "Mon, 26 Jul 2021 16:01:08 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Some code cleanup for pgbench and pg_verifybackup"
},
{
"msg_contents": "Bonjour Michaᅵl,\n\nMy 0.02ᅵ:\n\n> - pgbench has its own parsing routines for int64 and double, with\n> an option to skip errors. That's not surprising in itself, but, for\n> strtodouble(), errorOK is always true, meaning that the error strings\n> are dead.\n\nIndeed. However, there are \"atof\" calls for option parsing which should \nrather use strtodouble, and that may or may not call it with errorOk as \ntrue or false, it may depend.\n\n> For strtoint64(), errorOK is false only when parsing a Variable, where a \n> second error string is generated.\n\nISTM that it just returns false, there is no message about the parsing \nerror, hence the message is generated in the function.\n\n> I don't really think that we need to be that pedantic about the type of \n> errors generated in those code paths when failing to parse a variable, \n> so I'd like to propose a simplification of the code where we reuse the \n> same error message as for double, cutting a bit the number of \n> translatable strings.\n\nISTM that point is that errors from the parser are handled differently (by \ncalling some \"yyerror\" function which do different things), so they need a \nspecial call for that.\n\nFor other cases we would not to have to replicate generating an error \nmessages for each caller, so it is best done directly in the function. Ok, \ncurrently there is only one call, but there can be more, eg I have a \nnot-yet submitted patch to add a new option which will need to parse an \nint64.\n\n> - pgbench and pg_verifybackup make use of pg_log_fatal() to report\n> some failures in code paths dedicated to command-line options, which\n> is inconsistent with all the other tools that use pg_log_error().\n\nThe semantics for fatal vs error is that an error is somehow handled while \na fatal is not. If the log message is followed by an cold exit, ISTM that \nfatal is the right call, and I cannot help if other commands do not do \nthat. 
ISTM more logical to align other commands to fatal when appropriate.\n\n> Thoughts?\n\nI'd be in favor of letting it as it is.\n\n-- \nFabien.",
"msg_date": "Mon, 26 Jul 2021 10:13:27 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Some code cleanup for pgbench and pg_verifybackup"
},
{
"msg_contents": "On 2021-Jul-26, Fabien COELHO wrote:\n\n> > - pgbench and pg_verifybackup make use of pg_log_fatal() to report\n> > some failures in code paths dedicated to command-line options, which\n> > is inconsistent with all the other tools that use pg_log_error().\n> \n> The semantics for fatal vs error is that an error is somehow handled while a\n> fatal is not. If the log message is followed by an cold exit, ISTM that\n> fatal is the right call, and I cannot help if other commands do not do that.\n> ISTM more logical to align other commands to fatal when appropriate.\n\nI was surprised to discover a few weeks ago that pg_log_fatal() did not\nterminate the program, which was my expectation. If every single call\nto pg_log_fatal() is followed by exit(1), why not make pg_log_fatal()\nitself exit? Apparently this coding pattern confuses many people -- for\nexample pg_verifybackup.c lines 291ff fail to exit(1) after \"throwing\" a\nfatal error, while the block at lines 275 does the right thing.\n\n(In reality we cannot literally just exit(1) in pg_log_fatal(), because\nthere are quite a few places that do some other thing after the log\ncall and before exit(1), or terminate the program in some other way than\nexit().)\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\nEssentially, you're proposing Kevlar shoes as a solution for the problem\nthat you want to walk around carrying a loaded gun aimed at your foot.\n(Tom Lane)\n\n\n",
"msg_date": "Mon, 26 Jul 2021 15:35:29 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Some code cleanup for pgbench and pg_verifybackup"
},
{
"msg_contents": "On Mon, Jul 26, 2021 at 03:35:29PM -0400, Alvaro Herrera wrote:\n> On 2021-Jul-26, Fabien COELHO wrote:\n>> The semantics for fatal vs error is that an error is somehow handled while a\n>> fatal is not. If the log message is followed by an cold exit, ISTM that\n>> fatal is the right call, and I cannot help if other commands do not do that.\n>> ISTM more logical to align other commands to fatal when appropriate.\n\nI disagree. pgbench is an outlier here. There are 71 calls to\npg_log_fatal() in src/bin/, and pgbench counts for 54 of them. It\nwould be more consistent to align pgbench with the others.\n\n> I was surprised to discover a few weeks ago that pg_log_fatal() did not\n> terminate the program, which was my expectation. If every single call\n> to pg_log_fatal() is followed by exit(1), why not make pg_log_fatal()\n> itself exit? Apparently this coding pattern confuses many people -- for\n> example pg_verifybackup.c lines 291ff fail to exit(1) after \"throwing\" a\n> fatal error, while the block at lines 275 does the right thing.\n\nI remember having the same reaction when those logging APIs got\nintroduced (I may be wrong here), and I also recall that this point\nhas been discussed, where the conclusion was that the logging should\nnever exit() directly.\n\n> (In reality we cannot literally just exit(1) in pg_log_fatal(), because\n> there are quite a few places that do some other thing after the log\n> call and before exit(1), or terminate the program in some other way than\n> exit().)\n\nYes, that's obviously wrong. I am wondering if there is more of\nthat.\n--\nMichael",
"msg_date": "Tue, 27 Jul 2021 08:53:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Some code cleanup for pgbench and pg_verifybackup"
},
{
"msg_contents": "Bonjour Michaᅵl-san,\n\n>>> The semantics for fatal vs error is that an error is somehow handled while a\n>>> fatal is not. If the log message is followed by an cold exit, ISTM that\n>>> fatal is the right call, and I cannot help if other commands do not do that.\n>>> ISTM more logical to align other commands to fatal when appropriate.\n>\n> I disagree. pgbench is an outlier here. There are 71 calls to\n> pg_log_fatal() in src/bin/, and pgbench counts for 54 of them. It\n> would be more consistent to align pgbench with the others.\n\nI do not understand your disagreement. Do you disagree about the expected \nsemantics of fatal? Then why provide fatal if it should not be used?\nWhat is the expected usage of fatal?\n\nI do not dispute that pgbench is a statistical outlier. However, Pgbench \nis somehow special because it does not handle a lot of errors, hence a lot \nof \"fatal & exit\" pattern is used, and the user has to restart. There are \n76 calls to \"exit\" from pgbench, but only 23 from psql which is much \nlarger. ISTM that most \"normal\" pg programs won't do that because they are \nnice and try to handle errors.\n\nSo for me \"fatal\" is the right choice before exiting with a non zero \nstatus, but if \"error\" is called instead I do not think it matters much, \nyou do as you please.\n\n>> I was surprised to discover a few weeks ago that pg_log_fatal() did not\n>> terminate the program, which was my expectation.\n\nMine too.\n\n-- \nFabien.",
"msg_date": "Tue, 27 Jul 2021 06:36:15 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Some code cleanup for pgbench and pg_verifybackup"
},
{
"msg_contents": "On Tue, Jul 27, 2021 at 06:36:15AM +0200, Fabien COELHO wrote:\n> I do not understand your disagreement. Do you disagree about the expected\n> semantics of fatal? Then why provide fatal if it should not be used?\n> What is the expected usage of fatal?\n\nI disagree about the fact that pgbench uses pg_log_fatal() in ways\nthat other binaries don't do. For example, other things use\npg_log_error() followed by an exit(), but not this code. I am not\ngoing to fight hard on that, though.\n\nThat's a set of inconsistences I bumped into while plugging in\noption_parse_int() within pgbench.\n--\nMichael",
"msg_date": "Tue, 27 Jul 2021 15:56:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Some code cleanup for pgbench and pg_verifybackup"
},
{
"msg_contents": "Hello,\n\n>> I do not understand your disagreement. Do you disagree about the \n>> expected>> semantics of fatal? Then why provide fatal if it should not \n>> be used? What is the expected usage of fatal?\n>\n> I disagree about the fact that pgbench uses pg_log_fatal() in ways\n> that other binaries don't do.\n\nSure. Then what should be the expected usage of fatal? Doc says:\n\n * Severe errors that cause program termination. (One-shot programs may\n * chose to label even fatal errors as merely \"errors\". The distinction\n * is up to the program.)\n\npgbench is consistent with the doc. I prefer fatal for this purpose to \ndistinguish these clearly from recoverable errors, i.e. the programs goes \non despite the error, or at least for some time. I think it is good to \nhave such a distinction, and bgpench has many errors and many fatals, \nalthough maybe some error should be fatal and some fatal should be error…\n\n> For example, other things use pg_log_error() followed by an exit(), but \n> not this code.\n\nSure.\n\n> I am not going to fight hard on that, though.\n\nMe neither.\n\n> That's a set of inconsistences I bumped into while plugging in \n> option_parse_int()\n\nWhich is a very good thing! I have already been bitten by atoi.\n\n-- \nFabien.",
"msg_date": "Tue, 27 Jul 2021 11:45:07 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Some code cleanup for pgbench and pg_verifybackup"
},
{
"msg_contents": "On Tue, Jul 27, 2021 at 11:45:07AM +0200, Fabien COELHO wrote:\n> Sure. Then what should be the expected usage of fatal? Doc says:\n> \n> * Severe errors that cause program termination. (One-shot programs may\n> * chose to label even fatal errors as merely \"errors\". The distinction\n> * is up to the program.)\n>\n> pgbench is consistent with the doc. I prefer fatal for this purpose to\n> distinguish these clearly from recoverable errors, i.e. the programs goes on\n> despite the error, or at least for some time. I think it is good to have\n> such a distinction, and bgpench has many errors and many fatals, although\n> maybe some error should be fatal and some fatal should be error.\n\nHm? Incorrect option values are recoverable errors, no? The root\ncause of the problem is the user. One can note that pg_log_fatal() vs\npg_log_error() results in a score of 54 vs 50 in src/bin/pgbench/, so\nI am not quite sure your last statement is true.\n\n>> That's a set of inconsistences I bumped into while plugging in\n>> option_parse_int()\n> \n> Which is a very good thing! I have already been bitten by atoi.\n\nBy the way, if you can write a patch that makes use of strtodouble()\nfor the float option parsing in pgbench with the way you see things,\nI'd welcome that. This is a local change as only pgbench needs to\ncare about that.\n--\nMichael",
"msg_date": "Tue, 27 Jul 2021 19:58:39 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Some code cleanup for pgbench and pg_verifybackup"
},
{
"msg_contents": "> On 27 Jul 2021, at 01:53, Michael Paquier <michael@paquier.xyz> wrote:\n\n> ..and I also recall that this point has been discussed, where the conclusion\n> was that the logging should never exit() directly.\n\n\nThat's a characteristic of the API which still holds IMO. If we want\nsomething, it's better to have an explicit exit function which takes a log\nstring than a log function that exits.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Tue, 27 Jul 2021 13:02:34 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Some code cleanup for pgbench and pg_verifybackup"
},
{
"msg_contents": "On Tue, Jul 27, 2021 at 08:53:52AM +0900, Michael Paquier wrote:\n> On Mon, Jul 26, 2021 at 03:35:29PM -0400, Alvaro Herrera wrote:\n>> (In reality we cannot literally just exit(1) in pg_log_fatal(), because\n>> there are quite a few places that do some other thing after the log\n>> call and before exit(1), or terminate the program in some other way than\n>> exit().)\n> \n> Yes, that's obviously wrong. I am wondering if there is more of\n> that.\n\nI have been looking at that. Here are some findings:\n- In pg_archivecleanup, CleanupPriorWALFiles() does not bother\nexit()'ing with an error code. Shouldn't we worry about reporting\nthat correctly? It seems strange to not inform users about paths that\nwould be broken, as that could bloat the archives without one knowing\nabout it.\n- In pg_basebackup.c, ReceiveTarAndUnpackCopyChunk() would not\nexit() when failing to change permissions for non-WIN32.\n- pg_recvlogical is missing a failure handling for fstat(), as of\n5c0de38. \n- pg_verifybackup is in the wrong, as mentioned upthread.\n\nThoughts? All that does not seem to enter into the category of things\nworth back-patching, except for pg_verifybackup.\n--\nMichael",
"msg_date": "Wed, 28 Jul 2021 12:18:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Some code cleanup for pgbench and pg_verifybackup"
},
{
"msg_contents": "\n> [...] Thoughts?\n\nFor pgbench it is definitely ok to add the exit. For others the added \nexits look reasonable, but I do not know them intimately enough to be sure \nthat it is the right thing to do in all cases.\n\n> All that does not seem to enter into the category of things worth \n> back-patching, except for pg_verifybackup.\n\nYes.\n\n-- \nFabien.\n\n\n",
"msg_date": "Wed, 28 Jul 2021 08:49:12 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Some code cleanup for pgbench and pg_verifybackup"
},
{
"msg_contents": "On Tue, Jul 27, 2021 at 11:18 PM Michael Paquier <michael@paquier.xyz> wrote:\n> I have been looking at that. Here are some findings:\n> - In pg_archivecleanup, CleanupPriorWALFiles() does not bother\n> exit()'ing with an error code. Shouldn't we worry about reporting\n> that correctly? It seems strange to not inform users about paths that\n> would be broken, as that could bloat the archives without one knowing\n> about it.\n> - In pg_basebackup.c, ReceiveTarAndUnpackCopyChunk() would not\n> exit() when failing to change permissions for non-WIN32.\n> - pg_recvlogical is missing a failure handling for fstat(), as of\n> 5c0de38.\n> - pg_verifybackup is in the wrong, as mentioned upthread.\n>\n> Thoughts? All that does not seem to enter into the category of things\n> worth back-patching, except for pg_verifybackup.\n\nI think all of these are reasonable fixes. In the case of\npg_basebackup, a chmod() failure doesn't necessarily oblige us to give\nup and exit; we could presumably still write the data. But it may be\nbest to give up and exit. The other cases appear to be clear bugs.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 28 Jul 2021 10:28:13 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Some code cleanup for pgbench and pg_verifybackup"
},
{
"msg_contents": "On Wed, Jul 28, 2021 at 10:28:13AM -0400, Robert Haas wrote:\n> I think all of these are reasonable fixes. In the case of\n> pg_basebackup, a chmod() failure doesn't necessarily oblige us to give\n> up and exit; we could presumably still write the data. But it may be\n> best to give up and exit. The other cases appear to be clear bugs.\n\nYeah, there are advantages in both positions, still it is more natural\nto me to not ignore this kind of failures. Note the inconsistency\nwith initdb for example. So, done.\n--\nMichael",
"msg_date": "Thu, 29 Jul 2021 16:16:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Some code cleanup for pgbench and pg_verifybackup"
}
] |
[
{
"msg_contents": "StartupXLOG() has code beginning around line 7900 of xlog.c that\ndecides, at the end of recovery, between four possible courses of\naction. It either writes an end-of-recovery record, or requests a\ncheckpoint, or creates a checkpoint, or does nothing, depending on the\nvalue of 3 flag variables, and also on whether we're still able to\nread the last checkpoint record:\n\n checkPointLoc = ControlFile->checkPoint;\n\n /*\n * Confirm the last checkpoint is available for us to recover\n * from if we fail.\n */\n record = ReadCheckpointRecord(xlogreader,\ncheckPointLoc, 1, false);\n if (record != NULL)\n {\n promoted = true;\n\nIt seems to me that ReadCheckpointRecord() should never fail here. It\nshould always be true, even when we're not in recovery, that the last\ncheckpoint record is readable. If there's ever a situation where that\nis not true, even for an instant, then a crash at that point will be\nunrecoverable. Now, one way that it could happen is if we've got a bug\nin the code someplace that removes WAL segments too soon. However, if\nwe have such a bug, we should fix it. What this code does is says \"oh,\nI guess we removed the old checkpoint too soon, no big deal, let's\njust be more aggressive about getting the next one done,\" which I do\nnot think is the right response. Aside from a bug, the only other way\nI can see it happening is if someone is manually removing WAL segments\nas the server is running through recovery, perhaps as some last-ditch\nplay to avoid running out of disk space. I don't think the server\nneeds to have - or should have - code to cater to such crazy\nscenarios. Therefore I think that this check, at least in its current\nform, is not sensible.\n\nMy first thought was that we should do the check unconditionally,\nrather than just when bgwriterLaunched && LocalPromoteIsTriggered, and\nERROR if it fails. But then I wondered what the point of that would be\nexactly. 
If we have such a bug -- and to the best of my knowledge\nthere's no evidence that we do -- there's no particular reason it\nshould only happen at the end of recovery. It could happen any time\nthe system -- or the user, or malicious space aliens -- remove files\nfrom pg_wal, and we have no real idea about the timing of malicious\nspace alien activity, so doing the test here rather than anywhere else\njust seems like a shot in the dark. Perhaps the most logical place\nwould be to move it to CreateCheckPoint() just after we remove old\nxlog files, but we don't have the xlogreader there, and anyway I don't\nsee how it's really going to help. What bug do we expect to catch by\nremoving files we think we don't need and then checking that we didn't\nremove the files we think we do need? That seems more like grasping at\nstraws than a serious attempt to make things work better.\n\nSo at the moment I am leaning toward the view that we should just\nremove this check entirely, as in the attached, proposed patch.\n\nReally, I think we should consider going further. If it's safe to\nwrite an end-of-recovery record rather than a checkpoint, why not do\nso all the time? Right now we fail to do that in the above-described\n\"impossible\" scenario where the previous checkpoint record can't be\nread, or if we're exiting archive recovery for some reason other than\na promotion request, or if we're in single-user mode, or if we're in\ncrash recovery. Presumably, people would like to start up the server\nquickly in all of those scenarios, so the only reason not to use this\ntechnology all the time is if we think it's safe in some scenarios and\nnot others. I can't think of a reason why it matters why we're exiting\narchive recovery, nor can I think of a reason why it matters whether\nwe're in single user mode. 
The distinction between crash recovery and\narchive recovery does seem to matter, but if anything the crash\nrecovery scenario seems simpler, because then there's only one\ntimeline involved.\n\nI realize that conservatism may have played a role in this code ending\nup looking the way that it does; someone seems to have thought it\nwould be better not to rely on a new idea in all cases. From my point\nof view, though, it's scary to have so many cases, especially cases\nthat don't seem like they should ever be reached. I think that\nsimplifying the logic here and trying to do the same things in as many\ncases as we can will lead to better robustness. Imagine if instead of\nall the hairy logic we have now we just replaced this whole if\n(IsInRecovery) stanza with this:\n\nif (InRecovery)\n CreateEndOfRecoveryRecord();\n\nThat would be WAY easier to reason about than the rat's nest we have\nhere today. Now, I am not sure what it would take to get there, but I\nthink that is the direction we ought to be heading.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 26 Jul 2021 12:12:53 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "needless complexity in StartupXLOG"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> So at the moment I am leaning toward the view that we should just\n> remove this check entirely, as in the attached, proposed patch.\n\nHaven't dug in deeply but at least following your explanation and\nreading over the patch and the code a bit, I tend to agree.\n\n> Really, I think we should consider going further. If it's safe to\n> write an end-of-recovery record rather than a checkpoint, why not do\n> so all the time? Right now we fail to do that in the above-described\n> \"impossible\" scenario where the previous checkpoint record can't be\n> read, or if we're exiting archive recovery for some reason other than\n> a promotion request, or if we're in single-user mode, or if we're in\n> crash recovery. Presumably, people would like to start up the server\n> quickly in all of those scenarios, so the only reason not to use this\n> technology all the time is if we think it's safe in some scenarios and\n> not others. I can't think of a reason why it matters why we're exiting\n> archive recovery, nor can I think of a reason why it matters whether\n> we're in single user mode. The distinction between crash recovery and\n> archive recovery does seem to matter, but if anything the crash\n> recovery scenario seems simpler, because then there's only one\n> timeline involved.\n\nYeah, tend to agree with this too ... but something I find a bit curious\nis the comment:\n\n* Insert a special WAL record to mark the end of\n* recovery, since we aren't doing a checkpoint.\n\n... immediately after setting promoted = true, and then at the end of\nStartupXLOG() having:\n\nif (promoted)\n\tRequestCheckpoint(CHECKPOINT_FORCE);\n\nmaybe I'm missing something, but seems like that comment isn't being\nterribly clear. 
Perhaps we aren't doing a full checkpoint *there*, but\nsure looks like we're going to do one moments later regardless of\nanything else since we've set promoted to true..?\n\n> I realize that conservatism may have played a role in this code ending\n> up looking the way that it does; someone seems to have thought it\n> would be better not to rely on a new idea in all cases. From my point\n> of view, though, it's scary to have so many cases, especially cases\n> that don't seem like they should ever be reached. I think that\n> simplifying the logic here and trying to do the same things in as many\n> cases as we can will lead to better robustness. Imagine if instead of\n> all the hairy logic we have now we just replaced this whole if\n> (IsInRecovery) stanza with this:\n> \n> if (InRecovery)\n> CreateEndOfRecoveryRecord();\n> \n> That would be WAY easier to reason about than the rat's nest we have\n> here today. Now, I am not sure what it would take to get there, but I\n> think that is the direction we ought to be heading.\n\nAgreed that simpler logic is better, provided it's correct logic, of\ncourse. Finding better ways to test all of this would be really nice.\n\nThanks!\n\nStephen",
"msg_date": "Mon, 26 Jul 2021 13:32:31 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: needless complexity in StartupXLOG"
},
{
"msg_contents": "On Mon, Jul 26, 2021 at 1:32 PM Stephen Frost <sfrost@snowman.net> wrote:\n> Yeah, tend to agree with this too ... but something I find a bit curious\n> is the comment:\n>\n> * Insert a special WAL record to mark the end of\n> * recovery, since we aren't doing a checkpoint.\n>\n> ... immediately after setting promoted = true, and then at the end of\n> StartupXLOG() having:\n>\n> if (promoted)\n> RequestCheckpoint(CHECKPOINT_FORCE);\n>\n> maybe I'm missing something, but seems like that comment isn't being\n> terribly clear. Perhaps we aren't doing a full checkpoint *there*, but\n> sure looks like we're going to do one moments later regardless of\n> anything else since we've set promoted to true..?\n\nYep. So it's a question of whether we allow operations that might\nwrite WAL in the meantime. When we write the checkpoint record right\nhere, there can't be any WAL from the new server lifetime until the\ncheckpoint completes. When we write an end-of-recovery record, there\ncan. And there could actually be quite a bit, because if we do the\ncheckpoint right in this section of code, it will be a fast\ncheckpoint, whereas in the code you quoted above, it's a spread\ncheckpoint, which takes a lot longer. So the question is whether it's\nreasonable to give the checkpoint some time to complete or whether it\nneeds to be completed right now.\n\n> Agreed that simpler logic is better, provided it's correct logic, of\n> course. Finding better ways to test all of this would be really nice.\n\nYeah, and there again, it's a lot easier to test if we don't have as\nmany cases. Now no single change is going to fix that, but the number\nof flag variables in xlog.c is simply bonkers. This particular stretch\nof code uses 3 of them to even decide whether to attempt the test in\nquestion, and all of those are set in complex ways depending on the\nvalues of still more flag variables. 
The comments where\nbgwriterLaunched is set claim that we only do this during archive\nrecovery, not crash recovery, but the code depends on the value of\nArchiveRecoveryRequested, not InArchiveRecovery. So I wonder if we\ncan't get the bgwriter to run even during crash recovery in the\nscenario described by:\n\n * It's possible that archive recovery was requested, but we don't\n * know how far we need to replay the WAL before we reach consistency.\n * This can happen for example if a base backup is taken from a\n * running server using an atomic filesystem snapshot, without calling\n * pg_start/stop_backup. Or if you just kill a running primary server\n * and put it into archive recovery by creating a recovery signal\n * file.\n\nIf we ran the bgwriter all the time during crash recovery, we'd know\nfor sure whether that causes any problems. If we only do it like this\nin certain corner cases, it's much more likely that we have bugs.\nGrumble, grumble.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 26 Jul 2021 15:53:20 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: needless complexity in StartupXLOG"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Mon, Jul 26, 2021 at 1:32 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > Yeah, tend to agree with this too ... but something I find a bit curious\n> > is the comment:\n> >\n> > * Insert a special WAL record to mark the end of\n> > * recovery, since we aren't doing a checkpoint.\n> >\n> > ... immediately after setting promoted = true, and then at the end of\n> > StartupXLOG() having:\n> >\n> > if (promoted)\n> > RequestCheckpoint(CHECKPOINT_FORCE);\n> >\n> > maybe I'm missing something, but seems like that comment isn't being\n> > terribly clear. Perhaps we aren't doing a full checkpoint *there*, but\n> > sure looks like we're going to do one moments later regardless of\n> > anything else since we've set promoted to true..?\n> \n> Yep. So it's a question of whether we allow operations that might\n> write WAL in the meantime. When we write the checkpoint record right\n> here, there can't be any WAL from the new server lifetime until the\n> checkpoint completes. When we write an end-of-recovery record, there\n> can. And there could actually be quite a bit, because if we do the\n> checkpoint right in this section of code, it will be a fast\n> checkpoint, whereas in the code you quoted above, it's a spread\n> checkpoint, which takes a lot longer. So the question is whether it's\n> reasonable to give the checkpoint some time to complete or whether it\n> needs to be completed right now.\n\nAll I was really trying to point out above was that the comment might be\nimproved upon, just so someone understands that we aren't doing a\ncheckpoint at this particular place, but one will be done later due to\nthe promotion. Maybe I'm being a bit extra with that, but that was my\nthought when reading the code and the use of the promoted flag variable.\n\n> > Agreed that simpler logic is better, provided it's correct logic, of\n> > course. 
Finding better ways to test all of this would be really nice.\n> \n> Yeah, and there again, it's a lot easier to test if we don't have as\n> many cases. Now no single change is going to fix that, but the number\n> of flag variables in xlog.c is simply bonkers. This particular stretch\n> of code uses 3 of them to even decide whether to attempt the test in\n> question, and all of those are set in complex ways depending on the\n> values of still more flag variables. The comments where\n> bgwriterLaunched is set claim that we only do this during archive\n> recovery, not crash recovery, but the code depends on the value of\n> ArchiveRecoveryRequested, not InArchiveRecovery. So I wonder if we\n> can't get the bgwriter to run even during crash recovery in the\n> scenario described by:\n> \n> * It's possible that archive recovery was requested, but we don't\n> * know how far we need to replay the WAL before we reach consistency.\n> * This can happen for example if a base backup is taken from a\n> * running server using an atomic filesystem snapshot, without calling\n> * pg_start/stop_backup. Or if you just kill a running primary server\n> * and put it into archive recovery by creating a recovery signal\n> * file.\n> \n> If we ran the bgwriter all the time during crash recovery, we'd know\n> for sure whether that causes any problems. If we only do it like this\n> in certain corner cases, it's much more likely that we have bugs.\n> Grumble, grumble.\n\nYeah ... not to mention that it really is just incredibly dangerous to\nuse such an approach for PITR. 
For my 2c, I'd rather we figure out a\nway to prevent this than to imply that we support it when we have no way\nof knowing if we actually have replayed far enough to be consistent.\nThat isn't to say that using snapshots for database backups isn't\npossible, but it should be done in-between pg_start/stop_backup calls\nwhich properly grab the returned info from those and store the backup\nlabel with the snapshot, etc.\n\nThanks,\n\nStephen",
"msg_date": "Mon, 26 Jul 2021 16:15:23 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: needless complexity in StartupXLOG"
},
{
"msg_contents": "On Mon, Jul 26, 2021 at 03:53:20PM -0400, Robert Haas wrote:\n> Yeah, and there again, it's a lot easier to test if we don't have as\n> many cases. Now no single change is going to fix that, but the number\n> of flag variables in xlog.c is simply bonkers. This particular stretch\n> of code uses 3 of them to even decide whether to attempt the test in\n> question, and all of those are set in complex ways depending on the\n> values of still more flag variables. The comments where\n> bgwriterLaunched is set claim that we only do this during archive\n> recovery, not crash recovery, but the code depends on the value of\n> ArchiveRecoveryRequested, not InArchiveRecovery. So I wonder if we\n> can't get the bgwriter to run even during crash recovery in the\n> scenario described by:\n\nI'm not following along closely and maybe you're already aware of this one?\nhttps://commitfest.postgresql.org/33/2706/\nBackground writer and checkpointer in crash recovery\n\n@Thomas:\nhttps://www.postgresql.org/message-id/CA%2BTgmoYmw%3D%3DTOJ6EzYb_vcjyS09NkzrVKSyBKUUyo1zBEaJASA%40mail.gmail.com\n> * It's possible that archive recovery was requested, but we don't\n> * know how far we need to replay the WAL before we reach consistency.\n> * This can happen for example if a base backup is taken from a\n> * running server using an atomic filesystem snapshot, without calling\n> * pg_start/stop_backup. Or if you just kill a running primary server\n> * and put it into archive recovery by creating a recovery signal\n> * file.\n> \n> If we ran the bgwriter all the time during crash recovery, we'd know\n> for sure whether that causes any problems. If we only do it like this\n> in certain corner cases, it's much more likely that we have bugs.\n> Grumble, grumble.\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 26 Jul 2021 19:36:52 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: needless complexity in StartupXLOG"
},
{
"msg_contents": "On Mon, Jul 26, 2021 at 4:15 PM Stephen Frost <sfrost@snowman.net> wrote:\n> All I was really trying to point out above was that the comment might be\n> improved upon, just so someone understands that we aren't doing a\n> checkpoint at this particular place, but one will be done later due to\n> the promotion. Maybe I'm being a bit extra with that, but that was my\n> thought when reading the code and the use of the promoted flag variable.\n\nYeah, I agree, it confused me too, at first.\n\n> Yeah ... not to mention that it really is just incredibly dangerous to\n> use such an approach for PITR. For my 2c, I'd rather we figure out a\n> way to prevent this than to imply that we support it when we have no way\n> of knowing if we actually have replayed far enough to be consistent.\n> That isn't to say that using snapshots for database backups isn't\n> possible, but it should be done in-between pg_start/stop_backup calls\n> which properly grab the returned info from those and store the backup\n> label with the snapshot, etc.\n\nMy position on that is that I would not particularly recommend the\ntechnique described here, but I would not choose to try to block it\neither. That's an argument for another thread, though.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 27 Jul 2021 08:23:15 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: needless complexity in StartupXLOG"
},
{
"msg_contents": "At Mon, 26 Jul 2021 16:15:23 -0400, Stephen Frost <sfrost@snowman.net> wrote in \n> Greetings,\n> \n> * Robert Haas (robertmhaas@gmail.com) wrote:\n> > On Mon, Jul 26, 2021 at 1:32 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > > Yeah, tend to agree with this too ... but something I find a bit curious\n> > > is the comment:\n> > >\n> > > * Insert a special WAL record to mark the end of\n> > > * recovery, since we aren't doing a checkpoint.\n> > >\n> > > ... immediately after setting promoted = true, and then at the end of\n> > > StartupXLOG() having:\n> > >\n> > > if (promoted)\n> > > RequestCheckpoint(CHECKPOINT_FORCE);\n> > >\n> > > maybe I'm missing something, but seems like that comment isn't being\n> > > terribly clear. Perhaps we aren't doing a full checkpoint *there*, but\n> > > sure looks like we're going to do one moments later regardless of\n> > > anything else since we've set promoted to true..?\n> > \n> > Yep. So it's a question of whether we allow operations that might\n> > write WAL in the meantime. When we write the checkpoint record right\n> > here, there can't be any WAL from the new server lifetime until the\n> > checkpoint completes. When we write an end-of-recovery record, there\n> > can. And there could actually be quite a bit, because if we do the\n> > checkpoint right in this section of code, it will be a fast\n> > checkpoint, whereas in the code you quoted above, it's a spread\n> > checkpoint, which takes a lot longer. So the question is whether it's\n> > reasonable to give the checkpoint some time to complete or whether it\n> > needs to be completed right now.\n> \n> All I was really trying to point out above was that the comment might be\n> improved upon, just so someone understands that we aren't doing a\n> checkpoint at this particular place, but one will be done later due to\n> the promotion. 
Maybe I'm being a bit extra with that, but that was my\n> thought when reading the code and the use of the promoted flag variable.\n\n(I feel we don't need to check for the last checkpoint, either.)\n\nFWIW, by the way, I complained that the variable name \"promoted\" is a\nbit confusing. Its old name was fast_promoted, which means that fast\npromotion is being *requested* and ongoing. On the other hand the\ncurrent name \"promoted\" still means \"(fast=non-fallback) promotion is\nongoing\", so there was a conversation as follows.\n\nhttps://www.postgresql.org/message-id/9fdd994d-a531-a52b-7906-e1cc22701310%40oss.nttdata.com\n\n>> How about changing it to fallback_promotion, or some names with more\n>> behavior-specific name like immediate_checkpoint_needed?\n> \n> I like doEndOfRecoveryCkpt or something, but I have no strong opinion\n> about that flag naming. So I'm ok with immediate_checkpoint_needed\n> if others also like that, too.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 27 Jul 2021 22:17:08 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: needless complexity in StartupXLOG"
},
{
"msg_contents": "On Tue, Jul 27, 2021 at 9:18 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> FWIW, by the way, I complained that the variable name \"promoted\" is a\n> bit confusing. It's old name was fast_promoted, which means that fast\n> promotion is being *requsted* and ongoing. On the other hand the\n> current name \"promoted\" still means \"(fast=non-fallback) promotion is\n> ongoing\" so there was a conversation as the follows.\n>\n> https://www.postgresql.org/message-id/9fdd994d-a531-a52b-7906-e1cc22701310%40oss.nttdata.com\n\nI agree - that variable name is also not great. I am open to making\nimprovements in that area and in others that have been suggested on\nthis thread, but my immediate goal is to figure out whether anyone\nobjects to me committing the posted patch. If nobody comes up with a\nreason why it's a bad idea in the next few days, I'll plan to move\nahead with it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 27 Jul 2021 11:03:14 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: needless complexity in StartupXLOG"
},
{
"msg_contents": "At Tue, 27 Jul 2021 11:03:14 -0400, Robert Haas <robertmhaas@gmail.com> wrote in \n> On Tue, Jul 27, 2021 at 9:18 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > FWIW, by the way, I complained that the variable name \"promoted\" is a\n> > bit confusing. It's old name was fast_promoted, which means that fast\n> > promotion is being *requsted* and ongoing. On the other hand the\n> > current name \"promoted\" still means \"(fast=non-fallback) promotion is\n> > ongoing\" so there was a conversation as the follows.\n> >\n> > https://www.postgresql.org/message-id/9fdd994d-a531-a52b-7906-e1cc22701310%40oss.nttdata.com\n> \n> I agree - that variable name is also not great. I am open to making\n> improvements in that area and in others that have been suggested on\n> this thread, but my immediate goal is to figure out whether anyone\n> objects to me committing the posted patch. If nobody comes up with a\n> reason why it's a bad idea in the next few days, I'll plan to move\n> ahead with it.\n\nThat's fine with me.\n\nI still haven't found a way to lose the last checkpoint due to software\nfailure. Repeated promotion without having new checkpoints is safe as\nexpected. We don't remove WAL files unless a checkpoint completes, and\na checkpoint preserves segments back to the one containing its redo\npoint.\n\nIn short, I'm for it.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 28 Jul 2021 18:04:29 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: needless complexity in StartupXLOG"
},
{
"msg_contents": "Hi,\n\nOn 2021-07-26 12:12:53 -0400, Robert Haas wrote:\n> My first thought was that we should do the check unconditionally,\n> rather than just when bgwriterLaunched && LocalPromoteIsTriggered, and\n> ERROR if it fails. But then I wondered what the point of that would be\n> exactly. If we have such a bug -- and to the best of my knowledge\n> there's no evidence that we do -- there's no particular reason it\n> should only happen at the end of recovery. It could happen any time\n> the system -- or the user, or malicious space aliens -- remove files\n> from pg_wal, and we have no real idea about the timing of malicious\n> space alien activity, so doing the test here rather than anywhere else\n> just seems like a shot in the dark.\n\nYea. The history of that code being added doesn't suggest that there was\na concrete issue being addressed, from what I can tell.\n\n\n> So at the moment I am leaning toward the view that we should just\n> remove this check entirely, as in the attached, proposed patch.\n\n+1\n\n\n> Really, I think we should consider going further. If it's safe to\n> write an end-of-recovery record rather than a checkpoint, why not do\n> so all the time?\n\n+many. The current split doesn't make much sense. For one, it often is a huge\nissue if crash recovery takes a long time - why should we incur the cost that\nwe are OK avoiding during promotions? For another, end-of-recovery is a\ncrucial path for correctness, reducing the number of non-trivial paths is\ngood.\n\n\n> Imagine if instead of\n> all the hairy logic we have now we just replaced this whole if\n> (IsInRecovery) stanza with this:\n> \n> if (InRecovery)\n> CreateEndOfRecoveryRecord();\n> \n> That would be WAY easier to reason about than the rat's nest we have\n> here today. 
Now, I am not sure what it would take to get there, but I\n> think that is the direction we ought to be heading.\n\nWhat are we going to do in the single user ([1]) case in this awesome future?\nI guess we could just not create a checkpoint until single user mode is shut\ndown / creates a checkpoint for other reasons?\n\n\nGreetings,\n\nAndres Freund\n\n\n[1] I really wish somebody had the energy to just remove single user and\nbootstrap modes. The degree to which they increase complexity in the rest of\nthe system is entirely unreasonable. There's not actually any reason\nbootstrapping can't happen with checkpointer et al running, it's just\nautovacuum that'd need to be blocked.\n\n\n",
"msg_date": "Wed, 28 Jul 2021 12:28:00 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: needless complexity in StartupXLOG"
},
{
"msg_contents": "On Thu, Jul 29, 2021 at 3:28 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> [1] I really wish somebody had the energy to just remove single user and\n> bootstrap modes. The degree to which they increase complexity in the rest of\n> the system is entirely unreasonable. There's not actually any reason\n> bootstrapping can't happen with checkpointer et al running, it's just\n> autovacuum that'd need to be blocked.\n\nAny objection to adding an entry for that in the wiki TODO?\n\n\n",
"msg_date": "Thu, 29 Jul 2021 12:49:19 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: needless complexity in StartupXLOG"
},
{
"msg_contents": "On Mon, Jul 26, 2021 at 9:43 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> StartupXLOG() has code beginning around line 7900 of xlog.c that\n> decides, at the end of recovery, between four possible courses of\n> action. It either writes an end-of-recovery record, or requests a\n> checkpoint, or creates a checkpoint, or does nothing, depending on the\n> value of 3 flag variables, and also on whether we're still able to\n> read the last checkpoint record:\n>\n> checkPointLoc = ControlFile->checkPoint;\n>\n> /*\n> * Confirm the last checkpoint is available for us to recover\n> * from if we fail.\n> */\n> record = ReadCheckpointRecord(xlogreader,\n> checkPointLoc, 1, false);\n> if (record != NULL)\n> {\n> promoted = true;\n>\n> It seems to me that ReadCheckpointRecord() should never fail here. It\n> should always be true, even when we're not in recovery, that the last\n> checkpoint record is readable. If there's ever a situation where that\n> is not true, even for an instant, then a crash at that point will be\n> unrecoverable. Now, one way that it could happen is if we've got a bug\n> in the code someplace that removes WAL segments too soon. However, if\n> we have such a bug, we should fix it. What this code does is says \"oh,\n> I guess we removed the old checkpoint too soon, no big deal, let's\n> just be more aggressive about getting the next one done,\" which I do\n> not think is the right response. Aside from a bug, the only other way\n> I can see it happening is if someone is manually removing WAL segments\n> as the server is running through recovery, perhaps as some last-ditch\n> play to avoid running out of disk space. I don't think the server\n> needs to have - or should have - code to cater to such crazy\n> scenarios. 
Therefore I think that this check, at least in its current\n> form, is not sensible.\n>\n> My first thought was that we should do the check unconditionally,\n> rather than just when bgwriterLaunched && LocalPromoteIsTriggered, and\n> ERROR if it fails. But then I wondered what the point of that would be\n> exactly. If we have such a bug -- and to the best of my knowledge\n> there's no evidence that we do -- there's no particular reason it\n> should only happen at the end of recovery. It could happen any time\n> the system -- or the user, or malicious space aliens -- remove files\n> from pg_wal, and we have no real idea about the timing of malicious\n> space alien activity, so doing the test here rather than anywhere else\n> just seems like a shot in the dark. Perhaps the most logical place\n> would be to move it to CreateCheckPoint() just after we remove old\n> xlog files, but we don't have the xlogreader there, and anyway I don't\n> see how it's really going to help. What bug do we expect to catch by\n> removing files we think we don't need and then checking that we didn't\n> remove the files we think we do need? That seems more like grasping at\n> straws than a serious attempt to make things work better.\n>\n> So at the moment I am leaning toward the view that we should just\n> remove this check entirely, as in the attached, proposed patch.\n>\n\nCan we have an elog() fatal error or warning to make sure that the\nlast checkpoint is still readable? Since the case where the user\n(knowingly or unknowingly) or some buggy code has removed the WAL file\ncontaining the last checkpoint could be possible. If it is then we\nwould have a hard time finding out when we get further unexpected\nbehavior due to this. Thoughts?\n\n> Really, I think we should consider going further. If it's safe to\n> write an end-of-recovery record rather than a checkpoint, why not do\n> so all the time? 
Right now we fail to do that in the above-described\n> \"impossible\" scenario where the previous checkpoint record can't be\n> read, or if we're exiting archive recovery for some reason other than\n> a promotion request, or if we're in single-user mode, or if we're in\n> crash recovery. Presumably, people would like to start up the server\n> quickly in all of those scenarios, so the only reason not to use this\n> technology all the time is if we think it's safe in some scenarios and\n> not others. I can't think of a reason why it matters why we're exiting\n> archive recovery, nor can I think of a reason why it matters whether\n> we're in single user mode. The distinction between crash recovery and\n> archive recovery does seem to matter, but if anything the crash\n> recovery scenario seems simpler, because then there's only one\n> timeline involved.\n>\n> I realize that conservatism may have played a role in this code ending\n> up looking the way that it does; someone seems to have thought it\n> would be better not to rely on a new idea in all cases. From my point\n> of view, though, it's scary to have so many cases, especially cases\n> that don't seem like they should ever be reached. I think that\n> simplifying the logic here and trying to do the same things in as many\n> cases as we can will lead to better robustness. Imagine if instead of\n> all the hairy logic we have now we just replaced this whole if\n> (IsInRecovery) stanza with this:\n>\n> if (InRecovery)\n> CreateEndOfRecoveryRecord();\n\n+1, and do the checkpoint at the end unconditionally as we are doing\nfor the promotion.\n\nRegards,\nAmul\n\n\n",
"msg_date": "Thu, 29 Jul 2021 11:17:16 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: needless complexity in StartupXLOG"
},
{
"msg_contents": "On Wed, Jul 28, 2021 at 3:28 PM Andres Freund <andres@anarazel.de> wrote:\n> > Imagine if instead of\n> > all the hairy logic we have now we just replaced this whole if\n> > (IsInRecovery) stanza with this:\n> >\n> > if (InRecovery)\n> > CreateEndOfRecoveryRecord();\n> >\n> > That would be WAY easier to reason about than the rat's nest we have\n> > here today. Now, I am not sure what it would take to get there, but I\n> > think that is the direction we ought to be heading.\n>\n> What are we going to do in the single user ([1]) case in this awesome future?\n> I guess we could just not create a checkpoint until single user mode is shut\n> down / creates a checkpoint for other reasons?\n\nIt probably depends on who writes this thus-far-hypothetical patch,\nbut my thought is that we'd unconditionally request a checkpoint after\nwriting the end-of-recovery record, same as we do now if (promoted).\nIf we happened to be in single-user mode, then that checkpoint request\nwould turn into performing a checkpoint on the spot in the one and\nonly process we've got, but StartupXLOG() wouldn't really need to care\nwhat happens under the hood after it called RequestCheckpoint().\n\n> [1] I really wish somebody had the energy to just remove single user and\n> bootstrap modes. The degree to which they increase complexity in the rest of\n> the system is entirely unreasonable. There's not actually any reason\n> bootstrapping can't happen with checkpointer et al running, it's just\n> autovacuum that'd need to be blocked.\n\nI don't know whether this is the way forward or not. I think a lot of\nthe complexity of the current system is incidental rather than\nintrinsic. If I were going to work on this, I'd probably working on\ntrying to tidy up the code and reduce the number of places that need\nto care about IsUnderPostmaster and IsPostmasterEnvironment, rather\nthan trying to get rid of them. 
I suspect that's a necessary\nprerequisite step anyway, and not a trivial effort either.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 29 Jul 2021 11:49:45 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: needless complexity in StartupXLOG"
},
{
"msg_contents": "On Thu, Jul 29, 2021 at 1:47 AM Amul Sul <sulamul@gmail.com> wrote:\n> Can we have an elog() fatal error or warning to make sure that the\n> last checkpoint is still readable? Since the case where the user\n> (knowingly or unknowingly) or some buggy code has removed the WAL file\n> containing the last checkpoint could be possible. If it is then we\n> would have a hard time finding out when we get further unexpected\n> behavior due to this. Thoughts?\n\nSure, we could, but I don't think we should. Such crazy things can\nhappen any time, not just at the point where this check is happening.\nIt's not particularly more likely to happen here vs. any other place\nwhere we could insert a check. Should we check everywhere, all the\ntime, just in case?\n\n> > I realize that conservatism may have played a role in this code ending\n> > up looking the way that it does; someone seems to have thought it\n> > would be better not to rely on a new idea in all cases. From my point\n> > of view, though, it's scary to have so many cases, especially cases\n> > that don't seem like they should ever be reached. I think that\n> > simplifying the logic here and trying to do the same things in as many\n> > cases as we can will lead to better robustness. Imagine if instead of\n> > all the hairy logic we have now we just replaced this whole if\n> > (IsInRecovery) stanza with this:\n> >\n> > if (InRecovery)\n> > CreateEndOfRecoveryRecord();\n>\n> +1, and do the checkpoint at the end unconditionally as we are doing\n> for the promotion.\n\nYeah, that was my thought, too.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 29 Jul 2021 12:22:46 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: needless complexity in StartupXLOG"
},
{
"msg_contents": "Hi,\n\nOn 2021-07-29 12:49:19 +0800, Julien Rouhaud wrote:\n> On Thu, Jul 29, 2021 at 3:28 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > [1] I really wish somebody had the energy to just remove single user and\n> > bootstrap modes. The degree to which they increase complexity in the rest of\n> > the system is entirely unreasonable. There's not actually any reason\n> > bootstrapping can't happen with checkpointer et al running, it's just\n> > autovacuum that'd need to be blocked.\n> \n> Any objection to adding an entry for that in the wiki TODO?\n\nNot sure there's enough consensus on the idea for that. I personally\nthink that's a good approach at reducing relevant complexity, but I\ndon't know if anybody agrees...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 29 Jul 2021 12:16:09 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: needless complexity in StartupXLOG"
},
{
"msg_contents": "On Thu, Jul 29, 2021 at 3:16 PM Andres Freund <andres@anarazel.de> wrote:\n> Not sure there's enough concensus on the idea for that. I personally\n> think that's a good approach at reducing relevant complexity, but I\n> don't know if anybody agrees...\n\nThere does seem to be agreement on the proposed patch, so I have committed it.\n\nThanks to all for the discussion and the links to other relevant threads.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 30 Jul 2021 08:39:15 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: needless complexity in StartupXLOG"
},
{
"msg_contents": "On Thu, Jul 29, 2021 at 11:49 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Wed, Jul 28, 2021 at 3:28 PM Andres Freund <andres@anarazel.de> wrote:\n> > > Imagine if instead of\n> > > all the hairy logic we have now we just replaced this whole if\n> > > (IsInRecovery) stanza with this:\n> > >\n> > > if (InRecovery)\n> > > CreateEndOfRecoveryRecord();\n> > >\n> > > That would be WAY easier to reason about than the rat's nest we have\n> > > here today. Now, I am not sure what it would take to get there, but I\n> > > think that is the direction we ought to be heading.\n> >\n> > What are we going to do in the single user ([1]) case in this awesome future?\n> > I guess we could just not create a checkpoint until single user mode is shut\n> > down / creates a checkpoint for other reasons?\n>\n> It probably depends on who writes this thus-far-hypothetical patch,\n> but my thought is that we'd unconditionally request a checkpoint after\n> writing the end-of-recovery record, same as we do now if (promoted).\n> If we happened to be in single-user mode, then that checkpoint request\n> would turn into performing a checkpoint on the spot in the one and\n> only process we've got, but StartupXLOG() wouldn't really need to care\n> what happens under the hood after it called RequestCheckpoint().\n\nI decided to try writing a patch to use an end-of-recovery record\nrather than a checkpoint record in all cases. The patch itself was\npretty simple but didn't work. There are two different reasons why it\ndidn't work, which I'll explain in a minute. I'm not sure whether\nthere are any other problems; these are the only two things that cause\nproblems with 'make check-world', but that's hardly a guarantee of\nanything. 
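To summarize the shape of the change being tried, here is a toy model (hypothetical names, not the actual patch) of the simplification: the four-way decision collapses to an unconditional end-of-recovery record plus an unconditional checkpoint request, as in the quoted proposal and in Amul's suggested diff later in the thread.

```c
#include <stdbool.h>

/* Toy model (hypothetical names) of the simplification under test:
 * write an end-of-recovery record whenever we were in recovery,
 * then always request a checkpoint afterwards. */
typedef struct
{
    bool wrote_end_of_recovery_record;
    bool requested_checkpoint;
} EndOfRecoveryActions;

static EndOfRecoveryActions
end_of_recovery(bool in_recovery)
{
    EndOfRecoveryActions a;

    a.wrote_end_of_recovery_record = in_recovery;
    a.requested_checkpoint = true;  /* unconditional, even single-user */
    return a;
}
```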
Anyway, I thought it would be useful to report these issues\nfirst and hopefully get some feedback.\n\nThe first problem I hit was that GetRunningTransactionData() does\nAssert(TransactionIdIsNormal(CurrentRunningXacts->latestCompletedXid)).\nWhile that does seem like a superficially reasonable thing to assert,\nStartupXLOG() initializes latestCompletedXid by calling\nTransactionIdRetreat on the value extracted from checkPoint.nextXid,\nand the first-ever checkpoint that is written at the beginning of WAL\nhas a nextXid of 3, so when starting up from that checkpoint nextXid\nis 2, which is not a normal XID. When we try to create the next\ncheckpoint, CreateCheckPoint() does LogStandbySnapshot() which calls\nGetRunningTransactionData() and the assertion fails. In the current\ncode, we avoid this more or less accidentally because\nLogStandbySnapshot() is skipped when starting from a shutdown\ncheckpoint. If we write an end-of-recovery record and then trigger a\ncheckpoint afterwards, then we don't avoid the problem. Although I'm\nimpishly tempted to suggest that we #define SecondNormalTransactionId\n4 and then use that to create the first checkpoint instead of\nFirstNormalTransactionId -- naturally with no comments explaining why\nwe're doing it -- I think the real problem is that the assertion is\njust wrong. CurrentRunningXacts->latestCompletedXid shouldn't be\nInvalidTransactionId or BootstrapTransactionId, but\nFrozenTransactionId is a legal, if corner-case, value, at least as the\ncode stands today.\n\nUnfortunately we can't just relax the assertion, because the\nXLOG_RUNNING_XACTS record will eventually be handed to\nProcArrayApplyRecoveryInfo() for processing ... and that function\ncontains a matching assertion which would in turn fail. It in turn\npasses the value to MaintainLatestCompletedXidRecovery() which\ncontains yet another matching assertion, so the restriction to normal\nXIDs here looks pretty deliberate. 
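For illustration only, the failing arithmetic can be modeled in a few lines. The constant values below are as defined in transam.h; the real retreat logic also handles epochs and wraparound, which this sketch ignores:

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t TransactionId;

#define InvalidTransactionId     ((TransactionId) 0)
#define BootstrapTransactionId   ((TransactionId) 1)
#define FrozenTransactionId      ((TransactionId) 2)
#define FirstNormalTransactionId ((TransactionId) 3)

/* mirrors the TransactionIdIsNormal() test */
static bool
xid_is_normal(TransactionId xid)
{
    return xid >= FirstNormalTransactionId;
}

/* naive one-step retreat, as StartupXLOG effectively performs when
 * deriving latestCompletedXid from checkPoint.nextXid */
static TransactionId
xid_retreat(TransactionId xid)
{
    return xid - 1;
}
```

Starting from the first checkpoint's nextXid of 3, the retreat yields 2 (FrozenTransactionId), which is exactly the value that fails the normal-XID assertions described above.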
There are no comments, though, so\nthe reader is left to guess why. I see one problem:\nMaintainLatestCompletedXidRecovery uses FullXidRelativeTo, which\nexpects a normal XID. Perhaps it's best to just dodge the entire issue\nby skipping LogStandbySnapshot() if latestCompletedXid happens to be\n2, but that feels like a hack, because AFAICS the real problem is that\nStartupXLog() doesn't agree with the rest of the code on whether 2 is\na legal case, and maybe we ought to be storing a value that doesn't\nneed to be computed via TransactionIdRetreat().\n\nThe second problem I hit was a preexisting bug where, under\nwal_level=minimal, redo of a \"create tablespace\" record can blow away\ndata of which we have no other copy. See\nhttp://postgr.es/m/CA+TgmoaLO9ncuwvr2nN-J4VEP5XyAcy=zKiHxQzBbFRxxGxm0w@mail.gmail.com\nfor details. That bug happens to make src/test/recovery's\n018_wal_optimize.pl fail one of the tests, because the test starts and\nstops the server repeatedly, with the result that with the attached\npatch, we just keep writing end-of-recovery records and never getting\ntime to finish a checkpoint before the next shutdown, so every test\nreplays the CREATE TABLESPACE record and everything that previous\ntests did. The \"wal_level = minimal, SET TABLESPACE commit\nsubtransaction\" fails because it's the only one that (1) uses the\ntablespace for a new table, (2) commits, and (3) runs before a\ncheckpoint is manually forced.\n\nIt's also worth noting that if we go this way,\nCHECKPOINT_END_OF_RECOVERY should be ripped out entirely. We'd still\nbe triggering a checkpoint at the end of recovery, but because it\ncould be running concurrently with WAL-generating activity, it\nwouldn't be an end-of-recovery checkpoint in the sense that we now use\nthat term. In particular, you couldn't assume that no other write\ntransactions are running at the point when this checkpoint is\nperformed. 
I haven't yet tried ripping that out and doing so might\nreveal other problems.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 9 Aug 2021 15:00:40 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "using an end-of-recovery record in all cases"
},
{
"msg_contents": "On Mon, Aug 9, 2021 at 3:00 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> I decided to try writing a patch to use an end-of-recovery record\n> rather than a checkpoint record in all cases.\n>\n> The first problem I hit was that GetRunningTransactionData() does\n> Assert(TransactionIdIsNormal(CurrentRunningXacts->latestCompletedXid)).\n>\n> Unfortunately we can't just relax the assertion, because the\n> XLOG_RUNNING_XACTS record will eventually be handed to\n> ProcArrayApplyRecoveryInfo() for processing ... and that function\n> contains a matching assertion which would in turn fail. It in turn\n> passes the value to MaintainLatestCompletedXidRecovery() which\n> contains yet another matching assertion, so the restriction to normal\n> XIDs here looks pretty deliberate. There are no comments, though, so\n> the reader is left to guess why. I see one problem:\n> MaintainLatestCompletedXidRecovery uses FullXidRelativeTo, which\n> expects a normal XID. Perhaps it's best to just dodge the entire issue\n> by skipping LogStandbySnapshot() if latestCompletedXid happens to be\n> 2, but that feels like a hack, because AFAICS the real problem is that\n> StartupXLog() doesn't agree with the rest of the code on whether 2 is\n> a legal case, and maybe we ought to be storing a value that doesn't\n> need to be computed via TransactionIdRetreat().\n\nAnyone have any thoughts about this?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 2 Sep 2021 11:30:59 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: using an end-of-recovery record in all cases"
},
{
"msg_contents": "At Thu, 2 Sep 2021 11:30:59 -0400, Robert Haas <robertmhaas@gmail.com> wrote in \n> On Mon, Aug 9, 2021 at 3:00 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > I decided to try writing a patch to use an end-of-recovery record\n> > rather than a checkpoint record in all cases.\n> >\n> > The first problem I hit was that GetRunningTransactionData() does\n> > Assert(TransactionIdIsNormal(CurrentRunningXacts->latestCompletedXid)).\n> >\n> > Unfortunately we can't just relax the assertion, because the\n> > XLOG_RUNNING_XACTS record will eventually be handed to\n> > ProcArrayApplyRecoveryInfo() for processing ... and that function\n> > contains a matching assertion which would in turn fail. It in turn\n> > passes the value to MaintainLatestCompletedXidRecovery() which\n> > contains yet another matching assertion, so the restriction to normal\n> > XIDs here looks pretty deliberate. There are no comments, though, so\n> > the reader is left to guess why. I see one problem:\n> > MaintainLatestCompletedXidRecovery uses FullXidRelativeTo, which\n> > expects a normal XID. Perhaps it's best to just dodge the entire issue\n> > by skipping LogStandbySnapshot() if latestCompletedXid happens to be\n> > 2, but that feels like a hack, because AFAICS the real problem is that\n> > StartupXLog() doesn't agree with the rest of the code on whether 2 is\n> > a legal case, and maybe we ought to be storing a value that doesn't\n> > need to be computed via TransactionIdRetreat().\n> \n> Anyone have any thoughts about this?\n\nI tried to reproduce this but just replacing the end-of-recovery\ncheckpoint request with issuing an end-of-recovery record didn't cause\nmake check-world fail for me. Do you have an idea of any other\nrequirement to cause that?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 03 Sep 2021 13:52:53 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: using an end-of-recovery record in all cases"
},
{
"msg_contents": "On Fri, Sep 3, 2021 at 10:23 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Thu, 2 Sep 2021 11:30:59 -0400, Robert Haas <robertmhaas@gmail.com> wrote in\n> > On Mon, Aug 9, 2021 at 3:00 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > I decided to try writing a patch to use an end-of-recovery record\n> > > rather than a checkpoint record in all cases.\n> > >\n> > > The first problem I hit was that GetRunningTransactionData() does\n> > > Assert(TransactionIdIsNormal(CurrentRunningXacts->latestCompletedXid)).\n> > >\n> > > Unfortunately we can't just relax the assertion, because the\n> > > XLOG_RUNNING_XACTS record will eventually be handed to\n> > > ProcArrayApplyRecoveryInfo() for processing ... and that function\n> > > contains a matching assertion which would in turn fail. It in turn\n> > > passes the value to MaintainLatestCompletedXidRecovery() which\n> > > contains yet another matching assertion, so the restriction to normal\n> > > XIDs here looks pretty deliberate. There are no comments, though, so\n> > > the reader is left to guess why. I see one problem:\n> > > MaintainLatestCompletedXidRecovery uses FullXidRelativeTo, which\n> > > expects a normal XID. Perhaps it's best to just dodge the entire issue\n> > > by skipping LogStandbySnapshot() if latestCompletedXid happens to be\n> > > 2, but that feels like a hack, because AFAICS the real problem is that\n> > > StartupXLog() doesn't agree with the rest of the code on whether 2 is\n> > > a legal case, and maybe we ought to be storing a value that doesn't\n> > > need to be computed via TransactionIdRetreat().\n> >\n> > Anyone have any thoughts about this?\n>\n> I tried to reproduce this but just replacing the end-of-recovery\n> checkpoint request with issuing an end-of-recovery record didn't cause\n> make check-workd fail for me. 
Do you have an idea of any other\n> requirement to cause that?\n>\n\nYou might need the following change at the end of StartupXLOG():\n\n- if (promoted)\n- RequestCheckpoint(CHECKPOINT_FORCE);\n+ RequestCheckpoint(CHECKPOINT_FORCE);\n\nRegards,\nAmul\n\n\n",
"msg_date": "Fri, 3 Sep 2021 15:56:35 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: using an end-of-recovery record in all cases"
},
{
"msg_contents": "On Fri, Sep 3, 2021 at 12:52 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> I tried to reproduce this but just replacing the end-of-recovery\n> checkpoint request with issuing an end-of-recovery record didn't cause\n> make check-workd fail for me. Do you have an idea of any other\n> requirement to cause that?\n\nDid you use the patch I posted, or a different one?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 3 Sep 2021 10:13:53 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: using an end-of-recovery record in all cases"
},
{
"msg_contents": "On Tue, Aug 10, 2021 at 12:31 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Jul 29, 2021 at 11:49 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > On Wed, Jul 28, 2021 at 3:28 PM Andres Freund <andres@anarazel.de> wrote:\n> > > > Imagine if instead of\n> > > > all the hairy logic we have now we just replaced this whole if\n> > > > (IsInRecovery) stanza with this:\n> > > >\n> > > > if (InRecovery)\n> > > > CreateEndOfRecoveryRecord();\n> > > >\n> > > > That would be WAY easier to reason about than the rat's nest we have\n> > > > here today. Now, I am not sure what it would take to get there, but I\n> > > > think that is the direction we ought to be heading.\n> > >\n> > > What are we going to do in the single user ([1]) case in this awesome future?\n> > > I guess we could just not create a checkpoint until single user mode is shut\n> > > down / creates a checkpoint for other reasons?\n> >\n> > It probably depends on who writes this thus-far-hypothetical patch,\n> > but my thought is that we'd unconditionally request a checkpoint after\n> > writing the end-of-recovery record, same as we do now if (promoted).\n> > If we happened to be in single-user mode, then that checkpoint request\n> > would turn into performing a checkpoint on the spot in the one and\n> > only process we've got, but StartupXLOG() wouldn't really need to care\n> > what happens under the hood after it called RequestCheckpoint().\n>\n> I decided to try writing a patch to use an end-of-recovery record\n> rather than a checkpoint record in all cases. The patch itself was\n> pretty simple but didn't work. There are two different reasons why it\n> didn't work, which I'll explain in a minute. I'm not sure whether\n> there are any other problems; these are the only two things that cause\n> problems with 'make check-world', but that's hardly a guarantee of\n> anything. 
Anyway, I thought it would be useful to report these issues\n> first and hopefully get some feedback.\n>\n> The first problem I hit was that GetRunningTransactionData() does\n> Assert(TransactionIdIsNormal(CurrentRunningXacts->latestCompletedXid)).\n> While that does seem like a superficially reasonable thing to assert,\n> StartupXLOG() initializes latestCompletedXid by calling\n> TransactionIdRetreat on the value extracted from checkPoint.nextXid,\n> and the first-ever checkpoint that is written at the beginning of WAL\n> has a nextXid of 3, so when starting up from that checkpoint nextXid\n> is 2, which is not a normal XID. When we try to create the next\n> checkpoint, CreateCheckPoint() does LogStandbySnapshot() which calls\n> GetRunningTransactionData() and the assertion fails. In the current\n> code, we avoid this more or less accidentally because\n> LogStandbySnapshot() is skipped when starting from a shutdown\n> checkpoint. If we write an end-of-recovery record and then trigger a\n> checkpoint afterwards, then we don't avoid the problem. Although I'm\n> impishly tempted to suggest that we #define SecondNormalTransactionId\n> 4 and then use that to create the first checkpoint instead of\n> FirstNormalTransactionId -- naturally with no comments explaining why\n> we're doing it -- I think the real problem is that the assertion is\n> just wrong. CurrentRunningXacts->latestCompletedXid shouldn't be\n> InvalidTransactionId or BootstrapTransactionId, but\n> FrozenTransactionId is a legal, if corner-case, value, at least as the\n> code stands today.\n>\n> Unfortunately we can't just relax the assertion, because the\n> XLOG_RUNNING_XACTS record will eventually be handed to\n> ProcArrayApplyRecoveryInfo() for processing ... and that function\n> contains a matching assertion which would in turn fail. 
It in turn\n> passes the value to MaintainLatestCompletedXidRecovery() which\n> contains yet another matching assertion, so the restriction to normal\n> XIDs here looks pretty deliberate. There are no comments, though, so\n> the reader is left to guess why. I see one problem:\n> MaintainLatestCompletedXidRecovery uses FullXidRelativeTo, which\n> expects a normal XID. Perhaps it's best to just dodge the entire issue\n> by skipping LogStandbySnapshot() if latestCompletedXid happens to be\n> 2,\n>\n\nBy reading above explanation, it seems like it is better to skip\nLogStandbySnapshot() for this proposal. Can't we consider skipping\nLogStandbySnapshot() by passing some explicit flag-like\n'recovery_checkpoint' or something like that?\n\nI think there is some prior discussion in another thread related to\nthis work but it would be easier to follow if you can summarize the\nsame.\n\n\nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 4 Sep 2021 15:21:14 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: using an end-of-recovery record in all cases"
},
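[Editor's note: the arithmetic behind the failing assertion above can be modeled in a few lines of standalone C. The special-XID constants are copied from PostgreSQL's transam.h; `RetreatNearOrigin` is a simplified stand-in for the retreat described in the message (valid only near the start of the XID space, where no wraparound handling is needed), not the real macro.]

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t TransactionId;

/* Special XIDs, as defined in PostgreSQL's transam.h */
#define InvalidTransactionId     ((TransactionId) 0)
#define BootstrapTransactionId   ((TransactionId) 1)
#define FrozenTransactionId      ((TransactionId) 2)
#define FirstNormalTransactionId ((TransactionId) 3)

/* A "normal" XID is any value >= FirstNormalTransactionId */
static int TransactionIdIsNormal(TransactionId xid)
{
    return xid >= FirstNormalTransactionId;
}

/*
 * Simplified retreat: near the origin of the XID space there is no
 * wraparound to handle, so retreating from the first checkpoint's
 * nextXid (3) simply lands on 2, i.e. FrozenTransactionId.
 */
static TransactionId RetreatNearOrigin(TransactionId xid)
{
    return xid - 1;
}
```

Starting from the very first checkpoint, nextXid is 3, so latestCompletedXid becomes 2 — a legal but non-normal value — which is exactly what trips Assert(TransactionIdIsNormal(...)) in GetRunningTransactionData().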
{
"msg_contents": "At Fri, 3 Sep 2021 15:56:35 +0530, Amul Sul <sulamul@gmail.com> wrote in \n> On Fri, Sep 3, 2021 at 10:23 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> You might need the following change at the end of StartupXLOG():\n> \n> - if (promoted)\n> - RequestCheckpoint(CHECKPOINT_FORCE);\n> + RequestCheckpoint(CHECKPOINT_FORCE);\n\nAt Fri, 3 Sep 2021 10:13:53 -0400, Robert Haas <robertmhaas@gmail.com> wrote in \n> Did you use the patch I posted, or a different one\n\nThanks to both of you. That was my mistake.\n\nBut even with the patch (with the 018_..pl test file removed), I\ncouldn't get check-world to fail. I'll retry later.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 07 Sep 2021 16:26:33 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: using an end-of-recovery record in all cases"
},
{
"msg_contents": "I was trying to understand the v1 patch and found that at the end\nRequestCheckpoint() is called unconditionally, I think that should\nhave been called if REDO had performed, here is the snip from the v1\npatch:\n\n /*\n- * If this was a promotion, request an (online) checkpoint now. This isn't\n- * required for consistency, but the last restartpoint might be far back,\n- * and in case of a crash, recovering from it might take a longer than is\n- * appropriate now that we're not in standby mode anymore.\n+ * Request an (online) checkpoint now. Note that, until this is complete,\n+ * a crash would start replay from the same WAL location we did, or from\n+ * the last restartpoint that completed. We don't want to let that\n+ * situation persist for longer than necessary, since users don't like\n+ * long recovery times. On the other hand, they also want to be able to\n+ * start doing useful work again as quickly as possible. Therfore, we\n+ * don't pass CHECKPOINT_IMMEDIATE to avoid bogging down the system.\n+ *\n+ * Note that the consequence of requesting a checkpoint here only after\n+ * we've allowed WAL writes is that a single checkpoint cycle can span\n+ * multiple server lifetimes. So for example if you want to something to\n+ * happen at least once per checkpoint cycle or at most once per\n+ * checkpoint cycle, you have to consider what happens if the server\n+ * is restarted someplace in the middle.\n */\n- if (promoted)\n- RequestCheckpoint(CHECKPOINT_FORCE);\n+ RequestCheckpoint(CHECKPOINT_FORCE);\n\nWhen I try to call that conditionally like attached, I don't see any\nregression failure, correct me if I am missing something here.\n\nRegards,\nAmul",
"msg_date": "Tue, 5 Oct 2021 17:13:41 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: using an end-of-recovery record in all cases"
},
{
"msg_contents": "On Tue, Oct 5, 2021 at 7:44 AM Amul Sul <sulamul@gmail.com> wrote:\n> I was trying to understand the v1 patch and found that at the end\n> RequestCheckpoint() is called unconditionally, I think that should\n> have been called if REDO had performed,\n\nYou're right. But I don't think we need an extra variable like this,\nright? We can just test InRecovery?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 5 Oct 2021 11:34:11 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: using an end-of-recovery record in all cases"
},
{
"msg_contents": "On Tue, 5 Oct 2021 at 9:04 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Tue, Oct 5, 2021 at 7:44 AM Amul Sul <sulamul@gmail.com> wrote:\n> > I was trying to understand the v1 patch and found that at the end\n> > RequestCheckpoint() is called unconditionally, I think that should\n> > have been called if REDO had performed,\n>\n> You're right. But I don't think we need an extra variable like this,\n> right? We can just test InRecovery?\n\n\nNo, the InRecovery flag gets cleared before this point. I think we can use\nlastReplayedEndRecPtr, which you suggested in the other thread.\n\nRegards,\nAmul\n-- \nRegards,\nAmul Sul\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 5 Oct 2021 22:11:10 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: using an end-of-recovery record in all cases"
},
{
"msg_contents": "On Tue, Oct 5, 2021 at 12:41 PM Amul Sul <sulamul@gmail.com> wrote:\n> No, InRecovery flag get cleared before this point. I think, we can use lastReplayedEndRecPtr what you have suggested in other thread.\n\nHmm, right, that makes sense. Perhaps I should start remembering what\nI said in my own emails.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 5 Oct 2021 13:12:03 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: using an end-of-recovery record in all cases"
},
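[Editor's note: the condition agreed on in this exchange can be sketched in standalone C. The type and the Invalid/IsInvalid definitions mirror PostgreSQL's xlogdefs.h; the predicate itself is only an illustration of the idea (REDO ran iff the replay pointer was ever advanced), not the actual patch.]

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t XLogRecPtr;

#define InvalidXLogRecPtr      ((XLogRecPtr) 0)
#define XLogRecPtrIsInvalid(r) ((r) == InvalidXLogRecPtr)

/*
 * By the point StartupXLOG() would request the post-recovery online
 * checkpoint, InRecovery has already been cleared, but the last-replayed
 * record pointer still tells us whether any WAL was replayed: it stays
 * invalid after a clean shutdown and is advanced whenever REDO runs.
 */
static int ShouldRequestRecoveryCheckpoint(XLogRecPtr lastReplayedEndRecPtr)
{
    return !XLogRecPtrIsInvalid(lastReplayedEndRecPtr);
}
```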
{
"msg_contents": "On Tue, Oct 5, 2021 at 10:42 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Oct 5, 2021 at 12:41 PM Amul Sul <sulamul@gmail.com> wrote:\n> > No, InRecovery flag get cleared before this point. I think, we can use lastReplayedEndRecPtr what you have suggested in other thread.\n>\n> Hmm, right, that makes sense. Perhaps I should start remembering what\n> I said in my own emails.\n>\n\nHere I end up with the attached version where I have dropped the\nchanges for standby.c and 018_wal_optimize.pl files. Also, I am not\nsure that we should have the changes for bgwriter.c and slot.c in this\npatch, but that's not touched.\n\nRegards,\nAmul",
"msg_date": "Wed, 6 Oct 2021 19:24:23 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: using an end-of-recovery record in all cases"
},
{
"msg_contents": "On Wed, Oct 6, 2021 at 7:24 PM Amul Sul <sulamul@gmail.com> wrote:\n>\n> On Tue, Oct 5, 2021 at 10:42 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Tue, Oct 5, 2021 at 12:41 PM Amul Sul <sulamul@gmail.com> wrote:\n> > > No, InRecovery flag get cleared before this point. I think, we can use lastReplayedEndRecPtr what you have suggested in other thread.\n> >\n> > Hmm, right, that makes sense. Perhaps I should start remembering what\n> > I said in my own emails.\n> >\n>\n> Here I end up with the attached version where I have dropped the\n> changes for standby.c and 018_wal_optimize.pl files. Also, I am not\n> sure that we should have the changes for bgwriter.c and slot.c in this\n> patch, but that's not touched.\n>\n\nAttached is the rebased and updated version. The patch removes the\nnewly introduced PerformRecoveryXLogAction() function. In addition to\nthat, removed the CHECKPOINT_END_OF_RECOVERY flag and its related\ncode. Also, dropped changes for bgwriter.c and slot.c in this patch, which\nseem unrelated to this work.\n\nRegards,\nAmul",
"msg_date": "Mon, 18 Oct 2021 10:56:53 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: using an end-of-recovery record in all cases"
},
{
"msg_contents": "Hi,\n\nOn Mon, Oct 18, 2021 at 10:56:53AM +0530, Amul Sul wrote:\n> \n> Attached is the rebased and updated version. The patch removes the\n> newly introduced PerformRecoveryXLogAction() function. In addition to\n> that, removed the CHECKPOINT_END_OF_RECOVERY flag and its related\n> code. Also, dropped changes for bgwriter.c and slot.c in this patch, which\n> seem unrelated to this work.\n\nThe cfbot reports that this version of the patch doesn't apply anymore:\n\nhttp://cfbot.cputube.org/patch_36_3365.log\n=== Applying patches on top of PostgreSQL commit ID 0c53a6658e47217ad3dd416a5543fc87c3ecfd58 ===\n=== applying patch ./v3-0001-Always-use-an-end-of-recovery-record-rather-than-.patch\npatching file src/backend/access/transam/xlog.c\n[...]\nHunk #14 FAILED at 9061.\nHunk #15 FAILED at 9241.\n2 out of 15 hunks FAILED -- saving rejects to file src/backend/access/transam/xlog.c.rej\n\nCan you send a rebased version? In the meantime I will switch the cf entry to\nWaiting on Author.\n\n\n",
"msg_date": "Sun, 16 Jan 2022 12:52:28 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: using an end-of-recovery record in all cases"
},
{
"msg_contents": "On Sat, Jan 15, 2022 at 11:52 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> The cfbot reports that this version of the patch doesn't apply anymore:\n\nHere is a new version of the patch which, unlike v1, I think is\nsomething we could seriously consider applying (not before v16, of\ncourse). It now removes CHECKPOINT_END_OF_RECOVERY completely, and I\nattach a second patch as well which nukes checkPoint.PrevTimeLineID as\nwell.\n\nI mentioned two problems with $SUBJECT in the first email with this\nsubject line. One was a bug, which Noah has since fixed (thanks,\nNoah). The other problem is that LogStandbySnapshot() and a bunch of\nits friends expect latestCompletedXid to always be a normal XID, which\nis a problem because (1) we currently set nextXid to 3 and (2) at\nstartup, we compute latestCompletedXid = nextXid - 1. The current code\ndodges this kind of accidentally: the checkpoint that happens at\nstartup is a \"shutdown checkpoint\" and thus skips logging a standby\nsnapshot, since a shutdown checkpoint is a sure indicator that there\nare no running transactions. With the changes, the checkpoint at\nstartup happens after we've started allowing write transactions, and\nthus a standby snapshot needs to be logged also. In the cases where\nthe end-of-recovery record was already being used, the problem could\nhave happened already, except for the fact that those cases involve a\nstandby promotion, which doesn't happen during initdb. I explored a\nfew possible ways of solving this problem.\n\nThe first thing I considered was replacing latestCompletedXid with a\nname like incompleteXidHorizon. The idea is that, where\nlatestCompletedXid is the highest XID that is known to have committed\nor aborted, incompleteXidHorizon would be the lowest XID such that all\nknown commits or aborts are for lower XIDs. In effect,\nincompleteXidHorizon would be latestCompletedXid + 1. 
Since\nlatestCompletedXid is always normal except when it's 2,\nincompleteXidHorizon would be normal in all cases. Initially this\nseemed fairly promising, but it kind of fell down when I realized that\nwe copy latestCompletedXid into\nComputeXidHorizonsResult.latest_completed. That seemed to me to make\nthe consequences of the change a bit more far-reaching than I liked.\nAlso, it wasn't entirely clear to me that I wouldn't be introducing\nany off-by-one errors into various wraparound calculations with this\napproach.\n\nThe second thing I considered was skipping LogStandbySnapshot() during\ninitdb. There are two ways of doing this and neither of them seem that\ngreat to me. Something that does work is to skip LogStandbySnapshot()\nwhen in single user mode, but there is no particular reason why WAL\ngenerated in single user mode couldn't be replayed on a standby, so\nthis doesn't seem great. It's too big a hammer for what we really\nwant, which is just to suppress this during initdb. Another way of\napproaching it is to skip it in bootstrap mode, but that actually\ndoesn't work: initdb then fails during the post-bootstrapping step\nrather than during bootstrapping. I thought about patching around that\nby forcing the code that generates checkpoint records to forcibly\nadvance an XID of 3 to 4, but that seemed like working around the\nproblem from the wrong end.\n\nSo ... I decided that the best approach here seems to be to just skip\nFirstNormalTransactionId and use FirstNormalTransactionId + 1 for the\nfirst write transaction that the cluster ever processes. That's very\nsimple and doesn't seem likely to break anything else. 
On the downside\nit seems a bit grotty, but I don't see anything better, and on the\nwhole, I think with this approach we come out substantially ahead.\n0001 removes 3 times as many lines as it inserts, and 0002 saves a few\nmore lines of code.\n\nNow, I still don't really know that there isn't some theoretical\ndifficulty here that makes this whole approach a non-starter, but I\nalso can't think of what it might be. If the promotion case, which has\nused the end-of-recovery record for many years, is basically safe,\ndespite the fact that it switches TLIs, then it seems to me that the\ncrash recovery case, which doesn't have that complication, ought to be\nsafe too. But I might well be missing something, so if you see a\nproblem, please speak up!\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 18 Apr 2022 16:44:03 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: using an end-of-recovery record in all cases"
},
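[Editor's note: the effect of burning one extra XID at initdb time, as proposed above, can be checked with standalone arithmetic. The constants are copied from transam.h; `StartupLatestCompletedXid` is a hypothetical helper naming the nextXid - 1 computation described in the message, not PostgreSQL code.]

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t TransactionId;

#define FrozenTransactionId      ((TransactionId) 2)
#define FirstNormalTransactionId ((TransactionId) 3)

static int TransactionIdIsNormal(TransactionId xid)
{
    return xid >= FirstNormalTransactionId;
}

/*
 * Startup computes latestCompletedXid = nextXid - 1.  If the first
 * assignable XID is 3, that yields FrozenTransactionId (2) and the
 * standby-snapshot assertions fail; starting the counter one higher,
 * at FirstNormalTransactionId + 1, keeps the result normal.
 */
static TransactionId StartupLatestCompletedXid(TransactionId nextXid)
{
    return nextXid - 1;
}
```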
{
"msg_contents": "On Tue, Apr 19, 2022 at 2:14 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Sat, Jan 15, 2022 at 11:52 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > The cfbot reports that this version of the patch doesn't apply anymore:\n>\n> Here is a new version of the patch which, unlike v1, I think is\n> something we could seriously consider applying (not before v16, of\n> course). It now removes CHECKPOINT_END_OF_RECOVERY completely, and I\n> attach a second patch as well which nukes checkPoint.PrevTimeLineID as\n> well.\n>\n> I mentioned two problems with $SUBJECT in the first email with this\n> subject line. One was a bug, which Noah has since fixed (thanks,\n> Noah). The other problem is that LogStandbySnapshot() and a bunch of\n> its friends expect latestCompletedXid to always be a normal XID, which\n> is a problem because (1) we currently set nextXid to 3 and (2) at\n> startup, we compute latestCompletedXid = nextXid - 1. The current code\n> dodges this kind of accidentally: the checkpoint that happens at\n> startup is a \"shutdown checkpoint\" and thus skips logging a standby\n> snapshot, since a shutdown checkpoint is a sure indicator that there\n> are no running transactions. With the changes, the checkpoint at\n> startup happens after we've started allowing write transactions, and\n> thus a standby snapshot needs to be logged also. In the cases where\n> the end-of-recovery record was already being used, the problem could\n> have happened already, except for the fact that those cases involve a\n> standby promotion, which doesn't happen during initdb. I explored a\n> few possible ways of solving this problem.\n>\n> The first thing I considered was replacing latestCompletedXid with a\n> name like incompleteXidHorizon. The idea is that, where\n> latestCompletedXid is the highest XID that is known to have committed\n> or aborted, incompleteXidHorizon would be the lowest XID such that all\n> known commits or aborts are for lower XIDs. 
In effect,\n> incompleteXidHorizon would be latestCompletedXid + 1. Since\n> latestCompletedXid is always normal except when it's 2,\n> incompleteXidHorizon would be normal in all cases. Initially this\n> seemed fairly promising, but it kind of fell down when I realized that\n> we copy latestCompletedXid into\n> ComputeXidHorizonsResult.latest_completed. That seemed to me to make\n> the consequences of the change a bit more far-reaching than I liked.\n> Also, it wasn't entirely clear to me that I wouldn't be introducing\n> any off-by-one errors into various wraparound calculations with this\n> approach.\n>\n> The second thing I considered was skipping LogStandbySnapshot() during\n> initdb. There are two ways of doing this and neither of them seem that\n> great to me. Something that does work is to skip LogStandbySnapshot()\n> when in single user mode, but there is no particular reason why WAL\n> generated in single user mode couldn't be replayed on a standby, so\n> this doesn't seem great. It's too big a hammer for what we really\n> want, which is just to suppress this during initdb. Another way of\n> approaching it is to skip it in bootstrap mode, but that actually\n> doesn't work: initdb then fails during the post-bootstrapping step\n> rather than during bootstrapping. I thought about patching around that\n> by forcing the code that generates checkpoint records to forcibly\n> advance an XID of 3 to 4, but that seemed like working around the\n> problem from the wrong end.\n>\n> So ... I decided that the best approach here seems to be to just skip\n> FirstNormalTransactionId and use FirstNormalTransactionId + 1 for the\n> first write transaction that the cluster ever processes. That's very\n> simple and doesn't seem likely to break anything else. 
On the downside\n> it seems a bit grotty, but I don't see anything better, and on the\n> whole, I think with this approach we come out substantially ahead.\n> 0001 removes 3 times as many lines as it inserts, and 0002 saves a few\n> more lines of code.\n>\n> Now, I still don't really know that there isn't some theoretical\n> difficulty here that makes this whole approach a non-starter, but I\n> also can't think of what it might be. If the promotion case, which has\n> used the end-of-recovery record for many years, is basically safe,\n> despite the fact that it switches TLIs, then it seems to me that the\n> crash recovery case, which doesn't have that complication, ought to be\n> safe too. But I might well be missing something, so if you see a\n> problem, please speak up!\n>\n\n /*\n- * If this was a promotion, request an (online) checkpoint now. This isn't\n- * required for consistency, but the last restartpoint might be far back,\n- * and in case of a crash, recovering from it might take a longer than is\n- * appropriate now that we're not in standby mode anymore.\n+ * Request an (online) checkpoint now. This isn't required for consistency,\n+ * but the last restartpoint might be far back, and in case of a crash,\n+ * recovering from it might take a longer than is appropriate now that\n+ * we're not in standby mode anymore.\n */\n- if (promoted)\n- RequestCheckpoint(CHECKPOINT_FORCE);\n+ RequestCheckpoint(CHECKPOINT_FORCE);\n }\n\nI think RequestCheckpoint() should be called conditionally. What is the need\nof the checkpoint if we haven't been through the recovery, in other words,\nstarting up from a clean shutdown?\n\nRegards,\nAmul\n\n\n",
"msg_date": "Tue, 19 Apr 2022 09:26:00 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: using an end-of-recovery record in all cases"
},
{
"msg_contents": "On Mon, Apr 18, 2022 at 11:56 PM Amul Sul <sulamul@gmail.com> wrote:\n> I think RequestCheckpoint() should be called conditionally. What is the need\n> of the checkpoint if we haven't been through the recovery, in other words,\n> starting up from a clean shutdown?\n\nGood point. v5 attached.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 19 Apr 2022 08:30:26 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: using an end-of-recovery record in all cases"
},
{
"msg_contents": "On Mon, Apr 18, 2022 at 04:44:03PM -0400, Robert Haas wrote:\n> Here is a new version of the patch which, unlike v1, I think is\n> something we could seriously consider applying (not before v16, of\n> course). It now removes CHECKPOINT_END_OF_RECOVERY completely, and I\n> attach a second patch as well which nukes checkPoint.PrevTimeLineID as\n> well.\n\nI'd like to add a big +1 for this change. IIUC this should help with some\nof the problems I've noted elsewhere [0].\n\n> I mentioned two problems with $SUBJECT in the first email with this\n> subject line. One was a bug, which Noah has since fixed (thanks,\n> Noah). The other problem is that LogStandbySnapshot() and a bunch of\n> its friends expect latestCompletedXid to always be a normal XID, which\n> is a problem because (1) we currently set nextXid to 3 and (2) at\n> startup, we compute latestCompletedXid = nextXid - 1. The current code\n> dodges this kind of accidentally: the checkpoint that happens at\n> startup is a \"shutdown checkpoint\" and thus skips logging a standby\n> snapshot, since a shutdown checkpoint is a sure indicator that there\n> are no running transactions. With the changes, the checkpoint at\n> startup happens after we've started allowing write transactions, and\n> thus a standby snapshot needs to be logged also. In the cases where\n> the end-of-recovery record was already being used, the problem could\n> have happened already, except for the fact that those cases involve a\n> standby promotion, which doesn't happen during initdb. I explored a\n> few possible ways of solving this problem.\n\nShouldn't latestCompletedXid be set to MaxTransactionId in this case? Or\nis this related to the logic in FullTransactionIdRetreat() that avoids\nskipping over the \"actual\" special transaction IDs?\n\n> So ... 
I decided that the best approach here seems to be to just skip\n> FirstNormalTransactionId and use FirstNormalTransactionId + 1 for the\n> first write transaction that the cluster ever processes. That's very\n> simple and doesn't seem likely to break anything else. On the downside\n> it seems a bit grotty, but I don't see anything better, and on the\n> whole, I think with this approach we come out substantially ahead.\n> 0001 removes 3 times as many lines as it inserts, and 0002 saves a few\n> more lines of code.\n\nThis doesn't seem all that bad to me. It's a little hacky, but it's very\neasy to understand and only happens once per initdb. I don't think it's\nworth any extra complexity.\n\n> Now, I still don't really know that there isn't some theoretical\n> difficulty here that makes this whole approach a non-starter, but I\n> also can't think of what it might be. If the promotion case, which has\n> used the end-of-recovery record for many years, is basically safe,\n> despite the fact that it switches TLIs, then it seems to me that the\n> crash recovery case, which doesn't have that complication, ought to be\n> safe too. But I might well be missing something, so if you see a\n> problem, please speak up!\n\nYour reasoning seems sound to me.\n\n[0] https://postgr.es/m/C1EE64B0-D4DB-40F3-98C8-0CED324D34CB%40amazon.com\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 19 Apr 2022 13:37:59 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: using an end-of-recovery record in all cases"
},
{
"msg_contents": "At Tue, 19 Apr 2022 13:37:59 -0700, Nathan Bossart <nathandbossart@gmail.com> wrote in \n> On Mon, Apr 18, 2022 at 04:44:03PM -0400, Robert Haas wrote:\n> > Here is a new version of the patch which, unlike v1, I think is\n> > something we could seriously consider applying (not before v16, of\n> > course). It now removes CHECKPOINT_END_OF_RECOVERY completely, and I\n> > attach a second patch as well which nukes checkPoint.PrevTimeLineID as\n> > well.\n> \n> I'd like to add a big +1 for this change. IIUC this should help with some\n> of the problems I've noted elsewhere [0].\n\nAgreed.\n\n> > I mentioned two problems with $SUBJECT in the first email with this\n> > subject line. One was a bug, which Noah has since fixed (thanks,\n> > Noah). The other problem is that LogStandbySnapshot() and a bunch of\n> > its friends expect latestCompletedXid to always be a normal XID, which\n> > is a problem because (1) we currently set nextXid to 3 and (2) at\n> > startup, we compute latestCompletedXid = nextXid - 1. The current code\n> > dodges this kind of accidentally: the checkpoint that happens at\n> > startup is a \"shutdown checkpoint\" and thus skips logging a standby\n> > snapshot, since a shutdown checkpoint is a sure indicator that there\n> > are no running transactions. With the changes, the checkpoint at\n> > startup happens after we've started allowing write transactions, and\n> > thus a standby snapshot needs to be logged also. In the cases where\n> > the end-of-recovery record was already being used, the problem could\n> > have happened already, except for the fact that those cases involve a\n> > standby promotion, which doesn't happen during initdb. I explored a\n> > few possible ways of solving this problem.\n> \n> Shouldn't latestCompletedXid be set to MaxTransactionId in this case? 
Or\n> is this related to the logic in FullTransactionIdRetreat() that avoids\n> skipping over the \"actual\" special transaction IDs?\n\nAs a result, FullTransactionIdRetreat(FirstNormalFullTransactionId)\nresults in FrozenTransactionId, which looks odd. It seems to me it\nrather should be InvalidFullTransactionId, or we should simply assert-out\nthat case. But incrementing the very first xid avoids all that\ncomplexity. It is somewhat hacky but very simple and understandable.\n\n> > So ... I decided that the best approach here seems to be to just skip\n> > FirstNormalTransactionId and use FirstNormalTransactionId + 1 for the\n> > first write transaction that the cluster ever processes. That's very\n> > simple and doesn't seem likely to break anything else. On the downside\n> > it seems a bit grotty, but I don't see anything better, and on the\n> > whole, I think with this approach we come out substantially ahead.\n> > 0001 removes 3 times as many lines as it inserts, and 0002 saves a few\n> > more lines of code.\n> \n> This doesn't seem all that bad to me. It's a little hacky, but it's very\n> easy to understand and only happens once per initdb. I don't think it's\n> worth any extra complexity.\n\n+1.\n\n> > Now, I still don't really know that there isn't some theoretical\n> > difficulty here that makes this whole approach a non-starter, but I\n> > also can't think of what it might be. If the promotion case, which has\n> > used the end-of-recovery record for many years, is basically safe,\n> > despite the fact that it switches TLIs, then it seems to me that the\n> > crash recovery case, which doesn't have that complication, ought to be\n> > safe too. But I might well be missing something, so if you see a\n> > problem, please speak up!\n> \n> Your reasoning seems sound to me.\n> \n> [0] https://postgr.es/m/C1EE64B0-D4DB-40F3-98C8-0CED324D34CB%40amazon.com\n\nFWIW, I don't find a flaw in the reasoning either.\n\nBy the way, do we need to keep CheckPoint.PrevTimeLineID? 
It is always\nthe same value with ThisTimeLineID. The most dubious part is\nApplyWalRecord but XLOG_CHECKPOINT_SHUTDOWN no longer induces timeline\nswitch.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 20 Apr 2022 10:41:07 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: using an end-of-recovery record in all cases"
},
{
"msg_contents": " On Tue, Apr 19, 2022 at 2:14 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Sat, Jan 15, 2022 at 11:52 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > The cfbot reports that this version of the patch doesn't apply anymore:\n>\n> Here is a new version of the patch which, unlike v1, I think is\n> something we could seriously consider applying (not before v16, of\n> course). It now removes CHECKPOINT_END_OF_RECOVERY completely, and I\n> attach a second patch as well which nukes checkPoint.PrevTimeLineID as\n> well.\n>\n> I mentioned two problems with $SUBJECT in the first email with this\n> subject line. One was a bug, which Noah has since fixed (thanks,\n> Noah). The other problem is that LogStandbySnapshot() and a bunch of\n> its friends expect latestCompletedXid to always be a normal XID, which\n> is a problem because (1) we currently set nextXid to 3 and (2) at\n> startup, we compute latestCompletedXid = nextXid - 1. The current code\n> dodges this kind of accidentally: the checkpoint that happens at\n> startup is a \"shutdown checkpoint\" and thus skips logging a standby\n> snapshot, since a shutdown checkpoint is a sure indicator that there\n> are no running transactions. With the changes, the checkpoint at\n> startup happens after we've started allowing write transactions, and\n> thus a standby snapshot needs to be logged also. In the cases where\n> the end-of-recovery record was already being used, the problem could\n> have happened already, except for the fact that those cases involve a\n> standby promotion, which doesn't happen during initdb. I explored a\n> few possible ways of solving this problem.\n>\n> The first thing I considered was replacing latestCompletedXid with a\n> name like incompleteXidHorizon. The idea is that, where\n> latestCompletedXid is the highest XID that is known to have committed\n> or aborted, incompleteXidHorizon would be the lowest XID such that all\n> known commits or aborts are for lower XIDs. 
In effect,\n> incompleteXidHorizon would be latestCompletedXid + 1. Since\n> latestCompletedXid is always normal except when it's 2,\n> incompleteXidHorizon would be normal in all cases. Initially this\n> seemed fairly promising, but it kind of fell down when I realized that\n> we copy latestCompletedXid into\n> ComputeXidHorizonsResult.latest_completed. That seemed to me to make\n> the consequences of the change a bit more far-reaching than I liked.\n> Also, it wasn't entirely clear to me that I wouldn't be introducing\n> any off-by-one errors into various wraparound calculations with this\n> approach.\n>\n> The second thing I considered was skipping LogStandbySnapshot() during\n> initdb. There are two ways of doing this and neither of them seem that\n> great to me. Something that does work is to skip LogStandbySnapshot()\n> when in single user mode, but there is no particular reason why WAL\n> generated in single user mode couldn't be replayed on a standby, so\n> this doesn't seem great. It's too big a hammer for what we really\n> want, which is just to suppress this during initdb. Another way of\n> approaching it is to skip it in bootstrap mode, but that actually\n> doesn't work: initdb then fails during the post-bootstrapping step\n> rather than during bootstrapping. I thought about patching around that\n> by forcing the code that generates checkpoint records to forcibly\n> advance an XID of 3 to 4, but that seemed like working around the\n> problem from the wrong end.\n>\n> So ... I decided that the best approach here seems to be to just skip\n> FirstNormalTransactionId and use FirstNormalTransactionId + 1 for the\n> first write transaction that the cluster ever processes. That's very\n> simple and doesn't seem likely to break anything else. 
On the downside\n> it seems a bit grotty, but I don't see anything better, and on the\n> whole, I think with this approach we come out substantially ahead.\n\nIIUC, the failure was something like this on initdb:\n\nrunning bootstrap script ... TRAP:\nFailedAssertion(\"TransactionIdIsNormal(CurrentRunningXacts->latestCompletedXid)\",\nFile: \"procarray.c\", Line: 2892, PID: 60363)\n\n/bin/postgres(ExceptionalCondition+0xb9)[0xb3917d]\n/bin/postgres(GetRunningTransactionData+0x36c)[0x96aa26]\n/bin/postgres(LogStandbySnapshot+0x64)[0x974393]\n/bin/postgres(CreateCheckPoint+0x67f)[0x5928bf]\n/bin/postgres(RequestCheckpoint+0x26)[0x8ca649]\n/bin/postgres(StartupXLOG+0xf51)[0x591126]\n/bin/postgres(InitPostgres+0x188)[0xb4f2ac]\n/bin/postgres(BootstrapModeMain+0x4d3)[0x5ac6de]\n/bin/postgres(main+0x275)[0x7ca72e]\n/lib64/libc.so.6(__libc_start_main+0xf5)[0x7f71af82d445]\n/bin/postgres[0x48aae9]\nchild process was terminated by signal 6: Aborted\ninitdb: removing data directory \"/inst/data\"\n\nThat was happening because RequestCheckpoint() was called from StartupXLOG()\nunconditionally, but with the v5 patch that is not true.\n\nIf my understanding is correct then we don't need any handling\nfor latestCompletedXid, at least in this patch.\n\nRegards,\nAmul\n\n\n",
"msg_date": "Wed, 20 Apr 2022 12:24:07 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: using an end-of-recovery record in all cases"
},
{
"msg_contents": "On Tue, Apr 19, 2022 at 4:38 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> Shouldn't latestCompletedXid be set to MaxTransactionId in this case? Or\n> is this related to the logic in FullTransactionIdRetreat() that avoids\n> skipping over the \"actual\" special transaction IDs?\n\nThe problem here is this code:\n\n /* also initialize latestCompletedXid, to nextXid - 1 */\n LWLockAcquire(ProcArrayLock, LW_EXCLUSIVE);\n ShmemVariableCache->latestCompletedXid = ShmemVariableCache->nextXid;\n FullTransactionIdRetreat(&ShmemVariableCache->latestCompletedXid);\n LWLockRelease(ProcArrayLock);\n\nIf nextXid is 3, then latestCompletedXid gets 2. But in\nGetRunningTransactionData:\n\n Assert(TransactionIdIsNormal(CurrentRunningXacts->latestCompletedXid));\n\n> Your reasoning seems sound to me.\n\nI was talking with Thomas Munro yesterday and he thinks there is a\nproblem with relfilenode reuse here. In normal running, when a\nrelation is dropped, we leave behind a 0-length file until the next\ncheckpoint; this keeps that relfilenode from being used even if the\nOID counter wraps around. If we didn't do that, then imagine that\nwhile running with wal_level=minimal, we drop an existing relation,\ncreate a new relation with the same OID, load some data into it, and\ncrash, all within the same checkpoint cycle, then we will be able to\nreplay the drop, but we will not be able to restore the relation\ncontents afterward because at wal_level=minimal they are not logged.\nApparently, we don't create tombstone files during recovery because we\nknow that there will be a checkpoint at the end.\n\nWith the existing use of the end-of-recovery record, we always know\nthat wal_level>minimal, because we're only using it on standbys. But\nwith this use that wouldn't be true any more. 
So I guess we need to\nstart creating tombstone files even during recovery, or else do\nsomething like what Dilip coded up in\nhttp://postgr.es/m/CAFiTN-u=r8UTCSzu6_pnihYAtwR1=esq5sRegTEZ2tLa92fovA@mail.gmail.com\nwhich I think would be a better solution at least in the long term.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 20 Apr 2022 09:26:07 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: using an end-of-recovery record in all cases"
},
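The XID arithmetic behind the failed assertion can be sketched with a toy model (Python, purely illustrative — the constants mirror PostgreSQL's `transam.h`, but none of this is the real C code): with nextXid at FirstNormalTransactionId (3) in a fresh cluster, latestCompletedXid = nextXid - 1 = 2, which is not a "normal" XID, and handing out 4 as the first write transaction sidesteps that.

```python
# Toy model of the latestCompletedXid problem discussed above.
# Constants follow PostgreSQL's transam.h, but this is an illustration only.
INVALID_XID = 0
BOOTSTRAP_XID = 1
FROZEN_XID = 2
FIRST_NORMAL_XID = 3      # FirstNormalTransactionId

def is_normal(xid):
    """Models TransactionIdIsNormal: true only for xid >= FirstNormalTransactionId."""
    return xid >= FIRST_NORMAL_XID

def latest_completed_from_next(next_xid):
    """Startup computes latestCompletedXid = nextXid - 1."""
    return next_xid - 1

# Fresh cluster: nextXid starts at 3, so latestCompletedXid comes out as 2
# (FrozenTransactionId), and an assertion requiring a normal XID — as in
# GetRunningTransactionData() — would fire.
assert not is_normal(latest_completed_from_next(FIRST_NORMAL_XID))

# The approach described above: skip xid 3 and use 4 for the first write
# transaction, so nextXid - 1 is normal once any transaction has run.
first_write_xid = FIRST_NORMAL_XID + 1
assert is_normal(latest_completed_from_next(first_write_xid + 1))
```

The model only captures the off-by-one at the boundary; the real code also has to keep wraparound arithmetic consistent, which is why renaming the variable (the `incompleteXidHorizon` idea) was considered and rejected.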
{
"msg_contents": "On Wed, Apr 20, 2022 at 09:26:07AM -0400, Robert Haas wrote:\n> I was talking with Thomas Munro yesterday and he thinks there is a\n> problem with relfilenode reuse here. In normal running, when a\n> relation is dropped, we leave behind a 0-length file until the next\n> checkpoint; this keeps that relfilenode from being used even if the\n> OID counter wraps around. If we didn't do that, then imagine that\n> while running with wal_level=minimal, we drop an existing relation,\n> create a new relation with the same OID, load some data into it, and\n> crash, all within the same checkpoint cycle, then we will be able to\n> replay the drop, but we will not be able to restore the relation\n> contents afterward because at wal_level=minimal they are not logged.\n> Apparently, we don't create tombstone files during recovery because we\n> know that there will be a checkpoint at the end.\n\nIn the example you provided, won't the tombstone file already be present\nbefore the crash? During recovery, the tombstone file will be removed, and\nthe new relation wouldn't use the same relfilenode anyway. I'm probably\nmissing something obvious here.\n\nI do see the problem if we drop an existing relation, crash, reuse the\nfilenode, and then crash again (all within the same checkpoint cycle). The\nfirst recovery would remove the tombstone file, and the second recovery\nwould wipe out the new relation's files.\n\n> With the existing use of the end-of-recovery record, we always know\n> that wal_level>minimal, because we're only using it on standbys. But\n> with this use that wouldn't be true any more. So I guess we need to\n> start creating tombstone files even during recovery, or else do\n> something like what Dilip coded up in\n> http://postgr.es/m/CAFiTN-u=r8UTCSzu6_pnihYAtwR1=esq5sRegTEZ2tLa92fovA@mail.gmail.com\n> which I think would be a better solution at least in the long term.\n\nIMO this would be good just to reduce the branching a bit. 
I suppose\nremoving the files immediately during recovery might be an optimization in\nsome cases, but I am skeptical that it really makes that much of a\ndifference in practice.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 20 Apr 2022 10:02:24 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: using an end-of-recovery record in all cases"
},
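The double-crash hazard around relfilenode reuse can be illustrated with a toy filesystem model (Python; the class and method names are invented for this sketch and are not PostgreSQL's): in normal running the 0-length tombstone left by a drop blocks reuse until the next checkpoint, but if recovery removes dropped files outright, a relfilenode can be reused and then wiped by replaying the same drop record after a second crash.

```python
# Toy model of the relfilenode-reuse hazard described above. Illustrative
# only — it just shows why recovery would also need tombstones (or a scheme
# like Dilip's that avoids reuse altogether) once the end-of-recovery record
# replaces the shutdown checkpoint.
class Storage:
    def __init__(self):
        self.files = {}           # relfilenode -> contents

    def create(self, node, data=""):
        self.files[node] = data

    def drop(self, node, in_recovery, tombstone_in_recovery):
        if in_recovery and not tombstone_in_recovery:
            self.files.pop(node, None)     # file removed entirely
        else:
            self.files[node] = ""          # 0-length tombstone until checkpoint

    def allocate(self, start):
        node = start
        while node in self.files:          # any existing file blocks reuse
            node += 1
        return node

# Normal running: the tombstone keeps relfilenode 100 reserved.
s = Storage()
s.create(100, "old")
s.drop(100, in_recovery=False, tombstone_in_recovery=True)
assert s.allocate(100) == 101

# Crash recovery without tombstones: replaying the drop removes the file
# outright, so 100 can be handed out again — likely, since the OID
# allocator also restarts from the same value ...
s = Storage()
s.create(100, "old")
s.drop(100, in_recovery=True, tombstone_in_recovery=False)
reused = s.allocate(100)
assert reused == 100
s.create(reused, "new contents, not WAL-logged at wal_level=minimal")

# ... and a second crash replays the same old drop record, wiping the new
# relation, whose contents cannot be restored from WAL.
s.drop(100, in_recovery=True, tombstone_in_recovery=False)
assert 100 not in s.files
```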
{
"msg_contents": "On Thu, Apr 21, 2022 at 5:02 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> I do see the problem if we drop an existing relation, crash, reuse the\n> filenode, and then crash again (all within the same checkpoint cycle). The\n> first recovery would remove the tombstone file, and the second recovery\n> would wipe out the new relation's files.\n\nRight, the double-crash case is what I was worrying about. I'm not\nsure, but it might even be more likely than usual that you'll reuse\nthe same relfilenode after the first crash, because the OID allocator\nwill start from the same value.\n\n\n",
"msg_date": "Thu, 21 Apr 2022 05:16:20 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: using an end-of-recovery record in all cases"
}
] |
[
{
"msg_contents": "Hi, all\n\nRecently, I got a PANIC while restarts standby, which can be reproduced by the following steps, based on pg 11:\n1. begin a transaction in primary node;\n2. create a table in the transaction;\n3. insert lots of data into the table;\n4. do a checkpoint, and restart standby after checkpoint is done in primary node;\n5. insert/update lots of data into the table again;\n6. abort the transaction.\n\nafter step 6, fast shutdown standby node, and then restart standby, you will get a PANIC log, and the backtrace is:\n#0 0x00007fc663e5a277 in raise () from /lib64/libc.so.6\n#1 0x00007fc663e5b968 in abort () from /lib64/libc.so.6\n#2 0x0000000000c89f01 in errfinish (dummy=0) at elog.c:707\n#3 0x0000000000c8cba3 in elog_finish (elevel=22, fmt=0xdccc18 \"WAL contains references to invalid pages\") at elog.c:1658\n#4 0x00000000005e476a in XLogCheckInvalidPages () at xlogutils.c:253\n#5 0x00000000005cbc1a in CheckRecoveryConsistency () at xlog.c:9477\n#6 0x00000000005ca5c5 in StartupXLOG () at xlog.c:8609\n#7 0x0000000000a025a5 in StartupProcessMain () at startup.c:274\n#8 0x0000000000643a5c in AuxiliaryProcessMain (argc=2, argv=0x7ffe4e4849a0) at bootstrap.c:485\n#9 0x0000000000a00620 in StartChildProcess (type=StartupProcess) at postmaster.c:6215\n#10 0x00000000009f92c6 in PostmasterMain (argc=3, argv=0x4126500) at postmaster.c:1506\n#11 0x00000000008eab64 in main (argc=3, argv=0x4126500) at main.c:232\n\nI think the reason for the above error is as follows:\n1. the transaction in primary node was aborted finally, the standby node also deleted the table files after replayed the xlog record, however, without updating minimum recovery point;\n2. primary node did a checkpoint before abort, and then standby node is restarted, so standby node will recovery from a point where the table has already been created and data has been inserted into the table;\n3. 
when standby node restarts after step 6, it will find the page needed during recovery doesn't exist, which has already been deleted by xact_redo_abort before, so standby node will treat this page as an invalid page;\n4. xact_redo_abort drop relation files without updating minumum recovery point, before standby node replay the abort xlog record and forget invalid pages again, it will reach consistency because the abort xlogrecord lsn is greater than minrecoverypoint;\n5. during checkRecoveryConsistency, it will check invalid pages, and find that there is invalid page, and the PANIC log will be generated.\n\nSo why don't update minimum recovery point in xact_redo_abort, just like XLogFlush in xact_redo_commit, in which way standby could reach consistency and check invalid pages after replayed the abort xlogrecord.\nHope to get your reply\n\nThanks & Best Regard",
"msg_date": "Tue, 27 Jul 2021 01:38:58 +0800",
"msg_from": "\"蔡梦娟(玊于)\" <mengjuan.cmj@alibaba-inc.com>",
"msg_from_op": true,
"msg_subject": "Why don't update minimum recovery point in xact_redo_abort"
},
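The five-step analysis above boils down to minRecoveryPoint lagging behind the abort record. A toy model of the control-file logic (Python; names and LSN values are invented for illustration, not PostgreSQL's):

```python
# Toy model of the bug analysed above: replaying an abort record drops files
# but, unlike the commit path, never advanced minRecoveryPoint, so a standby
# restarted after the abort declares consistency before re-replaying it.
class Standby:
    def __init__(self, min_recovery_point):
        self.min_recovery_point = min_recovery_point
        self.replay_lsn = 0
        self.invalid_pages = set()

    def xlog_flush(self, lsn):
        # Models XLogFlush() during recovery: advances minRecoveryPoint.
        self.min_recovery_point = max(self.min_recovery_point, lsn)

    def replay_abort(self, lsn, dropped_pages, flush):
        self.replay_lsn = lsn
        self.invalid_pages -= dropped_pages   # forget pages of dropped rels
        if flush:
            self.xlog_flush(lsn)

    def consistent(self):
        return self.replay_lsn >= self.min_recovery_point

# Without the fix: minRecoveryPoint stays at the pre-abort checkpoint (LSN
# 10), so a restarted standby that has replayed up to LSN 15 claims
# consistency while pages referenced between 10 and the abort at 20 are
# still "invalid" -> the PANIC in CheckRecoveryConsistency().
s = Standby(min_recovery_point=10)
s.replay_lsn = 15
s.invalid_pages = {"rel_page_1"}
assert s.consistent() and s.invalid_pages     # the PANIC condition

# With the fix: replaying the abort at LSN 20 flushes, persisting
# minRecoveryPoint = 20, so after a restart consistency cannot be reached
# until the abort record (which forgets the invalid pages) is replayed.
s = Standby(min_recovery_point=10)
s.invalid_pages = {"rel_page_1"}
s.replay_abort(lsn=20, dropped_pages={"rel_page_1"}, flush=True)
assert s.consistent() and not s.invalid_pages
```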
{
"msg_contents": "\n\nOn 2021/07/27 2:38, 蔡梦娟(玊于) wrote:\n> Hi, all\n> \n> Recently, I got a PANIC while restarts standby, which can be reproduced by the following steps, based on pg 11:\n> 1. begin a transaction in primary node;\n> 2. create a table in the transaction;\n> 3. insert lots of data into the table;\n> 4. do a checkpoint, and restart standby after checkpoint is done in primary node;\n> 5. insert/update lots of data into the table again;\n> 6. abort the transaction.\n\nI could reproduce the issue by using the similar steps and\ndisabling full_page_writes, in the master branch.\n\n\n> \n> after step 6, fast shutdown standby node, and then restart standby, you will get a PANIC log, and the backtrace is:\n> #0 0x00007fc663e5a277 in raise () from /lib64/libc.so.6\n> #1 0x00007fc663e5b968 in abort () from /lib64/libc.so.6\n> #2 0x0000000000c89f01 in errfinish (dummy=0) at elog.c:707\n> #3 0x0000000000c8cba3 in elog_finish (elevel=22, fmt=0xdccc18 \"WAL contains references to invalid pages\") at elog.c:1658\n> #4 0x00000000005e476a in XLogCheckInvalidPages () at xlogutils.c:253\n> #5 0x00000000005cbc1a in CheckRecoveryConsistency () at xlog.c:9477\n> #6 0x00000000005ca5c5 in StartupXLOG () at xlog.c:8609\n> #7 0x0000000000a025a5 in StartupProcessMain () at startup.c:274\n> #8 0x0000000000643a5c in AuxiliaryProcessMain (argc=2, argv=0x7ffe4e4849a0) at bootstrap.c:485\n> #9 0x0000000000a00620 in StartChildProcess (type=StartupProcess) at postmaster.c:6215\n> #10 0x00000000009f92c6 in PostmasterMain (argc=3, argv=0x4126500) at postmaster.c:1506\n> #11 0x00000000008eab64 in main (argc=3, argv=0x4126500) at main.c:232\n> \n> I think the reason for the above error is as follows:\n> 1. the transaction in primary node was aborted finally, the standby node also deleted the table files after replayed the xlog record, however, without updating minimum recovery point;\n> 2. 
primary node did a checkpoint before abort, and then standby node is restarted, so standby node will recovery from a point where the table has already been created and data has been inserted into the table;\n> 3. when standby node restarts after step 6, it will find the page needed during recovery doesn't exist, which has already been deleted by xact_redo_abort before, so standby node will treat this page as an invalid page;\n> 4. xact_redo_abort drop relation files without updating minumum recovery point, before standby node replay the abort xlog record and forget invalid pages again, it will reach consistency because the abort xlogrecord lsn is greater than minrecoverypoint;\n> 5. during checkRecoveryConsistency, it will check invalid pages, and find that there is invalid page, and the PANIC log will be generated.\n> \n> So why don't update minimum recovery point in xact_redo_abort, just like XLogFlush in xact_redo_commit, in which way standby could reach consistency and check invalid pages after replayed the abort xlogrecord.\n\nISTM that you're right. xact_redo_abort() should call XLogFlush() to\nupdate the minimum recovery point on truncation. This seems\nthe oversight in commit 7bffc9b7bf.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 27 Jul 2021 17:26:05 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Why don't update minimum recovery point in xact_redo_abort"
},
{
"msg_contents": "On Tue, Jul 27, 2021 at 05:26:05PM +0900, Fujii Masao wrote:\n> ISTM that you're right. xact_redo_abort() should call XLogFlush() to\n> update the minimum recovery point on truncation. This seems\n> the oversight in commit 7bffc9b7bf.\n\nIndeed. It would be nice to see some refactoring of this code as\nwell? Both share a lot of steps, so adding something to one path can\neasily lead to the other path being forgotten.\n--\nMichael",
"msg_date": "Tue, 27 Jul 2021 19:51:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Why don't update minimum recovery point in xact_redo_abort"
},
{
"msg_contents": "On 2021/07/27 19:51, Michael Paquier wrote:\n> On Tue, Jul 27, 2021 at 05:26:05PM +0900, Fujii Masao wrote:\n>> ISTM that you're right. xact_redo_abort() should call XLogFlush() to\n>> update the minimum recovery point on truncation. This seems\n>> the oversight in commit 7bffc9b7bf.\n> \n> Indeed. It would be nice to see some refactoring of this code as\n> well? Both share a lot of steps, so adding something to one path can\n> easily lead to the other path being forgotten.\n\nThat's an idea, but as far as I read both functions, they seem not\nso similar. So I'm not sure how much such refactoring would help.\n\nAnyway I attached the patch that changes only xact_redo_abort()\nso that it calls XLogFlush() to update min recovery point.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Wed, 28 Jul 2021 01:49:35 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Why don't update minimum recovery point in xact_redo_abort"
},
{
"msg_contents": "Hi, Fujii\n\nThanks for your reply.\nAnd I want to share a patch about the bug with you, I add XLogFlush() in xact_redo_abort() to update the minimum recovery point.\n\nBest Regards,\nSuyu\n\n\n\n------------------------------------------------------------------\nFrom: Fujii Masao <masao.fujii@oss.nttdata.com>\nSent: Tuesday, 27 Jul 2021 16:26\nTo: 蔡梦娟(玊于) <mengjuan.cmj@alibaba-inc.com>; pgsql-hackers <pgsql-hackers@lists.postgresql.org>\nSubject: Re: Why don't update minimum recovery point in xact_redo_abort\n\n\n\nOn 2021/07/27 2:38, 蔡梦娟(玊于) wrote:\n> Hi, all\n> \n> Recently, I got a PANIC while restarts standby, which can be reproduced by the following steps, based on pg 11:\n> 1. begin a transaction in primary node;\n> 2. create a table in the transaction;\n> 3. insert lots of data into the table;\n> 4. do a checkpoint, and restart standby after checkpoint is done in primary node;\n> 5. insert/update lots of data into the table again;\n> 6. abort the transaction.\n\nI could reproduce the issue by using the similar steps and\ndisabling full_page_writes, in the master branch.\n\n\n> \n> after step 6, fast shutdown standby node, and then restart standby, you will get a PANIC log, and the backtrace is:\n> #0 0x00007fc663e5a277 in raise () from /lib64/libc.so.6\n> #1 0x00007fc663e5b968 in abort () from /lib64/libc.so.6\n> #2 0x0000000000c89f01 in errfinish (dummy=0) at elog.c:707\n> #3 0x0000000000c8cba3 in elog_finish (elevel=22, fmt=0xdccc18 \"WAL contains references to invalid pages\") at elog.c:1658\n> #4 0x00000000005e476a in XLogCheckInvalidPages () at xlogutils.c:253\n> #5 0x00000000005cbc1a in CheckRecoveryConsistency () at xlog.c:9477\n> #6 0x00000000005ca5c5 in StartupXLOG () at xlog.c:8609\n> #7 0x0000000000a025a5 in StartupProcessMain () at startup.c:274\n> #8 0x0000000000643a5c in AuxiliaryProcessMain (argc=2, argv=0x7ffe4e4849a0) at bootstrap.c:485\n> #9 0x0000000000a00620 in StartChildProcess (type=StartupProcess) at postmaster.c:6215\n> #10 
0x00000000009f92c6 in PostmasterMain (argc=3, argv=0x4126500) at postmaster.c:1506\n> #11 0x00000000008eab64 in main (argc=3, argv=0x4126500) at main.c:232\n> \n> I think the reason for the above error is as follows:\n> 1. the transaction in primary node was aborted finally, the standby node also deleted the table files after replayed the xlog record, however, without updating minimum recovery point;\n> 2. primary node did a checkpoint before abort, and then standby node is restarted, so standby node will recovery from a point where the table has already been created and data has been inserted into the table;\n> 3. when standby node restarts after step 6, it will find the page needed during recovery doesn't exist, which has already been deleted by xact_redo_abort before, so standby node will treat this page as an invalid page;\n> 4. xact_redo_abort drop relation files without updating minumum recovery point, before standby node replay the abort xlog record and forget invalid pages again, it will reach consistency because the abort xlogrecord lsn is greater than minrecoverypoint;\n> 5. during checkRecoveryConsistency, it will check invalid pages, and find that there is invalid page, and the PANIC log will be generated.\n> \n> So why don't update minimum recovery point in xact_redo_abort, just like XLogFlush in xact_redo_commit, in which way standby could reach consistency and check invalid pages after replayed the abort xlogrecord.\n\nISTM that you're right. xact_redo_abort() should call XLogFlush() to\nupdate the minimum recovery point on truncation. This seems\nthe oversight in commit 7bffc9b7bf.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Wed, 28 Jul 2021 00:55:18 +0800",
"msg_from": "\"蔡梦娟(玊于)\" <mengjuan.cmj@alibaba-inc.com>",
"msg_from_op": true,
"msg_subject": "Re: Why don't update minimum recovery point in xact_redo_abort"
},
{
"msg_contents": "On 27/07/2021 19:49, Fujii Masao wrote:\n> Anyway I attached the patch that changes only xact_redo_abort()\n> so that it calls XLogFlush() to update min recovery point.\n\nLooks good to me, thanks! FWIW, I used the attached script to reproduce \nthis.\n\n- Heikki",
"msg_date": "Wed, 28 Jul 2021 00:25:09 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Why don't update minimum recovery point in xact_redo_abort"
},
{
"msg_contents": "\n\nOn 2021/07/28 1:55, 蔡梦娟(玊于) wrote:\n> \n> Hi, Fujii\n> \n> Thanks for your reply.\n> And I want to share a patch about the bug with you, I add XLogFlush() in xact_redo_abort() to update the minimum recovery point.\n\nThanks for the patch! It looks almost the same as the patch I posted upthread.\nOne diff between them is that you copy-and-pasted the comments for update of\nminRecoveryPoint, but instead I just added the comment \"See comments ... in\nxact_redo_commit()\". IMO it's better to avoid putting the same (a bit long)\ncomments in multiple places so that we can more easily maintain the comments.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 28 Jul 2021 12:43:58 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: Why don't update minimum recovery point in xact_redo_abort"
},
{
"msg_contents": "\n\nOn 2021/07/28 6:25, Heikki Linnakangas wrote:\n> On 27/07/2021 19:49, Fujii Masao wrote:\n>> Anyway I attached the patch that changes only xact_redo_abort()\n>> so that it calls XLogFlush() to update min recovery point.\n> \n> Looks good to me, thanks! FWIW, I used the attached script to reproduce this.\n\nThanks for the review!\n\nBarring any objection, I will commit the patch and\nback-patch it to all supported versions.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 28 Jul 2021 12:44:50 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Why don't update minimum recovery point in xact_redo_abort"
},
{
"msg_contents": "\n\nOn 2021/07/28 12:44, Fujii Masao wrote:\n> \n> \n> On 2021/07/28 6:25, Heikki Linnakangas wrote:\n>> On 27/07/2021 19:49, Fujii Masao wrote:\n>>> Anyway I attached the patch that changes only xact_redo_abort()\n>>> so that it calls XLogFlush() to update min recovery point.\n>>\n>> Looks good to me, thanks! FWIW, I used the attached script to reproduce this.\n> \n> Thanks for the review!\n> \n> Barring any objection, I will commit the patch and\n> back-patch it to all supported versions.\n\nPushed. Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 29 Jul 2021 01:39:42 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Why don't update minimum recovery point in xact_redo_abort"
}
] |
[
{
"msg_contents": "Hi\n\nI got a report from Gabriele Bartolini and team that the pg_settings\nview does not get the pending_restart flag set when a setting's line is\nremoved from a file (as opposed to its value changed).\n\nThe explanation seems to be that GUC_PENDING_RESTART is set by\nset_config_option, but when ProcessConfigFileInternal is called only to\nprovide data (applySettings=true), then set_config_option is never\ncalled and thus the flag doesn't get set.\n\nI tried the attached patch, which sets GUC_PENDING_RESTART if we're\ndoing pg_file_settings(). Then any subsequent read of pg_settings will\nhave the pending_restart flag set. This seems to work correctly, and\nconsistently with the case where you change a line (without removing it)\nin unpatched master.\n\nYou could argue that this is *weird* (why does reading pg_file_settings\nset a flag in global state?) ... but that weirdness is not something\nthis patch is introducing.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Hay dos momentos en la vida de un hombre en los que no debería\nespecular: cuando puede permitírselo y cuando no puede\" (Mark Twain)",
"msg_date": "Mon, 26 Jul 2021 19:02:12 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "pg_settings.pending_restart not set when line removed"
},
{
"msg_contents": "> On 27 Jul 2021, at 01:02, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> I tried the attached patch, which sets GUC_PENDING_RESTART if we're\n> doing pg_file_settings(). Then any subsequent read of pg_settings will\n> have the pending_restart flag set. This seems to work correctly, and\n> consistently with the case where you change a line (without removing it)\n> in unpatched master.\n\nLGTM after testing this with various changes and ways to reload, and +1 for\nbeing consistent with changing a line.\n\n> You could argue that this is *weird* (why does reading pg_file_settings\n> set a flag in global state?) ... but that weirdness is not something\n> this patch is introducing.\n\nAgreed.\n\nAnother unrelated weird issue is that we claim that the config file \"contains\nerrors\" if the context is < PGC_SIGHUP for restart required settings. It seems\na bit misleading to call pending_restart an error since it implies (in my\nreading) there were syntax errors. But, unrelated to this patch and report\n(and it's been like that for a long time), just hadn't noticed that before.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Tue, 27 Jul 2021 11:45:03 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pg_settings.pending_restart not set when line removed"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> You could argue that this is *weird* (why does reading pg_file_settings\n> set a flag in global state?) ... but that weirdness is not something\n> this patch is introducing.\n\nUgh. I think this patch is likely to create more problems than it fixes.\nWe should be looking to get rid of that flag, not make its behavior even\nmore complex.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 27 Jul 2021 09:57:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_settings.pending_restart not set when line removed"
},
{
"msg_contents": "On 2021-Jul-27, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > You could argue that this is *weird* (why does reading pg_file_settings\n> > set a flag in global state?) ... but that weirdness is not something\n> > this patch is introducing.\n> \n> Ugh. I think this patch is likely to create more problems than it fixes.\n\nI doubt that; as I said, the code already behaves in exactly that way\nfor closely related operations, so this patch isn't doing anything new.\nNote that that loop this code is modifying only applies to lines that\nare removed from the config file.\n\n> We should be looking to get rid of that flag, not make its behavior even\n> more complex.\n\nAre you proposing to remove the pending_restart column from pg_settings?\nThat seems a step backwards.\n\nWhat I know is that the people behind management interfaces need some\nway to know if changes to the config need a system restart. Now maybe \nwe want that feature to be implemented in a different way than it\ncurrently is. I frankly don't care enough to do that myself. I agree\nthat the current mechanism is weird, but it's going to take more than a\none-liner to fix it. The one-liner is only intended to fix a very\nspecific problem.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"After a quick R of TFM, all I can say is HOLY CR** THAT IS COOL! PostgreSQL was\namazing when I first started using it at 7.2, and I'm continually astounded by\nlearning new features and techniques made available by the continuing work of\nthe development team.\"\nBerend Tober, http://archives.postgresql.org/pgsql-hackers/2007-08/msg01009.php\n\n\n",
"msg_date": "Tue, 27 Jul 2021 10:29:32 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: pg_settings.pending_restart not set when line removed"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2021-Jul-27, Tom Lane wrote:\n>> Ugh. I think this patch is likely to create more problems than it fixes.\n\n> I doubt that; as I said, the code already behaves in exactly that way\n> for closely related operations, so this patch isn't doing anything new.\n> Note that that loop this code is modifying only applies to lines that\n> are removed from the config file.\n\nAh ... what's wrong here is some combination of -ENOCAFFEINE and a\nnot-great explanation on your part. I misread the patch as adding\n\"error = true\" rather than the flag change. I agree that setting\nthe GUC_PENDING_RESTART flag is fine, because set_config_option\nwould do so if we reached it. Perhaps you should comment this\nalong that line? Also, the cases inside set_config_option\nuniformly set that flag *before* the ereport not after.\nSo maybe like\n\n if (gconf->context < PGC_SIGHUP)\n {\n+ /* The removal can't be effective without a restart */\n+ gconf->status |= GUC_PENDING_RESTART;\n ereport(elevel,\n (errcode(ERRCODE_CANT_CHANGE_RUNTIME_PARAM),\n\nOne thing worth checking is whether the pending-restart flag\ngets cleared again if the DBA undoes the removal and again\nreloads. I think the right thing will happen, but it'd be\nworthwhile to check.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 27 Jul 2021 11:00:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_settings.pending_restart not set when line removed"
},
{
"msg_contents": "On 2021-Jul-27, Tom Lane wrote:\n\n> So maybe like\n> \n> if (gconf->context < PGC_SIGHUP)\n> {\n> + /* The removal can't be effective without a restart */\n> + gconf->status |= GUC_PENDING_RESTART;\n> ereport(elevel,\n> (errcode(ERRCODE_CANT_CHANGE_RUNTIME_PARAM),\n\nThanks, done that way.\n\n> One thing worth checking is whether the pending-restart flag\n> gets cleared again if the DBA undoes the removal and again\n> reloads. I think the right thing will happen, but it'd be\n> worthwhile to check.\n\nI tested this -- it works correctly AFAICS.\n\nThanks!\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\nAl principio era UNIX, y UNIX habló y dijo: \"Hello world\\n\".\nNo dijo \"Hello New Jersey\\n\", ni \"Hello USA\\n\".\n\n\n",
"msg_date": "Tue, 27 Jul 2021 16:17:28 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: pg_settings.pending_restart not set when line removed"
},
{
"msg_contents": "Hi Alvaro,\r\n\r\n\r\n\r\nOn Tue, 27 Jul 2021 at 22:17, Alvaro Herrera <alvherre@alvh.no-ip.org>\r\nwrote:\r\n\r\n> I tested this -- it works correctly AFAICS.\r\n>\r\n\r\nNope, IMO it doesn't work correctly.\r\nLets say we have recovery_target = '' in the config:\r\nlocalhost/postgres=# select name, setting, setting is null, pending_restart\r\nfrom pg_settings where name = 'recovery_target';\r\n name │ setting │ ?column? │ pending_restart\r\n─────────────────┼─────────┼──────────┼─────────────────\r\nrecovery_target │ │ f │ f\r\n(1 row)\r\n\r\n\r\nAfter that we remove it from the config and call pg_ctl reload. It sets the\r\npanding_restart.\r\nlocalhost/postgres=# select name, setting, setting is null, pending_restart\r\nfrom pg_settings where name = 'recovery_target';\r\n name │ setting │ ?column? │ pending_restart\r\n─────────────────┼─────────┼──────────┼─────────────────\r\nrecovery_target │ │ f │ t\r\n(1 row)\r\n\r\nIMO is totally wrong, because the actual value didn't change: it was an\r\nempty string in the config and now it remains an empty string due to the\r\ndefault value in the guc.c\r\n\r\nRegards,\r\n--\r\nAlexander Kukushkin\r\n\nHi Alvaro,On Tue, 27 Jul 2021 at 22:17, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\r\nI tested this -- it works correctly AFAICS.Nope, IMO it doesn't work correctly.Lets say we have recovery_target = '' in the config:localhost/postgres=# select name, setting, setting is null, pending_restart from pg_settings where name = 'recovery_target';\r\n name │ setting │ ?column? │ pending_restart ─────────────────┼─────────┼──────────┼─────────────────\r\n recovery_target │ │ f │ f\r\n(1 row)\nAfter that we remove it from the config and call pg_ctl reload. It sets the panding_restart.localhost/postgres=# select name, setting, setting is null, pending_restart from pg_settings where name = 'recovery_target';\r\n name │ setting │ ?column? 
│ pending_restart ─────────────────┼─────────┼──────────┼─────────────────\r\n recovery_target │ │ f │ t\r\n(1 row)\nIMO is totally \r\nwrong, because the actual value didn't change: it was an empty string in\r\n the config and now it remains an empty string due to the default value in the guc.cRegards,--Alexander Kukushkin",
"msg_date": "Fri, 13 Aug 2021 16:30:39 +0200",
"msg_from": "Alexander Kukushkin <cyberdemn@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_settings.pending_restart not set when line removed"
},
{
"msg_contents": "Alexander Kukushkin <cyberdemn@gmail.com> writes:\n> IMO is totally wrong, because the actual value didn't change: it was an\n> empty string in the config and now it remains an empty string due to the\n> default value in the guc.c\n\nI can't get very excited about that. The existing message about\n\"parameter \\\"%s\\\" cannot be changed without restarting the server\"\nwas emitted without regard to that fine point, and nobody has\ncomplained about it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 13 Aug 2021 10:45:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_settings.pending_restart not set when line removed"
}
] |
[
{
"msg_contents": "Folks,\n\nPlease find attached a patch to do $subject. It's down to a one table\nlookup and 3 instructions.\n\nIn covering the int64 versions, I swiped a light weight division from\nthe Ryu stuff. I'm pretty sure that what I did is not how to do\n#includes, but it's a PoC. What would be a better way to do this?\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Mon, 26 Jul 2021 23:51:30 +0000",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Slim down integer formatting"
},
{
"msg_contents": "On Tue, Jul 27, 2021 at 9:51 AM David Fetter <david@fetter.org> wrote:\n>\n> In covering the int64 versions, I swiped a light weight division from\n> the Ryu stuff. I'm pretty sure that what I did is not how to do\n> #includes, but it's a PoC. What would be a better way to do this?\n>\n\nThat patch didn't apply for me (on latest source) so I've attached an\nequivalent with those changes, that does apply, and also tweaks the\nMakefile include path to address that #include issue.\n\nRegards,\nGreg Nancarrow\nFujitsu Australia",
"msg_date": "Tue, 27 Jul 2021 12:28:22 +1000",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Slim down integer formatting"
},
{
"msg_contents": "On Tue, Jul 27, 2021 at 12:28:22PM +1000, Greg Nancarrow wrote:\n> That patch didn't apply for me (on latest source) so I've attached an\n> equivalent with those changes, that does apply, and also tweaks the\n> Makefile include path to address that #include issue.\n\nWhen applying some micro-benchmarking to stress those APIs, how much\ndoes this change things? At the end of the day, this also comes down\nto an evaluation of pg_ulltoa_n() and pg_ultoa_n().\n\n #include \"common/int.h\"\n+#include \"d2s_intrinsics.h\"\nEr, are you sure about this part? The first version of the patch did\nthat in a different, also incorrect, way.\n--\nMichael",
"msg_date": "Tue, 27 Jul 2021 11:42:42 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Slim down integer formatting"
},
{
"msg_contents": "On Tue, 27 Jul 2021 at 14:42, Michael Paquier <michael@paquier.xyz> wrote:\n> When applying some micro-benchmarking to stress those APIs, how much\n> does this change things? At the end of the day, this also comes down\n> to an evaluation of pg_ulltoa_n() and pg_ultoa_n().\n\nI'd suggest something like creating a table with, say 1 million INTs\nand testing the performance of copy <table> to '/dev/null';\n\nRepeat for BIGINT\n\nDavid\n\n\n",
"msg_date": "Tue, 27 Jul 2021 14:53:14 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Slim down integer formatting"
},
{
"msg_contents": "On Tue, Jul 27, 2021 at 12:42 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> #include \"common/int.h\"\n> +#include \"d2s_intrinsics.h\"\n> Er, are you sure about this part? The first version of the patch did\n> that in a different, also incorrect, way.\n\nEr, I was just trying to help out, so at least the patch could be\napplied (whether the patch has merit is a different story).\nAre you saying that it's incorrect to include that header file in this\nsource, or that's the wrong way to do it? (i.e. it's wrong to adjust\nthe makefile include path to pickup the location where that header is\nlocated and use #include \"<header>\"? That header is in src/common,\nwhich is not on the default include path).\nThe method I used certainly works, but you have objections?\nCan you clarify what you say is incorrect?\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 27 Jul 2021 13:08:26 +1000",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Slim down integer formatting"
},
{
"msg_contents": "On Tue, 27 Jul 2021 at 15:08, Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> On Tue, Jul 27, 2021 at 12:42 PM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > #include \"common/int.h\"\n> > +#include \"d2s_intrinsics.h\"\n> > Er, are you sure about this part? The first version of the patch did\n> > that in a different, also incorrect, way.\n>\n> Er, I was just trying to help out, so at least the patch could be\n> applied (whether the patch has merit is a different story).\n> Are you saying that it's incorrect to include that header file in this\n> source, or that's the wrong way to do it? (i.e. it's wrong to adjust\n> the makefile include path to pickup the location where that header is\n> located and use #include \"<header>\"? That header is in src/common,\n> which is not on the default include path).\n> The method I used certainly works, but you have objections?\n> Can you clarify what you say is incorrect?\n\nI think the mistake is that the header file is not in\nsrc/include/common. For some reason, it's ended up with all the .c\nfiles in src/common.\n\nI imagine Andrew did this because he didn't ever expect anything else\nto have a use for these. He indicates that in [1].\n\nMaybe Andrew can confirm?\n\n[1] https://www.postgresql.org/message-id/87mup9192t.fsf%40news-spur.riddles.org.uk\n\nDavid\n\n\n",
"msg_date": "Tue, 27 Jul 2021 16:30:25 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Slim down integer formatting"
},
{
"msg_contents": ">>>>> \"David\" == David Rowley <dgrowleyml@gmail.com> writes:\n\n David> I think the mistake is that the header file is not in\n David> src/include/common. For some reason, it's ended up with all the\n David> .c files in src/common.\n\n David> I imagine Andrew did this because he didn't ever expect anything\n David> else to have a use for these. He indicates that in [1].\n\n David> Maybe Andrew can confirm?\n\nIt's not that anything else wouldn't have a use for those, it's that\nanything else SHOULDN'T have a use for those because they are straight\nimports from upstream Ryu code, they aren't guaranteed to work outside\nthe ranges of values required by Ryu, and if we decided to import a\nnewer copy of Ryu then it would be annoying if any other code was broken\nas a result.\n\nIn short, please don't include d2s_intrinsics.h from anywhere other than\nd2s.c\n\n-- \nAndrew.\n\n\n",
"msg_date": "Tue, 27 Jul 2021 06:07:52 +0100",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: Slim down integer formatting"
},
{
"msg_contents": "On Tue, Jul 27, 2021 at 3:08 PM Andrew Gierth\n<andrew@tao11.riddles.org.uk> wrote:\n>\n> In short, please don't include d2s_intrinsics.h from anywhere other than\n> d2s.c\n>\n\nThanks for the clarification.\nThe patch author will need to take this into account for the patch's\ndiv1e8() usage.\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 27 Jul 2021 16:53:16 +1000",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Slim down integer formatting"
},
{
"msg_contents": "On 2021-Jul-26, David Fetter wrote:\n\n> Folks,\n> \n> Please find attached a patch to do $subject. It's down to a one table\n> lookup and 3 instructions.\n\nSo how much faster is it than the original?\n\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"La rebeldía es la virtud original del hombre\" (Arthur Schopenhauer)\n\n\n",
"msg_date": "Tue, 27 Jul 2021 09:43:54 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Slim down integer formatting"
},
{
"msg_contents": "On Wed, 28 Jul 2021 at 01:44, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> So how much faster is it than the original?\n\nI only did some very quick tests. They're a bit noisey. The results\nindicate an average speedup of 1.7%, but the noise level is above\nthat, so unsure.\n\ncreate table a (a int);\ninsert into a select a from generate_series(1,1000000)a;\nvacuum freeze a;\n\nbench.sql: copy a to '/dev/null';\n\nmaster @ 93a0bf239\ndrowley@amd3990x:~$ pgbench -n -f bench.sql -T 60 postgres\nlatency average = 153.815 ms\nlatency average = 152.955 ms\nlatency average = 147.491 ms\n\nmaster + v2 patch\ndrowley@amd3990x:~$ pgbench -n -f bench.sql -T 60 postgres\nlatency average = 144.749 ms\nlatency average = 151.525 ms\nlatency average = 150.392 ms\n\nDavid\n\n\n",
"msg_date": "Wed, 28 Jul 2021 13:17:43 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Slim down integer formatting"
},
{
"msg_contents": "On Wed, Jul 28, 2021 at 01:17:43PM +1200, David Rowley wrote:\n> On Wed, 28 Jul 2021 at 01:44, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > So how much faster is it than the original?\n> \n> I only did some very quick tests. They're a bit noisey. The results\n> indicate an average speedup of 1.7%, but the noise level is above\n> that, so unsure.\n> \n> create table a (a int);\n> insert into a select a from generate_series(1,1000000)a;\n> vacuum freeze a;\n> \n> bench.sql: copy a to '/dev/null';\n> \n> master @ 93a0bf239\n> drowley@amd3990x:~$ pgbench -n -f bench.sql -T 60 postgres\n> latency average = 153.815 ms\n> latency average = 152.955 ms\n> latency average = 147.491 ms\n> \n> master + v2 patch\n> drowley@amd3990x:~$ pgbench -n -f bench.sql -T 60 postgres\n> latency average = 144.749 ms\n> latency average = 151.525 ms\n> latency average = 150.392 ms\n\nThanks for testing this! I got a few promising results early on with\n-O0, and the technique seemed like a neat way to do things.\n\nI generated a million int4s intended to be uniformly distributed\nacross the range of int4, and similarly across int8.\n\nint4:\n patch 6feebcb6b44631c3dc435e971bd80c2dd218a5ab\nlatency average: 362.149 ms 359.933 ms\nlatency stddev: 3.44 ms 3.40 ms\n\nint8:\n patch 6feebcb6b44631c3dc435e971bd80c2dd218a5ab\nlatency average: 434.944 ms 422.270 ms\nlatency stddev: 3.23 ms 4.02 ms\n\nwhen compiled with -O2:\n\nint4:\n patch 6feebcb6b44631c3dc435e971bd80c2dd218a5ab\nlatency average: 167.262 ms 148.673 ms\nlatency stddev: 6.26 ms 1.28 ms\n\ni.e. it was actually slower, at least over the 10 runs I did.\n\nI assume that \"uniform distribution across the range\" is a bad case\nscenario for ints, but I was a little surprised to measure worse\nperformance. 
Interestingly, what I got for int8s generated to be\nuniform across their range was\n\nint8:\n patch 6feebcb6b44631c3dc435e971bd80c2dd218a5ab\nlatency average: 171.737 ms 174.013 ms\nlatency stddev: 1.94 ms 6.84 ms\n\nwhich doesn't look like a difference to me.\n\nIntuitively, I'd expect us to get things in the neighborhood of 1 a\nlot more often than things in the neighborhood of 1 << (30 or 60). Do\nwe have some idea of the distribution, or at least of the distribution\nfamily, that we should expect for ints?\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Wed, 28 Jul 2021 02:25:43 +0000",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: Slim down integer formatting"
},
{
"msg_contents": "On Wed, 28 Jul 2021 at 14:25, David Fetter <david@fetter.org> wrote:\n> Intuitively, I'd expect us to get things in the neighborhood of 1 a\n> lot more often than things in the neighborhood of 1 << (30 or 60). Do\n> we have some idea of the distribution, or at least of the distribution\n> family, that we should expect for ints?\n\nserial and bigserial are generally going to start with smaller\nnumbers. Larger and longer lived databases those numbers could end up\non the larger side. serial and bigserial should be a fairly large\nportion of the use case for integer types, so anything that slows down\nint4out and int8out for lower order numbers is not a good idea. I\nthink it would have to be a very small slowdown on the low order\nnumbers vs a large speedup for higher order numbers for us to even\nconsider it.\n\nDavid\n\n\n",
"msg_date": "Wed, 28 Jul 2021 14:39:28 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Slim down integer formatting"
}
] |
[
{
"msg_contents": "Hi,\n\nI noticed $subject while rebasing my patch at [1] to enable batching\nfor the inserts used in cross-partition UPDATEs.\n\nb676ac443b6 did this:\n\n- resultRelInfo->ri_PlanSlots[resultRelInfo->ri_NumSlots] =\n- MakeSingleTupleTableSlot(planSlot->tts_tupleDescriptor,\n- planSlot->tts_ops);\n...\n+ {\n+ TupleDesc tdesc =\nCreateTupleDescCopy(slot->tts_tupleDescriptor);\n+\n+ resultRelInfo->ri_Slots[resultRelInfo->ri_NumSlots] =\n+ MakeSingleTupleTableSlot(tdesc, slot->tts_ops);\n...\n+ resultRelInfo->ri_PlanSlots[resultRelInfo->ri_NumSlots] =\n+ MakeSingleTupleTableSlot(tdesc, planSlot->tts_ops);\n\nI think it can be incorrect to use the same TupleDesc for both the\nslots in ri_Slots (for ready-to-be-inserted tuples) and ri_PlanSlots\n(for subplan output tuples). Especially if you consider what we did\nin 86dc90056df that was committed into v14. In that commit, we\nchanged the way a subplan under ModifyTable produces its output for an\nUPDATE statement. Previously, it would produce a tuple matching the\ntarget table's TupleDesc exactly (plus any junk columns), but now it\nproduces only a partial tuple containing the values for the changed\ncolumns.\n\nSo it's better to revert to using planSlot->tts_tupleDescriptor for\nthe slots in ri_PlanSlots. Attached a patch to do so.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n[1] https://commitfest.postgresql.org/33/2992/",
"msg_date": "Tue, 27 Jul 2021 11:28:00 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "a thinko in b676ac443b6"
},
{
"msg_contents": "On 7/27/21 4:28 AM, Amit Langote wrote:\n> Hi,\n> \n> I noticed $subject while rebasing my patch at [1] to enable batching\n> for the inserts used in cross-partition UPDATEs.\n> \n> b676ac443b6 did this:\n> \n> - resultRelInfo->ri_PlanSlots[resultRelInfo->ri_NumSlots] =\n> - MakeSingleTupleTableSlot(planSlot->tts_tupleDescriptor,\n> - planSlot->tts_ops);\n> ...\n> + {\n> + TupleDesc tdesc =\n> CreateTupleDescCopy(slot->tts_tupleDescriptor);\n> +\n> + resultRelInfo->ri_Slots[resultRelInfo->ri_NumSlots] =\n> + MakeSingleTupleTableSlot(tdesc, slot->tts_ops);\n> ...\n> + resultRelInfo->ri_PlanSlots[resultRelInfo->ri_NumSlots] =\n> + MakeSingleTupleTableSlot(tdesc, planSlot->tts_ops);\n> \n> I think it can be incorrect to use the same TupleDesc for both the\n> slots in ri_Slots (for ready-to-be-inserted tuples) and ri_PlanSlots\n> (for subplan output tuples). Especially if you consider what we did\n> in 86dc90056df that was committed into v14. In that commit, we\n> changed the way a subplan under ModifyTable produces its output for an\n> UPDATE statement. Previously, it would produce a tuple matching the\n> target table's TupleDesc exactly (plus any junk columns), but now it\n> produces only a partial tuple containing the values for the changed\n> columns.\n> \n> So it's better to revert to using planSlot->tts_tupleDescriptor for\n> the slots in ri_PlanSlots. Attached a patch to do so.\n> \n\nYeah, this seems like a clear mistake - thanks for noticing it! Clearly\nno regression test triggered the issue, so I wonder what's the best way\nto test it - any idea what would the test need to do?\n\nI did some quick experiments with batched INSERTs with RETURNING clauses\nand/or subplans, but I haven't succeeded in triggering the issue :-(\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 27 Jul 2021 18:07:54 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: a thinko in b676ac443b6"
},
{
"msg_contents": "On Wed, Jul 28, 2021 at 1:07 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> On 7/27/21 4:28 AM, Amit Langote wrote:\n> > I think it can be incorrect to use the same TupleDesc for both the\n> > slots in ri_Slots (for ready-to-be-inserted tuples) and ri_PlanSlots\n> > (for subplan output tuples). Especially if you consider what we did\n> > in 86dc90056df that was committed into v14. In that commit, we\n> > changed the way a subplan under ModifyTable produces its output for an\n> > UPDATE statement. Previously, it would produce a tuple matching the\n> > target table's TupleDesc exactly (plus any junk columns), but now it\n> > produces only a partial tuple containing the values for the changed\n> > columns.\n> >\n> > So it's better to revert to using planSlot->tts_tupleDescriptor for\n> > the slots in ri_PlanSlots. Attached a patch to do so.\n>\n> Yeah, this seems like a clear mistake - thanks for noticing it! Clearly\n> no regression test triggered the issue, so I wonder what's the best way\n> to test it - any idea what would the test need to do?\n\nAh, I should've mentioned that this is only a problem if the original\nquery is an UPDATE. With v14, only INSERTs can use batching and the\nsubplan does output a tuple matching the target table's TupleDesc in\ntheir case, so the code seems to work fine.\n\nAs I said, I noticed a problem when rebasing my patch to allow\ncross-partition UPDATEs to use batching for the inserts that are\nperformed internally to implement such UPDATEs. The exact problem I\nnoticed is that the following Assert tts_virtual_copyslot() (via\nExecCopySlot called with an ri_PlanSlots[] entry) failed:\n\n Assert(srcdesc->natts <= dstslot->tts_tupleDescriptor->natts);\n\nsrcdesc in this case is a slot in ri_PlanSlots[] initialized with the\ntarget table's TupleDesc (the \"thinko\") and dstslot is the slot that\nholds subplan's output tuple ('planSlot' passed to ExecInsert). 
As I\ndescribed in my previous email, dstslot's TupleDesc can be narrower\nthan the target table's TupleDesc in the case of an UPDATE, so the\nAssert can fail in theory.\n\n> I did some quick experiments with batched INSERTs with RETURNING clauses\n> and/or subplans, but I haven't succeeded in triggering the issue :-(\n\nYeah, no way to trigger this except UPDATEs. It still seems like a\ngood idea to fix this in v14.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 28 Jul 2021 10:15:34 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: a thinko in b676ac443b6"
},
{
"msg_contents": "On 7/28/21 3:15 AM, Amit Langote wrote:\n> On Wed, Jul 28, 2021 at 1:07 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>> On 7/27/21 4:28 AM, Amit Langote wrote:\n>>> I think it can be incorrect to use the same TupleDesc for both the\n>>> slots in ri_Slots (for ready-to-be-inserted tuples) and ri_PlanSlots\n>>> (for subplan output tuples). Especially if you consider what we did\n>>> in 86dc90056df that was committed into v14. In that commit, we\n>>> changed the way a subplan under ModifyTable produces its output for an\n>>> UPDATE statement. Previously, it would produce a tuple matching the\n>>> target table's TupleDesc exactly (plus any junk columns), but now it\n>>> produces only a partial tuple containing the values for the changed\n>>> columns.\n>>>\n>>> So it's better to revert to using planSlot->tts_tupleDescriptor for\n>>> the slots in ri_PlanSlots. Attached a patch to do so.\n>>\n>> Yeah, this seems like a clear mistake - thanks for noticing it! Clearly\n>> no regression test triggered the issue, so I wonder what's the best way\n>> to test it - any idea what would the test need to do?\n> \n> Ah, I should've mentioned that this is only a problem if the original\n> query is an UPDATE. With v14, only INSERTs can use batching and the\n> subplan does output a tuple matching the target table's TupleDesc in\n> their case, so the code seems to work fine.\n> \n> As I said, I noticed a problem when rebasing my patch to allow\n> cross-partition UPDATEs to use batching for the inserts that are\n> performed internally to implement such UPDATEs. 
The exact problem I\n> noticed is that the following Assert tts_virtual_copyslot() (via\n> ExecCopySlot called with an ri_PlanSlots[] entry) failed:\n> \n> Assert(srcdesc->natts <= dstslot->tts_tupleDescriptor->natts);\n> \n> srcdesc in this case is a slot in ri_PlanSlots[] initialized with the\n> target table's TupleDesc (the \"thinko\") and dstslot is the slot that\n> holds subplan's output tuple ('planSlot' passed to ExecInsert). As I\n> described in my previous email, dstslot's TupleDesc can be narrower\n> than the target table's TupleDesc in the case of an UPDATE, so the\n> Assert can fail in theory.\n> \n>> I did some quick experiments with batched INSERTs with RETURNING clauses\n>> and/or subplans, but I haven't succeeded in triggering the issue :-(\n> \n> Yeah, no way to trigger this except UPDATEs. It still seems like a\n> good idea to fix this in v14.\n> \n\nOK, thanks for the explanation. So it's benign in v14, but I agree it's\nbetter to fix it there too. I'll get this sorted/pushed.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 28 Jul 2021 12:19:14 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: a thinko in b676ac443b6"
}
] |
[
{
"msg_contents": "Hello,\n\nI've notived that pg_receivewal logic for deciding which LSN to start \nstreaming at consists of:\n - looking up the latest WAL file in our destination folder, and resume from \nhere\n - if there isn't, use the current flush location instead.\n\nThis behaviour surprised me when using it with a replication slot: I was \nexpecting it to start streaming at the last flushed location from the \nreplication slot instead. If you consider a backup tool which will take \npg_receivewal's output and transfer it somewhere else, using the replication \nslot position would be the easiest way to ensure we don't miss WAL files.\n\nDoes that make sense ? \n\nI don't know if it should be the default, toggled by a command line flag, or if \nwe even should let the user provide a LSN.\n\nI'd be happy to implement any of that if we agree.\n\n-- \nRonan Dunklau\n\n\n\n\n",
"msg_date": "Tue, 27 Jul 2021 07:50:39 +0200",
"msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>",
"msg_from_op": true,
"msg_subject": "pg_receivewal starting position"
},
{
"msg_contents": "At Tue, 27 Jul 2021 07:50:39 +0200, Ronan Dunklau <ronan.dunklau@aiven.io> wrote in \n> Hello,\n> \n> I've notived that pg_receivewal logic for deciding which LSN to start \n> streaming at consists of:\n> - looking up the latest WAL file in our destination folder, and resume from \n> here\n> - if there isn't, use the current flush location instead.\n> \n> This behaviour surprised me when using it with a replication slot: I was \n> expecting it to start streaming at the last flushed location from the \n> replication slot instead. If you consider a backup tool which will take \n> pg_receivewal's output and transfer it somewhere else, using the replication \n> slot position would be the easiest way to ensure we don't miss WAL files.\n> \n> Does that make sense ? \n> \n> I don't know if it should be the default, toggled by a command line flag, or if \n> we even should let the user provide a LSN.\n\n*I* think it is completely reasonable (or at least convenient or less\nastonishing) that pg_receivewal starts from the restart_lsn of the\nreplication slot to use. The tool already decides the clean-start LSN\na bit unusual way. And it seems to me that proposed behavior can be\nthe default when -S is specified.\n\n> I'd be happy to implement any of that if we agree.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 28 Jul 2021 15:22:30 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal starting position"
},
{
"msg_contents": "Le mercredi 28 juillet 2021, 08:22:30 CEST Kyotaro Horiguchi a écrit :\n> At Tue, 27 Jul 2021 07:50:39 +0200, Ronan Dunklau <ronan.dunklau@aiven.io>\n> wrote in\n> > Hello,\n> > \n> > I've notived that pg_receivewal logic for deciding which LSN to start\n> > \n> > streaming at consists of:\n> > - looking up the latest WAL file in our destination folder, and resume\n> > from\n> > \n> > here\n> > \n> > - if there isn't, use the current flush location instead.\n> > \n> > This behaviour surprised me when using it with a replication slot: I was\n> > expecting it to start streaming at the last flushed location from the\n> > replication slot instead. If you consider a backup tool which will take\n> > pg_receivewal's output and transfer it somewhere else, using the\n> > replication slot position would be the easiest way to ensure we don't\n> > miss WAL files.\n> > \n> > Does that make sense ?\n> > \n> > I don't know if it should be the default, toggled by a command line flag,\n> > or if we even should let the user provide a LSN.\n> \n> *I* think it is completely reasonable (or at least convenient or less\n> astonishing) that pg_receivewal starts from the restart_lsn of the\n> replication slot to use. The tool already decides the clean-start LSN\n> a bit unusual way. And it seems to me that proposed behavior can be\n> the default when -S is specified.\n> \n\nAs of now we can't get the replication_slot restart_lsn with a replication \nconnection AFAIK.\n\nThis implies that the patch could require the user to specify a maintenance-db \nparameter, and we would use that if provided to fetch the replication slot \ninfo, or fallback to the previous behaviour. 
I don't really like this approach \nas the behaviour changing whether we supply a maintenance-db parameter or not \nis error-prone for the user.\n\nAnother option would be to add a new replication command (for example \nACQUIRE_REPLICATION_SLOT <slot_name>) to set the replication slot as the \ncurrent one, and return some info about it (restart_lsn at least for a \nphysical slot).\n\nI don't see any reason not to make it work for logical replication connections \n/ slots, but it wouldn't be that useful since we can query the database in \nthat case.\n\nAcquiring the replication slot instead of just reading it would make sure that \nno other process could start the replication between the time we read the \nrestart_lsn and when we issue START_REPLICATION. START_REPLICATION could then \ncheck if we already have a replication slot, and ensure it is the same one as \nthe one we're trying to use. \n\nFrom pg_receivewal point of view, this would amount to:\n\n - check if we currently have wal in the target directory.\n - if we do, proceed as currently done, by computing the start lsn and \ntimeline from the last archived wal\n - if we don't, and we have a slot, run ACQUIRE_REPLICATION_SLOT. Use the \nrestart_lsn as the start lsn if there is one, and don't provide a timeline\n - if we still don't have a start_lsn, fall back to using the current server \nwal position as is done. \n\nWhat do you think ? Which information should we provide about the slot ?\n\n\n-- \nRonan Dunklau\n\n\n\n\n",
"msg_date": "Wed, 28 Jul 2021 12:57:39 +0200",
"msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>",
"msg_from_op": true,
"msg_subject": "Re: pg_receivewal starting position"
},
{
"msg_contents": "At Wed, 28 Jul 2021 12:57:39 +0200, Ronan Dunklau <ronan.dunklau@aiven.io> wrote in \n> Le mercredi 28 juillet 2021, 08:22:30 CEST Kyotaro Horiguchi a écrit :\n> > At Tue, 27 Jul 2021 07:50:39 +0200, Ronan Dunklau <ronan.dunklau@aiven.io>\n> > wrote in\n> > > I don't know if it should be the default, toggled by a command line flag,\n> > > or if we even should let the user provide a LSN.\n> > \n> > *I* think it is completely reasonable (or at least convenient or less\n> > astonishing) that pg_receivewal starts from the restart_lsn of the\n> > replication slot to use. The tool already decides the clean-start LSN\n> > a bit unusual way. And it seems to me that proposed behavior can be\n> > the default when -S is specified.\n> > \n> \n> As of now we can't get the replication_slot restart_lsn with a replication \n> connection AFAIK.\n> \n> This implies that the patch could require the user to specify a maintenance-db \n> parameter, and we would use that if provided to fetch the replication slot \n> info, or fallback to the previous behaviour. I don't really like this approach \n> as the behaviour changing wether we supply a maintenance-db parameter or not \n> is error-prone for the user.\n>\n> Another option would be to add a new replication command (for example \n> ACQUIRE_REPLICATION_SLOT <slot_name>) to set the replication slot as the \n> current one, and return some info about it (restart_lsn at least for a \n> physical slot).\n\nI didn't thought in details. But I forgot that ordinary SQL commands\nhave been prohibited in physical replication connection. 
So we need a\nnew replication command but it's not that a big deal.\n\n> I don't see any reason not to make it work for logical replication connections \n> / slots, but it wouldn't be that useful since we can query the database in \n> that case.\n\nOrdinary SQL queries are usable on a logical replication slot so\nI'm not sure how logical replication connection uses the command.\nHowever, like you, I wouldn't bother restricting the command to\nphysical replication, but perhaps the new command should return the\nslot type.\n\n> Acquiring the replication slot instead of just reading it would make sure that \n> no other process could start the replication between the time we read the \n> restart_lsn and when we issue START_REPLICATION. START_REPLICATION could then \n> check if we already have a replication slot, and ensure it is the same one as \n> the one we're trying to use. \n\nI'm not sure it's worth adding complexity for such strictness.\nSTART_REPLICATION safely fails if someone steals the slot meanwhile.\nIn the first place there's no means to protect a slot from others\nwhile idle. One possible problem is the case where START_REPLICATION\nsuccessfully acquire the slot after the new command failed. But that\ncase doesn't seem worse than the case someone advances the slot while\nabsence. So I think READ_REPLICATION_SLOT is sufficient.\n\n> From pg_receivewal point of view, this would amount to:\n> \n> - check if we currently have wal in the target directory.\n> - if we do, proceed as currently done, by computing the start lsn and \n> timeline from the last archived wal\n> - if we don't, and we have a slot, run ACQUIRE_REPLICATION_SLOT. Use the \n> restart_lsn as the start lsn if there is one, and don't provide a timeline\n> - if we still don't have a start_lsn, fallback to using the current server \n> wal position as is done. \n\nThat's pretty much it.\n\n> What do you think ? 
Which information should we provide about the slot ?\n\nWe need the timeline id to start with when using restart_lsn. The\ncurrent timeline can be used in most cases but there's a case where\nthe LSN is historical.\n\npg_receivewal doesn't send a replication status report when a segment\nis finished. So after pg_receivewal stops just after a segment is\nfinished, the slot stays at the beginning of the last segment. Thus\nnext time it will start from there, creating a duplicate segment.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 29 Jul 2021 14:32:37 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal starting position"
},
{
"msg_contents": "Le jeudi 29 juillet 2021, 07:32:37 CEST Kyotaro Horiguchi a écrit :\n> I didn't thought in details. But I forgot that ordinary SQL commands\n> have been prohibited in physical replication connection. So we need a\n> new replication command but it's not that a big deal.\n\nThank you for your feedback !\n\n\n> \n> > I don't see any reason not to make it work for logical replication\n> > connections / slots, but it wouldn't be that useful since we can query\n> > the database in that case.\n> \n> Ordinary SQL queries are usable on a logical replication slot so\n> I'm not sure how logical replication connection uses the command.\n> However, like you, I wouldn't bother restricting the command to\n> physical replication, but perhaps the new command should return the\n> slot type.\n> \n\nOk done in the attached patch.\n\n> \n> I'm not sure it's worth adding complexity for such strictness.\n> START_REPLICATION safely fails if someone steals the slot meanwhile.\n> In the first place there's no means to protect a slot from others\n> while idle. One possible problem is the case where START_REPLICATION\n> successfully acquire the slot after the new command failed. But that\n> case doesn't seem worse than the case someone advances the slot while\n> absence. So I think READ_REPLICATION_SLOT is sufficient.\n> \n\nOk, I implemented it like this. I tried to follow the pg_get_replication_slots \napproach with regards to how to prevent concurrent modification while reading \nthe slot.\n\n> > From pg_receivewal point of view, this would amount to:\n> > - check if we currently have wal in the target directory.\n> > \n> > - if we do, proceed as currently done, by computing the start lsn and\n> > \n> > timeline from the last archived wal\n> > \n> > - if we don't, and we have a slot, run ACQUIRE_REPLICATION_SLOT. 
Use the\n> > \n> > restart_lsn as the start lsn if there is one, and don't provide a timeline\n> > \n> > - if we still don't have a start_lsn, fallback to using the current\n> > server\n> > \n> > wal position as is done.\n> \n> That's pretty much it.\n\nGreat.\n\n> \n> > What do you think ? Which information should we provide about the slot ?\n> \n> We need the timeline id to start with when using restart_lsn. The\n> current timeline can be used in most cases but there's a case where\n> the LSN is historical.\n\nOk, see below.\n\n> \n> pg_receivewal doesn't send a replication status report when a segment\n> is finished. So after pg_receivewal stops just after a segment is\n> finished, the slot stays at the beginning of the last segment. Thus\n> next time it will start from there, creating a duplicate segment.\n\nI'm not sure I see where the problem is here. If we don't keep the segments in \npg_walreceiver target directory, then it would be the responsibility of \nwhoever moved them to make sure we don't have duplicates, or to handle them \ngracefully. \n\nEven if we were forcing a feedback after a segment is finished, there could \nstill be a problem if the feedback never made it to the server but the segment \nwas here. It might be interesting to send a feedback anyway.\n\nPlease find attached two patches implementing what we've been discussing.\n\nPatch 0001 adds the new READ_REPLICATION_SLOT command. \nIt returns for a given slot the type, restart_lsn, flush_lsn, \nrestart_lsn_timeline and flush_lsn_timeline.\nThe timelines are determined by reading the current timeline history, and \nfinding the timeline where we may find the record. I didn't find explicit test \nfor eg IDENTIFY_SYSTEM so didn't write one either for this new command, but it \nis tested indirectly in patch 0002.\n\nPatch 0002 makes pg_receivewal use that command if we use a replication slot \nand the command is available, and use the restart_lsn and restart_lsn_timeline \nas a starting point. 
It also adds a small test to check that we start back \nfrom the previous restart_lsn instead of the current flush position when our \ndestination directory does not contain any WAL file. \n\nI also noticed we don't test following a timeline switch. It would probably be \ngood to add that, both for the case where we determine the previous timeline \nfrom the archived segments and when it comes from the new command. What do you \nthink ?\n\n\nRegards,\n\n-- \nRonan Dunklau",
"msg_date": "Thu, 29 Jul 2021 11:09:40 +0200",
"msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>",
"msg_from_op": true,
"msg_subject": "Re: pg_receivewal starting position"
},
{
"msg_contents": "Le jeudi 29 juillet 2021, 11:09:40 CEST Ronan Dunklau a écrit :\n> Patch 0001 adds the new READ_REPLICATION_SLOT command.\n> It returns for a given slot the type, restart_lsn, flush_lsn,\n> restart_lsn_timeline and flush_lsn_timeline.\n> The timelines are determined by reading the current timeline history, and\n> finding the timeline where we may find the record. I didn't find explicit\n> test for eg IDENTIFY_SYSTEM so didn't write one either for this new\n> command, but it is tested indirectly in patch 0002.\n> \n> Patch 0002 makes pg_receivewal use that command if we use a replication slot\n> and the command is available, and use the restart_lsn and\n> restart_lsn_timeline as a starting point. It also adds a small test to\n> check that we start back from the previous restart_lsn instead of the\n> current flush position when our destination directory does not contain any\n> WAL file.\n> \n> I also noticed we don't test following a timeline switch. It would probably\n> be good to add that, both for the case where we determine the previous\n> timeline from the archived segments and when it comes from the new command.\n> What do you think ?\n\nFollowing the discussion at [1], I refactored the implementation into \nstreamutil and added a third patch making use of it in pg_basebackup itself in \norder to fail early if the replication slot doesn't exist, so please find \nattached v2 for that.\n\nBest regards,\n\n[1]: https://www.postgresql.org/message-id/flat/\nCAD21AoDYmv0yJMQnWtCx_kZGwVZnkQSTQ1re2JNSgM0k37afYQ%40mail.gmail.com\n\n-- \nRonan Dunklau",
"msg_date": "Thu, 26 Aug 2021 14:14:27 +0200",
"msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>",
"msg_from_op": true,
"msg_subject": "Re: pg_receivewal starting position"
},
{
"msg_contents": "On Thu, Aug 26, 2021 at 02:14:27PM +0200, Ronan Dunklau wrote:\n> Following the discussion at [1], I refactored the implementation into \n> streamutil and added a third patch making use of it in pg_basebackup itself in \n> order to fail early if the replication slot doesn't exist, so please find \n> attached v2 for that.\n\nThanks for the split. That helps a lot.\n\n+\n+\n /*\n * Run IDENTIFY_SYSTEM through a given connection and give back to caller\n\nThe patch series has some noise diffs here and there, you may want to\nclean up that for clarity.\n\n+ if (slot == NULL || !slot->in_use)\n+ {\n+ LWLockRelease(ReplicationSlotControlLock);\n+\n+ ereport(ERROR,\n+ (errcode(ERRCODE_UNDEFINED_OBJECT),\nLWLocks are released on ERROR, so no need for LWLockRelease() here.\n\n+ <listitem>\n+ <para>\n+ Read information about the named replication slot. This is\nuseful to determine which WAL location we should be asking the server\nto start streaming at.\n\nA nit. You may want to be more careful with the indentation of the\ndocumentation. Things are usually limited in width for readability.\nMore <literal> markups would be nice for the field names used in the\ndescriptions.\n\n+ if (slot == NULL || !slot->in_use) [...]\n+ ereport(ERROR,\n+ (errcode(ERRCODE_UNDEFINED_OBJECT),\n+ errmsg(\"replication slot \\\"%s\\\" does not exist\",\n+ cmd->slotname)));\n[...]\n+ if (PQntuples(res) == 0)\n+ {\n+ pg_log_error(\"replication slot %s does not exist\", slot_name);\n+ PQclear(0);\n+ return false;\nSo, the backend and ReadReplicationSlot() report an ERROR if a slot\ndoes not exist but pg_basebackup's GetSlotInformation() does the same\nif there are no tuples returned. That's inconsistent. Wouldn't it be\nmore instinctive to return a NULL tuple instead if the slot does not\nexist to be able to check after real ERRORs in frontends using this\ninterface? 
A slot in use exists, so the error is a bit confusing here\nanyway, no?\n\n+ * XXX: should we allow the caller to specify which target timeline it wants\n+ * ?\n+ */\nWhat are you thinking about here?\n\n-# restarts of pg_receivewal will see this segment as full..\n+# restarts of pg_receivewal will see this segment as full../\nTypo.\n\n+ TupleDescInitBuiltinEntry(tupdesc, (AttrNumber) 4, \"restart_lsn_timeline\",\n+ INT4OID, -1, 0);\n+ TupleDescInitBuiltinEntry(tupdesc, (AttrNumber) 5, \"confirmed_flush_lsn_timeline\",\n+ INT4OID, -1, 0);\nI would call these restart_tli and confirmed_flush_tli., without the\n\"lsn\" part.\n\nThe patch for READ_REPLICATION_SLOT could have some tests using a\nconnection that has replication=1 in some TAP tests. We do that in\n001_stream_rep.pl with SHOW, as one example.\n\n- 'slot0'\n+ 'slot0', '-p',\n+ \"$port\"\nSomething we are missing here?\n--\nMichael",
"msg_date": "Fri, 27 Aug 2021 12:44:32 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal starting position"
},
{
"msg_contents": "On Thu, Aug 26, 2021 at 5:45 PM Ronan Dunklau <ronan.dunklau@aiven.io> wrote:\n> order to fail early if the replication slot doesn't exist, so please find\n> attached v2 for that.\n\nThanks for the patches. Here are some comments:\n\n1) While the intent of these patches looks good, I have following\nconcern with new replication command READ_REPLICATION_SLOT: what if\nthe pg_receivewal exits (because user issued a SIGINT or for some\nreason) after flushing the received WAL to disk, before it sends\nsendFeedback to postgres server's walsender so that it doesn't get a\nchance to update the restart_lsn in the replication slot via\nPhysicalConfirmReceivedLocation. If the pg_receivewal is started\nagain, isn't it going to get the previous restart_lsn and receive the\nlast chunk of flushed WAL again?\n\n2) What is the significance of READ_REPLICATION_SLOT for logical\nreplication slots? I read above that somebody suggested to restrict\nthe walsender to handle READ_REPLICATION_SLOT for physical replication\nslots so that the callers will see a command failure. But I tend to\nthink that it is clean to have this command common for both physical\nand logical replication slots and the callers can have an Assert(type\n== 'physical').\n\n3) Isn't it useful to send active, active_pid info of the replication\nslot via READ_REPLICATION_SLOT? pg_receivewal can use Assert(active ==\ntrue && active_pid == getpid()) as an assertion to ensure that it is\nthe sole owner of the replication slot? 
Also, is it good send\nwal_status info\n\n4) I think below messages should start with lower case letter and also\nthere are some typos:\n+ pg_log_warning(\"Could not fetch the replication_slot \\\"%s\\\" information \"\n+ pg_log_warning(\"Server does not suport fetching the slot's position, \"\nsomething like:\n+ pg_log_warning(\"could not fetch replication slot \\\"%s\\\" information, \"\n+ \"resuming from current server position instead\", replication_slot);\n+ pg_log_warning(\"server does not support fetching replication slot\ninformation, \"\n+ \"resuming from current server position instead\");\n\n5) How about emitting the above messages in case of \"verbose\"?\n\n6) How about an assertion like below?\n+ if (stream.startpos == InvalidXLogRecPtr)\n+ {\n+ stream.startpos = serverpos;\n+ stream.timeline = servertli;\n+ }\n+\n+Assert(stream.startpos != InvalidXLogRecPtr)>>\n\n7) How about we let pg_receivewal use READ_REPLICATION_SLOT as an option?\n\n8) Just an idea, how about we store pg_receivewal's lastFlushPosition\nin a file before pg_receivewal exits and compare it with the\nrestart_lsn that it received from the replication slot, if\nlastFlushPosition == received_restart_lsn well and good, if not, then\nsomething would have happened and we always start at the\nlastFlushPosition ?\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Sat, 28 Aug 2021 17:40:25 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal starting position"
},
{
"msg_contents": "Le vendredi 27 août 2021, 05:44:32 CEST Michael Paquier a écrit :\n> On Thu, Aug 26, 2021 at 02:14:27PM +0200, Ronan Dunklau wrote:\n> > Following the discussion at [1], I refactored the implementation into\n> > streamutil and added a third patch making use of it in pg_basebackup\n> > itself in order to fail early if the replication slot doesn't exist, so\n> > please find attached v2 for that.\n> \n> Thanks for the split. That helps a lot.\n> \n\nThank you very much for the review, please find attached an updated patchset.\nI've also taken into account some remarks made by Bharath Rupireddy.\n\n> +\n> +\n> /*\n> * Run IDENTIFY_SYSTEM through a given connection and give back to caller\n> \n> The patch series has some noise diffs here and there, you may want to\n> clean up that for clarity.\n\nOk, sorry about that.\n\n> \n> + if (slot == NULL || !slot->in_use)\n> + {\n> + LWLockRelease(ReplicationSlotControlLock);\n> +\n> + ereport(ERROR,\n> + (errcode(ERRCODE_UNDEFINED_OBJECT),\n> LWLocks are released on ERROR, so no need for LWLockRelease() here.\n> \n\nFollowing your suggestion of not erroring out on an unexisting slot this point \nis no longer be relevant, but thanks for pointing this out anyway.\n\n> + <listitem>\n> + <para>\n> + Read information about the named replication slot. This is\n> useful to determine which WAL location we should be asking the server\n> to start streaming at.\n> \n> A nit. You may want to be more careful with the indentation of the\n> documentation. Things are usually limited in width for readability.\n> More <literal> markups would be nice for the field names used in the\n> descriptions.\n\nOk.\n\n> \n> + if (slot == NULL || !slot->in_use) \n> \n> [...] 
+\n> ereport(ERROR,\n> + (errcode(ERRCODE_UNDEFINED_OBJECT),\n> + errmsg(\"replication slot \\\"%s\\\" does not exist\",\n> + cmd->slotname)));\n> [...]\n> + if (PQntuples(res) == 0)\n> + {\n> + pg_log_error(\"replication slot %s does not exist\",\n> slot_name); + PQclear(0);\n> + return false;\n> So, the backend and ReadReplicationSlot() report an ERROR if a slot\n> does not exist but pg_basebackup's GetSlotInformation() does the same\n> if there are no tuples returned. That's inconsistent. Wouldn't it be\n> more instinctive to return a NULL tuple instead if the slot does not\n> exist to be able to check after real ERRORs in frontends using this\n> interface? \n\nThe attached patch returns no tuple at all when the replication slot doesn't \nexist. I'm not sure if that's what you meant by returning a NULL tuple ? \n\n> A slot in use exists, so the error is a bit confusing here\n> anyway, no?\n\nFrom my understanding, a slot *not* in use doesn't exist anymore, as such I \ndon't really understand this point. Could you clarify ?\n\n\n> \n> + * XXX: should we allow the caller to specify which target timeline it\n> wants + * ?\n> + */\n> What are you thinking about here?\n\nI was thinking that maybe instead of walking back the timeline history from \nwhere we currently are on the server, we could allow an additional argument \nfor the client to specify which timeline it wants. But I guess a replication \nslot can not be present for a past, divergent timeline ? I have removed that \nsuggestion. 
\n\n> \n> -# restarts of pg_receivewal will see this segment as full..\n> +# restarts of pg_receivewal will see this segment as full../\n> Typo.\n\nOk.\n\n> \n> + TupleDescInitBuiltinEntry(tupdesc, (AttrNumber) 4,\n> \"restart_lsn_timeline\", + INT4OID, -1, 0);\n> + TupleDescInitBuiltinEntry(tupdesc, (AttrNumber) 5,\n> \"confirmed_flush_lsn_timeline\", + INT4OID, -1,\n> 0);\n> I would call these restart_tli and confirmed_flush_tli., without the\n> \"lsn\" part.\n\nOk.\n> \n> The patch for READ_REPLICATION_SLOT could have some tests using a\n> connection that has replication=1 in some TAP tests. We do that in\n> 001_stream_rep.pl with SHOW, as one example.\n\nOk. I added the physical part to 001_stream_rep.pl, using the protocol \ninterface directly for creating / dropping the slot, and some tests for \nlogical replication slots to 006_logical_decoding.pl.\n\n> \n> - 'slot0'\n> + 'slot0', '-p',\n> + \"$port\"\n> Something we are missing here?\n\nThe thing we're missing here is a wrapper for command_fails_like. I've added \nthis to PostgresNode.pm.\n\nBest regards,\n\n\n-- \nRonan Dunklau",
"msg_date": "Mon, 30 Aug 2021 11:55:42 +0200",
"msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>",
"msg_from_op": true,
"msg_subject": "Re: pg_receivewal starting position"
},
{
"msg_contents": "Le samedi 28 août 2021, 14:10:25 CEST Bharath Rupireddy a écrit :\n> On Thu, Aug 26, 2021 at 5:45 PM Ronan Dunklau <ronan.dunklau@aiven.io> \nwrote:\n> > order to fail early if the replication slot doesn't exist, so please find\n> > attached v2 for that.\n> \n> Thanks for the patches. Here are some comments:\n> \n\nThank you for this review ! Please see the other side of the thread where I \nanswered Michael Paquier with a new patchset, which includes some of your \nproposed modifications.\n\n> 1) While the intent of these patches looks good, I have following\n> concern with new replication command READ_REPLICATION_SLOT: what if\n> the pg_receivewal exits (because user issued a SIGINT or for some\n> reason) after flushing the received WAL to disk, before it sends\n> sendFeedback to postgres server's walsender so that it doesn't get a\n> chance to update the restart_lsn in the replication slot via\n> PhysicalConfirmReceivedLocation. If the pg_receivewal is started\n> again, isn't it going to get the previous restart_lsn and receive the\n> last chunk of flushed WAL again?\n\nI've kept the existing directory as the source of truth if we have any WAL \nthere already. If we don't, we fallback to the restart_lsn on the replication \nslot.\nSo in the event that we start it again from somewhere else where we don't have \naccess to those WALs anymore, we could be receiving it again, which in my \nopinion is better than losing everything in between in that case. \n\n> \n> 2) What is the significance of READ_REPLICATION_SLOT for logical\n> replication slots? I read above that somebody suggested to restrict\n> the walsender to handle READ_REPLICATION_SLOT for physical replication\n> slots so that the callers will see a command failure. 
But I tend to\n> think that it is clean to have this command common for both physical\n> and logical replication slots and the callers can have an Assert(type\n> == 'physical').\n\nI've updated the patch to make it easy for the caller to check the slot's type \nand added a verification for those cases.\n\nIn general, I tried to implement the meaning of the different fields exactly as \nit's done in the pg_replication_slots view.\n\n> \n> 3) Isn't it useful to send active, active_pid info of the replication\n> slot via READ_REPLICATION_SLOT? pg_receivewal can use Assert(active ==\n> true && active_pid == getpid()) as an assertion to ensure that it is\n> the sole owner of the replication slot? Also, is it good send\n> wal_status info\n\nTypically we read the slot before attaching to it, since what we want to do is \ncheck if it exists. It may be worthwile to check that it's not already used by \nanother backend though.\n\n> \n> 4) I think below messages should start with lower case letter and also\n> there are some typos:\n> + pg_log_warning(\"Could not fetch the replication_slot \\\"%s\\\" information \"\n> + pg_log_warning(\"Server does not suport fetching the slot's position, \"\n> something like:\n> + pg_log_warning(\"could not fetch replication slot \\\"%s\\\" information, \"\n> + \"resuming from current server position instead\", replication_slot);\n> + pg_log_warning(\"server does not support fetching replication slot\n> information, \"\n> + \"resuming from current server position instead\");\n> \nI've rephrased it a bit in v3, let me know if that's what you had in mind.\n\n\n> 5) How about emitting the above messages in case of \"verbose\"?\n\nI think it is useful to warn the user even if not in the verbose case, but if \nthe consensus is to move it to verbose only output I can change it.\n\n> \n> 6) How about an assertion like below?\n> + if (stream.startpos == InvalidXLogRecPtr)\n> + {\n> + stream.startpos = serverpos;\n> + stream.timeline = servertli;\n> + 
}\n> +\n> +Assert(stream.startpos != InvalidXLogRecPtr)>>\n\nGood idea.\n\n> \n> 7) How about we let pg_receivewal use READ_REPLICATION_SLOT as an option?\n\nFrom my point of view, I already expected it to use something like that when \nusing a replication slot. Maybe an option to turn it off could be useful ? \n\n> \n> 8) Just an idea, how about we store pg_receivewal's lastFlushPosition\n> in a file before pg_receivewal exits and compare it with the\n> restart_lsn that it received from the replication slot, if\n> lastFlushPosition == received_restart_lsn well and good, if not, then\n> something would have happened and we always start at the\n> lastFlushPosition ?\n\nThe patch idea originally came from the fact that some utility use \npg_receivewal to fetch WALs, and move them elsewhere. In that sense I don't \nreally see what value this brings compared to the existing (and unmodified) way \nof computing the restart position from the already stored files ?\n\nBest regards,\n\n-- \nRonan Dunklau\n\n\n\n\n",
"msg_date": "Mon, 30 Aug 2021 11:56:07 +0200",
"msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>",
"msg_from_op": true,
"msg_subject": "Re: pg_receivewal starting position"
},
{
"msg_contents": "On Mon, Aug 30, 2021 at 3:26 PM Ronan Dunklau <ronan.dunklau@aiven.io> wrote:\n> Thank you for this review ! Please see the other side of the thread where I\n> answered Michael Paquier with a new patchset, which includes some of your\n> proposed modifications.\n\nThanks for the updated patches. Here are some comments on v3-0001\npatch. I will continue to review 0002 and 0003.\n\n1) Missing full stops \".\" at the end.\n+ <literal>logical</literal>\n+ when following the current timeline history\n+ position, when following the current timeline history\n\n2) Can we have the \"type\" column as \"slot_type\" just to keep in sync\nwith the pg_replication_slots view?\n\n3) Can we mention the link to pg_replication_slots view in the columns\n- \"type\", \"restart_lsn\", \"confirmed_flush_lsn\"?\nSomething like: the \"slot_type\"/\"restart_lsn\"/\"confirmed_flush_lsn\" is\nsame as <link linkend=\"view-pg-replication-slots\"><structname>pg_replication_slots</structname></link>\nview.\n\n4) Can we use \"read_replication_slot\" instead of\n\"identify_replication_slot\", just to be in sync with the actual\ncommand?\n\n5) Are you actually copying the slot contents into the slot_contents\nvariable here? 
Isn't just taking the pointer to the shared memory?\n+ /* Copy slot contents while holding spinlock */\n+ SpinLockAcquire(&slot->mutex);\n+ slot_contents = *slot;\n+ SpinLockRelease(&slot->mutex);\n+ LWLockRelease(ReplicationSlotControlLock);\n\nYou could just do:\n+ Oid dbid;\n+ XLogRecPtr restart_lsn;\n+ XLogRecPtr confirmed_flush;\n\n+ /* Copy the required slot contents */\n+ SpinLockAcquire(&slot->mutex);\n+ dbid = slot.data.database;\n+ restart_lsn = slot.data.restart_lsn;\n+ confirmed_flush = slot.data.confirmed_flush;\n+ SpinLockRelease(&slot->mutex);\n+ LWLockRelease(ReplicationSlotControlLock);\n\n6) It looks like you are not sending anything as a response to the\nREAD_REPLICATION_SLOT command, if the slot specified doesn't exist.\nYou are just calling end_tup_output which just calls rShutdown (points\nto donothingCleanup of printsimpleDR)\nif (has_value)\ndo_tup_output(tstate, values, nulls);\nend_tup_output(tstate);\n\nCan you test the use case where the pg_receivewal asks\nREAD_REPLICATION_SLOT with a non-existent replication slot and see\nwith your v3 patch how it behaves?\n\nWhy don't you remove has_value flag and do this in ReadReplicationSlot:\nDatum values[5];\nbool nulls[5];\nMemSet(values, 0, sizeof(values));\nMemSet(nulls, 0, sizeof(nulls));\n\n+ dest = CreateDestReceiver(DestRemoteSimple);\n+ tstate = begin_tup_output_tupdesc(dest, tupdesc, &TTSOpsVirtual);\n+ do_tup_output(tstate, values, nulls);\n+ end_tup_output(tstate);\n\n7) Why don't we use 2 separate variables \"restart_tli\",\n\"confirmed_flush_tli\" instead of \"slots_position_timeline\", just to be\nmore informative?\n\n8) I still have the question like, how can a client (pg_receivewal for\ninstance) know that it is the current owner/user of the slot it is\nrequesting the info? 
As I said upthread, why don't we send \"active\"\nand \"active_pid\" fields of the pg_replication_slots view?\nAlso, it would be good to send the \"wal_status\" field so that the\nclient can check if the \"wal_status\" is not \"lost\"?\n\n9) There are 2 new lines at the end of ReadReplicationSlot. We give\nonly one new line after each function definition.\nend_tup_output(tstate);\n}\n<<1stnewline>>\n<<2ndnewline>>\n/*\n * Handle TIMELINE_HISTORY command.\n */\n\n10) Why do we need to have two test cases for \"non-existent\" slots?\nIsn't the test case after \"DROP REPLICATION\" enough?\n+($ret, $stdout, $stderr) = $node_primary->psql(\n+ 'postgres', 'READ_REPLICATION_SLOT non_existent_slot;',\n+ extra_params => [ '-d', $connstr_rep ]);\n+ok( $ret == 0,\n+ \"READ_REPLICATION_SLOT does not produce an error with non existent slot\");\n+ok( $stdout eq '',\n+ \"READ_REPLICATION_SLOT returns no tuple if a slot is non existent\");\n\nYou can just rename the test case name from:\n+ \"READ_REPLICATION_SLOT returns no tuple if a slot has been dropped\");\nto\n+ \"READ_REPLICATION_SLOT returns no tuple if a slot is non existent\");\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Tue, 31 Aug 2021 16:47:22 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal starting position"
},
{
"msg_contents": "On Tue, Aug 31, 2021 at 4:47 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, Aug 30, 2021 at 3:26 PM Ronan Dunklau <ronan.dunklau@aiven.io> wrote:\n> > Thank you for this review ! Please see the other side of the thread where I\n> > answered Michael Paquier with a new patchset, which includes some of your\n> > proposed modifications.\n>\n> Thanks for the updated patches. Here are some comments on v3-0001\n> patch. I will continue to review 0002 and 0003.\n\nContinuing my review on the v3 patch set:\n\n0002:\n1) I think the following message\n\"could not fetch replication slot LSN: got %d rows and %d fields,\nexpected %d rows and %d or more fields\"\nshould be:\n\"could not read replication slot: got %d rows and %d fields, expected\n%d rows and %d or more fields\"\n\n2) Why GetSlotInformation is returning InvalidXLogRecPtr? Change it to\nreturn false instead.\n\n3) Alignment issue:\nChange below 2 lines:\n+ appendPQExpBuffer(query, \"READ_REPLICATION_SLOT %s\",\n+ slot_name);\nTo 1 line, as it will be under 80 char limit:\n+ appendPQExpBuffer(query, \"READ_REPLICATION_SLOT %s\", slot_name);\n\n4) Shouldn't your GetSlotInformation return what ever the\nREAD_REPLICATION_SLOT gets as output columns to be more generic?\nCallers would pass non-null/null pointers to the inputs required/not\nrequired for them. 
Please refer to RunIdentifySystem to see how it does that.\nGetSlotInformation can just read the tuples that the callers are interested in.\nbool\nGetSlotInformation(PGconn *conn, const char *slot_name, char **slot_type,\n XLogRecPtr *restart_lsn, uint32 *restart_lsn_tli,\n XLogRecPtr *confirmed_lsn, uint32 *confirmed_lsn_tli)\n{\nif (slot_type)\n{\n/* get the slot_type value from the received tuple */\n}\nif (restart_lsn)\n{\n/* get the restart_lsn value from the received tuple */\n}\nif (restart_lsn_tli)\n{\n/* get the restart_lsn_tli value from the received tuple */\n}\n\nif (confirmed_lsn)\n{\n/* get the confirmed_lsn value from the received tuple */\n}\n\nif (confirmed_lsn_tli)\n{\n/* get the confirmed_lsn_tli value from the received tuple */\n}\n}\n\n5) How about below as the GetSlotInformation function comment?\n/*\n * Run READ_REPLICATION_SLOT through a given connection and give back to the caller\n * the following information about the slot, if requested:\n * - type\n * - restart lsn\n * - restart lsn timeline\n * - confirmed lsn\n * - confirmed lsn timeline\n */\n\n6) Do you need +#include \"pqexpbuffer.h\" in pg_receivewal.c?\n\n7) Missing \",\" after information, and it is not required to use \"the\"\nin the messages.\nChange below\n+ pg_log_warning(\"could not fetch the replication_slot \\\"%s\\\" information \"\n+ \"resuming from the current server position instead\", replication_slot);\nto:\n+ pg_log_warning(\"could not fetch replication_slot \\\"%s\\\" information, \"\n+ \"resuming from current server position instead\", replication_slot);\n\n8) A typo \"suport\". 
Ignore this comment, if you incorporate review comment #10.\nChange below\npg_log_warning(\"server does not suport fetching the slot's position, \"\n \"resuming from the current server position instead\");\nto:\npg_log_warning(\"server does not support getting start LSN from\nreplication slot, \"\n \"resuming from current server position instead\");\n\n9) I think you should free the memory allocated to slot_type by\nGetSlotInformation:\n+ if (strcmp(slot_type, \"physical\") != 0)\n+ {\n+ pg_log_error(\"slot \\\"%s\\\" is not a physical replication slot\",\n+ replication_slot);\n+ exit(1);\n+ }\n+\n+ pg_free(slot_type);\n\n10) Isn't it PQclear(res);?\n+ PQclear(0);\n\n11) I don't think you need to check for the null value of\nreplication_slot. In StreamLog it can't be null, so you can safely\nremove the below if condition.\n+ if (replication_slot)\n+ {\n\n12) How about\n/* Try to get start position from server's replication slot information */\ninstead of\n+ /* Try to get it from the slot if any, and the server supports it */\n\n13) When you say that the server supports the new\nREAD_REPLICATION_SLOT command only if version >= 150000, then\nshouldn't the function GetSlotInformation do the following:\nbool\nGetSlotInformation(PGconn *conn,....,bool *is_supported)\n{\n\nif (PQserverVersion(conn) < 150000)\n{\n*is_supported = false;\nreturn false;\n}\n*is_supported = true;\n}\nSo, the callers will just do:\n/* Try to get start position from server's replication slot information */\nchar *slot_type = NULL;\nbool is_supported;\n\nif (!GetSlotInformation(conn, replication_slot, &stream.startpos,\n&stream.timeline, &slot_type, &is_supported))\n{\nif (!is_supported)\npg_log_warning(\"server does not support getting start LSN from\nreplication slot, \"\n \"resuming from current server position instead\");\nelse\npg_log_warning(\"could not fetch replication_slot \\\"%s\\\" information, \"\n \"resuming from current server position instead\",\n replication_slot);\n}\n\nif 
(slot_type && strcmp(slot_type, \"physical\") != 0)\n{\npg_log_error(\"slot \\\"%s\\\" is not a physical replication slot\",\n replication_slot);\nexit(1);\n}\n\npg_free(slot_type);\n}\n\n14) Instead of just\n+ if (strcmp(slot_type, \"physical\") != 0)\ndo\n+ if (slot_type && strcmp(slot_type, \"physical\") != 0)\n\n0003:\n1) The message should start with lower case: \"slot \\\"%s\\\" is not a\nphysical replication slot\".\n+ pg_log_error(\"Slot \\\"%s\\\" is not a physical replication slot\",\n\n2)\n+ if (strcmp(slot_type, \"physical\") != 0)\ndo\n+ if (slot_type && strcmp(slot_type, \"physical\") != 0)\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Tue, 31 Aug 2021 17:51:50 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal starting position"
},
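[Editorial aside: the optional out-parameter convention requested in review comment 4 above (pass NULL for columns you do not need, in the style of RunIdentifySystem) can be sketched in isolation. The `slot_row` struct and its string fields below are stand-ins, not PostgreSQL code — the real GetSlotInformation reads columns out of a libpq PGresult.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Stand-in for one tuple of the READ_REPLICATION_SLOT result
 * (the real code would fetch these through PQgetvalue). */
struct slot_row
{
    const char *slot_type;
    const char *restart_lsn;
    const char *confirmed_lsn;
};

/* Hedged sketch of the suggested API shape: each out parameter may be
 * NULL, in which case that column is simply not extracted.  Returns
 * false when there is no tuple, i.e. the slot is unknown. */
static bool
get_slot_information(const struct slot_row *row,
                     const char **slot_type,
                     const char **restart_lsn,
                     const char **confirmed_lsn)
{
    if (row == NULL)
        return false;
    if (slot_type)
        *slot_type = row->slot_type;
    if (restart_lsn)
        *restart_lsn = row->restart_lsn;
    if (confirmed_lsn)
        *confirmed_lsn = row->confirmed_lsn;
    return true;
}
```

A caller that only cares about the slot type would pass NULL for the two LSN pointers, which is exactly the flexibility the review comment asks for.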
{
"msg_contents": "Le mardi 31 août 2021, 13:17:22 CEST Bharath Rupireddy a écrit :\n> On Mon, Aug 30, 2021 at 3:26 PM Ronan Dunklau <ronan.dunklau@aiven.io> \nwrote:\n> > Thank you for this review ! Please see the other side of the thread where\n> > I\n> > answered Michael Paquier with a new patchset, which includes some of your\n> > proposed modifications.\n> \n> Thanks for the updated patches. Here are some comments on v3-0001\n> patch. I will continue to review 0002 and 0003.\n\nThank you ! I will send a new version shortly, once I address your remarks \nconcerning patch 0002 (and hopefully 0003 :-) ) \n> \n> 1) Missing full stops \".\" at the end.\n> + <literal>logical</literal>\n> + when following the current timeline history\n> + position, when following the current timeline history\n> \n\nGood catch, I will take care of it for the next version.\n\n> 2) Can we have the \"type\" column as \"slot_type\" just to keep in sync\n> with the pg_replication_slots view?\n\nYou're right, it makes more sense like this.\n\n> \n> 3) Can we mention the link to pg_replication_slots view in the columns\n> - \"type\", \"restart_lsn\", \"confirmed_flush_lsn\"?\n> Something like: the \"slot_type\"/\"restart_lsn\"/\"confirmed_flush_lsn\" is\n> same as <link\n> linkend=\"view-pg-replication-slots\"><structname>pg_replication_slots</struc\n> tname></link> view.\n\nSame as above, thanks.\n\n> \n> 4) Can we use \"read_replication_slot\" instead of\n> \"identify_replication_slot\", just to be in sync with the actual\n> command?\n\nThat must have been a leftover from an earlier version of the patch, I will fix \nit also.\n\n> \n> 5) Are you actually copying the slot contents into the slot_contents\n> variable here? 
Isn't it just taking a pointer to the shared memory?\n> + /* Copy slot contents while holding spinlock */\n> + SpinLockAcquire(&slot->mutex);\n> + slot_contents = *slot;\n> + SpinLockRelease(&slot->mutex);\n> + LWLockRelease(ReplicationSlotControlLock);\n> \n> You could just do:\n> + Oid dbid;\n> + XLogRecPtr restart_lsn;\n> + XLogRecPtr confirmed_flush;\n> \n> + /* Copy the required slot contents */\n> + SpinLockAcquire(&slot->mutex);\n> + dbid = slot->data.database;\n> + restart_lsn = slot->data.restart_lsn;\n> + confirmed_flush = slot->data.confirmed_flush;\n> + SpinLockRelease(&slot->mutex);\n> + LWLockRelease(ReplicationSlotControlLock);\n\nIt's probably simpler that way.\n\n> \n> 6) It looks like you are not sending anything as a response to the\n> READ_REPLICATION_SLOT command, if the slot specified doesn't exist.\n> You are just calling end_tup_output which just calls rShutdown (points\n> to donothingCleanup of printsimpleDR)\n> if (has_value)\n> do_tup_output(tstate, values, nulls);\n> end_tup_output(tstate);\n\n> \n> Can you test the use case where the pg_receivewal asks\n> READ_REPLICATION_SLOT with a non-existent replication slot and see\n> with your v3 patch how it behaves?\n\n> \n> Why don't you remove has_value flag and do this in ReadReplicationSlot:\n> Datum values[5];\n> bool nulls[5];\n> MemSet(values, 0, sizeof(values));\n> MemSet(nulls, 0, sizeof(nulls));\n> \n> + dest = CreateDestReceiver(DestRemoteSimple);\n> + tstate = begin_tup_output_tupdesc(dest, tupdesc, &TTSOpsVirtual);\n> + do_tup_output(tstate, values, nulls);\n> + end_tup_output(tstate);\n\nAs for the API, I think it's cleaner to just send an empty result set instead \nof a null tuple in that case, but I won't fight over it if there is consensus on \nhaving an all-nulls tuple value instead. \n\nThere is indeed a bug, but not here, in the second patch: I still test the \nslot type even when we didn't fetch anything. So I will add a test for that \ntoo. 
\n> \n> 7) Why don't we use 2 separate variables \"restart_tli\",\n> \"confirmed_flush_tli\" instead of \"slots_position_timeline\", just to be\n> more informative?\n\nYou're right.\n\n> \n> 8) I still have the question like, how can a client (pg_receivewal for\n> instance) know that it is the current owner/user of the slot it is\n> requesting the info? As I said upthread, why don't we send \"active\"\n> and \"active_pid\" fields of the pg_replication_slots view?\n> Also, it would be good to send the \"wal_status\" field so that the\n> client can check if the \"wal_status\" is not \"lost\"?\n\n As for pg_receivewal, it can only check that it's not active at that time, \nsince we only aquire the replication slot once we know the start_lsn. For the \nbasebackup case it's the same thing as we only want to check if it exists. \nAs such, I didn't add them as I didn't see the need, but if it can be useful \nwhy not ? I will do that in the next version.\n\n> \n> 9) There are 2 new lines at the end of ReadReplicationSlot. 
We give\n> only one new line after each function definition.\n> end_tup_output(tstate);\n> }\n> <<1stnewline>>\n> <<2ndnewline>>\n> /*\n> * Handle TIMELINE_HISTORY command.\n> */\n> \n\nOk !\n\n\n> 10) Why do we need to have two test cases for \"non-existent\" slots?\n> Isn't the test case after \"DROP REPLICATION\" enough?\n> +($ret, $stdout, $stderr) = $node_primary->psql(\n> + 'postgres', 'READ_REPLICATION_SLOT non_existent_slot;',\n> + extra_params => [ '-d', $connstr_rep ]);\n> +ok( $ret == 0,\n> + \"READ_REPLICATION_SLOT does not produce an error with non existent\n> slot\"); +ok( $stdout eq '',\n> + \"READ_REPLICATION_SLOT returns no tuple if a slot is non existent\");\n> \n> You can just rename the test case name from:\n> + \"READ_REPLICATION_SLOT returns no tuple if a slot has been dropped\");\n> to\n> + \"READ_REPLICATION_SLOT returns no tuple if a slot is non existent\");\n> \n\nI wanted to test both the case where no slot by this name exists, and the case \nwhere it has been dropped hence still referenced but marked as not \"in_use\". \nMaybe it's not worth it and we can remove the first case as you suggest.\n\nBest regards,\n\n-- \nRonan Dunklau\n\n\n\n\n",
"msg_date": "Tue, 31 Aug 2021 15:08:47 +0200",
"msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>",
"msg_from_op": true,
"msg_subject": "Re: pg_receivewal starting position"
},
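[Editorial aside: the copy-under-spinlock suggestion in review comment 5 above can be sketched outside the backend. The `SpinLockAcquire`/`SpinLockRelease` helpers below are no-op stand-ins (the real macros compile down to hardware test-and-set), and the trimmed `ReplicationSlot` struct is an assumption kept just large enough to illustrate the pattern.]

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t XLogRecPtr;
typedef int slock_t;    /* stand-in for the real spinlock type */

/* No-op stand-ins so the sketch stays self-contained: they only mark
 * the lock held/free instead of performing an atomic test-and-set. */
static void SpinLockAcquire(slock_t *lock) { *lock = 1; }
static void SpinLockRelease(slock_t *lock) { *lock = 0; }

typedef struct ReplicationSlot
{
    slock_t     mutex;
    struct
    {
        uint32_t    database;
        XLogRecPtr  restart_lsn;
        XLogRecPtr  confirmed_flush;
    }           data;
} ReplicationSlot;

/* Sketch of the review suggestion: copy only the fields the command
 * needs while the per-slot spinlock is held, rather than copying the
 * whole ReplicationSlot struct. */
static void
copy_slot_fields(ReplicationSlot *slot, uint32_t *dbid,
                 XLogRecPtr *restart_lsn, XLogRecPtr *confirmed_flush)
{
    SpinLockAcquire(&slot->mutex);
    *dbid = slot->data.database;
    *restart_lsn = slot->data.restart_lsn;
    *confirmed_flush = slot->data.confirmed_flush;
    SpinLockRelease(&slot->mutex);
}
```

The point of the pattern is that the spinlock is held only for three scalar reads, which keeps the critical section as short as possible.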
{
"msg_contents": "On Mon, Aug 30, 2021 at 11:55:42AM +0200, Ronan Dunklau wrote:\n> Le vendredi 27 août 2021, 05:44:32 CEST Michael Paquier a écrit :\n>> + if (slot == NULL || !slot->in_use) \n>> \n>> [...] +\n>> ereport(ERROR,\n>> + (errcode(ERRCODE_UNDEFINED_OBJECT),\n>> + errmsg(\"replication slot \\\"%s\\\" does not exist\",\n>> + cmd->slotname)));\n>> [...]\n>> + if (PQntuples(res) == 0)\n>> + {\n>> + pg_log_error(\"replication slot %s does not exist\",\n>> slot_name); + PQclear(0);\n>> + return false;\n>> So, the backend and ReadReplicationSlot() report an ERROR if a slot\n>> does not exist but pg_basebackup's GetSlotInformation() does the same\n>> if there are no tuples returned. That's inconsistent. Wouldn't it be\n>> more instinctive to return a NULL tuple instead if the slot does not\n>> exist to be able to check after real ERRORs in frontends using this\n>> interface? \n> \n> The attached patch returns no tuple at all when the replication slot doesn't \n> exist. I'm not sure if that's what you meant by returning a NULL tuple ? \n\nJust return a tuple filled only with NULL values. I would tend to\ncode things so as we set all the flags of nulls[] to true by default,\nremove has_value and define the number of columns in a #define, as of:\n#define READ_REPLICATION_SLOT_COLS 5\n[...]\nDatum values[READ_REPLICATION_SLOT_COLS];\nbool nulls[READ_REPLICATION_SLOT_COLS];\n[...]\nMemSet(nulls, true, READ_REPLICATION_SLOT_COLS * sizeof(bool));\nAssert(i == READ_REPLICATION_SLOT_COLS); // when filling values.\n\nThis would make ReadReplicationSlot() cleaner by removing all the\nelse{} blocks coded now to handle the NULL values, and that would be\nmore in-line with the documentation where we state that one tuple is\nreturned. 
Note that this is the same kind of behavior for similar\nin-core functions where objects are queried if they don't exist.\n\nI would also suggest a reword of some of the docs, say:\n+ <listitem>\n+ <para>\n+ Read the information of a replication slot. Returns a tuple with\n+ <literal>NULL</literal> values if the replication slot does not\n+ exist.\n+ </para>\n\n> \n>> A slot in use exists, so the error is a bit confusing here\n>> anyway, no?\n> \n> From my understanding, a slot *not* in use doesn't exist anymore, as such I \n> don't really understand this point. Could you clarify ?\n\nYeah, sorry about that. I did not recall the exact meaning of\nin_use. Considering the slot as undefined if the flag is false is the\nright thing to do.\n\n> I was thinking that maybe instead of walking back the timeline history from \n> where we currently are on the server, we could allow an additional argument \n> for the client to specify which timeline it wants. But I guess a replication \n> slot can not be present for a past, divergent timeline ? I have removed that \n> suggestion.\n\nThe parent TLI history is linear, so I'd find that a bit strange in\nconcept, FWIW.\n\n>> - 'slot0'\n>> + 'slot0', '-p',\n>> + \"$port\"\n>> Something we are missing here?\n> \n> The thing we're missing here is a wrapper for command_fails_like. I've added \n> this to PostgresNode.pm.\n\nIt may be better to apply this bit separately, then.\n--\nMichael",
"msg_date": "Wed, 1 Sep 2021 09:24:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal starting position"
},
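[Editorial aside: the all-NULLs-by-default tuple pattern Michael describes above (`#define READ_REPLICATION_SLOT_COLS 5`, `MemSet(nulls, true, ...)`) can be illustrated standalone. Column values are plain strings here rather than the backend's `Datum` arrays, and the filled values are illustrative assumptions.]

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

#define READ_REPLICATION_SLOT_COLS 5

/* Sketch: start with every column marked NULL and clear the flag only
 * when a value is actually filled in, so a missing slot naturally
 * yields a tuple of five NULLs instead of needing else{} branches. */
static int
fill_slot_columns(bool slot_exists,
                  const char *values[READ_REPLICATION_SLOT_COLS],
                  bool nulls[READ_REPLICATION_SLOT_COLS])
{
    int i = 0;

    memset(nulls, true, READ_REPLICATION_SLOT_COLS * sizeof(bool));
    memset(values, 0, READ_REPLICATION_SLOT_COLS * sizeof(const char *));

    if (slot_exists)
    {
        values[i] = "physical";     /* slot_type */
        nulls[i] = false;
        i++;
        values[i] = "0/1000000";    /* restart_lsn (illustrative value) */
        nulls[i] = false;
        i++;
        /* ... the remaining columns would be filled the same way,
         * and the real code asserts all columns were visited ... */
    }
    return i;   /* number of columns actually filled */
}
```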
{
"msg_contents": "On Mon, Aug 30, 2021 at 3:26 PM Ronan Dunklau <ronan.dunklau@aiven.io> wrote:\n> > 7) How about we let pg_receivewal use READ_REPLICATION_SLOT as an option?\n>\n> From my point of view, I already expected it to use something like that when\n> using a replication slot. Maybe an option to turn it off could be useful ?\n\nIMO, pg_receivewal should have a way to turn off/on using\nREAD_REPLICATION_SLOT. Imagine if the postgres server doesn't support\nREAD_REPLICATION_SLOT (a lower version) but for some reason\npg_receivewal (running separately) is upgraded to the version that uses\nREAD_REPLICATION_SLOT; knowing that the server doesn't support\nREAD_REPLICATION_SLOT, why should the user let pg_receivewal run\nunnecessary code?\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Wed, 1 Sep 2021 10:30:05 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal starting position"
},
{
"msg_contents": "At Wed, 1 Sep 2021 10:30:05 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> On Mon, Aug 30, 2021 at 3:26 PM Ronan Dunklau <ronan.dunklau@aiven.io> wrote:\n> > > 7) How about we let pg_receivewal use READ_REPLICATION_SLOT as an option?\n> >\n> > From my point of view, I already expected it to use something like that when\n> > using a replication slot. Maybe an option to turn it off could be useful ?\n> \n> IMO, pg_receivewal should have a way to turn off/on using\n> READ_REPLICATION_SLOT. Imagine if the postgres server doesn't support\n> READ_REPLICATION_SLOT (a lower version) but for some reasons the\n> pg_receivewal(running separately) is upgraded to the version that uses\n> READ_REPLICATION_SLOT, knowing that the server doesn't support\n> READ_REPLICATION_SLOT why should user let pg_receivewal run an\n> unnecessary code?\n\nIf I read the patch correctly, the situation above produces the\nfollowing warning, and pg_receivewal then continues to the next step,\ngiving up on identifying the start position from the slot data.\n\n> \"server does not suport fetching the slot's position, resuming from the current server position instead\"\n\n(The message looks a bit too long, though..)\n\nHowever, if the operator doesn't know the server is old, pg_receivewal\nstarts streaming from an unexpected position, which is a kind of\ndisaster. So I'm inclined to agree with Bharath, but rather I imagine\nan option to explicitly specify how to determine the start position.\n\n--start-source=[server,wal,slot] specify starting-LSN source, default is\n trying all of them in the order of wal, slot, server. \n\nI don't think the option needs to accept multiple values at once.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 02 Sep 2021 14:45:54 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal starting position"
},
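[Editorial aside: the `--start-source` flag floated above is only a proposal in this thread; pg_receivewal does not actually take such an option. A hypothetical parser for it, matching the three values Horiguchi names, could look like this.]

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical values for the proposed --start-source option. */
enum StartSource
{
    START_SOURCE_AUTO,      /* default: try wal, then slot, then server */
    START_SOURCE_SERVER,
    START_SOURCE_WAL,
    START_SOURCE_SLOT
};

/* Map the option argument to an enum value; a NULL argument means the
 * option was not given, so the automatic ordering applies. */
static enum StartSource
parse_start_source(const char *arg)
{
    if (arg == NULL)
        return START_SOURCE_AUTO;
    if (strcmp(arg, "server") == 0)
        return START_SOURCE_SERVER;
    if (strcmp(arg, "wal") == 0)
        return START_SOURCE_WAL;
    if (strcmp(arg, "slot") == 0)
        return START_SOURCE_SLOT;
    return START_SOURCE_AUTO;   /* unrecognized: real code would error out */
}
```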
{
"msg_contents": "On Thu, Sep 2, 2021 at 11:15 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Wed, 1 Sep 2021 10:30:05 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in\n> > On Mon, Aug 30, 2021 at 3:26 PM Ronan Dunklau <ronan.dunklau@aiven.io> wrote:\n> > > > 7) How about we let pg_receivewal use READ_REPLICATION_SLOT as an option?\n> > >\n> > > From my point of view, I already expected it to use something like that when\n> > > using a replication slot. Maybe an option to turn it off could be useful ?\n> >\n> > IMO, pg_receivewal should have a way to turn off/on using\n> > READ_REPLICATION_SLOT. Imagine if the postgres server doesn't support\n> > READ_REPLICATION_SLOT (a lower version) but for some reasons the\n> > pg_receivewal(running separately) is upgraded to the version that uses\n> > READ_REPLICATION_SLOT, knowing that the server doesn't support\n> > READ_REPLICATION_SLOT why should user let pg_receivewal run an\n> > unnecessary code?\n>\n> If I read the patch correctly the situation above is warned by the\n> following message then continue to the next step giving up to identify\n> start position from slot data.\n>\n> > \"server does not suport fetching the slot's position, resuming from the current server position instead\"\n>\n> (The message looks a bit too long, though..)\n>\n> However, if the operator doesn't know the server is old, pg_receivewal\n> starts streaming from unexpected position, which is a kind of\n> disaster. 
So I'm inclined to agree to Bharath, but rather I imagine of\n> an option to explicitly specify how to determine the start position.\n>\n> --start-source=[server,wal,slot] specify starting-LSN source, default is\n> trying all of them in the order of wal, slot, server.\n>\n> I don't think the option doesn't need to accept multiple values at once.\n\nIf --start-source = 'wal' fails, then pg_receivewal should show an\nerror saying \"cannot find start position from <<user-specified-wal>>\ndirectory, try with \"server\" or \"slot\" for --start-source\". We might\nend up having similar errors for other options as well. Isn't this going\nto create unnecessary complexity?\n\nThe existing way pg_receivewal fetches the start pos, i.e. first from\nthe wal directory and then from the server start position, isn't known to the\nuser at all, with no verbose message or anything specified in the docs. Why\ndo we need to expose this with the --start-source option? IMO, we can\nkeep it that way and we can just have a way to turn off the new\nbehaviour that we are proposing here, i.e. fetching the start position\nfrom the slot's restart_lsn.\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Thu, 2 Sep 2021 12:12:22 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal starting position"
},
{
"msg_contents": "Le jeudi 2 septembre 2021, 08:42:22 CEST Bharath Rupireddy a écrit :\n> On Thu, Sep 2, 2021 at 11:15 AM Kyotaro Horiguchi\n> \n> <horikyota.ntt@gmail.com> wrote:\n> > At Wed, 1 Sep 2021 10:30:05 +0530, Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote in> \n> > > On Mon, Aug 30, 2021 at 3:26 PM Ronan Dunklau <ronan.dunklau@aiven.io> \nwrote:\n> > > > > 7) How about we let pg_receivewal use READ_REPLICATION_SLOT as an\n> > > > > option?\n> > > > \n> > > > From my point of view, I already expected it to use something like\n> > > > that when using a replication slot. Maybe an option to turn it off\n> > > > could be useful ?> > \n> > > IMO, pg_receivewal should have a way to turn off/on using\n> > > READ_REPLICATION_SLOT. Imagine if the postgres server doesn't support\n> > > READ_REPLICATION_SLOT (a lower version) but for some reasons the\n> > > pg_receivewal(running separately) is upgraded to the version that uses\n> > > READ_REPLICATION_SLOT, knowing that the server doesn't support\n> > > READ_REPLICATION_SLOT why should user let pg_receivewal run an\n> > > unnecessary code?\n> > \n> > If I read the patch correctly the situation above is warned by the\n> > following message then continue to the next step giving up to identify\n> > start position from slot data.\n> > \n> > > \"server does not suport fetching the slot's position, resuming from the\n> > > current server position instead\"> \n> > (The message looks a bit too long, though..)\n> > \n> > However, if the operator doesn't know the server is old, pg_receivewal\n> > starts streaming from unexpected position, which is a kind of\n> > disaster. 
So I'm inclined to agree to Bharath, but rather I imagine of\n> > an option to explicitly specify how to determine the start position.\n> > \n> > --start-source=[server,wal,slot] specify starting-LSN source, default is\n> > \n> > trying all of them in the order of wal, slot, server.\n> > \n> > I don't think the option doesn't need to accept multiple values at once.\n> \n> If --start-source = 'wal' fails, then pg_receivewal should show up an\n> error saying \"cannot find start position from <<user-specified-wal>>\n> directory, try with \"server\" or \"slot\" for --start-source\". We might\n> end having similar errors for other options as well. Isn't this going\n> to create unnecessary complexity?\n> \n> The existing way the pg_receivewal fetches start pos i.e. first from\n> wal directory and then from server start position, isn't known to the\n> user at all, no verbose message or anything specified in the docs. Why\n> do we need to expose this with the --start-source option? IMO, we can\n> keep it that way and we can just have a way to turn off the new\n> behaviour that we are proposing here, i.e.fetching the start position\n> from the slot's restart_lsn.\n\nThen it should probably be documented. We write in the docs that it is \nstrongly recommended to use a replication slot, but do not mention how we \nresume from what has already been processed.\n\nIf someone really cares about having control over how the start position is \ndefined instead of relying on the auto-detection, it would be wiser to add a \n--startpos parameter similar to the endpos one, which would override everything \nelse, instead of different flags for different behaviours.\n\nRegards,\n\n-- \nRonan Dunklau\n\n\n\n\n",
"msg_date": "Thu, 02 Sep 2021 09:02:57 +0200",
"msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>",
"msg_from_op": true,
"msg_subject": "Re: pg_receivewal starting position"
},
{
"msg_contents": "On Thu, Sep 02, 2021 at 02:45:54PM +0900, Kyotaro Horiguchi wrote:\n> At Wed, 1 Sep 2021 10:30:05 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> If I read the patch correctly the situation above is warned by the\n> following message then continue to the next step giving up to identify\n> start position from slot data.\n\nBetter to fall back to the past behavior if attempting to use a\npg_receivewal >= 15 with a PG instance older than 15.\n\n>> \"server does not suport fetching the slot's position, resuming from the current server position instead\"\n> \n> (The message looks a bit too long, though..)\n\nAgreed. Falling back to a warning is not the best answer we can have\nhere, as there could be various failure types, and for some of them a\nhard failure is appropriate:\n- Failure in the backend while running READ_REPLICATION_SLOT. This\nshould imply a hard failure, no?\n- Slot that does not exist. In this case, we could fall back to the\ncurrent write position of the server by default if the slot\ninformation cannot be retrieved.\n\nSomething that's disturbing me in patch 0002 is that we would ignore\nthe results of GetSlotInformation() if any error happens, even if\nthere is a problem in the backend, like an OOM. We should be careful\nabout the semantics here.\n\n> However, if the operator doesn't know the server is old, pg_receivewal\n> starts streaming from unexpected position, which is a kind of\n> disaster. So I'm inclined to agree to Bharath, but rather I imagine of\n> an option to explicitly specify how to determine the start position.\n> \n> --start-source=[server,wal,slot] specify starting-LSN source, default is\n> trying all of them in the order of wal, slot, server. \n> \n> I don't think the option doesn't need to accept multiple values at once.\n\nWhat is the difference between \"wal\" and \"server\"? 
\"wal\" stands for\nthe start position of the set of files stored in the location\ndirectory, and \"server\" is the location that we'd receive from the\nserver? I don't think that we need that because, when using a slot,\nwe know that we can rely on the LSN that the slot retains for\npg_receivewal as that should be the same point as what has been\nstreamed last. Could there be an argument instead for changing the\ndefault and relying on the slot information rather than scanning the\nlocal WAL archive path for the start point when using --slot? When\nusing pg_receivewal as a service, relying on a scan of the WAL archive\ndirectory if there is no slot and falling back to an invalid LSN if there\nis nothing is fine by me, but I think that just relying on the slot\ninformation is saner as the backend makes sure that nothing is\nmissing. That's also more useful when streaming changes from a single\nslot from multiple locations (stream to location 1 with a slot, stop\npg_receivewal, stream to location 2 that completes 1 with the same\nslot).\n\n+ pg_log_error(\"Slot \\\"%s\\\" is not a physical replication slot\",\n+ replication_slot);\nIn 0003, the format of this error is not really project-like.\nSomething like this would perhaps be more appropriate:\n\"cannot use the slot provided, physical slot expected.\"\n\nI am not really convinced about the need of getting the active state\nand the PID used in the backend when fetching the slot data,\nparticularly if that's just for some frontend-side checks. The\nbackend has safeguards already for all that.\n\nWhile looking at that, I have applied de1d4fe to add \nPostgresNode::command_fails_like(), coming from 0003, and put my hands\non 0001 as per the attached, as the starting point. That basically\ncomes down to all the points raised upthread, plus some tweaks for\nthings I bumped into to get the semantics of the command to what looks\nlike the right shape.\n--\nMichael",
"msg_date": "Thu, 2 Sep 2021 16:28:29 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal starting position"
},
{
"msg_contents": "Le jeudi 2 septembre 2021, 09:28:29 CEST Michael Paquier a écrit :\n> On Thu, Sep 02, 2021 at 02:45:54PM +0900, Kyotaro Horiguchi wrote:\n> > At Wed, 1 Sep 2021 10:30:05 +0530, Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote in If I read the patch\n> > correctly the situation above is warned by the following message then\n> > continue to the next step giving up to identify start position from slot\n> > data.\n> \n> Better to fallback to the past behavior if attempting to use a\n> pg_receivewal >= 15 with a PG instance older than 14.\n> \n> >> \"server does not suport fetching the slot's position, resuming from the\n> >> current server position instead\"> \n> > (The message looks a bit too long, though..)\n> \n> Agreed. Falling back to a warning is not the best answer we can have\n> here, as there could be various failure types, and for some of them a\n> hard failure is adapted;\n> - Failure in the backend while running READ_REPLICATION_SLOT. This\n> should imply a hard failure, no?\n> - Slot that does not exist. In this case, we could fall back to the\n> current write position of the server.\n> \n> by default if the slot information cannot be retrieved.\n> Something that's disturbing me in patch 0002 is that we would ignore\n> the results of GetSlotInformation() if any error happens, even if\n> there is a problem in the backend, like an OOM. We should be careful\n> about the semantics here.\n\nOk !\n\n> \n> > However, if the operator doesn't know the server is old, pg_receivewal\n> > starts streaming from unexpected position, which is a kind of\n> > disaster. 
So I'm inclined to agree to Bharath, but rather I imagine of\n> > an option to explicitly specify how to determine the start position.\n> > \n> > --start-source=[server,wal,slot] specify starting-LSN source, default is\n> > \n> > trying all of them in the order of wal, slot, server.\n> > \n> > I don't think the option doesn't need to accept multiple values at once.\n> \n> What is the difference between \"wal\" and \"server\"? \"wal\" stands for\n> the start position of the set of files stored in the location\n> directory, and \"server\" is the location that we'd receive from the\n> server? I don't think that we need that because, when using a slot,\n> we know that we can rely on the LSN that the slot retains for\n> pg_receivewal as that should be the same point as what has been\n> streamed last. Could there be an argument instead for changing the\n> default and rely on the slot information rather than scanning the\n> local WAL archive path for the start point when using --slot? When\n> using pg_receivewal as a service, relying on a scan of the WAL archive\n> directory if there is no slot and fallback to an invalid LSN if there\n> is nothing is fine by me, but I think that just relying on the slot\n> information is saner as the backend makes sure that nothing is\n> missing. That's also more useful when streaming changes from a single\n> slot from multiple locations (stream to location 1 with a slot, stop\n> pg_receivewal, stream to location 2 that completes 1 with the same\n> slot).\n\nOne benefit I see from first trying to get it from the local WAL stream is that \nwe may end up in a state where it has been flushed to disk but we couldn't \nadvance the replication slot. In that case it is better to resume from the \npoint on disk. 
Maybe taking the max(slot_lsn, local_file_lsn) would work best \nfor the use case you're describing.\n\n> \n> + pg_log_error(\"Slot \\\"%s\\\" is not a physical replication slot\",\n> + replication_slot);\n> In 0003, the format of this error is not really project-like.\n> Something like that perhaps would be more adapted:\n> \"cannot use the slot provided, physical slot expected.\"\n> \n> I am not really convinced about the need of getting the active state\n> and the PID used in the backend when fetcing the slot data,\n> particularly if that's just for some frontend-side checks. The\n> backend has safeguards already for all that.\n\nI could see the use for sending active_pid for use within pg_basebackup: at \nleast we could fail early if the slot is already in use. But at the same time, \nmaybe it won't be in use anymore once we need it.\n\n> \n> While looking at that, I have applied de1d4fe to add\n> PostgresNode::command_fails_like(), coming from 0003, and put my hands\n> on 0001 as per the attached, as the starting point. That basically\n> comes down to all the points raised upthread, plus some tweaks for\n> things I bumped into to get the semantics of the command to what looks\n> like the right shape.\n\nThanks, I was about to send a new patchset with basically the same thing. It \nwould be nice to know we work on the same thing concurrently in the future to \navoid duplicate efforts. I'll rebase and send the updated version for patches \n0002 and 0003 of my original proposal once we reach an agreement over the \nbehaviour / options of pg_receivewal.\n\nAlso considering the number of different fields to be filled by the \nGetSlotInformation function, my local branch groups them into a dedicated \nstruct which is more convenient than having X possibly null arguments.\n\n-- \nRonan Dunklau\n\n\n\n\n",
"msg_date": "Thu, 02 Sep 2021 10:08:26 +0200",
"msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>",
"msg_from_op": true,
"msg_subject": "Re: pg_receivewal starting position"
},
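The start-position policy discussed in the message above — prefer whichever of the slot's restart_lsn and the newest LSN found in the local archive is further ahead — can be sketched as a tiny helper. This is an illustrative sketch only, not code from the patch; the function name and the use of 0 to stand in for InvalidXLogRecPtr are assumptions.

```c
#include <stdint.h>

/* XLogRecPtr-like type; 0 stands in for InvalidXLogRecPtr. */
typedef uint64_t XLogRecPtr;
#define InvalidXLogRecPtr ((XLogRecPtr) 0)

/*
 * Hypothetical helper for the policy discussed above: take the
 * further-ahead of the slot's restart_lsn and the newest LSN found
 * in the local archive directory, falling back to the other when
 * one of them is unknown.
 */
static XLogRecPtr
choose_start_lsn(XLogRecPtr slot_lsn, XLogRecPtr local_file_lsn)
{
	if (slot_lsn == InvalidXLogRecPtr)
		return local_file_lsn;
	if (local_file_lsn == InvalidXLogRecPtr)
		return slot_lsn;
	return (slot_lsn > local_file_lsn) ? slot_lsn : local_file_lsn;
}
```

With this shape, the "resume from the point on disk" case and the "slot is ahead of the local files" case both fall out of the same comparison.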
{
"msg_contents": "On Thu, Sep 02, 2021 at 10:08:26AM +0200, Ronan Dunklau wrote:\n> I could see the use for sending active_pid for use within pg_basebackup: at \n> least we could fail early if the slot is already in use. But at the same time, \n> maybe it won't be in use anymore once we need it.\n\nThere is no real concurrent protection with this design. You could\nread that the slot is not active during READ_REPLICATION_SLOT just to\nfind out after in the process of pg_basebackup streaming WAL that it\nbecame in use in-between. And the backend-side protections would kick\nin at this stage.\n\nHmm. The logic doing the decision-making with pg_receivewal may\nbecome more tricky when it comes to pg_replication_slots.wal_status,\nmax_slot_wal_keep_size and pg_replication_slots.safe_wal_size. The\nnumber of cases we'd like to consider impacts directly the amount of\ndata send through READ_REPLICATION_SLOT. That's not really different\nthan deciding of a failure, a success or a retry with active_pid at an\nearlier or a later stage of a base backup. pg_receivewal, on the\ncontrary, can just rely on what the backend tells when it begins\nstreaming. So I'd prefer keeping things simple and limit the number\nof fields a minimum for this command.\n--\nMichael",
"msg_date": "Thu, 2 Sep 2021 17:37:20 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal starting position"
},
{
"msg_contents": "On Thursday, September 2, 2021 at 10:37:20 CEST, Michael Paquier wrote:\n> On Thu, Sep 02, 2021 at 10:08:26AM +0200, Ronan Dunklau wrote:\n> > I could see the use for sending active_pid for use within pg_basebackup:\n> > at\n> > least we could fail early if the slot is already in use. But at the same\n> > time, maybe it won't be in use anymore once we need it.\n> \n> There is no real concurrent protection with this design. You could\n> read that the slot is not active during READ_REPLICATION_SLOT just to\n> find out after in the process of pg_basebackup streaming WAL that it\n> became in use in-between. And the backend-side protections would kick\n> in at this stage.\n> \n> Hmm. The logic doing the decision-making with pg_receivewal may\n> become more tricky when it comes to pg_replication_slots.wal_status,\n> max_slot_wal_keep_size and pg_replication_slots.safe_wal_size. The\n> number of cases we'd like to consider impacts directly the amount of\n> data send through READ_REPLICATION_SLOT. That's not really different\n> than deciding of a failure, a success or a retry with active_pid at an\n> earlier or a later stage of a base backup. pg_receivewal, on the\n> contrary, can just rely on what the backend tells when it begins\n> streaming. So I'd prefer keeping things simple and limit the number\n> of fields a minimum for this command.\n\nOk. Please find attached new patches rebased on master.*\n\n0001 is yours without any modification.\n\n0002 for pg_receivewal tried to simplify the logic of information to return, \nby using a dedicated struct for this. This accounts for Bharath's demands to \nreturn every possible field.\nIn particular, an is_logical field simplifies the detection of the type of slot. \nIn my opinion it makes sense to simplify it like this on the client side while \nbeing more open-minded on the server side if we ever need to provide a new \ntype of slot. Also, GetSlotInformation now returns an enum to be able to \nhandle the different modes of failures, which differ between pg_receivewal and \npg_basebackup. \n\n0003 is the pg_basebackup one, not much changed except for the concerns you \nhad about the log message and handling of different failure modes.\n\nThere is still the concern raised by Bharath about being able to select the \nway to fetch the replication slot information for the user, and what should or \nshould not be included in the documentation. I am in favor of documenting the \nprocess of selecting the wal start, and maybe include a --start-lsn option for \nthe user to override it, but that's maybe for another patch.\n\n-- \nRonan Dunklau",
"msg_date": "Fri, 03 Sep 2021 11:58:27 +0200",
"msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>",
"msg_from_op": true,
"msg_subject": "Re: pg_receivewal starting position"
},
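The message above describes a SlotInformation struct with an is_logical flag and a GetSlotInformation status enum. A minimal sketch of how a client might fold the server's answer into those types is below; the enum values mirror the patch as quoted later in the thread, but classify_slot_reply and its arguments are hypothetical stand-ins for the actual libpq round trip inside GetSlotInformation.

```c
#include <stdbool.h>
#include <string.h>

/* Client-side status codes, mirroring the enum quoted from the patch. */
typedef enum
{
	READ_REPLICATION_SLOT_OK,
	READ_REPLICATION_SLOT_UNSUPPORTED,
	READ_REPLICATION_SLOT_ERROR,
	READ_REPLICATION_SLOT_NONEXISTENT
} ReadReplicationSlotStatus;

/* Minimal stand-in for the slot data a client would keep. */
typedef struct SlotInformation
{
	bool		is_logical;
} SlotInformation;

/*
 * Sketch of the decision logic only: the real code talks to the
 * server with libpq, while here the server's answer is passed in
 * directly (server_version, slot_found, slot_type).
 */
static ReadReplicationSlotStatus
classify_slot_reply(int server_version, bool slot_found,
					const char *slot_type, SlotInformation *slot_info)
{
	/* READ_REPLICATION_SLOT only exists from version 15 on. */
	if (server_version < 150000)
		return READ_REPLICATION_SLOT_UNSUPPORTED;
	if (!slot_found)
		return READ_REPLICATION_SLOT_NONEXISTENT;
	slot_info->is_logical = (strcmp(slot_type, "logical") == 0);
	return READ_REPLICATION_SLOT_OK;
}
```

The point of the enum, as argued in the thread, is that pg_receivewal and pg_basebackup can then react differently to the same reply (retry, exit, or fall back) without re-querying the server.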
{
"msg_contents": "On Fri, Sep 3, 2021 at 3:28 PM Ronan Dunklau <ronan.dunklau@aiven.io> wrote:\n> There is still the concern raised by Bharath about being able to select the\n> way to fetch the replication slot information for the user, and what should or\n> should not be included in the documentation. I am in favor of documenting the\n> process of selecting the wal start, and maybe include a --start-lsn option for\n> the user to override it, but that's maybe for another patch.\n\nLet's hear from others.\n\nThanks for the patches. I have some quick comments on the v5 patch-set:\n\n0001:\n1) Do you also want to MemSet values too in ReadReplicationSlot?\n\n2) When if clause has single statement we don't generally use \"{\" and \"}\"\n+ if (slot == NULL || !slot->in_use)\n+ {\n+ LWLockRelease(ReplicationSlotControlLock);\n+ }\nyou can just have:\n+ if (slot == NULL || !slot->in_use)\n+ LWLockRelease(ReplicationSlotControlLock);\n\n3) This is still not copying the slot contents, as I said earlier you\ncan just take the required info into some local variables instead of\ntaking the slot pointer to slot_contents pointer.\n+ /* Copy slot contents while holding spinlock */\n+ SpinLockAcquire(&slot->mutex);\n+ slot_contents = *slot;\n+ SpinLockRelease(&slot->mutex);\n+ LWLockRelease(ReplicationSlotControlLock);\n\n4) As I said earlier, why can't we have separate tli variables for\nmore readability instead of just one slots_position_timeline and\ntimeline_history variable? And you are not even resetting those 2\nvariables after if\n(!XLogRecPtrIsInvalid(slot_contents.data.restart_lsn)), you might end\nup sending the restart_lsn timelineid for confirmed_flush. So, better\nuse two separate variables. In fact you can use block local variables:\n+ if (!XLogRecPtrIsInvalid(slot_contents.data.restart_lsn))\n+ {\n+ List *tl_history= NIL;\n+ TimeLineID tli;\n+ tl_history= readTimeLineHistory(ThisTimeLineID);\n+ tli = tliOfPointInHistory(slot_contents.data.restart_lsn,\n+ tl_history);\n+ values[i] = Int32GetDatum(tli);\n+ nulls[i] = false;\n+ }\nsimilarly for confirmed_flush.\n\n5) I still don't see the need for below test case:\n+ \"READ_REPLICATION_SLOT does not produce an error with non existent slot\");\nwhen we have\n+ \"READ_REPLICATION_SLOT does not produce an error with dropped slot\");\nBecause for a user, dropped or non existent slot is same, it's just\nthat for dropped slot we internally don't delete its entry from the\nshared memory.\n\n0002:\n1) Imagine GetSlotInformation always returns\nREAD_REPLICATION_SLOT_ERROR, don't you think StreamLog enters an\ninfinite loop there? Instead, why don't you just exit(1); instead of\nreturn; and retry? Similarly for READ_REPLICATION_SLOT_NONEXISTENT? I\nthink, you can just do exit(1), no need to retry.\n+ case READ_REPLICATION_SLOT_ERROR:\n+\n+ /*\n+ * Error has been logged by GetSlotInformation, return and\n+ * maybe retry\n+ */\n+ return;\n\n2) Why is it returning READ_REPLICATION_SLOT_OK when slot_info isn't\npassed? And why are you having this check after you connect to the\nserver and fetch the data?\n+ /* If no slotinformation has been passed, we can return immediately */\n+ if (slot_info == NULL)\n+ {\n+ PQclear(res);\n+ return READ_REPLICATION_SLOT_OK;\n+ }\nInstead you can just have a single assert:\n\n+ Assert(slot_name && slot_info );\n\n3) How about\npg_log_error(\"could not read replication slot:\ninstead of\npg_log_error(\"could not fetch replication slot:\n\n4) Why are you having the READ_REPLICATION_SLOT_OK case in default?\n+ default:\n+ if (slot_info.is_logical)\n+ {\n+ /*\n+ * If the slot is not physical we can't expect to\n+ * recover from that\n+ */\n+ pg_log_error(\"cannot use the slot provided, physical slot expected.\");\n+ exit(1);\n+ }\n+ stream.startpos = slot_info.restart_lsn;\n+ stream.timeline = slot_info.restart_tli;\n+ }\nYou can just have another case statement for READ_REPLICATION_SLOT_OK\nand in the default you can throw an error \"unknown read replication\nslot status\" or some other better working and exit(1);\n\n5) Do you want initialize slot_info to 0?\n+ if (replication_slot)\n+ {\n+ SlotInformation slot_info;\n+ MemSet(slot_info, 0, sizeof(SlotInformation));\n\n6) This isn't readable:\n+ slot_info->is_logical = strcmp(PQgetvalue(res, 0, 0), \"logical\") == 0;\nHow about:\nif (strcmp(PQgetvalue(res, 0, 0), \"logical\") == 0)\n slot_info->is_logical = true;\nYou don't need to set it false, because you would have\nMemSet(slot_info) in the caller.\n\n7) How about\nuint32 hi;\nuint32 lo;\ninstead of\n+ uint32 hi,\n+ lo;\n\n8) Move SlotInformation * slot_info) to the next line as it crosses\nthe 80 char limit.\n+GetSlotInformation(PGconn *conn, const char *slot_name,\nSlotInformation * slot_info)\n\n9) Instead of a boolean is_logical, I would rather suggest to use an\nenum or #define macros the slot types correctly, because someday we\nmight introduce new type slots and somebody wants is_physical kind of\nvariable name?\n+typedef struct SlotInformation {\n+ bool is_logical;\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Fri, 3 Sep 2021 21:19:34 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal starting position"
},
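The hi/lo pairing in review point 7 above refers to the usual way clients split the textual LSN the server returns (`%X/%X` format) into a 64-bit value. A standalone version of that idiom, with hypothetical names and not the patch's actual code, looks like:

```c
#include <stdint.h>
#include <stdio.h>

/*
 * Parse an LSN of the form "hi/lo" (both hexadecimal) into a 64-bit
 * value. Returns 1 on success, 0 on malformed input.
 */
static int
parse_lsn(const char *str, uint64_t *lsn)
{
	unsigned int hi;
	unsigned int lo;

	if (sscanf(str, "%X/%X", &hi, &lo) != 2)
		return 0;				/* malformed LSN text */
	*lsn = ((uint64_t) hi << 32) | lo;
	return 1;
}
```

Keeping the two halves in separate 32-bit variables, as the review suggests, matches how the wire format itself splits the value.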
{
"msg_contents": "On Fri, Sep 03, 2021 at 11:58:27AM +0200, Ronan Dunklau wrote:\n> \n> 0002 for pg_receivewal tried to simplify the logic of information to return, \n> by using a dedicated struct for this. This accounts for Bharath's demands to \n> return every possible field.\n> In particular, an is_logical field simplifies the detection of the type of slot. \n> In my opinion it makes sense to simplify it like this on the client side while \n> being more open-minded on the server side if we ever need to provide a new \n> type of slot. Also, GetSlotInformation now returns an enum to be able to \n> handle the different modes of failures, which differ between pg_receivewal and \n> pg_basebackup. \n\n+ if (PQserverVersion(conn) < 150000)\n+ return READ_REPLICATION_SLOT_UNSUPPORTED;\n[...]\n+typedef enum {\n+ READ_REPLICATION_SLOT_OK,\n+ READ_REPLICATION_SLOT_UNSUPPORTED,\n+ READ_REPLICATION_SLOT_ERROR,\n+ READ_REPLICATION_SLOT_NONEXISTENT\n+} ReadReplicationSlotStatus;\n\nDo you really need this much complication? We could treat the\nunsupported case and the non-existing case the same way: we don't know\nso just assign InvalidXlogRecPtr or NULL to the fields of the\nstructure, and make GetSlotInformation() return false just on error,\nwith some pg_log_error() where adapted in its internals.\n\n> There is still the concern raised by Bharath about being able to select the \n> way to fetch the replication slot information for the user, and what should or \n> should not be included in the documentation. I am in favor of documenting the \n> process of selecting the wal start, and maybe include a --start-lsn option for \n> the user to override it, but that's maybe for another patch.\n\nThe behavior of pg_receivewal that you are implementing should be\ndocumented. We don't say either how the start location is selected\nwhen working on an existing directory, so I would recommend to add a\nparagraph in the description section to detail all that, as of:\n- First a scan of the existing archives is done.\n- If nothing is found, and if there is a slot, request the slot\ninformation.\n- If still nothing (aka slot undefined, or command not supported), use\nthe last flush location.\n\nAs a whole, I am not really convinced that we need a new option for\nthat as long as we rely on a slot with pg_receivewal as these are used\nto make sure that we avoid holes in the WAL archives.\n\nRegarding pg_basebackup, Daniel has proposed a couple of days ago a\ndifferent solution to trap errors earlier, which would cover the case\ndealt with here:\nhttps://www.postgresql.org/message-id/0F69E282-97F9-4DB7-8D6D-F927AA6340C8@yesql.se\nWe should not mimic in the frontend errors that are safely trapped in\nthe backend with the proper locks, in any case.\n\nWhile on it, READ_REPLICATION_SLOT returns a confirmed LSN when\ngrabbing the data of a logical slot. We are not going to use that\nwith pg_recvlogical as by default START_STREAMING 0/0 would just use\nthe confirmed LSN. Do we have use cases where this information would\nhelp? There is the argument of consistency with physical slots and\nthat this can be helpful to do sanity checks for clients, but that's\nrather thin.\n--\nMichael",
"msg_date": "Mon, 6 Sep 2021 13:22:30 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal starting position"
},
{
"msg_contents": "On Monday, September 6, 2021 at 06:22:30 CEST, Michael Paquier wrote:\n> On Fri, Sep 03, 2021 at 11:58:27AM +0200, Ronan Dunklau wrote:\n> > 0002 for pg_receivewal tried to simplify the logic of information to\n> > return, by using a dedicated struct for this. This accounts for Bharath's\n> > demands to return every possible field.\n> > In particular, an is_logical field simplifies the detection of the type of\n> > slot. In my opinion it makes sense to simplify it like this on the client\n> > side while being more open-minded on the server side if we ever need to\n> > provide a new type of slot. Also, GetSlotInformation now returns an enum\n> > to be able to handle the different modes of failures, which differ\n> > between pg_receivewal and pg_basebackup.\n> \n> + if (PQserverVersion(conn) < 150000)\n> + return READ_REPLICATION_SLOT_UNSUPPORTED;\n> [...]\n> +typedef enum {\n> + READ_REPLICATION_SLOT_OK,\n> + READ_REPLICATION_SLOT_UNSUPPORTED,\n> + READ_REPLICATION_SLOT_ERROR,\n> + READ_REPLICATION_SLOT_NONEXISTENT\n> +} ReadReplicationSlotStatus;\n> \n> Do you really need this much complication? We could treat the\n> unsupported case and the non-existing case the same way: we don't know\n> so just assign InvalidXlogRecPtr or NULL to the fields of the\n> structure, and make GetSlotInformation() return false just on error,\n> with some pg_log_error() where adapted in its internals.\n\nI actually started with the implementation you propose, but changed my mind \nwhile writing it because I realised it's easier to reason about like this, \ninstead of failing silently during READ_REPLICATION_SLOT to fail a bit later \nwhen actually trying to start the replication slot because it doesn't exist. \nEither the user expects the replication slot to exist, and in this case we \nshould retry the whole loop in the hope of getting an interesting LSN, or the \nuser doesn't and shouldn't even pass a replication_slot name to begin with.\n\n> \n> > There is still the concern raised by Bharath about being able to select\n> > the\n> > way to fetch the replication slot information for the user, and what\n> > should or should not be included in the documentation. I am in favor of\n> > documenting the process of selecting the wal start, and maybe include a\n> > --start-lsn option for the user to override it, but that's maybe for\n> > another patch.\n> \n> The behavior of pg_receivewal that you are implementing should be\n> documented. We don't say either how the start location is selected\n> when working on an existing directory, so I would recommend to add a\n> paragraph in the description section to detail all that, as of:\n> - First a scan of the existing archives is done.\n> - If nothing is found, and if there is a slot, request the slot\n> information.\n> - If still nothing (aka slot undefined, or command not supported), use\n> the last flush location.\n\nSounds good, I will add another patch for the documentation of this.\n\n> \n> As a whole, I am not really convinced that we need a new option for\n> that as long as we rely on a slot with pg_receivewal as these are used\n> to make sure that we avoid holes in the WAL archives.\n> \n> Regarding pg_basebackup, Daniel has proposed a couple of days ago a\n> different solution to trap errors earlier, which would cover the case\n> dealt with here:\n> https://www.postgresql.org/message-id/0F69E282-97F9-4DB7-8D6D-F927AA6340C8@y\n> esql.se \n\nI will take a look.\n\n> We should not mimic in the frontend errors that are safely trapped\n> in the backend with the proper locks, in any case.\n\nI don't understand what you mean by this ? My original proposal was for the \ncommand to actually attach to the replication slot while reading it, thus \nkeeping a lock on it to prevent dropping or concurrent access on the server.\n> \n> While on it, READ_REPLICATION_SLOT returns a confirmed LSN when\n> grabbing the data of a logical slot. We are not going to use that\n> with pg_recvlogical as by default START_STREAMING 0/0 would just use\n> the confirmed LSN. Do we have use cases where this information would\n> help? There is the argument of consistency with physical slots and\n> that this can be helpful to do sanity checks for clients, but that's\n> rather thin.\n\nIf we don't, we should rename the command to READ_PHYSICAL_REPLICATION_SLOT \nand error out on the server if the slot is not actually a physical one to \nspare the client from performing those checks. I still think it's better to \nsupport both cases as opposed to having two completely different APIs \n(READ_(PHYSICAL)_REPLICATION_SLOT for physical ones on a replication \nconnection, pg_replication_slots view for logical ones) as it would enable \nmore for third-party clients at a relatively low maintenance cost for us. \n\n-- \nRonan Dunklau\n\n\n\n\n",
"msg_date": "Mon, 06 Sep 2021 08:50:21 +0200",
"msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>",
"msg_from_op": true,
"msg_subject": "Re: pg_receivewal starting position"
},
{
"msg_contents": "On Mon, Sep 6, 2021 at 12:20 PM Ronan Dunklau <ronan.dunklau@aiven.io> wrote:\n>\n> On Monday, September 6, 2021 at 06:22:30 CEST, Michael Paquier wrote:\n> > On Fri, Sep 03, 2021 at 11:58:27AM +0200, Ronan Dunklau wrote:\n> > > 0002 for pg_receivewal tried to simplify the logic of information to\n> > > return, by using a dedicated struct for this. This accounts for Bharath's\n> > > demands to return every possible field.\n> > > In particular, an is_logical field simplifies the detection of the type of\n> > > slot. In my opinion it makes sense to simplify it like this on the client\n> > > side while being more open-minded on the server side if we ever need to\n> > > provide a new type of slot. Also, GetSlotInformation now returns an enum\n> > > to be able to handle the different modes of failures, which differ\n> > > between pg_receivewal and pg_basebackup.\n> >\n> > + if (PQserverVersion(conn) < 150000)\n> > + return READ_REPLICATION_SLOT_UNSUPPORTED;\n> > [...]\n> > +typedef enum {\n> > + READ_REPLICATION_SLOT_OK,\n> > + READ_REPLICATION_SLOT_UNSUPPORTED,\n> > + READ_REPLICATION_SLOT_ERROR,\n> > + READ_REPLICATION_SLOT_NONEXISTENT\n> > +} ReadReplicationSlotStatus;\n> >\n> > Do you really need this much complication? We could treat the\n> > unsupported case and the non-existing case the same way: we don't know\n> > so just assign InvalidXlogRecPtr or NULL to the fields of the\n> > structure, and make GetSlotInformation() return false just on error,\n> > with some pg_log_error() where adapted in its internals.\n>\n> I actually started with the implementation you propose, but changed my mind\n> while writing it because I realised it's easier to reason about like this,\n> instead of failing silently during READ_REPLICATION_SLOT to fail a bit later\n> when actually trying to start the replication slot because it doesn't exist.\n> Either the user expects the replication slot to exist, and in this case we\n> should retry the whole loop in the hope of getting an interesting LSN, or the\n> user doesn't and shouldn't even pass a replication_slot name to begin with.\n\nI don't think so we need to retry the whole loop if the\nREAD_REPLICATION_SLOT command fails as pg_receivewal might enter an\ninfinite loop there. IMO, we should just exit(1) if\nREAD_REPLICATION_SLOT fails.\n\n> > > There is still the concern raised by Bharath about being able to select\n> > > the\n> > > way to fetch the replication slot information for the user, and what\n> > > should or should not be included in the documentation. I am in favor of\n> > > documenting the process of selecting the wal start, and maybe include a\n> > > --start-lsn option for the user to override it, but that's maybe for\n> > > another patch.\n> >\n> > The behavior of pg_receivewal that you are implementing should be\n> > documented. We don't say either how the start location is selected\n> > when working on an existing directory, so I would recommend to add a\n> > paragraph in the description section to detail all that, as of:\n> > - First a scan of the existing archives is done.\n> > - If nothing is found, and if there is a slot, request the slot\n> > information.\n> > - If still nothing (aka slot undefined, or command not supported), use\n> > the last flush location.\n>\n> Sounds good, I will add another patch for the documentation of this.\n\n+1.\n\n> > While on it, READ_REPLICATION_SLOT returns a confirmed LSN when\n> > grabbing the data of a logical slot. We are not going to use that\n> > with pg_recvlogical as by default START_STREAMING 0/0 would just use\n> > the confirmed LSN. Do we have use cases where this information would\n> > help? There is the argument of consistency with physical slots and\n> > that this can be helpful to do sanity checks for clients, but that's\n> > rather thin.\n>\n> If we don't, we should rename the command to READ_PHYSICAL_REPLICATION_SLOT\n> and error out on the server if the slot is not actually a physical one to\n> spare the client from performing those checks. I still think it's better to\n> support both cases as opposed to having two completely different APIs\n> (READ_(PHYSICAL)_REPLICATION_SLOT for physical ones on a replication\n> connection, pg_replication_slots view for logical ones) as it would enable\n> more for third-party clients at a relatively low maintenance cost for us.\n\n-1 for READ_PHYSICAL_REPLICATION_SLOT or failing on the server. What\nhappens if we have another slot type \"PHYSIOLOGICAL\" or \"FOO\" or \"BAR\"\nsome other? IMO, READ_REPLICATION_SLOT should just return info of all\nslots. The clients know better how to deal with the slot type.\nAlthough, we don't have a use case for logical slots with the\nREAD_REPLICATION_SLOT command, let's not change it.\n\nIf others are still concerned about the unnecessary slot being\nreturned, you might consider passing in a parameter to\nREAD_REPLICATION_SLOT command, something like below. But this too\nlooks complex to me. I would vote for what the existing patch does\nwith READ_REPLICATION_SLOT.\nREAD_REPLICATION_SLOT 'slot_name' 'physical'; only returns physical\nslot info with the given name.\nREAD_REPLICATION_SLOT 'slot_name' 'logical'; only returns logical slot\nwith the given name.\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Mon, 6 Sep 2021 12:37:29 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal starting position"
},
{
"msg_contents": "On Friday, September 3, 2021 at 17:49:34 CEST, you wrote:\n> On Fri, Sep 3, 2021 at 3:28 PM Ronan Dunklau <ronan.dunklau@aiven.io> wrote:\n> > There is still the concern raised by Bharath about being able to select\n> > the\n> > way to fetch the replication slot information for the user, and what\n> > should or should not be included in the documentation. I am in favor of\n> > documenting the process of selecting the wal start, and maybe include a\n> > --start-lsn option for the user to override it, but that's maybe for\n> > another patch.\n> \n> Let's hear from others.\n> \n> Thanks for the patches. I have some quick comments on the v5 patch-set:\n> \n> 0001:\n> 1) Do you also want to MemSet values too in ReadReplicationSlot?\n> \n> 2) When if clause has single statement we don't generally use \"{\" and \"}\"\n> + if (slot == NULL || !slot->in_use)\n> + {\n> + LWLockRelease(ReplicationSlotControlLock);\n> + }\n> you can just have:\n> + if (slot == NULL || !slot->in_use)\n> + LWLockRelease(ReplicationSlotControlLock);\n> \n> 3) This is still not copying the slot contents, as I said earlier you\n> can just take the required info into some local variables instead of\n> taking the slot pointer to slot_contents pointer.\n> + /* Copy slot contents while holding spinlock */\n> + SpinLockAcquire(&slot->mutex);\n> + slot_contents = *slot;\n> + SpinLockRelease(&slot->mutex);\n> + LWLockRelease(ReplicationSlotControlLock);\n> \n> 4) As I said earlier, why can't we have separate tli variables for\n> more readability instead of just one slots_position_timeline and\n> timeline_history variable? And you are not even resetting those 2\n> variables after if\n> (!XLogRecPtrIsInvalid(slot_contents.data.restart_lsn)), you might end\n> up sending the restart_lsn timelineid for confirmed_flush. So, better\n> use two separate variables. In fact you can use block local variables:\n> + if (!XLogRecPtrIsInvalid(slot_contents.data.restart_lsn))\n> + {\n> + List *tl_history= NIL;\n> + TimeLineID tli;\n> + tl_history= readTimeLineHistory(ThisTimeLineID);\n> + tli = tliOfPointInHistory(slot_contents.data.restart_lsn,\n> + tl_history);\n> + values[i] = Int32GetDatum(tli);\n> + nulls[i] = false;\n> + }\n> similarly for confirmed_flush.\n> \n> 5) I still don't see the need for below test case:\n> + \"READ_REPLICATION_SLOT does not produce an error with non existent slot\");\n> when we have\n> + \"READ_REPLICATION_SLOT does not produce an error with dropped slot\");\n> Because for a user, dropped or non existent slot is same, it's just\n> that for dropped slot we internally don't delete its entry from the\n> shared memory.\n> \n\nThank you for reiterating those concerns. As I said, I haven't touched \nMichael's version of the patch. I was incorporating those changes in my branch \nbefore he sent this, so I'll probably merge both before sending an updated \npatch.\n\n> 0002:\n> 1) Imagine GetSlotInformation always returns\n> READ_REPLICATION_SLOT_ERROR, don't you think StreamLog enters an\n> infinite loop there? Instead, why don't you just exit(1); instead of\n> return; and retry? Similarly for READ_REPLICATION_SLOT_NONEXISTENT? I\n> think, you can just do exit(1), no need to retry.\n> + case READ_REPLICATION_SLOT_ERROR:\n> +\n> + /*\n> + * Error has been logged by GetSlotInformation, return and\n> + * maybe retry\n> + */\n> + return;\n\nThis is the same behaviour we had before: if there is an error with \npg_receivewal we retry the command. There was no special case for the \nreplication slot not existing before, I don't see why we should change it now \n?\n\nEg:\n\n2021-09-06 09:11:07.774 CEST [95853] ERROR: replication slot \"nonexistent\" \ndoes not exist\n2021-09-06 09:11:07.774 CEST [95853] STATEMENT: START_REPLICATION SLOT \n\"nonexistent\" 0/1000000 TIMELINE 1\npg_receivewal: error: could not send replication command \"START_REPLICATION\": \nERROR: replication slot \"nonexistent\" does not exist\npg_receivewal: disconnected; waiting 5 seconds to try again\n\nUsers may rely on it to keep retrying in the background until the slot has \nbeen created for example.\n\n> \n> 2) Why is it returning READ_REPLICATION_SLOT_OK when slot_info isn't\n> passed? And why are you having this check after you connect to the\n> server and fetch the data?\n> + /* If no slotinformation has been passed, we can return immediately */\n> + if (slot_info == NULL)\n> + {\n> + PQclear(res);\n> + return READ_REPLICATION_SLOT_OK;\n> + }\n> Instead you can just have a single assert:\n> \n> + Assert(slot_name && slot_info );\n\nAt first it was so that we didn't have to fill in all required information if we \ndon't need to, but it turns out pg_basebackup also has to check for the \nslot's type. I agree we should not support the null slot_info case anymore.\n\n> \n> 3) How about\n> pg_log_error(\"could not read replication slot:\n> instead of\n> pg_log_error(\"could not fetch replication slot:\n\nOk.\n> \n> 4) Why are you having the READ_REPLICATION_SLOT_OK case in default?\n> + default:\n> + if (slot_info.is_logical)\n> + {\n> + /*\n> + * If the slot is not physical we can't expect to\n> + * recover from that\n> + */\n> + pg_log_error(\"cannot use the slot provided, physical slot expected.\");\n> + exit(1);\n> + }\n> + stream.startpos = slot_info.restart_lsn;\n> + stream.timeline = slot_info.restart_tli;\n> + }\n> You can just have another case statement for READ_REPLICATION_SLOT_OK\n> and in the default you can throw an error \"unknown read replication\n> slot status\" or some other better working and exit(1);\n\nOk.\n\n> \n> 5) Do you want initialize slot_info to 0?\n> + if (replication_slot)\n> + {\n> + SlotInformation slot_info;\n> + MemSet(slot_info, 0, sizeof(SlotInformation));\n> \n> 6) This isn't readable:\n> + slot_info->is_logical = strcmp(PQgetvalue(res, 0, 0), \"logical\") == 0;\n> How about:\n> if (strcmp(PQgetvalue(res, 0, 0), \"logical\") == 0)\n> slot_info->is_logical = true;\n> You don't need to set it false, because you would have\n> MemSet(slot_info) in the caller.\n> \n\nIsn't it preferable to fill in the whole struct without any regard to its \nprior state ? I will modify the is_logical assignment to make it more readable \nthough.\n\n\n> 7) How about\n> uint32 hi;\n> uint32 lo;\n> instead of\n> + uint32 hi,\n> + lo;\n> \n\nOk.\n\n> 8) Move SlotInformation * slot_info) to the next line as it crosses\n> the 80 char limit.\n> +GetSlotInformation(PGconn *conn, const char *slot_name,\n> SlotInformation * slot_info)\n\nOk.\n\n> \n> 9) Instead of a boolean is_logical, I would rather suggest to use an\n> enum or #define macros the slot types correctly, because someday we\n> might introduce new type slots and somebody wants is_physical kind of\n> variable name?\n> +typedef struct SlotInformation {\n> + bool is_logical;\n\nThat's the reason why the READ_REPLICATION_SLOT command returns a slot type as \na string, but the intermediate representation on the client doesn't really \nneed to be concerned about this: it's the client after all and it will be very \neasy to change it locally if we need to. Thinking about it, for our use case \nwe should really use is_physical instead of is_logical. But I guess all of \nthis is moot if we end up deciding not to support the logical case anymore ? \n\nBest regards,\n\n-- \nRonan Dunklau\n\n\n\n\n",
"msg_date": "Mon, 06 Sep 2021 09:15:38 +0200",
"msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>",
"msg_from_op": true,
"msg_subject": "Re: pg_receivewal starting position"
},
{
"msg_contents": "On Mon, Sep 06, 2021 at 12:37:29PM +0530, Bharath Rupireddy wrote:\n> -1 for READ_PHYSICAL_REPLICATION_SLOT or failing on the server. What\n> happens if we have another slot type \"PHYSIOLOGICAL\" or \"FOO\" or \"BAR\"\n> some other? IMO, READ_REPLICATION_SLOT should just return info of all\n> slots. The clients know better how to deal with the slot type.\n> Although, we don't have a use case for logical slots with the\n> READ_REPLICATION_SLOT command, let's not change it.\n\nUsing READ_REPLICATION_SLOT as the command name is fine, and it could\nbe extended with more fields if necessary, implemented now with only\nwhat we think is useful. Returning errors on cases that are still not\nsupported yet is fine, say for logical slots if we decide to reject\nthe case for now, and restrictions can always be lifted in the\nfuture.\n--\nMichael",
"msg_date": "Mon, 6 Sep 2021 16:17:28 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal starting position"
},
{
"msg_contents": "On Mon, Sep 06, 2021 at 04:17:28PM +0900, Michael Paquier wrote:\n> Using READ_REPLICATION_SLOT as the command name is fine, and it could\n> be extended with more fields if necessary, implemented now with only\n> what we think is useful. Returning errors on cases that are still not\n> supported yet is fine, say for logical slots if we decide to reject\n> the case for now, and testrictions can always be lifted in the\n> future.\n\nAnd marked as RwF as this was three weeks ago. Please feel free to\nregister a new entry if this is being worked on.\n--\nMichael",
"msg_date": "Fri, 1 Oct 2021 16:05:18 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal starting position"
},
{
"msg_contents": "Hello,\n\nFollowing recommendations, I stripped most of the features from the patch. For \nnow we support only physical replication slots, and only provide the two fields \nof interest (restart_lsn, restart_tli) in addition to the slot type (fixed at \nphysical) to not paint ourselves in a corner.\n\nI also removed the part about pg_basebackup since other fixes have been \nproposed for that case. \n\nLe vendredi 1 octobre 2021, 09:05:18 CEST Michael Paquier a écrit :\n> On Mon, Sep 06, 2021 at 04:17:28PM +0900, Michael Paquier wrote:\n> > Using READ_REPLICATION_SLOT as the command name is fine, and it could\n> > be extended with more fields if necessary, implemented now with only\n> > what we think is useful. Returning errors on cases that are still not\n> > supported yet is fine, say for logical slots if we decide to reject\n> > the case for now, and testrictions can always be lifted in the\n> > future.\n> \n> And marked as RwF as this was three weeks ago. Please feel free to\n> register a new entry if this is being worked on.\n> --\n> Michael\n\n\n-- \nRonan Dunklau",
"msg_date": "Tue, 19 Oct 2021 17:32:55 +0200",
"msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>",
"msg_from_op": true,
"msg_subject": "Re: pg_receivewal starting position"
},
{
"msg_contents": "On Tue, Oct 19, 2021 at 05:32:55PM +0200, Ronan Dunklau wrote:\n> Following recommendations, I stripped most of the features from the patch. For \n> now we support only physical replication slots, and only provide the two fields \n> of interest (restart_lsn, restart_tli) in addition to the slot type (fixed at \n> physical) to not paint ourselves in a corner.\n> \n> I also removed the part about pg_basebackup since other fixes have been \n> proposed for that case. \n\nPatch 0001 looks rather clean. I have a couple of comments.\n\n+ if (OidIsValid(slot_contents.data.database))\n+ elog(ERROR, \"READ_REPLICATION_SLOT is only supported for physical slots\");\n\nelog() can only be used for internal errors. Errors that can be\ntriggered by a user should use ereport() instead.\n\n+ok($stdout eq '||',\n+ \"READ_REPLICATION_SLOT returns NULL values if slot does not exist\");\n[...]\n+ok($stdout =~ 'physical\\|[^|]*\\|1',\n+ \"READ_REPLICATION_SLOT returns tuple corresponding to the slot\");\nIsn't result pattern matching something we usually test with like()?\n\n+($ret, $stdout, $stderr) = $node_primary->psql(\n+ 'postgres',\n+ \"READ_REPLICATION_SLOT $slotname;\",\n+ extra_params => [ '-d', $connstr_rep ]);\nNo need for extra_params in this test. 
You can just pass down\n\"replication => 1\" instead, no?\n\n--- a/src/test/recovery/t/006_logical_decoding.pl\n+++ b/src/test/recovery/t/006_logical_decoding.pl\n[...]\n+ok($stderr=~ 'READ_REPLICATION_SLOT is only supported for physical slots',\n+ 'Logical replication slot is not supported');\nThis one should use like().\n\n+ <para>\n+ The slot's <literal>restart_lsn</literal> can also beused as a starting\n+ point if the target directory is empty.\n+ </para>\nI am not sure that there is a need for this addition as the same thing\nis said when describing the lookup ordering.\n\n+ If nothing is found and a slot is specified, use the\n+ <command>READ_REPLICATION_SLOT</command>\n+ command.\nIt may be clearer to say that the position is retrieved from the\ncommand.\n\n+bool\n+GetSlotInformation(PGconn *conn, const char *slot_name, XLogRecPtr\n*restart_lsn, TimeLineID* restart_tli)\n+{\nCould you extend that so as we still run the command but don't crash\nif the caller specifies NULL for any of the result fields? 
This would\nbe handy.\n\n+ if (PQgetisnull(res, 0, 0))\n+ {\n+ PQclear(res);\n+ pg_log_error(\"replication slot \\\"%s\\\" does not exist\",\nslot_name);\n+ return false;\n+ }\n+ if (PQntuples(res) != 1 || PQnfields(res) < 3)\n+ {\n+ pg_log_error(\"could not fetch replication slot: got %d rows\nand %d fields, expected %d rows and %d or more fields\",\n+ PQntuples(res), PQnfields(res), 1, 3);\n+ PQclear(res);\n+ return false;\n+ }\nWouldn't it be better to reverse the order of these two checks?\n\nI don't mind the addition of the slot type being part of the result of\nREAD_REPLICATION_SLOT even if it is not mandatory (?), but at least\nGetSlotInformation() should check after it.\n\n+# Setup the slot, and connect to it a first time\n+$primary->run_log(\n+ [ 'pg_receivewal', '--slot', $slot_name, '--create-slot' ],\n+ 'creating a replication slot');\n+$primary->psql('postgres',\n+ 'INSERT INTO test_table VALUES (generate_series(1,100));');\n+$primary->psql('postgres', 'SELECT pg_switch_wal();');\n+$nextlsn =\n+ $primary->safe_psql('postgres', 'SELECT pg_current_wal_insert_lsn();');\n+chomp($nextlsn);\nWouldn't it be simpler to use CREATE_REPLICATION_SLOT with RESERVE_WAL\nhere, rather than going through pg_receivewal? It seems to me that\nthis would be cheaper without really impacting the coverage.\n--\nMichael",
"msg_date": "Wed, 20 Oct 2021 14:13:15 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal starting position"
},
{
"msg_contents": "Le mercredi 20 octobre 2021, 07:13:15 CEST Michael Paquier a écrit :\n> On Tue, Oct 19, 2021 at 05:32:55PM +0200, Ronan Dunklau wrote:\n> > Following recommendations, I stripped most of the features from the patch.\n> > For now we support only physical replication slots, and only provide the\n> > two fields of interest (restart_lsn, restart_tli) in addition to the slot\n> > type (fixed at physical) to not paint ourselves in a corner.\n> > \n> > I also removed the part about pg_basebackup since other fixes have been\n> > proposed for that case.\n> \n> Patch 0001 looks rather clean. I have a couple of comments.\n\nThank you for the quick review !\n\n\n> \n> + if (OidIsValid(slot_contents.data.database))\n> + elog(ERROR, \"READ_REPLICATION_SLOT is only supported for\n> physical slots\");\n> \n> elog() can only be used for internal errors. Errors that can be\n> triggered by a user should use ereport() instead.\n\nOk.\n> \n> +ok($stdout eq '||',\n> + \"READ_REPLICATION_SLOT returns NULL values if slot does not exist\");\n> [...]\n> +ok($stdout =~ 'physical\\|[^|]*\\|1',\n> + \"READ_REPLICATION_SLOT returns tuple corresponding to the slot\");\n> Isn't result pattern matching something we usually test with like()?\n\nOk.\n> \n> +($ret, $stdout, $stderr) = $node_primary->psql(\n> + 'postgres',\n> + \"READ_REPLICATION_SLOT $slotname;\",\n> + extra_params => [ '-d', $connstr_rep ]);\n> No need for extra_params in this test. 
You can just pass down\n> \"replication => 1\" instead, no?\n\nIn that test file, every replication connection is obtained by using \nconnstr_rep so I thought it would be best to use the same thing.\n\n> \n> --- a/src/test/recovery/t/006_logical_decoding.pl\n> +++ b/src/test/recovery/t/006_logical_decoding.pl\n> [...]\n> +ok($stderr=~ 'READ_REPLICATION_SLOT is only supported for physical slots',\n> + 'Logical replication slot is not supported');\n> This one should use like().\n\nOk.\n\n> \n> + <para>\n> + The slot's <literal>restart_lsn</literal> can also beused as a\n> starting + point if the target directory is empty.\n> + </para>\n> I am not sure that there is a need for this addition as the same thing\n> is said when describing the lookup ordering.\n\nOk, removed.\n\n> \n> + If nothing is found and a slot is specified, use the\n> + <command>READ_REPLICATION_SLOT</command>\n> + command.\n> It may be clearer to say that the position is retrieved from the\n> command.\n\nOk, done. The doc also uses the active voice here now.\n\n> \n> +bool\n> +GetSlotInformation(PGconn *conn, const char *slot_name, XLogRecPtr\n> *restart_lsn, TimeLineID* restart_tli)\n> +{\n> Could you extend that so as we still run the command but don't crash\n> if the caller specifies NULL for any of the result fields? 
This would\n> be handy.\n\nDone.\n\n> \n> + if (PQgetisnull(res, 0, 0))\n> + {\n> + PQclear(res);\n> + pg_log_error(\"replication slot \\\"%s\\\" does not exist\",\n> slot_name);\n> + return false;\n> + }\n> + if (PQntuples(res) != 1 || PQnfields(res) < 3)\n> + {\n> + pg_log_error(\"could not fetch replication slot: got %d rows\n> and %d fields, expected %d rows and %d or more fields\",\n> + PQntuples(res), PQnfields(res), 1, 3);\n> + PQclear(res);\n> + return false;\n> + }\n> Wouldn't it be better to reverse the order of these two checks?\n\nYes it is, and the PQntuples condition should be removed from the first error \ntest.\n\n> \n> I don't mind the addition of the slot type being part of the result of\n> READ_REPLICATION_SLOT even if it is not mandatory (?), but at least\n> GetSlotInformation() should check after it.\n\nOk.\n\n> \n> +# Setup the slot, and connect to it a first time\n> +$primary->run_log(\n> + [ 'pg_receivewal', '--slot', $slot_name, '--create-slot' ],\n> + 'creating a replication slot');\n> +$primary->psql('postgres',\n> + 'INSERT INTO test_table VALUES (generate_series(1,100));');\n> +$primary->psql('postgres', 'SELECT pg_switch_wal();');\n> +$nextlsn =\n> + $primary->safe_psql('postgres', 'SELECT pg_current_wal_insert_lsn();');\n> +chomp($nextlsn);\n> Wouldn't it be simpler to use CREATE_REPLICATION_SLOT with RESERVE_WAL\n> here, rather than going through pg_receivewal? It seems to me that\n> this would be cheaper without really impacting the coverage.\n\nYou're right, we can skip two invocations of pg_receivewal like this (for the \nslot creation + for starting the slot a first time).\n\n\n-- \nRonan Dunklau",
"msg_date": "Wed, 20 Oct 2021 11:40:18 +0200",
"msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>",
"msg_from_op": true,
"msg_subject": "Re: pg_receivewal starting position"
},
{
"msg_contents": "Le mercredi 20 octobre 2021 11:40:18 CEST, vous avez écrit :\n> > +# Setup the slot, and connect to it a first time\n> > +$primary->run_log(\n> > + [ 'pg_receivewal', '--slot', $slot_name, '--create-slot' ],\n> > + 'creating a replication slot');\n> > +$primary->psql('postgres',\n> > + 'INSERT INTO test_table VALUES (generate_series(1,100));');\n> > +$primary->psql('postgres', 'SELECT pg_switch_wal();');\n> > +$nextlsn =\n> > + $primary->safe_psql('postgres', 'SELECT pg_current_wal_insert_lsn();');\n> > +chomp($nextlsn);\n> > Wouldn't it be simpler to use CREATE_REPLICATION_SLOT with RESERVE_WAL\n> > here, rather than going through pg_receivewal? It seems to me that\n> > this would be cheaper without really impacting the coverage.\n> \n> You're right, we can skip two invocations of pg_receivewal like this (for\n> the slot creation + for starting the slot a first time).\n\nAfter sending the previous patch suite, I figured it would be worthwhile to \nalso have tests covering timeline switches, which was not covered before.\nSo please find attached a new version with an additional patch for those tests, \ncovering both \"resume from last know archive\" and \"resume from the \nreplication slots position\" cases.\n\n-- \nRonan Dunklau",
"msg_date": "Wed, 20 Oct 2021 14:58:26 +0200",
"msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>",
"msg_from_op": true,
"msg_subject": "Re: pg_receivewal starting position"
},
{
"msg_contents": "On Wed, Oct 20, 2021 at 02:58:26PM +0200, Ronan Dunklau wrote:\n> After sending the previous patch suite, I figured it would be worthwhile to \n> also have tests covering timeline switches, which was not covered before.\n\nThat seems independent to me. I'll take a look.\n\n> So please find attached a new version with an additional patch for those tests, \n> covering both \"resume from last know archive\" and \"resume from the \n> replication slots position\" cases.\n\nSo, taking things in order, I have looked at 0003 and 0001, and\nattached are refined versions for both of them.\n\n0003 is an existing hole in the docs, which I think we had better\naddress first and backpatch, taking into account that the starting\npoint calculation considers compressed segments when looking for\ncompleted segments.\n\nRegarding 0001, I have found the last test to check for NULL values\nreturned by READ_REPLICATION_SLOT after dropping the slot overlaps\nwith the first test, so I have removed that. I have expanded a bit\nthe use of like(), and there were some confusion with\nPostgresNode::psql and some extra arguments (see DROP_REPLICATION_SLOT\nand CREATE_REPLICATION_SLOT, and no need for return values in the \nCREATE case either). 
Some comments, docs and code have been slightly\ntweaked.\n\nHere are some comments about 0002.\n\n+ /* The commpand should always return precisely one tuple */\ns/commpand/command/\n\n+ pg_log_error(\"could not fetch replication slot: got %d rows and %d fields, expected %d rows and %d or more fields\",\n+ PQntuples(res), PQnfields(res), 1, 3);\nShould this be \"could not read\" instead?\n\n+ if (sscanf(PQgetvalue(res, 0, 1), \"%X/%X\", &hi, &lo) != 2)\n+ {\n+ pg_log_error(\"could not parse slot's restart_lsn \\\"%s\\\"\",\n+ PQgetvalue(res, 0, 1));\n+ PQclear(res);\n+ return false;\n+ }\nWouldn't it be saner to initialize *restart_lsn and *restart_tli to\nsome default values at the top of GetSlotInformation() instead, if\nthey are specified by the caller? And I think that we should still\ncomplain even if restart_lsn is NULL.\n\nOn a quick read of 0004, I find the split of the logic with\nchange_timeline() a bit hard to understand. It looks like we should\nbe able to make a cleaner split, but I am not sure how that would\nlook, though.\n--\nMichael",
"msg_date": "Thu, 21 Oct 2021 14:35:08 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal starting position"
},
{
"msg_contents": "Le jeudi 21 octobre 2021, 07:35:08 CEST Michael Paquier a écrit :\n> On Wed, Oct 20, 2021 at 02:58:26PM +0200, Ronan Dunklau wrote:\n> > After sending the previous patch suite, I figured it would be worthwhile\n> > to\n> > also have tests covering timeline switches, which was not covered before.\n> \n> That seems independent to me. I'll take a look.\n> \n> > So please find attached a new version with an additional patch for those\n> > tests, covering both \"resume from last know archive\" and \"resume from\n> > the replication slots position\" cases.\n> \n> So, taking things in order, I have looked at 0003 and 0001, and\n> attached are refined versions for both of them.\n> \n> 0003 is an existing hole in the docs, which I think we had better\n> address first and backpatch, taking into account that the starting\n> point calculation considers compressed segments when looking for\n> completed segments.\n\nOk, do you want me to propose a different patch for previous versions ?\n\n> \n> Regarding 0001, I have found the last test to check for NULL values\n> returned by READ_REPLICATION_SLOT after dropping the slot overlaps\n> with the first test, so I have removed that. I have expanded a bit\n> the use of like(), and there were some confusion with\n> PostgresNode::psql and some extra arguments (see DROP_REPLICATION_SLOT\n> and CREATE_REPLICATION_SLOT, and no need for return values in the\n> CREATE case either). Some comments, docs and code have been slightly\n> tweaked.\n\nThank you for this. 
\n\n\n> \n> Here are some comments about 0002.\n> \n> + /* The commpand should always return precisely one tuple */\n> s/commpand/command/\n> \n> + pg_log_error(\"could not fetch replication slot: got %d rows and %d\n> fields, expected %d rows and %d or more fields\", + \n> PQntuples(res), PQnfields(res), 1, 3);\n> Should this be \"could not read\" instead?\n> \n> + if (sscanf(PQgetvalue(res, 0, 1), \"%X/%X\", &hi, &lo) != 2)\n> + {\n> + pg_log_error(\"could not parse slot's restart_lsn \\\"%s\\\"\",\n> + PQgetvalue(res, 0, 1));\n> + PQclear(res);\n> + return false;\n> + }\n> Wouldn't it be saner to initialize *restart_lsn and *restart_tli to\n> some default values at the top of GetSlotInformation() instead, if\n> they are specified by the caller? \n\nOk.\n\n> And I think that we should still\n> complain even if restart_lsn is NULL.\n\nDo you mean restart_lsn as the pointer argument to the function, or \nrestart_lsn as the field returned by the command ? If it's the first, I'll \nchange it but if it's the latter it is expected that we sometime run this on a \nslot where WAL has never been reserved yet.\n\n> \n> On a quick read of 0004, I find the split of the logic with\n> change_timeline() a bit hard to understand. It looks like we should\n> be able to make a cleaner split, but I am not sure how that would\n> look, though.\n\nThanks, at least if the proposal to test this seems sensible I can move \nforward. I wanted to avoid having a lot of code duplication since the test \nsetup is a bit more complicated. \nMy first approach was to split it into two functions, setup_standby and \nchange_timeline, but then realized that what would happen between the two \ninvocations would basically be the same for the two test cases, so I ended up \nwith that patch. I'll try to see if I can see a better way of organizing that \ncode.\n\n-- \nRonan Dunklau\n\n\n\n\n",
"msg_date": "Thu, 21 Oct 2021 08:29:54 +0200",
"msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>",
"msg_from_op": true,
"msg_subject": "Re: pg_receivewal starting position"
},
{
"msg_contents": "On Thu, Oct 21, 2021 at 08:29:54AM +0200, Ronan Dunklau wrote:\n> Ok, do you want me to propose a different patch for previous versions ?\n\nThat's not necessary. Thanks for proposing.\n\n> Do you mean restart_lsn as the pointer argument to the function, or \n> restart_lsn as the field returned by the command ? If it's the first, I'll \n> change it but if it's the latter it is expected that we sometime run this on a \n> slot where WAL has never been reserved yet.\n\nrestart_lsn as the pointer of the function.\n--\nMichael",
"msg_date": "Thu, 21 Oct 2021 16:21:44 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal starting position"
},
{
"msg_contents": "Le jeudi 21 octobre 2021, 09:21:44 CEST Michael Paquier a écrit :\n> On Thu, Oct 21, 2021 at 08:29:54AM +0200, Ronan Dunklau wrote:\n> > Ok, do you want me to propose a different patch for previous versions ?\n> \n> That's not necessary. Thanks for proposing.\n> \n> > Do you mean restart_lsn as the pointer argument to the function, or\n> > restart_lsn as the field returned by the command ? If it's the first, I'll\n> > change it but if it's the latter it is expected that we sometime run this\n> > on a slot where WAL has never been reserved yet.\n> \n> restart_lsn as the pointer of the function.\n\nDone. I haven't touched the timeline switch test patch for now, but I still \ninclude it here for completeness.\n\n\n\n\n\n-- \nRonan Dunklau",
"msg_date": "Thu, 21 Oct 2021 10:36:42 +0200",
"msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>",
"msg_from_op": true,
"msg_subject": "Re: pg_receivewal starting position"
},
{
"msg_contents": "On Thu, Oct 21, 2021 at 10:36:42AM +0200, Ronan Dunklau wrote:\n> Done. I haven't touched the timeline switch test patch for now, but I still \n> include it here for completeness.\n\nThanks. I have applied and back-patched 0001, then looked again at\n0002 that adds READ_REPLICATION_SLOT:\n- Change the TLI to use int8 rather than int4, so as we will always be\nright with TimelineID which is unsigned (this was discussed upthread\nbut I got back on it after more thoughts, to avoid any future\nissues).\n- Added an extra initialization for the set of Datum values, just as\nan extra safety net.\n- There was a bug with the timeline returned when executing the\ncommand while in recovery as ThisTimeLineID is 0 in the context of a\nstandby, but we need to support the case of physical slots even when\nstreaming archives from a standby. The fix is similar to what we do\nfor IDENTIFY_SYSTEM, where we need to use the timeline currently\nreplayed from GetXLogReplayRecPtr(), before looking at the past\ntimeline history using restart_lsn and the replayed TLI.\n\nWith that in place, I think that we are good now for this part.\n--\nMichael",
"msg_date": "Sat, 23 Oct 2021 16:44:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal starting position"
},
{
"msg_contents": "On Sat, Oct 23, 2021 at 1:14 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Oct 21, 2021 at 10:36:42AM +0200, Ronan Dunklau wrote:\n> > Done. I haven't touched the timeline switch test patch for now, but I still\n> > include it here for completeness.\n>\n> Thanks. I have applied and back-patched 0001, then looked again at\n> 0002 that adds READ_REPLICATION_SLOT:\n> - Change the TLI to use int8 rather than int4, so as we will always be\n> right with TimelineID which is unsigned (this was discussed upthread\n> but I got back on it after more thoughts, to avoid any future\n> issues).\n> - Added an extra initialization for the set of Datum values, just as\n> an extra safety net.\n> - There was a bug with the timeline returned when executing the\n> command while in recovery as ThisTimeLineID is 0 in the context of a\n> standby, but we need to support the case of physical slots even when\n> streaming archives from a standby. The fix is similar to what we do\n> for IDENTIFY_SYSTEM, where we need to use the timeline currently\n> replayed from GetXLogReplayRecPtr(), before looking at the past\n> timeline history using restart_lsn and the replayed TLI.\n>\n> With that in place, I think that we are good now for this part.\n\nThanks for the updated patch. I have following comments on v10:\n\n1) It's better to initialize nulls with false, we can avoid setting\nthem to true. The instances where the columns are not nulls is going\nto be more than the columns with null values, so we could avoid some\nof nulls[i] = false; instructions.\n+ MemSet(nulls, true, READ_REPLICATION_SLOT_COLS * sizeof(bool));\nI suggest we do the following. 
The number of instances of hitting the\n\"else\" parts will be less.\nMemSet(nulls, false, READ_REPLICATION_SLOT_COLS * sizeof(bool));\n\n if (slot == NULL || !slot->in_use)\n {\n MemSet(nulls, true, READ_REPLICATION_SLOT_COLS * sizeof(bool));\n LWLockRelease(ReplicationSlotControlLock);\n }\n\nif (!XLogRecPtrIsInvalid(slot_contents.data.restart_lsn))\n{\n}\nelse\n nulls[i] = true;\n\nif (!XLogRecPtrIsInvalid(slot_contents.data.restart_lsn))\n{\n}\nelse\n nulls[i] = true;\n\n2) As I previously mentioned, we are not copying the slot contents\nwhile holding the spinlock, it's just we are taking the memory address\nand releasing the lock, so there is a chance that the memory we are\nlooking at can become unavailable or stale while we access\nslot_contents. So, I suggest we do the memcpy of the *slot to\nslot_contents. I'm sure the memcpy ing the entire ReplicationSlot\ncontents will be costlier, so let's just take the info that we need\n(data.database, data.restart_lsn) into local variables while we hold\nthe spin lock\n+ /* Copy slot contents while holding spinlock */\n+ SpinLockAcquire(&slot->mutex);\n+ slot_contents = *slot;\n+ SpinLockRelease(&slot->mutex);\n+ LWLockRelease(ReplicationSlotControlLock);\n\nThe code will look like following:\n Oid database;\n XLogRecPtr restart_lsn;\n\n /* Take required information from slot contents while holding\nspinlock */\n SpinLockAcquire(&slot->mutex);\n database= slot->data.database;\n restart_lsn= slot->data.restart_lsn;\n SpinLockRelease(&slot->mutex);\n LWLockRelease(ReplicationSlotControlLock);\n\n3) The data that the new command returns to the client can actually\nbecome stale while it is captured and in transit to the client as we\nrelease the spinlock and other backends can drop or alter the info.\nSo, it's better we talk about this in the documentation of the new\ncommand and also in the comments saying \"clients will have to deal\nwith it.\"\n\n4) How about we be more descriptive about the error added? 
This will\nhelp identify for which replication slot the command has failed from\ntons of server logs which really help in debugging and analysis.\nI suggest we have this:\nerrmsg(\"cannot use \\\"%s\\\" command with logical replication slot\n\\\"%s\\\"\", \"READ_REPLICATION_SLOT\", cmd->slotname);\ninstead of just a plain, non-informative, generic message:\nerrmsg(\"cannot use \\\"%s\\\" with logical replication slots\",\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Sat, 23 Oct 2021 22:46:30 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal starting position"
},
{
"msg_contents": "On Sat, Oct 23, 2021 at 10:46:30PM +0530, Bharath Rupireddy wrote:\n> 1) It's better to initialize nulls with false, we can avoid setting\n> them to true. The instances where the columns are not nulls is going\n> to be more than the columns with null values, so we could avoid some\n> of nulls[i] = false; instructions.\n\nI don't think that this is an improvement in this case as in the\ndefault case we'd return a tuple full of NULL values if the slot does\nnot exist, so the existing code is simpler when we don't look at the\nslot contents.\n\n> 2) As I previously mentioned, we are not copying the slot contents\n> while holding the spinlock, it's just we are taking the memory address\n> and releasing the lock, so there is a chance that the memory we are\n> looking at can become unavailable or stale while we access\n> slot_contents. So, I suggest we do the memcpy of the *slot to\n> slot_contents. I'm sure the memcpy ing the entire ReplicationSlot\n> contents will be costlier, so let's just take the info that we need\n> (data.database, data.restart_lsn) into local variables while we hold\n> the spin lock\n\nThe style of the patch is consistent with what we do in other areas\n(see pg_get_replication_slots as one example).\n\n> + /* Copy slot contents while holding spinlock */\n> + SpinLockAcquire(&slot->mutex);\n> + slot_contents = *slot;\n\nAnd what this does is to copy the contents of the slot into a local\narea (note that we use a NameData pointing to an area with\nNAMEDATALEN). 
Aka if the contents of *slot are changed by whatever\nreason (this cannot change as of the LWLock acquired), what we have\nsaved is unchanged as of this command's context.\n\n> 3) The data that the new command returns to the client can actually\n> become stale while it is captured and in transit to the client as we\n> release the spinlock and other backends can drop or alter the info.\n> So, it's better we talk about this in the documentation of the new\n> command and also in the comments saying \"clients will have to deal\n> with it.\"\n\nThe same can be said with IDENTIFY_SYSTEM when the flushed location\nbecomes irrelevant. I am not sure that this has any need to apply\nhere. We could add that this is useful to get a streaming start\nposition though.\n\n> 4) How about we be more descriptive about the error added? This will\n> help identify for which replication slot the command has failed from\n> tons of server logs which really help in debugging and analysis.\n> I suggest we have this:\n> errmsg(\"cannot use \\\"%s\\\" command with logical replication slot\n> \\\"%s\\\"\", \"READ_REPLICATION_SLOT\", cmd->slotname);\n> instead of just a plain, non-informative, generic message:\n> errmsg(\"cannot use \\\"%s\\\" with logical replication slots\",\n\nYeah. I don't mind adding the slot name in this string.\n--\nMichael",
"msg_date": "Sun, 24 Oct 2021 08:10:44 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal starting position"
},
{
"msg_contents": "On Sun, Oct 24, 2021 at 4:40 AM Michael Paquier <michael@paquier.xyz> wrote:\n> On Sat, Oct 23, 2021 at 10:46:30PM +0530, Bharath Rupireddy wrote:\n> > 2) As I previously mentioned, we are not copying the slot contents\n> > while holding the spinlock, it's just we are taking the memory address\n> > and releasing the lock, so there is a chance that the memory we are\n> > looking at can become unavailable or stale while we access\n> > slot_contents. So, I suggest we do the memcpy of the *slot to\n> > slot_contents. I'm sure the memcpy ing the entire ReplicationSlot\n> > contents will be costlier, so let's just take the info that we need\n> > (data.database, data.restart_lsn) into local variables while we hold\n> > the spin lock\n>\n> The style of the patch is consistent with what we do in other areas\n> (see pg_get_replication_slots as one example).\n>\n> > + /* Copy slot contents while holding spinlock */\n> > + SpinLockAcquire(&slot->mutex);\n> > + slot_contents = *slot;\n>\n> And what this does is to copy the contents of the slot into a local\n> area (note that we use a NameData pointing to an area with\n> NAMEDATALEN). Aka if the contents of *slot are changed by whatever\n> reason (this cannot change as of the LWLock acquired), what we have\n> saved is unchanged as of this command's context.\n\npg_get_replication_slots holds the ReplicationSlotControlLock until\nthe end of the function so it can be assured that *slot contents will\nnot change. In ReadReplicationSlot, the ReplicationSlotControlLock is\nreleased immediately after taking *slot pointer into slot_contents.\nIsn't it better if we hold the lock until the end of the function so\nthat we can avoid the slot contents becoming stale problems?\n\nHaving said that, the ReplicationSlotCreate,\nSearchNamedReplicationSlot (when need_lock is true) etc. release the\nlock immediately. 
I'm not sure if I should ignore this staleness\nproblem in ReadReplicationSlot.\n\n> > 3) The data that the new command returns to the client can actually\n> > become stale while it is captured and in transit to the client as we\n> > release the spinlock and other backends can drop or alter the info.\n> > So, it's better we talk about this in the documentation of the new\n> > command and also in the comments saying \"clients will have to deal\n> > with it.\"\n>\n> The same can be said with IDENTIFY_SYSTEM when the flushed location\n> becomes irrelevant. I am not sure that this has any need to apply\n> here. We could add that this is useful to get a streaming start\n> position though.\n\nI think that's okay, let's not make any changes or add any comments in\nregards to the above. The client is basically bound to get the\nsnapshot of the data at the time it requests the database.\n\n> > 4) How about we be more descriptive about the error added? This will\n> > help identify for which replication slot the command has failed from\n> > tons of server logs which really help in debugging and analysis.\n> > I suggest we have this:\n> > errmsg(\"cannot use \\\"%s\\\" command with logical replication slot\n> > \\\"%s\\\"\", \"READ_REPLICATION_SLOT\", cmd->slotname);\n> > instead of just a plain, non-informative, generic message:\n> > errmsg(\"cannot use \\\"%s\\\" with logical replication slots\",\n>\n> Yeah. I don't mind adding the slot name in this string.\n\nThanks.\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Sun, 24 Oct 2021 09:08:01 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal starting position"
},
{
"msg_contents": "On Sun, Oct 24, 2021 at 09:08:01AM +0530, Bharath Rupireddy wrote:\n> pg_get_replication_slots holds the ReplicationSlotControlLock until\n> the end of the function so it can be assured that *slot contents will\n> not change. In ReadReplicationSlot, the ReplicationSlotControlLock is\n> released immediately after taking *slot pointer into slot_contents.\n> Isn't it better if we hold the lock until the end of the function so\n> that we can avoid the slot contents becoming stale problems?\n\nThe reason is different in the case of pg_get_replication_slots(). We\nhave to hold ReplicationSlotControlLock for the whole duration of the\nshared memory scan to return a consistent set of\ninformation to the user, for all the slots.\n--\nMichael",
"msg_date": "Sun, 24 Oct 2021 15:51:06 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal starting position"
},
{
"msg_contents": "On Sun, Oct 24, 2021 at 12:21 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sun, Oct 24, 2021 at 09:08:01AM +0530, Bharath Rupireddy wrote:\n> > pg_get_replication_slots holds the ReplicationSlotControlLock until\n> > the end of the function so it can be assured that *slot contents will\n> > not change. In ReadReplicationSlot, the ReplicationSlotControlLock is\n> > released immediately after taking *slot pointer into slot_contents.\n> > Isn't it better if we hold the lock until the end of the function so\n> > that we can avoid the slot contents becoming stale problems?\n>\n> The reason is different in the case of pg_get_replication_slots(). We\n> have to hold ReplicationSlotControlLock for the whole duration of the\n> shared memory scan to return back to the user a consistent set of\n> information to the user, for all the slots.\n\nThanks. I've no further comments on the v10 patch.\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Sun, 24 Oct 2021 19:20:57 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal starting position"
},
{
"msg_contents": "On Sun, Oct 24, 2021 at 07:20:57PM +0530, Bharath Rupireddy wrote:\n> Thanks. I've no further comments on the v10 patch.\n\nOkay, thanks. I have applied this part, then.\n--\nMichael",
"msg_date": "Mon, 25 Oct 2021 07:43:11 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal starting position"
},
{
"msg_contents": "On Thu, Oct 21, 2021 at 10:36:42AM +0200, Ronan Dunklau wrote:\n> Done. I haven't touched the timeline switch test patch for now, but I still \n> include it here for completeness.\n\nAs 0001 and 0002 have been applied, I have put my hands on 0003, and\nfound a couple of issues upon review.\n\n+ Assert(slot_name != NULL);\n+ Assert(restart_lsn != NULL);\nThere is no need for those asserts, as we should support the case\nwhere the caller gives NULL for those variables.\n\n+ if (PQserverVersion(conn) < 150000)\n+ return false;\nReturning false is incorrect for older server versions as we won't\nfallback to the old method when streaming from older server. What\nthis needs to do is return true and set restart_lsn to\nInvalidXLogRecPtr, so as pg_receivewal would just stream from the\ncurrent flush location. \"false\" should just be returned on error,\nwith pg_log_error().\n\n+$primary->psql('postgres',\n+ 'INSERT INTO test_table VALUES (generate_series(1,100));');\n+$primary->psql('postgres', 'SELECT pg_switch_wal();');\n+$nextlsn =\n+ $primary->safe_psql('postgres', 'SELECT pg_current_wal_insert_lsn();');\n+chomp($nextlsn);\nThere is no need to switch twice to a new WAL segment as we just need\nto be sure that the WAL segment of the restart_lsn is the one\narchived. Note that RESERVE_WAL uses the last redo point, so it is\nbetter to use a checkpoint and reduce the number of logs we stream\ninto the new location.\n\nBetter to add some --no-sync to the new commands of pg_receivewal, to\nnot stress the I/O more than necessary. I have added some extra -n\nwhile on it to avoid loops on failure.\n\nAttached is the updated patch I am finishing with, which is rather\nclean now. I have tweaked a couple of things while on it, and\ndocumented better the new GetSlotInformation() in streamutil.c.\n--\nMichael",
"msg_date": "Mon, 25 Oct 2021 15:51:23 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal starting position"
},
{
"msg_contents": "Le lundi 25 octobre 2021, 08:51:23 CEST Michael Paquier a écrit :\n> On Thu, Oct 21, 2021 at 10:36:42AM +0200, Ronan Dunklau wrote:\n> > Done. I haven't touched the timeline switch test patch for now, but I\n> > still\n> > include it here for completeness.\n> \n> As 0001 and 0002 have been applied, I have put my hands on 0003, and\n> found a couple of issues upon review.\n> \n> + Assert(slot_name != NULL);\n> + Assert(restart_lsn != NULL);\n> There is no need for those asserts, as we should support the case\n> where the caller gives NULL for those variables.\n\nDoes it make sense though ? The NULL slot_name case handling is pretty \nstraightforward as it will be handled by string formatting, but in the case \nof a null restart_lsn, we have no way of knowing if the command was issued at \nall. \n\n> \n> + if (PQserverVersion(conn) < 150000)\n> + return false;\n> Returning false is incorrect for older server versions as we won't\n> fallback to the old method when streaming from older server. What\n> this needs to do is return true and set restart_lsn to\n> InvalidXLogRecPtr, so as pg_receivewal would just stream from the\n> current flush location. \"false\" should just be returned on error,\n> with pg_log_error().\n\nThank you, this was an oversight when moving from the more complicated error \nhandling code. \n\n> \n> +$primary->psql('postgres',\n> + 'INSERT INTO test_table VALUES (generate_series(1,100));');\n> +$primary->psql('postgres', 'SELECT pg_switch_wal();');\n> +$nextlsn =\n> + $primary->safe_psql('postgres', 'SELECT pg_current_wal_insert_lsn();');\n> +chomp($nextlsn);\n> There is no need to switch twice to a new WAL segment as we just need\n> to be sure that the WAL segment of the restart_lsn is the one\n> archived. 
Note that RESERVE_WAL uses the last redo point, so it is\n> better to use a checkpoint and reduce the number of logs we stream\n> into the new location.\n> \n> Better to add some --no-sync to the new commands of pg_receivewal, to\n> not stress the I/O more than necessary. I have added some extra -n\n> while on it to avoid loops on failure.\n> \n> Attached is the updated patch I am finishing with, which is rather\n> clean now. I have tweaked a couple of things while on it, and\n> documented better the new GetSlotInformation() in streamutil.c.\n> --\n> Michael\n\n\n-- \nRonan Dunklau\n\n\n\n\n",
"msg_date": "Mon, 25 Oct 2021 09:15:32 +0200",
"msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>",
"msg_from_op": true,
"msg_subject": "Re: pg_receivewal starting position"
},
{
"msg_contents": "Le lundi 25 octobre 2021, 00:43:11 CEST Michael Paquier a écrit :\n> On Sun, Oct 24, 2021 at 07:20:57PM +0530, Bharath Rupireddy wrote:\n> > Thanks. I've no further comments on the v10 patch.\n> \n> Okay, thanks. I have applied this part, then.\n> --\n> Michael\n\nThank you all for your work on this patch. \n\n-- \nRonan Dunklau\n\n\n\n\n",
"msg_date": "Mon, 25 Oct 2021 09:16:16 +0200",
"msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>",
"msg_from_op": true,
"msg_subject": "Re: pg_receivewal starting position"
},
{
"msg_contents": "On Mon, Oct 25, 2021 at 09:15:32AM +0200, Ronan Dunklau wrote:\n> Does it make sense though ? The NULL slot_name case handling is pretty \n> straight forward has it will be handled by string formatting, but in the case \n> of a null restart_lsn, we have no way of knowing if the command was issued at \n> all.\n\nIf I am following your point, I don't think that it matters much here,\nand it seems useful to me to be able to pass NULL for both of them, so\nas one can check if the slot exists or not with an API designed this\nway.\n--\nMichael",
"msg_date": "Mon, 25 Oct 2021 16:40:10 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal starting position"
},
{
"msg_contents": "Le lundi 25 octobre 2021, 09:40:10 CEST Michael Paquier a écrit :\n> On Mon, Oct 25, 2021 at 09:15:32AM +0200, Ronan Dunklau wrote:\n> > Does it make sense though ? The NULL slot_name case handling is pretty\n> > straight forward has it will be handled by string formatting, but in the\n> > case of a null restart_lsn, we have no way of knowing if the command was\n> > issued at all.\n> \n> If I am following your point, I don't think that it matters much here,\n> and it seems useful to me to be able to pass NULL for both of them, so\n> as one can check if the slot exists or not with an API designed this\n> way.\n\nYou're right, but I'm afraid we would have to check the server version twice \nin any case different from the basic pg_receivewal on (once in the function \nitself, and one before calling it if we want a meaningful result). Maybe we \nshould move the version check outside the GetSlotInformation function to avoid \nthis, and let it fail with a syntax error when the server doesn't support it ?\n\n-- \nRonan Dunklau\n\n\n\n\n",
"msg_date": "Mon, 25 Oct 2021 09:50:01 +0200",
"msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>",
"msg_from_op": true,
"msg_subject": "Re: pg_receivewal starting position"
},
{
"msg_contents": "On Mon, Oct 25, 2021 at 09:50:01AM +0200, Ronan Dunklau wrote:\n> Le lundi 25 octobre 2021, 09:40:10 CEST Michael Paquier a écrit :\n>> On Mon, Oct 25, 2021 at 09:15:32AM +0200, Ronan Dunklau wrote:\n>>> Does it make sense though ? The NULL slot_name case handling is pretty\n>>> straight forward has it will be handled by string formatting, but in the\n>>> case of a null restart_lsn, we have no way of knowing if the command was\n>>> issued at all.\n>> \n>> If I am following your point, I don't think that it matters much here,\n>> and it seems useful to me to be able to pass NULL for both of them, so\n>> as one can check if the slot exists or not with an API designed this\n>> way.\n> \n> You're right, but I'm afraid we would have to check the server version twice \n> in any case different from the basic pg_receivewal on (once in the function \n> itself, and one before calling it if we want a meaningful result). Maybe we \n> should move the version check outside the GetSlotInformation function to avoid \n> this, and let it fail with a syntax error when the server doesn't support it ?\n\nWith the approach taken by the patch, we fall down silently to the\nprevious behavior if we connect to a server <= 14, and rely on the new\nbehavior with a server >= 15, ensuring compatibility. Why would you\nwant to make sure that the command is executed when we should just\nenforce that the old behavior is what happens when there is a slot\ndefined and a backend <= 14?\n--\nMichael",
"msg_date": "Mon, 25 Oct 2021 17:10:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal starting position"
},
{
"msg_contents": "Le lundi 25 octobre 2021, 10:10:13 CEST Michael Paquier a écrit :\n> On Mon, Oct 25, 2021 at 09:50:01AM +0200, Ronan Dunklau wrote:\n> > Le lundi 25 octobre 2021, 09:40:10 CEST Michael Paquier a écrit :\n> >> On Mon, Oct 25, 2021 at 09:15:32AM +0200, Ronan Dunklau wrote:\n> >>> Does it make sense though ? The NULL slot_name case handling is pretty\n> >>> straight forward has it will be handled by string formatting, but in the\n> >>> case of a null restart_lsn, we have no way of knowing if the command was\n> >>> issued at all.\n> >> \n> >> If I am following your point, I don't think that it matters much here,\n> >> and it seems useful to me to be able to pass NULL for both of them, so\n> >> as one can check if the slot exists or not with an API designed this\n> >> way.\n> > \n> > You're right, but I'm afraid we would have to check the server version\n> > twice in any case different from the basic pg_receivewal on (once in the\n> > function itself, and one before calling it if we want a meaningful\n> > result). Maybe we should move the version check outside the\n> > GetSlotInformation function to avoid this, and let it fail with a syntax\n> > error when the server doesn't support it ?\n> With the approach taken by the patch, we fall down silently to the\n> previous behavior if we connect to a server <= 14, and rely on the new\n> behavior with a server >= 15, ensuring compatibility. Why would you\n> want to make sure that the command is executed when we should just\n> enforce that the old behavior is what happens when there is a slot\n> defined and a backend <= 14?\n\nSorry I haven't been clear. 
For the use case of this patch, the current \napproach is perfect (and we supply the restart_lsn).\n\nHowever, if we want to support the case of \"just check if the slot exists\", we \nneed to make sure the command is actually executed, and check the version \nbefore calling the function, which would make the check executed twice.\n\nWhat I'm proposing is just that we leave the responsibility of checking \nPQServerVersion() to the caller, and remove it from GetSlotInformation, ie:\n\n- if (replication_slot != NULL)\n+ if (replication_slot != NULL && PQserverVersion(conn) >= \n150000)\n {\n if (!GetSlotInformation(conn, replication_slot, \n&stream.startpos,\n &stream.timeline))\n\nThat way, if we introduce a caller wanting to use this function as an API to \ncheck a slot exists, the usage of checking the server version beforehand will \nbe consistent.\n\n\n\n-- \nRonan Dunklau\n\n\n\n\n",
"msg_date": "Mon, 25 Oct 2021 10:24:46 +0200",
"msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>",
"msg_from_op": true,
"msg_subject": "Re: pg_receivewal starting position"
},
{
"msg_contents": "On Mon, Oct 25, 2021 at 10:24:46AM +0200, Ronan Dunklau wrote:\n> However, if we want to support the case of \"just check if the slot exists\", we \n> need to make sure the command is actually executed, and check the version \n> before calling the function, which would make the check executed twice.\n> \n> What I'm proposing is just that we let the responsibility of checking \n> PQServerVersion() to the caller, and remove it from GetSlotInformation, ie:\n> \n> - if (replication_slot != NULL)\n> + if (replication_slot != NULL && PQserverVersion(conn) >= \n> 150000)\n> {\n> if (!GetSlotInformation(conn, replication_slot, \n> &stream.startpos,\n> &stream.timeline))\n> \n> That way, if we introduce a caller wanting to use this function as an API to \n> check a slot exists, the usage of checking the server version beforehand will \n> be consistent.\n\nAh, good point. My apologies for not following. Indeed, the patch\nmakes this part of the routine a bit blurry. It is fine by me to do\nas you suggest, and let the caller do the version check as you\npropose, while making the routine fail if directly called for an older\nserver.\n--\nMichael",
"msg_date": "Mon, 25 Oct 2021 17:57:49 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal starting position"
},
{
"msg_contents": "On Mon, Oct 25, 2021 at 12:21 PM Michael Paquier <michael@paquier.xyz> wrote:\n> Attached is the updated patch I am finishing with, which is rather\n> clean now. I have tweaked a couple of things while on it, and\n> documented better the new GetSlotInformation() in streamutil.c.\n\nThanks for the v11 patch, here are some comments:\n\n1) Remove the extra whitespace in between \"t\" and \":\"\n+ pg_log_error(\"could not read replication slot : %s\",\n\n2) I think we should tweak the below error message\n+ pg_log_error(\"could not read replication slot \\\"%s\\\": %s\",\n slot_name, PQerrorMessage(conn));\nto\n pg_log_error(\"could not read replication slot \\\"%s\\\": %s\",\n \"IDENTIFY_SYSTEM\", PQerrorMessage(conn));\nHaving the slot name in the error message helps to isolate the error\nmessage from tons of server logs that get generated.\n\n3) Also for the same reasons stated as above, change the below error message\npg_log_error(\"could not read replication slot: got %d rows and %d\nfields, expected %d rows and %d or more fields\",\nto\npg_log_error(\"could not read replication slot \\\"%s\\\": got %d rows and\n%d fields, expected %d rows and %d or more fields\", slot_name,....\n\n4) Also for the same reasons, change below\n+ pg_log_error(\"could not parse slot's restart_lsn \\\"%s\\\"\",\nto\npg_log_error(\"could not parse replication slot \\\"%s\\\" restart_lsn \\\"%s\\\"\",\n slot_name, PQgetvalue(res, 0, 1));\n\n5) I think we should also have assertion for the timeline id:\n Assert(stream.startpos != InvalidXLogRecPtr);\n Assert(stream.timeline!= 0);\n\n6) Why do we need these two assignments?\n+ if (*restart_lsn)\n+ *restart_lsn = lsn_loc;\n+ if (restart_tli != NULL)\n+ *restart_tli = tli_loc;\n\n+ /* Assign results if requested */\n+ if (restart_lsn)\n+ *restart_lsn = lsn_loc;\n+ if (restart_tli)\n+ *restart_tli = tli_loc;\n\nI think we can just get rid of lsn_loc and tli_loc, initialize\n*restart_lsn = InvalidXLogRecPtr and *restart_tli = 0 at 
the start of\nthe function and directly assign the requested values to *restart_lsn\nand *restart_tli, also see comment (8).\n\n7) Let's be consistent, change the following\n+\n+ if (*restart_lsn)\n+ *restart_lsn = lsn_loc;\n+ if (restart_tli != NULL)\n+ *restart_tli = tli_loc;\nto\n+\n+ if (restart_lsn)\n+ *restart_lsn = lsn_loc;\n+ if (restart_tli != NULL)\n+ *restart_tli = tli_loc;\n\n8) Let's extract the values asked by the caller, change:\n+ /* restart LSN */\n+ if (!PQgetisnull(res, 0, 1))\n\n+ /* current TLI */\n+ if (!PQgetisnull(res, 0, 2))\n+ tli_loc = (TimeLineID) atol(PQgetvalue(res, 0, 2));\n\nto\n\n+ /* restart LSN */\n+ if (restart_lsn && !PQgetisnull(res, 0, 1))\n\n+ /* current TLI */\n+ if (restart_tli && !PQgetisnull(res, 0, 2))\n\n9) 80char limit crossed:\n+GetSlotInformation(PGconn *conn, const char *slot_name, XLogRecPtr\n*restart_lsn, TimeLineID *restart_tli)\n\n10) Missing word \"command\", and use \"issued to the server\", so change the below:\n+ <command>READ_REPLICATION_SLOT</command> is issued to retrieve the\nto\n+ <command>READ_REPLICATION_SLOT</command> command is issued to\nthe server to retrieve the\n\n11) Will replication_slot ever be NULL? If it ever be null, then we\ndon't reach this far right? We see the pg_log_error(\"%s needs a slot\nto be specified using --slot\". 
Please remove the below if condition:\n+ * server may not support this option.\n+ */\n+ if (replication_slot != NULL)\n\nWe can just add Assert(slot_name); in GetSlotInformation().\n\n12) How about following:\n\"If a starting point cannot be calculated with the previous method,\n<command>READ_REPLICATION_SLOT</command> command with the provided\nslot is issued to the server for retrieving the slot's restart_lsn and\ntimelineid\"\ninstead of\n+ and if a replication slot is used, an extra\n+ <command>READ_REPLICATION_SLOT</command> is issued to retrieve the\n+ slot's <literal>restart_lsn</literal> to use as starting point.\n\nThe IDENTIFY_SYSTEM description starts with \"If a starting point\ncannot be calculated....\":\n <listitem>\n <para>\n If a starting point cannot be calculated with the previous method,\n the latest WAL flush location is used as reported by the server from\n a <literal>IDENTIFY_SYSTEM</literal> command.\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Mon, 25 Oct 2021 14:40:05 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal starting position"
},
{
"msg_contents": "On Mon, Oct 25, 2021 at 02:40:05PM +0530, Bharath Rupireddy wrote:\n> 2) I think we should tweak the below error message\n> to\n> pg_log_error(\"could not read replication slot \\\"%s\\\": %s\",\n> \"IDENTIFY_SYSTEM\", PQerrorMessage(conn));\n> Having slot name in the error message helps to isolate the error\n> message from tons of server logs that gets generated.\n\nYes, this suggestion makes sense.\n\n> 3) Also for the same reasons stated as above, change the below error message\n> pg_log_error(\"could not read replication slot: got %d rows and %d\n> fields, expected %d rows and %d or more fields\",\n> to\n> pg_log_error(\"could not read replication slot \\\"%s\\\": got %d rows and\n> %d fields, expected %d rows and %d or more fields\", slot_name,....\n\nWe can even get rid of \"or more\" to match the condition used.\n\n> 4) Also for the same reasons, change below\n> + pg_log_error(\"could not parse slot's restart_lsn \\\"%s\\\"\",\n> to\n> pg_log_error(\"could not parse replicaton slot \\\"%s\\\" restart_lsn \\\"%s\\\"\",\n> slot_name, PQgetvalue(res, 0, 1));\n\nAppending the slot name makes sense.\n\n> 5) I think we should also have assertion for the timeline id:\n> Assert(stream.startpos != InvalidXLogRecPtr);\n> Assert(stream.timeline!= 0);\n\nOkay.\n\n> 6) Why do we need these two assignements?\n> I think we can just get rid of lsn_loc and tli_loc, initlaize\n> *restart_lsn = InvalidXLogRecPtr and *restart_tli = 0 at the start of\n> the function and directly assign the requrested values to *restart_lsn\n> and *restart_tli, also see comment (8).\n\nFWIW, I find the style of the patch easier to follow.\n\n> 9) 80char limit crossed:\n> +GetSlotInformation(PGconn *conn, const char *slot_name, XLogRecPtr\n> *restart_lsn, TimeLineID *restart_tli)\n\npgindent says nothing.\n\n> 10) Missing word \"command\", and use \"issued to the server\", so change the below:\n> + <command>READ_REPLICATION_SLOT</command> is issued to retrieve the\n> to\n> + 
<command>READ_REPLICATION_SLOT</command> command is issued to\n> the server to retrieve the\n\nOkay.\n\n> 11) Will replication_slot ever be NULL? If it ever be null, then we\n> don't reach this far right? We see the pg_log_error(\"%s needs a slot\n> to be specified using --slot\". Please revmove below if condition:\n> + * server may not support this option.\n\nDid you notice that this applies when creating or dropping a slot, for\ncode paths entirely different than what we are dealing with here?\n--\nMichael",
"msg_date": "Mon, 25 Oct 2021 19:49:08 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal starting position"
},
{
"msg_contents": "On Mon, Oct 25, 2021 at 4:19 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > 6) Why do we need these two assignements?\n> > I think we can just get rid of lsn_loc and tli_loc, initlaize\n> > *restart_lsn = InvalidXLogRecPtr and *restart_tli = 0 at the start of\n> > the function and directly assign the requrested values to *restart_lsn\n> > and *restart_tli, also see comment (8).\n>\n> FWIW, I find the style of the patch easier to follow.\n\nThen, please change if (*restart_lsn) and if (*restart_tli) to if\n(restart_lsn) and if (restart_tli), just to be consistent with the\nother parts of the patch and the existing code of RunIdentifySystem():\n if (*restart_lsn)\n *restart_lsn = lsn_loc;\n if (restart_tli != NULL)\n *restart_tli = tli_loc;\n\n> > 11) Will replication_slot ever be NULL? If it ever be null, then we\n> > don't reach this far right? We see the pg_log_error(\"%s needs a slot\n> > to be specified using --slot\". Please revmove below if condition:\n> > + * server may not support this option.\n>\n> Did you notice that this applies when creating or dropping a slot, for\n> code paths entirely different than what we are dealing with here?\n\nStreamLog() isn't reached for create and drop slot cases, see [1]. I\nsuggest to remove replication_slot != NULL and have Assert(slot_name)\nin GetSlotInformation():\n /*\n * Try to get the starting point from the slot. 
This is supported in\n * PostgreSQL 15 and up.\n */\n if (PQserverVersion(conn) >= 150000)\n {\n if (!GetSlotInformation(conn, replication_slot, &stream.startpos,\n &stream.timeline))\n {\n /* Error is logged by GetSlotInformation() */\n return;\n }\n }\n\nHere is another comment on the patch:\nRemove the extra new line above the GetSlotInformation() definition:\n return true;\n }\n\n+ -----> REMOVE THIS\n+/*\n+ * Run READ_REPLICATION_SLOT through a given connection and give back to\n\nApart from the above v12 patch LGTM.\n\n[1]\n /*\n * Drop a replication slot.\n */\n if (do_drop_slot)\n {\n if (verbose)\n pg_log_info(\"dropping replication slot \\\"%s\\\"\", replication_slot);\n\n if (!DropReplicationSlot(conn, replication_slot))\n exit(1);\n exit(0);\n }\n\n /* Create a replication slot */\n if (do_create_slot)\n {\n if (verbose)\n pg_log_info(\"creating replication slot \\\"%s\\\"\", replication_slot);\n\n if (!CreateReplicationSlot(conn, replication_slot, NULL,\nfalse, true, false,\n slot_exists_ok, false))\n exit(1);\n exit(0);\n }\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Mon, 25 Oct 2021 17:46:57 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal starting position"
},
{
"msg_contents": "On Mon, Oct 25, 2021 at 05:46:57PM +0530, Bharath Rupireddy wrote:\n> StreamLog() isn't reached for create and drop slot cases, see [1]. I\n> suggest to remove replication_slot != NULL and have Assert(slot_name)\n> in GetSlotInformation():\n> /*\n> * Try to get the starting point from the slot. This is supported in\n> * PostgreSQL 15 and up.\n> */\n> if (PQserverVersion(conn) >= 150000)\n> {\n> if (!GetSlotInformation(conn, replication_slot, &stream.startpos,\n> &stream.timeline))\n> {\n> /* Error is logged by GetSlotInformation() */\n> return;\n> }\n> }\n\nPlease note that it is possible to use pg_receivewal without a slot,\nwhich is the default case, so we cannot do what you are suggesting\nhere. An assertion on slot_name in GetSlotInformation() is not that\nhelpful either in my opinion, as we would just crash a couple of lines\ndown the road.\n\nI have changed the patch per Ronan's suggestion to have the version\ncheck out of GetSlotInformation(), addressed what you have reported,\nand the result looked good. So I have applied this part.\n\nWhat remains on this thread is the addition of new tests to make sure\nthat pg_receivewal is able to follow a timeline switch. Now that we\ncan restart from a slot, that should be a bit easier to implement as\na test by creating a slot on a standby. Ronan, are you planning to\nsend a new patch for this part?\n--\nMichael",
"msg_date": "Tue, 26 Oct 2021 10:15:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal starting position"
},
{
"msg_contents": "Le mardi 26 octobre 2021, 03:15:40 CEST Michael Paquier a écrit :\n> I have changed the patch per Ronan's suggestion to have the version\n> check out of GetSlotInformation(), addressed what you have reported,\n> and the result looked good. So I have applied this part.\n\nThanks !\n> \n> What remains on this thread is the addition of new tests to make sure\n> that pg_receivewal is able to follow a timeline switch. Now that we\n> can restart from a slot that should be a bit easier to implemented as\n> a test by creating a slot on a standby. Ronan, are you planning to\n> send a new patch for this part?\n\nYes, I will try to simplify the logic of the patch I sent last week. I'll keep \nyou posted here soon.\n\n\n-- \nRonan Dunklau\n\n\n\n\n",
"msg_date": "Tue, 26 Oct 2021 08:27:47 +0200",
"msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>",
"msg_from_op": true,
"msg_subject": "Re: pg_receivewal starting position"
},
{
"msg_contents": "Le mardi 26 octobre 2021, 08:27:47 CEST Ronan Dunklau a écrit :\n> Yes, I will try to simplify the logic of the patch I sent last week. I'll\n> keep you posted here soon.\n\nI was able to simplify it quite a bit, by using only one standby for both test \nscenarios.\n\nThis test case verifies that after a timeline switch, if we resume from a \nprevious state we will archive: \n - segments from the old timeline\n - segments from the new timeline\n - the timeline history file itself.\n\nI chose to check against a full segment from the previous timeline, but it \nwould have been possible to check that the latest timeline segment was \npartial. I chose not to, in the unlikely event we promote at an exact segment \nboundary. I don't think it matters much, since partial wal files are already \ncovered by other tests.\n\n-- \nRonan Dunklau",
"msg_date": "Tue, 26 Oct 2021 11:01:46 +0200",
"msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>",
"msg_from_op": true,
"msg_subject": "Re: pg_receivewal starting position"
},
{
"msg_contents": "At Tue, 26 Oct 2021 11:01:46 +0200, Ronan Dunklau <ronan.dunklau@aiven.io> wrote in \n> Le mardi 26 octobre 2021, 08:27:47 CEST Ronan Dunklau a écrit :\n> > Yes, I will try to simplify the logic of the patch I sent last week. I'll\n> > keep you posted here soon.\n> \n> I was able to simplify it quite a bit, by using only one standby for both test \n> scenarios.\n> \n> This test case verify that after a timeline switch, if we resume from a \n> previous state we will archive: \n> - segments from the old timeline\n> - segments from the new timeline\n> - the timeline history file itself.\n> \n> I chose to check against a full segment from the previous timeline, but it \n> would have been possible to check that the latest timeline segment was \n> partial. I chose not not, in the unlikely event we promote at an exact segment \n> boundary. I don't think it matters much, since partial wal files are already \n> covered by other tests.\n\n+my @walfiles = glob \"$slot_dir/*\";\n\nThis is not used.\n\nEach pg_receivewal run stalls for about 10 or more seconds before\nfinishing, which is not great from the standpoint of recently\nincreasing test run time.\n\nMaybe we want to advance LSN a bit, after taking $nextlsn then pass\n\"-s 1\" to pg_receivewal.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 27 Oct 2021 11:17:28 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal starting position"
},
{
"msg_contents": "At Wed, 27 Oct 2021 11:17:28 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> Each pg_receivewal run stalls for about 10 or more seconds before\n> finishing, which is not great from the standpoint of recently\n> increasing test run time.\n> \n> Maybe we want to advance LSN a bit, after taking $nextlsn then pass\n> \"-s 1\" to pg_receivewal.\n\nHmm. Sorry, my fingers slipped. We don't need '-s 1' for this purpose.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 27 Oct 2021 11:20:08 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal starting position"
},
{
"msg_contents": "Le mercredi 27 octobre 2021, 04:17:28 CEST Kyotaro Horiguchi a écrit :\n> +my @walfiles = glob \"$slot_dir/*\";\n> \n> This is not used.\n> \n\nSorry, fixed in attached version.\n\n> Each pg_receivewal run stalls for about 10 or more seconds before\n> finishing, which is not great from the standpoint of recently\n> increasing test run time.\n\n> Maybe we want to advance LSN a bit, after taking $nextlsn then pass\n> \"-s 1\" to pg_receivewal.\n\nI incorrectly assumed it was due to the promotion time without looking into \nit. In fact, you're right the LSN was not incremented after we fetched the end \nlsn, and thus we would wait for quite a while. I fixed that too.\n\nThank you for the review !\n\n-- \nRonan Dunklau",
"msg_date": "Wed, 27 Oct 2021 10:00:40 +0200",
"msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>",
"msg_from_op": true,
"msg_subject": "Re: pg_receivewal starting position"
},
{
"msg_contents": "Le mercredi 27 octobre 2021, 10:00:40 CEST Ronan Dunklau a écrit :\n> Le mercredi 27 octobre 2021, 04:17:28 CEST Kyotaro Horiguchi a écrit :\n> > +my @walfiles = glob \"$slot_dir/*\";\n> > \n> > This is not used.\n> \n> Sorry, fixed in attached version.\n> \n> > Each pg_receivewal run stalls for about 10 or more seconds before\n> > finishing, which is not great from the standpoint of recently\n> > increasing test run time.\n> > \n> > Maybe we want to advance LSN a bit, after taking $nextlsn then pass\n> > \"-s 1\" to pg_receivewal.\n> \n> I incorrectly assumed it was due to the promotion time without looking into\n> it. In fact, you're right the LSN was not incremented after we fetched the\n> end lsn, and thus we would wait for quite a while. I fixed that too.\n> \n> Thank you for the review !\n\nSorry I sent an intermediary version of the patch, here is the correct one.\n\n-- \nRonan Dunklau",
"msg_date": "Wed, 27 Oct 2021 10:11:00 +0200",
"msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>",
"msg_from_op": true,
"msg_subject": "Re: pg_receivewal starting position"
},
{
"msg_contents": "On Wed, Oct 27, 2021 at 10:11:00AM +0200, Ronan Dunklau wrote:\n> Sorry I sent an intermediary version of the patch, here is the correct one.\n\nWhile looking at this patch, I have figured out a simple way to make\nthe tests faster without impacting the coverage. The size of the WAL\nsegments archived is a bit of a bottleneck as they need to be zeroed\nby pg_receivewal at creation. This finishes by being a waste of time,\neven if we don't flush the data. So I'd like to switch the test so as\nwe use a segment size of 1MB, first.\n\nA second thing is that we copy too many segments than really needed\nwhen using the slot's restart_lsn as starting point as RESERVE_WAL\nwould use the current redo location, so it seems to me that a\ncheckpoint is in order before the slot creation. A third thing is\nthat generating some extra data after the end LSN we want to use makes\nthe test much faster at the end.\n\nWith those three methods combined, the test goes down from 11s to 9s\nhere. Attached is a patch I'd like to apply to make the test\ncheaper.\n\nI also had a look at your patch. Here are some comments.\n\n+# Cleanup the previous stream directories to reuse them\n+unlink glob \"'${stream_dir}/*'\";\n+unlink glob \"'${slot_dir}/*'\";\nI think that we'd better choose a different location for the\narchives. 
Keeping the segments of the previous tests is useful for\ndebugging if a previous step of the test failed.\n\n+$standby->psql('',\n+ \"CREATE_REPLICATION_SLOT $folder_slot PHYSICAL (RESERVE_WAL)\",\n+ replication => 1);\nHere as well we could use a restart point to reduce the number of\nsegments archived.\n\n+# Now, try to resume after the promotion, from the folder.\n+$standby->command_ok(\n+ [ 'pg_receivewal', '-D', $stream_dir, '--verbose', '--endpos', $nextlsn,\n+ '--slot', $folder_slot, '--no-sync'],\n+ \"Stream some wal after promoting, resuming from the folder's position\");\nWhat is called the \"resume-from-folder\" test in the patch is IMO\ntoo costly and brings little extra value, requiring two commands of\npg_receivewal (one to populate the folder and one to test the TLI jump\nfrom the previously-populated point) to test basically the same thing\nas when the starting point is taken from the slot. Except that\nrestarting from the slot is twice cheaper. The point of the\nresume-from-folder case is to make sure that we restart from the point\nof the archive folder rather than the slot's restart_lsn, but your\ntest fails this part, in fact, because the first command populating\nthe archive folder also uses \"--slot $folder_slot\", updating the\nslot's restart_lsn before the second pg_receivewal uses this slot\nagain.\n\nI think, as a whole, that testing for the case where an archive folder\nis populated (without a slot!), followed by a second command where we\nuse a slot that has a restart_lsn older than the archive's location,\nto not be that interesting. If we include such a test, there is no\nneed to include that within the TLI jump part, in my opinion. So I\nthink that we had better drop this part of the patch, and keep only\nthe case where we resume from a slot for the TLI jump.\n\nThe commands of pg_receivewal included in the test had better use -n\nso as there is no infinite loop on failure.\n--\nMichael",
"msg_date": "Thu, 28 Oct 2021 21:31:36 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal starting position"
},
{
"msg_contents": "Le jeudi 28 octobre 2021, 14:31:36 CEST Michael Paquier a écrit :\n> On Wed, Oct 27, 2021 at 10:11:00AM +0200, Ronan Dunklau wrote:\n> > Sorry I sent an intermediary version of the patch, here is the correct\n> > one.\n> \n> While looking at this patch, I have figured out a simple way to make\n> the tests faster without impacting the coverage. The size of the WAL\n> segments archived is a bit of a bottleneck as they need to be zeroed\n> by pg_receivewal at creation. This finishes by being a waste of time,\n> even if we don't flush the data. So I'd like to switch the test so as\n> we use a segment size of 1MB, first.\n> \n> A second thing is that we copy too many segments than really needed\n> when using the slot's restart_lsn as starting point as RESERVE_WAL\n> would use the current redo location, so it seems to me that a\n> checkpoint is in order before the slot creation. A third thing is\n> that generating some extra data after the end LSN we want to use makes\n> the test much faster at the end.\n> \n> With those three methods combined, the test goes down from 11s to 9s\n> here. Attached is a patch I'd like to apply to make the test\n> cheaper.\n\nInteresting ideas, thanks. For the record, the time drops from ~4.5s to 3s on \naverage on my machine. \nI think if you reduce the size of the generate_series batches, this should \nprobably be reduced everywhere. With what we do though, inserting a single \nline should work just as well, I wonder why we insist on inserting a hundred \nlines ? I updated your patch with that small modification, it also makes the \ncode less verbose.\n\n\n> \n> I also had a look at your patch. Here are some comments.\n> \n> +# Cleanup the previous stream directories to reuse them\n> +unlink glob \"'${stream_dir}/*'\";\n> +unlink glob \"'${slot_dir}/*'\";\n> I think that we'd better choose a different location for the\n> archives. 
Keeping the segments of the previous tests is useful for\n> debugging if a previous step of the test failed.\n\nOk.\n> \n> +$standby->psql('',\n> + \"CREATE_REPLICATION_SLOT $folder_slot PHYSICAL (RESERVE_WAL)\",\n> + replication => 1);\n> Here as well we could use a restart point to reduce the number of\n> segments archived.\n\nThe restart point should be very close, as we don't generate any activity on \nthe primary between the backup and the slot's creation. I'm not sure adding \nthe complexity of triggering a checkpoint on the primary and waiting for the \nstandby to catch up on it would be that useful. \n> \n> +# Now, try to resume after the promotion, from the folder.\n> +$standby->command_ok(\n> + [ 'pg_receivewal', '-D', $stream_dir, '--verbose', '--endpos', $nextlsn,\n> + '--slot', $folder_slot, '--no-sync'],\n> + \"Stream some wal after promoting, resuming from the folder's position\");\n> What is called the \"resume-from-folder\" test in the patch is IMO\n> too costly and brings little extra value, requiring two commands of\n> pg_receivewal (one to populate the folder and one to test the TLI jump\n> from the previously-populated point) to test basically the same thing\n> as when the starting point is taken from the slot. Except that\n> restarting from the slot is twice cheaper. The point of the\n> resume-from-folder case is to make sure that we restart from the point\n> of the archive folder rather than the slot's restart_lsn, but your\n> test fails this part, in fact, because the first command populating\n> the archive folder also uses \"--slot $folder_slot\", updating the\n> slot's restart_lsn before the second pg_receivewal uses this slot\n> again.\n> \n> I think, as a whole, that testing for the case where an archive folder\n> is populated (without a slot!), followed by a second command where we\n> use a slot that has a restart_lsn older than the archive's location,\n> to not be that interesting. 
If we include such a test, there is no\n> need to include that within the TLI jump part, in my opinion. So I\n> think that we had better drop this part of the patch, and keep only\n> the case where we resume from a slot for the TLI jump.\n\nYou're right about the test not being that interesting in that case. I thought \nit would be worthwhile to test both cases, but resume_from_folder doesn't \nactually exercise any code that was not already called before (timeline \ncomputation from segment name).\n\n> The commands of pg_receivewal included in the test had better use -n\n> so as there is no infinite loop on failure.\n\nOk.\n\n-- \nRonan Dunklau",
"msg_date": "Thu, 28 Oct 2021 15:55:12 +0200",
"msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>",
"msg_from_op": true,
"msg_subject": "Re: pg_receivewal starting position"
},
{
"msg_contents": "On Thu, Oct 28, 2021 at 03:55:12PM +0200, Ronan Dunklau wrote:\n> Interesting ideas, thanks. For the record, the time drops from ~4.5s to 3s on \n> average on my machine. \n> I think if you reduce the size of the generate_series batches, this should \n> probably be reduced everywhere. With what we do though, inserting a single \n> line should work just as well, I wonder why we insist on inserting a hundred \n> lines ? I updated your patch with that small modification, it also makes the \n> code less verbose.\n\nThanks for the extra numbers. I have added your suggestions,\nswitching the dummy table to use a primary key with different values,\nwhile on it, as there is an argument that it makes debugging easier,\nand applied the speedup patch.\n\n>> +$standby->psql('',\n>> + \"CREATE_REPLICATION_SLOT $folder_slot PHYSICAL (RESERVE_WAL)\",\n>> + replication => 1);\n>> Here as well we could use a restart point to reduce the number of\n>> segments archived.\n> \n> The restart point should be very close, as we don't generate any activity on \n> the primary between the backup and the slot's creation. I'm not sure adding \n> the complexity of triggering a checkpoint on the primary and waiting for the \n> standby to catch up on it would be that useful. \n\nYes, you are right here. 
The base backup taken from the primary\nat this point ensures a fresh point.\n\n+# This test is split in two, using the same standby: one test check the\n+# resume-from-folder case, the other the resume-from-slot one.\nThis comment needs a refresh, as the resume-from-folder case is no\nmore.\n\n+$standby->psql(\n+ 'postgres',\n+ \"SELECT pg_promote(wait_seconds => 300)\");\nThis could be $standby->promote.\n\n+# Switch wal to make sure it is not a partial file but a complete\nsegment.\n+$primary->psql('postgres', 'INSERT INTO test_table VALUES (1);');\n+$primary->psql('postgres', 'SELECT pg_switch_wal();');\n+$primary->wait_for_catchup($standby, 'replay', $primary->lsn('write'));\nThis INSERT needs a slight change to adapt to the primary key of the\ntable. This one is on me :p\n\nAnyway, is this first segment switch really necessary? From the data\narchived by pg_receivewal in the command testing the TLI jump, we\nfinish with the following contents (contents generated after fixing\nthe three INSERTs):\n00000001000000000000000B\n00000001000000000000000C\n00000002000000000000000D\n00000002000000000000000E.partial\n00000002.history\n\nSo, even if we don't do the first switch, we'd still have one\ncompleted segment on the previous timeline, before switching to the\nnew timeline and the next segment (pg_receivewal is a bit inconsistent\nwith the backend here, by the way, as the first segment on the new\ntimeline would map with the last segment of the old timeline, but here\nwe have a clean switch as of stop_streaming in pg_receivewal.c).\n\n+# Force a wal switch to make sure at least one full WAL is archived on the new\n+# timeline, and fetch this walfilename.\nNo arguments against the second segment switch to ensure the presence\nof a full segment on the new TLI, of course.\n--\nMichael",
"msg_date": "Fri, 29 Oct 2021 11:27:51 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal starting position"
},
{
"msg_contents": "Le vendredi 29 octobre 2021, 04:27:51 CEST Michael Paquier a écrit :\n> On Thu, Oct 28, 2021 at 03:55:12PM +0200, Ronan Dunklau wrote:\n> > Interesting ideas, thanks. For the record, the time drops from ~4.5s to 3s\n> > on average on my machine.\n> > I think if you reduce the size of the generate_series batches, this should\n> > probably be reduced everywhere. With what we do though, inserting a single\n> > line should work just as well, I wonder why we insist on inserting a\n> > hundred lines ? I updated your patch with that small modification, it\n> > also makes the code less verbose.\n> \n> Thanks for the extra numbers. I have added your suggestions,\n> switching the dummy table to use a primary key with different values,\n> while on it, as there is an argument that it makes debugging easier,\n> and applied the speedup patch.\n\nThanks !\n\n> \n> >> +$standby->psql('',\n> >> + \"CREATE_REPLICATION_SLOT $folder_slot PHYSICAL (RESERVE_WAL)\",\n> >> + replication => 1);\n> >> Here as well we could use a restart point to reduce the number of\n> >> segments archived.\n> > \n> > The restart point should be very close, as we don't generate any activity\n> > on the primary between the backup and the slot's creation. I'm not sure\n> > adding the complexity of triggering a checkpoint on the primary and\n> > waiting for the standby to catch up on it would be that useful.\n> \n> Yes, you are right here. The base backup taken from the primary\n> at this point ensures a fresh point.\n> \n> +# This test is split in two, using the same standby: one test check the\n> +# resume-from-folder case, the other the resume-from-slot one.\n> This comment needs a refresh, as the resume-from-folder case is no\n> more.\n> \n\nDone.\n\n> +$standby->psql(\n> + 'postgres',\n> + \"SELECT pg_promote(wait_seconds => 300)\");\n> This could be $standby->promote.\n> \n\nOh, didn't know about that. 
\n\n> +# Switch wal to make sure it is not a partial file but a complete\n> segment.\n> +$primary->psql('postgres', 'INSERT INTO test_table VALUES (1);');\n> +$primary->psql('postgres', 'SELECT pg_switch_wal();');\n> +$primary->wait_for_catchup($standby, 'replay', $primary->lsn('write'));\n> This INSERT needs a slight change to adapt to the primary key of the\n> table. This one is on me :p\n\nDone. \n\n> \n> Anyway, is this first segment switch really necessary? From the data\n> archived by pg_receivewal in the command testing the TLI jump, we\n> finish with the following contents (contents generated after fixing\n> the three INSERTs):\n> 00000001000000000000000B\n> 00000001000000000000000C\n> 00000002000000000000000D\n> 00000002000000000000000E.partial\n> 00000002.history\n> \n> So, even if we don't do the first switch, we'd still have one\n> completed segment on the previous timeline, before switching to the\n> new timeline and the next segment (pg_receivewal is a bit inconsistent\n> with the backend here, by the way, as the first segment on the new\n> timeline would map with the last segment of the old timeline, but here\n> we have a clean switch as of stop_streaming in pg_receivewal.c).\n\nThe first completed segment on the previous timeline comes from the fact we \nstream from the restart point. I removed the switch to use the walfilename of \nthe replication slot's restart point instead. This means querying both the \nstandby (to get the replication slot's restart_lsn) and the primary (to have \naccess to pg_walfile_name). \n\nWe could use a single query on the primary (using the primary's checkpoint LSN \ninstead) but it feels a bit convoluted just to avoid a query on the standby.\n\n\n-- \nRonan Dunklau",
"msg_date": "Fri, 29 Oct 2021 10:13:44 +0200",
"msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>",
"msg_from_op": true,
"msg_subject": "Re: pg_receivewal starting position"
},
{
"msg_contents": "On Fri, Oct 29, 2021 at 10:13:44AM +0200, Ronan Dunklau wrote:\n> We could use a single query on the primary (using the primary's checkpoint LSN \n> instead) but it feels a bit convoluted just to avoid a query on the standby.\n\nCheating with pg_walfile_name() running on the primary is fine by me.\nOne thing that stood out while reading this patch again is that we\ncan use $standby->slot($archive_slot) to grab the slot's restart_lsn.\nI have changed that, and applied it. So we are done with this\nthread.\n--\nMichael",
"msg_date": "Mon, 1 Nov 2021 13:26:46 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal starting position"
}
] |
[
{
"msg_contents": "Hi,\n\nthere is a typo in variable.c.\nAttached a small fix for this.\n\nRegards\nDaniel",
"msg_date": "Tue, 27 Jul 2021 10:04:36 +0000",
"msg_from": "\"Daniel Westermann (DWE)\" <daniel.westermann@dbi-services.com>",
"msg_from_op": true,
"msg_subject": "Small typo in variable.c"
},
{
"msg_contents": "On Tue, Jul 27, 2021 at 10:04:36AM +0000, Daniel Westermann (DWE) wrote:\n> there is a typo in variable.c.\n> Attached a small fix for this.\n\n\"iff\" stands for \"if and only if\".\n--\nMichael",
"msg_date": "Tue, 27 Jul 2021 19:46:37 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Small typo in variable.c"
}
] |
[
{
"msg_contents": ">On Tue, Jul 27, 2021 at 10:04:36AM +0000, Daniel Westermann (DWE) wrote:\n>> there is a typo in variable.c.\n>> Attached a small fix for this.\n\n>\"iff\" stands for \"if and only if\".\n\nAh, good to know. Thx\n\nRegards\nDaniel\n\n\n\n\n\n\n\n\n\n\n>On Tue, Jul 27, 2021 at 10:04:36AM +0000, Daniel Westermann (DWE) wrote:\n\n>> there is a typo in variable.c.\n>> Attached a small fix for this.\n\n>\"iff\" stands for \"if and only if\".\n\n\n\nAh, good to know. Thx\n\n\nRegards\nDaniel",
"msg_date": "Tue, 27 Jul 2021 10:54:26 +0000",
"msg_from": "\"Daniel Westermann (DWE)\" <daniel.westermann@dbi-services.com>",
"msg_from_op": true,
"msg_subject": "Re: Small typo in variable.c"
}
] |
[
{
"msg_contents": "The documentation for ALTER EVENT TRIGGER claims \"You must be superuser to alter an event trigger\" which is manifestly false, as shown below:\n\n+CREATE ROLE evt_first_owner SUPERUSER;\n+CREATE ROLE evt_second_owner SUPERUSER;\n+SET SESSION AUTHORIZATION evt_first_owner;\n+CREATE OR REPLACE FUNCTION evt_test_func()\n+RETURNS event_trigger AS $$\n+BEGIN\n+RAISE NOTICE 'event_trigger called with tag %', tg_tag;\n+END;\n+$$ LANGUAGE plpgsql;\n+CREATE EVENT TRIGGER evt_test_trigger ON ddl_command_start\n+ EXECUTE PROCEDURE evt_test_func();\n+RESET SESSION AUTHORIZATION;\n+ALTER ROLE evt_first_owner NOSUPERUSER;\n+SET SESSION AUTHORIZATION evt_first_owner;\n+ALTER EVENT TRIGGER evt_test_trigger DISABLE;\n+ALTER EVENT TRIGGER evt_test_trigger ENABLE;\n+ALTER EVENT TRIGGER evt_test_trigger ENABLE REPLICA;\n+ALTER EVENT TRIGGER evt_test_trigger ENABLE ALWAYS;\n+ALTER EVENT TRIGGER evt_test_trigger RENAME TO evt_new_name;\n+RESET SESSION AUTHORIZATION;\n+ALTER EVENT TRIGGER evt_new_name OWNER TO evt_second_owner;\n+ALTER EVENT TRIGGER evt_new_name OWNER TO evt_first_owner;\n+ERROR: permission denied to change owner of event trigger \"evt_new_name\"\n+HINT: The owner of an event trigger must be a superuser.\n\nPer the documentation, the five ALTER commands performed as evt_first_owner should have failed. They did not. At that time, evt_first_owner owned the event trigger despite not being a superuser.\n\nThe attempt later to assign ownership back to evt_first_owner fails claiming, \"The owner of an event trigger must be a superuser\", but that claim is not precisely true. At best, \"The owner of an event trigger must be a superuser at the time ownership is transferred.\" There are similar oddities with some other object types which make a half-hearted attempt to require the owner to be a superuser, but I will start separate threads for those.\n\nThis behavior is weird enough that I don't know if it is the code or the documentation that is wrong. 
I'd like to post patches to clean this up, but need community feedback on whether it is the documentation or the behavior that needs adjusting.\n\nOver in [1], I am trying to create new privileged roles and assign them some of the powers currently reserved to superuser. It is hard to make patches over there when the desired behavior of the system is not quite well defined.\n\n[1] https://www.postgresql.org/message-id/214052.1627331086%40sss.pgh.pa.us\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Tue, 27 Jul 2021 08:15:24 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Documentation disagrees with behavior of ALTER EVENT TRIGGER"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nIn the patches for improving the MSVC build process, I noticed a use of\n`map` in void context. This is considered bad form, and has a\nperlcritic policy forbidding it:\nhttps://metacpan.org/pod/Perl::Critic::Policy::BuiltinFunctions::ProhibitVoidMap.\n\nAttached is a patch that increases severity of that and the\ncorresponding `grep` policy to 5 to enable it in our perlcritic policy,\nand fixes the one use that had already snuck in.\n\n- ilmari",
"msg_date": "Tue, 27 Jul 2021 17:06:19 +0100",
"msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)",
"msg_from_op": true,
"msg_subject": "perlcritic: prohibit map and grep in void conext"
},
{
"msg_contents": "> On 27 Jul 2021, at 18:06, Dagfinn Ilmari Mannsåker <ilmari@ilmari.org> wrote:\n\n> Attached is a patch that increases severity of that and the\n> corresponding `grep` policy to 5 to enable it in our perlcritic policy,\n> and fixes the one use that had already snuck in.\n\n+1, the use of foreach also improves readability a fair bit IMO.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Tue, 27 Jul 2021 21:09:10 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: perlcritic: prohibit map and grep in void conext"
},
{
"msg_contents": "On Tue, Jul 27, 2021 at 09:09:10PM +0200, Daniel Gustafsson wrote:\n> On 27 Jul 2021, at 18:06, Dagfinn Ilmari Mannsåker <ilmari@ilmari.org> wrote:\n>> Attached is a patch that increases severity of that and the\n>> corresponding `grep` policy to 5 to enable it in our perlcritic policy,\n>> and fixes the one use that had already snuck in.\n> \n> +1, the use of foreach also improves readability a fair bit IMO.\n\nSounds interesting to avoid. pgperlcritic does not complain here\nafter this patch.\n--\nMichael",
"msg_date": "Wed, 28 Jul 2021 13:23:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: perlcritic: prohibit map and grep in void conext"
},
{
"msg_contents": "\nOn 7/27/21 12:06 PM, Dagfinn Ilmari Mannsåker wrote:\n> Hi hackers,\n>\n> In the patches for improving the MSVC build process, I noticed a use of\n> `map` in void context. This is considered bad form, and has a\n> perlcritic policy forbidding it:\n> https://metacpan.org/pod/Perl::Critic::Policy::BuiltinFunctions::ProhibitVoidMap.\n>\n> Attached is a patch that increases severity of that and the\n> corresponding `grep` policy to 5 to enable it in our perlcritic policy,\n> and fixes the one use that had already snuck in.\n>\n\n\nPersonally I'm OK with it, but previous attempts to enforce perlcritic\npolicies have met with a less than warm reception, and we had to back\noff. Maybe this one will fare better.\n\nI keep the buildfarm code perlcritic compliant down to severity 3 with a\nhandful of exceptions.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 28 Jul 2021 07:10:29 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: perlcritic: prohibit map and grep in void conext"
},
{
"msg_contents": "> On 28 Jul 2021, at 13:10, Andrew Dunstan <andrew@dunslane.net> wrote:\n> \n> On 7/27/21 12:06 PM, Dagfinn Ilmari Mannsåker wrote:\n>> Hi hackers,\n>> \n>> In the patches for improving the MSVC build process, I noticed a use of\n>> `map` in void context. This is considered bad form, and has a\n>> perlcritic policy forbidding it:\n>> https://metacpan.org/pod/Perl::Critic::Policy::BuiltinFunctions::ProhibitVoidMap.\n>> \n>> Attached is a patch that increases severity of that and the\n>> corresponding `grep` policy to 5 to enable it in our perlcritic policy,\n>> and fixes the one use that had already snuck in.\n> \n> Personally I'm OK with it, but previous attempts to enforce perlcritic\n> policies have met with a less than warm reception, and we had to back\n> off. Maybe this one will fare better.\n\nI'm fine with increasing this policy, but I don't have strong feelings. If we\nfeel the perlcritic policy change is too much, I would still prefer to go ahead\nwith the map rewrite part of the patch though.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Wed, 28 Jul 2021 13:26:23 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: perlcritic: prohibit map and grep in void conext"
},
{
"msg_contents": "ilmari@ilmari.org (Dagfinn Ilmari Mannsåker) writes:\n\n> Hi hackers,\n>\n> In the patches for improving the MSVC build process, I noticed a use of\n> `map` in void context. This is considered bad form, and has a\n> perlcritic policy forbidding it:\n> https://metacpan.org/pod/Perl::Critic::Policy::BuiltinFunctions::ProhibitVoidMap.\n>\n> Attached is a patch that increases severity of that and the\n> corresponding `grep` policy to 5 to enable it in our perlcritic policy,\n> and fixes the one use that had already snuck in.\n\nAdded to the 2021-09 commitfest: https://commitfest.postgresql.org/34/3278/\n\n- ilmari\n\n\n",
"msg_date": "Sun, 08 Aug 2021 00:15:47 +0100",
"msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)",
"msg_from_op": true,
"msg_subject": "Re: perlcritic: prohibit map and grep in void conext"
},
{
"msg_contents": "On Wed, Jul 28, 2021 at 01:26:23PM +0200, Daniel Gustafsson wrote:\n> I'm fine with increasing this policy, but I don't have strong feelings. If we\n> feel the perlcritic policy change is too much, I would still prefer to go ahead\n> with the map rewrite part of the patch though.\n\nI have no issue either about the rewrite part of the patch, so I'd\ntend to just do this part and move on. Daniel, would you like to\napply that?\n--\nMichael",
"msg_date": "Fri, 27 Aug 2021 15:10:23 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: perlcritic: prohibit map and grep in void conext"
},
{
"msg_contents": "> On 27 Aug 2021, at 08:10, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Wed, Jul 28, 2021 at 01:26:23PM +0200, Daniel Gustafsson wrote:\n>> I'm fine with increasing this policy, but I don't have strong feelings. If we\n>> feel the perlcritic policy change is too much, I would still prefer to go ahead\n>> with the map rewrite part of the patch though.\n> \n> I have no issue either about the rewrite part of the patch, so I'd\n> tend to just do this part and move on. Daniel, would you like to\n> apply that?\n\nSure, I can take care of that.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Fri, 27 Aug 2021 09:15:43 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: perlcritic: prohibit map and grep in void conext"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n\n> On Wed, Jul 28, 2021 at 01:26:23PM +0200, Daniel Gustafsson wrote:\n>> I'm fine with increasing this policy, but I don't have strong feelings. If we\n>> feel the perlcritic policy change is too much, I would still prefer to go ahead\n>> with the map rewrite part of the patch though.\n>\n> I have no issue either about the rewrite part of the patch, so I'd\n> tend to just do this part and move on. Daniel, would you like to\n> apply that?\n\nWhy the resistance to the perlcritic part? That one case is the only\nviolation in the tree today, and it's a pattern we don't want to let\nback in (I will surely object every time I see it when reviewing\npatches), so why not automate it?\n\n- ilmari\n\n\n",
"msg_date": "Fri, 27 Aug 2021 11:32:30 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": false,
"msg_subject": "Re: perlcritic: prohibit map and grep in void conext"
},
{
"msg_contents": "\nOn 8/27/21 6:32 AM, Dagfinn Ilmari Mannsåker wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n>\n>> On Wed, Jul 28, 2021 at 01:26:23PM +0200, Daniel Gustafsson wrote:\n>>> I'm fine with increasing this policy, but I don't have strong feelings. If we\n>>> feel the perlcritic policy change is too much, I would still prefer to go ahead\n>>> with the map rewrite part of the patch though.\n>> I have no issue either about the rewrite part of the patch, so I'd\n>> tend to just do this part and move on. Daniel, would you like to\n>> apply that?\n> Why the resistance to the perlcritic part? That one case is the only\n> violation in the tree today, and it's a pattern we don't want to let\n> back in (I will surely object every time I see it when reviewing\n> patches), so why not automate it?\n>\n\nThere doesn't seem to have been much pushback, so let's try it and see.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 30 Aug 2021 14:27:09 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: perlcritic: prohibit map and grep in void conext"
},
{
"msg_contents": "On Mon, Aug 30, 2021 at 02:27:09PM -0400, Andrew Dunstan wrote:\n> There doesn't seem to have been much pushback, so let's try it and see.\n\nOkay, fine by me.\n--\nMichael",
"msg_date": "Tue, 31 Aug 2021 10:22:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: perlcritic: prohibit map and grep in void conext"
},
{
"msg_contents": "On Tue, Aug 31, 2021 at 9:23 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Aug 30, 2021 at 02:27:09PM -0400, Andrew Dunstan wrote:\n> > There doesn't seem to have been much pushback, so let's try it and see.\n>\n> Okay, fine by me.\n\n+1\n\n\n",
"msg_date": "Tue, 31 Aug 2021 12:19:12 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: perlcritic: prohibit map and grep in void conext"
},
{
"msg_contents": "> On 31 Aug 2021, at 06:19, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> \n> On Tue, Aug 31, 2021 at 9:23 AM Michael Paquier <michael@paquier.xyz> wrote:\n>> \n>> On Mon, Aug 30, 2021 at 02:27:09PM -0400, Andrew Dunstan wrote:\n>>> There doesn't seem to have been much pushback, so let's try it and see.\n>> \n>> Okay, fine by me.\n> \n> +1\n\nSince there is concensus in the thread with no -1’s, I’ve pushed this to master\nnow.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Tue, 31 Aug 2021 11:30:19 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: perlcritic: prohibit map and grep in void conext"
},
{
"msg_contents": "On Tue, 31 Aug 2021, at 10:30, Daniel Gustafsson wrote:\n> > On 31 Aug 2021, at 06:19, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > \n> > On Tue, Aug 31, 2021 at 9:23 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >> \n> >> On Mon, Aug 30, 2021 at 02:27:09PM -0400, Andrew Dunstan wrote:\n> >>> There doesn't seem to have been much pushback, so let's try it and see.\n> >> \n> >> Okay, fine by me.\n> > \n> > +1\n> \n> Since there is concensus in the thread with no -1’s, I’ve pushed this to master\n> now.\n\nThanks!\n\n- ilmari\n\n\n",
"msg_date": "Tue, 31 Aug 2021 10:55:59 +0100",
"msg_from": "=?UTF-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": false,
"msg_subject": "Re: perlcritic: prohibit map and grep in void context"
}
] |
[
{
"msg_contents": "The documentation for ALTER PUBLICATION ... OWNER TO ... claims the new owner must have CREATE privilege on the database, though superuser can change the ownership in spite of this restriction. No explanation is given for this requirement. It seems to just mirror the requirement that many types of objects which exist within namespaces cannot be transferred to new owners who lack CREATE privilege on the namespace. But is it rational to follow that pattern here? I would expect it to follow more closely the behavior of objects which do not exist within namespaces, like AlterSchemaOwner or AlterForeignServerOwner which don't require this. (There are other examples to look at, but those require the new owner to be superuser, so they provide no guidance.)\n\nDuring the development of the feature, Peter E. says in [1], \"I think ALTER PUBLICATION does not need to require CREATE privilege on the database.\" Petr J. replies in [2], \"Right, I removed the check.\" and the contents of the patch file 0002-Add-PUBLICATION-catalogs-and-DDL-v12.patch confirm this. After the feature was first committed in 665d1fad99, Peter updated it in commit 4cfc9484d4, but the reasoning for bringing back this requirement is not clear, as the commit message just says, \"Previously, the new owner had to be a superuser. The new rules are more refined similar to other objects.\" The commit appears not to have had a commitfest entry, nor does it have any associated email discussion that I can find. \n\nTo investigate, I edited all 22 scripts in src/test/subscription/t/ assigning ownership of all publications to nonsuperuser roles which lack CREATE before the rest of the test is run. Nothing changes. Either the tests are not checking the sort of thing this breaks, or this breaks nothing. I also edited src/backend/commands/publicationcmds.c circa line 693 to only raise a warning when the assignee lacks CREATE rather than an error and then ran check-world with TAP tests enabled. Everything passes. So no help there in understanding why this requirement exists.\n\nAssuming the requirement makes sense, I'd like the error message generated when the assignee lacks CREATE privilege to be less cryptic:\n\n ALTER PUBLICATION testpub OWNER TO second_pub_owner;\n ERROR: permission denied for database regression\n\nBut since similarly cryptic messages are produced for other object types that follow this pattern, maybe that should be a separate thread.\n\n[1] https://www.postgresql.org/message-id/acbc4035-5be6-9efd-fb37-1d61b8c35ea5%402ndquadrant.com\n\n[2] https://www.postgresql.org/message-id/ed24d725-1b8c-ed25-19c6-61410e6b1ec6%402ndquadrant.com\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Tue, 27 Jul 2021 10:59:01 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Why does the owner of a publication need CREATE privileges on the\n database?"
},
{
"msg_contents": "On Tue, Jul 27, 2021 at 11:29 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n>\n> The documentation for ALTER PUBLICATION ... OWNER TO ... claims the new owner must have CREATE privilege on the database, though superuser can change the ownership in spite of this restriction. No explanation is given for this requirement.\n>\n\nI am not aware of the original thought process behind this but current\nbehavior seems reasonable because if users need to have CREATE\nprivilege on the database while creating a publication, the same should be\ntrue while we change the owner to a new owner. Basically, at any point\nin time, the owner of the publication should have CREATE privilege on\nthe database which contains the publication.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 10 Aug 2021 11:45:09 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Why does the owner of a publication need CREATE privileges on the\n database?"
},
{
"msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> On Tue, Jul 27, 2021 at 11:29 PM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> The documentation for ALTER PUBLICATION ... OWNER TO ... claims the new owner must have CREATE privilege on the database, though superuser can change the ownership in spite of this restriction. No explanation is given for this requirement.\n\n> I am not aware of the original thought process behind this but current\n> behavior seems reasonable because if users need to have CREATE\n> privilege on the database while Create Publication, the same should be\n> true while we change the owner to a new owner.\n\nI think that for most (all?) forms of ALTER, we say that you need the same\nprivileges as you would need to drop the existing object and create a new\none with the new properties. From the standpoint of the privilege\nsystem, ALTER is just a shortcut for that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 10 Aug 2021 11:21:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Why does the owner of a publication need CREATE privileges on the\n database?"
}
] |
[
{
"msg_contents": "The original import of the SSL tests saved the clientside log in /client-log,\nwhich was later removed in 1caef31d9. The test/ssl .gitignore didn't get the\nmemo though.\n\nThe attached trivial patch removes it from .gitignore, barring objections I'll\npush that.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/",
"msg_date": "Wed, 28 Jul 2021 00:37:47 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Remove client-log from SSL test .gitignore"
},
{
"msg_contents": "On Wed, Jul 28, 2021 at 12:37:47AM +0200, Daniel Gustafsson wrote:\n> The original import of the SSL tests saved the clientside log in /client-log,\n> which was later removed in 1caef31d9. The test/ssl .gitignore didn't get the\n> memo though.\n\nGood catch. Thanks.\n--\nMichael",
"msg_date": "Wed, 28 Jul 2021 15:28:03 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Remove client-log from SSL test .gitignore"
}
] |
[
{
"msg_contents": "While cleaning out dead branches in my git repo, I came across an\nearly draft of what eventually became commit ffa2e4670 (\"In libpq,\nalways append new error messages to conn->errorMessage\"). I realized\nthat it contained a good idea that had gotten lost on the way to that\ncommit. Namely, let's reduce all of the 60-or-so \"out of memory\"\nreports in libpq to calls to a common subroutine, and then let's teach\nthe common subroutine a recovery strategy for the not-unlikely\npossibility that it fails to append the \"out of memory\" string to\nconn->errorMessage. That recovery strategy of course is to reset the\nerrorMessage buffer to empty, hopefully regaining some space. We lose\nwhatever we'd had in the buffer before, but we have a better chance of\nthe \"out of memory\" message making its way to the user.\n\nThe first half of that just saves a few hundred bytes of repetitive\ncoding. However, I think that the addition of recovery logic is\nimportant for robustness, because as things stand libpq may be\nworse off than before for OOM handling. Before ffa2e4670, almost\nall of these call sites did printfPQExpBuffer(..., \"out of memory\").\nThat would automatically clear the message buffer to empty, and\nthereby be sure to report the out-of-memory failure if at all\npossible. Now we might fail to report the thing that the user\nreally needs to know to make sense of what happened.\n\nTherefore, I feel like this was an oversight in ffa2e4670,\nand we ought to back-patch the attached into v14.\n\ncc'ing the RMT in case they wish to object.\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 27 Jul 2021 18:40:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Out-of-memory error reports in libpq"
},
{
"msg_contents": "On 7/27/21, 3:41 PM, \"Tom Lane\" <tgl@sss.pgh.pa.us> wrote:\r\n> The first half of that just saves a few hundred bytes of repetitive\r\n> coding. However, I think that the addition of recovery logic is\r\n> important for robustness, because as things stand libpq may be\r\n> worse off than before for OOM handling. Before ffa2e4670, almost\r\n> all of these call sites did printfPQExpBuffer(..., \"out of memory\").\r\n> That would automatically clear the message buffer to empty, and\r\n> thereby be sure to report the out-of-memory failure if at all\r\n> possible. Now we might fail to report the thing that the user\r\n> really needs to know to make sense of what happened.\r\n\r\nIIUC, before ffa2e4670, callers mainly used printfPQExpBuffer(), which\r\nalways cleared the buffer before attempting to append the OOM message.\r\nWith ffa2e4670 applied, callers always attempt to append the OOM\r\nmessage without resetting the buffer first. With this new change,\r\ncallers will attempt to append the OOM message without resetting the\r\nbuffer first, but if that fails, we fall back to the original behavior\r\nbefore ffa2e4670.\r\n\r\n+\tif (PQExpBufferBroken(errorMessage))\r\n+\t{\r\n+\t\tresetPQExpBuffer(errorMessage);\r\n+\t\tappendPQExpBufferStr(errorMessage, msg);\r\n+\t}\r\n\r\nI see that appendPQExpBufferStr() checks whether the buffer is broken\r\nby way of enlargePQExpBuffer(), so the fallback steps roughly match\r\nthe calls to printfPQExpBuffer() before ffa2e4670.\r\n\r\n-\t\t\tappendPQExpBuffer(&conn->errorMessage,\r\n-\t\t\t\t\t\t\t libpq_gettext(\"out of memory allocating GSSAPI buffer (%d)\\n\"),\r\n-\t\t\t\t\t\t\t payloadlen);\r\n+\t\t\tpqReportOOM(conn);\r\n\r\nI see that some context is lost in a few places (e.g., the one above\r\npoints to a GSSAPI buffer). Perhaps this extra context could be\r\nuseful to identify problematic areas, but it might be unlikely to help\r\nmuch in these parts of libpq. In any case, the vast majority of\r\nexisting callers don't provide any extra context.\r\n\r\nOverall, the patch looks good to me.\r\n\r\n> Therefore, I feel like this was an oversight in ffa2e4670,\r\n> and we ought to back-patch the attached into v14.\r\n\r\nBack-patching to v14 seems reasonable to me.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Tue, 27 Jul 2021 23:29:28 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: Out-of-memory error reports in libpq"
},
{
"msg_contents": "\"Bossart, Nathan\" <bossartn@amazon.com> writes:\n> -\t\t\tappendPQExpBuffer(&conn->errorMessage,\n> -\t\t\t\t\t\t\t libpq_gettext(\"out of memory allocating GSSAPI buffer (%d)\\n\"),\n> -\t\t\t\t\t\t\t payloadlen);\n> +\t\t\tpqReportOOM(conn);\n\n> I see that some context is lost in a few places (e.g., the one above\n> points to a GSSAPI buffer). Perhaps this extra context could be\n> useful to identify problematic areas, but it might be unlikely to help\n> much in these parts of libpq. In any case, the vast majority of\n> existing callers don't provide any extra context.\n\nYeah, there are half a dozen places that currently print something\nmore specific than \"out of memory\". I judged that the value of this\nwas not worth the complexity it'd add to support it in this scheme.\nDifferent opinions welcome of course.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 27 Jul 2021 22:31:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Out-of-memory error reports in libpq"
},
{
"msg_contents": "On Tue, Jul 27, 2021 at 10:31:25PM -0400, Tom Lane wrote:\n> Yeah, there are half a dozen places that currently print something\n> more specific than \"out of memory\". I judged that the value of this\n> was not worth the complexity it'd add to support it in this scheme.\n> Different opinions welcome of course.\n\nI don't mind either that this removes a bit of context. For\nunlikely-going-to-happen errors that's not worth the extra translation\ncost. No objections from me for an integration into 14 as that's\nstraight-forward, and that would minimize conflicts between HEAD and \n14 in the event of a back-patch\n\n+pqReportOOM(PGconn *conn)\n+{\n+ pqReportOOMBuffer(&conn->errorMessage);\n+}\n+\n+/*\n+ * As above, but work with a bare error-message-buffer pointer.\n+ */\n+void\n+pqReportOOMBuffer(PQExpBuffer errorMessage)\n+{\nNot much a fan of having two routines to do this job though. I would\nvote for keeping the one named pqReportOOM() with PQExpBuffer as\nargument.\n--\nMichael",
"msg_date": "Wed, 28 Jul 2021 12:32:53 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Out-of-memory error reports in libpq"
},
{
"msg_contents": "\nOn 7/27/21 6:40 PM, Tom Lane wrote:\n> While cleaning out dead branches in my git repo, I came across an\n> early draft of what eventually became commit ffa2e4670 (\"In libpq,\n> always append new error messages to conn->errorMessage\"). I realized\n> that it contained a good idea that had gotten lost on the way to that\n> commit. Namely, let's reduce all of the 60-or-so \"out of memory\"\n> reports in libpq to calls to a common subroutine, and then let's teach\n> the common subroutine a recovery strategy for the not-unlikely\n> possibility that it fails to append the \"out of memory\" string to\n> conn->errorMessage. That recovery strategy of course is to reset the\n> errorMessage buffer to empty, hopefully regaining some space. We lose\n> whatever we'd had in the buffer before, but we have a better chance of\n> the \"out of memory\" message making its way to the user.\n>\n> The first half of that just saves a few hundred bytes of repetitive\n> coding. However, I think that the addition of recovery logic is\n> important for robustness, because as things stand libpq may be\n> worse off than before for OOM handling. Before ffa2e4670, almost\n> all of these call sites did printfPQExpBuffer(..., \"out of memory\").\n> That would automatically clear the message buffer to empty, and\n> thereby be sure to report the out-of-memory failure if at all\n> possible. Now we might fail to report the thing that the user\n> really needs to know to make sense of what happened.\n>\n> Therefore, I feel like this was an oversight in ffa2e4670,\n> and we ought to back-patch the attached into v14.\n>\n> cc'ing the RMT in case they wish to object.\n>\n> \t\t\t\n\n\nI'm honored you've confused me with Alvaro :-)\n\nThis seems sensible, and we certainly shouldn't be worse off than\nbefore, so let's do it.\n\nI'm fine with having two functions for call simplicity, but I don't feel\nstrongly about it.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 28 Jul 2021 07:20:43 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Out-of-memory error reports in libpq"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Tue, Jul 27, 2021 at 10:31:25PM -0400, Tom Lane wrote:\n>> Yeah, there are half a dozen places that currently print something\n>> more specific than \"out of memory\". I judged that the value of this\n>> was not worth the complexity it'd add to support it in this scheme.\n>> Different opinions welcome of course.\n\n> I don't mind either that this removes a bit of context. For\n> unlikely-going-to-happen errors that's not worth the extra translation\n> cost.\n\nYeah, the extra translatable strings are the main concrete cost of\nkeeping this behavior. But I'm dubious that labeling a small number\nof the possible OOM points is worth anything, especially if they're\nnot providing the failed allocation request size. You can't tell if\nthat request was unreasonable or if it was just an unlucky victim\nof bloat elsewhere. Unifying the reports into a common function\ncould be a starting point for more consistent/detailed OOM reports,\nif anyone cared to work on that. (I hasten to add that I don't.)\n\n> + pqReportOOMBuffer(&conn->errorMessage);\n\n> Not much a fan of having two routines to do this job though. I would\n> vote for keeping the one named pqReportOOM() with PQExpBuffer as\n> argument.\n\nHere I've got to disagree. We do need the form with a PQExpBuffer\nargument, because there are some places where that isn't a pointer\nto a PGconn's errorMessage. But the large majority of the calls\nare \"pqReportOOM(conn)\", and I think having to write that as\n\"pqReportOOM(&conn->errorMessage)\" is fairly ugly and perhaps\nerror-prone.\n\nI'm not wedded to the name \"pqReportOOMBuffer\" though --- maybe\nthere's some better name for that one?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 28 Jul 2021 11:02:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Out-of-memory error reports in libpq"
},
{
"msg_contents": "\nOn 7/28/21 11:02 AM, Tom Lane wrote:\n>\n> Here I've got to disagree. We do need the form with a PQExpBuffer\n> argument, because there are some places where that isn't a pointer\n> to a PGconn's errorMessage. But the large majority of the calls\n> are \"pqReportOOM(conn)\", and I think having to write that as\n> \"pqReportOOM(&conn->errorMessage)\" is fairly ugly and perhaps\n> error-prone.\n>\n> I'm not wedded to the name \"pqReportOOMBuffer\" though --- maybe\n> there's some better name for that one?\n>\n> \t\t\t\n\n\n\nIs it worth making the first one a macro?\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 28 Jul 2021 12:25:22 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Out-of-memory error reports in libpq"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> Is it worth making the first one a macro?\n\nIt'd be the same from a source-code perspective, but probably a\nshade bulkier in terms of object code.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 28 Jul 2021 12:49:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Out-of-memory error reports in libpq"
},
{
"msg_contents": "Hi,\n\nOn 2021-07-27 18:40:48 -0400, Tom Lane wrote:\n> The first half of that just saves a few hundred bytes of repetitive\n> coding. However, I think that the addition of recovery logic is\n> important for robustness, because as things stand libpq may be\n> worse off than before for OOM handling.\n\nAgreed.\n\n\n> Before ffa2e4670, almost all of these call sites did\n> printfPQExpBuffer(..., \"out of memory\"). That would automatically\n> clear the message buffer to empty, and thereby be sure to report the\n> out-of-memory failure if at all possible. Now we might fail to report\n> the thing that the user really needs to know to make sense of what\n> happened.\n\nHm. It seems we should be able to guarantee that the recovery path can print\nsomething, at least in the PGconn case. Is it perhaps worth pre-sizing\nPGConn->errorMessage so it'd fit an error like this?\n\nBut perhaps that's more effort than it's worth.\n\n\n> +void\n> +pqReportOOMBuffer(PQExpBuffer errorMessage)\n> +{\n> +\tconst char *msg = libpq_gettext(\"out of memory\\n\");\n\nI should probably know this, but I don't. Nor did I quickly find an answer. I\nassume gettext() reliably and reasonably deals with OOM?\n\nLooking in the gettext code I'm again scared by the fact that it takes locks\nduring gettext (because of stuff like erroring out of signal handlers, not\nOOMs).\n\nIt does look like it tries to always return the original string in case of\nOOM. Although the code is quite maze-like, so it's not easy to tell..\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 28 Jul 2021 12:55:52 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Out-of-memory error reports in libpq"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I should probably know this, but I don't. Nor did I quickly find an answer. I\n> assume gettext() reliably and reasonably deals with OOM?\n\nI've always assumed that their fallback in cases of OOM, can't read\nthe message file, yadda yadda is to return the original string.\nI admit I haven't gone and checked their code, but it'd be\nunbelievably stupid to do otherwise.\n\n> Looking in the gettext code I'm again scared by the fact that it takes locks\n> during gettext (because of stuff like erroring out of signal handlers, not\n> OOMs).\n\nHm.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 28 Jul 2021 16:51:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Out-of-memory error reports in libpq"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Hm. It seems we should be able to guarantee that the recovery path can print\n> something, at least in the PGconn case. Is it perhaps worth pre-sizing\n> PGConn->errorMessage so it'd fit an error like this?\n\nForgot to address this. Right now, the normal situation is that\nPGConn->errorMessage is \"pre sized\" to 256 bytes, because that's\nwhat pqexpbuffer.c does for all PQExpBuffers. So unless you've\noverrun that, the question is moot. If you have, and you got\nan OOM in trying to expand the PQExpBuffer, then pqexpbuffer.c\nwill release what it has and substitute the \"oom_buffer\" empty\nstring. If you're really unlucky you might then not be able\nto allocate another 256-byte buffer, in which case we end up\nwith an empty-string result. I don't think it's probable,\nbut in a multithread program it could happen.\n\n> But perhaps that's more effort than it's worth.\n\nYeah. I considered changing things so that oom_buffer contains\n\"out of memory\\n\" rather than an empty string, but I'm afraid\nthat that's making unsupportable assumptions about what PQExpBuffers\nare used for.\n\nFor now, I'm content if it's not worse than v13. We've not\nheard a lot of complaints in this area.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 28 Jul 2021 17:37:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Out-of-memory error reports in libpq"
},
{
"msg_contents": "I wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> Hm. It seems we should be able to guarantee that the recovery path can print\n>> something, at least in the PGconn case. Is it perhaps worth pre-sizing\n>> PGConn->errorMessage so it'd fit an error like this?\n>> But perhaps that's more effort than it's worth.\n\n> Yeah. I considered changing things so that oom_buffer contains\n> \"out of memory\\n\" rather than an empty string, but I'm afraid\n> that that's making unsupportable assumptions about what PQExpBuffers\n> are used for.\n\nActually, wait a minute. There are only a couple of places that ever\nread out the value of conn->errorMessage, so let's make those places\nresponsible for dealing with OOM scenarios. That leads to a nicely\nsmall patch, as attached, and it looks to me like it makes us quite\nbulletproof against such scenarios.\n\nIt might still be worth doing the \"pqReportOOM\" changes to save a\nfew bytes of code space, but I'm less excited about that now.\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 28 Jul 2021 20:25:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Out-of-memory error reports in libpq"
},
{
"msg_contents": "Em qua., 28 de jul. de 2021 às 21:25, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n\n> I wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> >> Hm. It seems we should be able to guarantee that the recovery path can\n> print\n> >> something, at least in the PGconn case. Is it perhaps worth pre-sizing\n> >> PGConn->errorMessage so it'd fit an error like this?\n> >> But perhaps that's more effort than it's worth.\n>\n> > Yeah. I considered changing things so that oom_buffer contains\n> > \"out of memory\\n\" rather than an empty string, but I'm afraid\n> > that that's making unsupportable assumptions about what PQExpBuffers\n> > are used for.\n>\n> Actually, wait a minute. There are only a couple of places that ever\n> read out the value of conn->errorMessage, so let's make those places\n> responsible for dealing with OOM scenarios. That leads to a nicely\n> small patch, as attached, and it looks to me like it makes us quite\n> bulletproof against such scenarios.\n>\n> It might still be worth doing the \"pqReportOOM\" changes to save a\n> few bytes of code space, but I'm less excited about that now.\n>\nIMO, I think that \"char *msg\" is unnecessary, isn't it?\n\n+ if (!PQExpBufferBroken(errorMessage))\n+ res->errMsg = pqResultStrdup(res, errorMessage->data);\n else\n- res->errMsg = NULL;\n+ res->errMsg = libpq_gettext(\"out of memory\\n\");\n\n\n>\n>\n> regards, tom lane\n>\n>\n",
"msg_date": "Wed, 28 Jul 2021 22:03:29 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Out-of-memory error reports in libpq"
},
{
"msg_contents": "Ranier Vilela <ranier.vf@gmail.com> writes:\n> IMO, I think that \"char *msg\" is unnecessary, isn't it?\n\n> + if (!PQExpBufferBroken(errorMessage))\n> + res->errMsg = pqResultStrdup(res, errorMessage->data);\n> else\n> - res->errMsg = NULL;\n> + res->errMsg = libpq_gettext(\"out of memory\\n\");\n\nPlease read the comment.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 28 Jul 2021 23:40:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Out-of-memory error reports in libpq"
},
{
"msg_contents": "(This is not a code review - this is just to satisfy my curiosity)\n\nI've seen lots of code like this where I may have been tempted to use\na ternary operator for readability, so I was wondering is there a PG\nconvention to avoid such ternary operator assignments, or is it simply\na personal taste thing, or is there some other reason?\n\nFor example:\n\nif (msg)\n res->errMsg = msg;\nelse\n res->errMsg = libpq_gettext(\"out of memory\\n\");\n\nVERSUS:\n\nres->errMsg = msg ? msg : libpq_gettext(\"out of memory\\n\");\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 29 Jul 2021 17:01:52 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Out-of-memory error reports in libpq"
},
{
"msg_contents": "\nOn 7/29/21 3:01 AM, Peter Smith wrote:\n> (This is not a code review - this is just to satisfy my curiosity)\n>\n> I've seen lots of code like this where I may have been tempted to use\n> a ternary operator for readability, so I was wondering is there a PG\n> convention to avoid such ternary operator assignments, or is it simply\n> a personal taste thing, or is there some other reason?\n>\n> For example:\n>\n> if (msg)\n> res->errMsg = msg;\n> else\n> res->errMsg = libpq_gettext(\"out of memory\\n\");\n>\n> VERSUS:\n>\n> res->errMsg = msg ? msg : libpq_gettext(\"out of memory\\n\");\n>\n\n\nA simple grep on the sources should disabuse you of any idea that there\nis such a convention. The code is littered with examples of the ?: operator.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 29 Jul 2021 06:18:54 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Out-of-memory error reports in libpq"
},
{
"msg_contents": "Em qui., 29 de jul. de 2021 às 00:40, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n\n> Ranier Vilela <ranier.vf@gmail.com> writes:\n> > IMO, I think that \"char *msg\" is unnecessary, isn't it?\n>\n> > + if (!PQExpBufferBroken(errorMessage))\n> > + res->errMsg = pqResultStrdup(res, errorMessage->data);\n> > else\n> > - res->errMsg = NULL;\n> > + res->errMsg = libpq_gettext(\"out of memory\\n\");\n>\n> Please read the comment.\n>\nYou're right, I missed pqResultStrdup fail.\n\n+1\n\nregards,\nRanier Vilela\n\n",
"msg_date": "Thu, 29 Jul 2021 08:17:41 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Out-of-memory error reports in libpq"
},
{
"msg_contents": "Em qui., 29 de jul. de 2021 às 04:02, Peter Smith <smithpb2250@gmail.com>\nescreveu:\n\n> (This is not a code review - this is just to satisfy my curiosity)\n>\n> I've seen lots of code like this where I may have been tempted to use\n> a ternary operator for readability, so I was wondering is there a PG\n> convention to avoid such ternary operator assignments, or is it simply\n> a personal taste thing, or is there some other reason?\n>\n> For example:\n>\n> if (msg)\n> res->errMsg = msg;\n> else\n> res->errMsg = libpq_gettext(\"out of memory\\n\");\n>\nThe C compiler will expand:\n\nres->errMsg = msg ? msg : libpq_gettext(\"out of memory\\n\");\n\nto\n\nif (msg)\n res->errMsg = msg;\nelse\n res->errMsg = libpq_gettext(\"out of memory\\n\");\n\nWhat IMHO is much more readable.\n\nregards,\nRanier Vilela\n\n",
"msg_date": "Thu, 29 Jul 2021 08:23:17 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Out-of-memory error reports in libpq"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 7/29/21 3:01 AM, Peter Smith wrote:\n>> I've seen lots of code like this where I may have been tempted to use\n>> a ternary operator for readability, so I was wondering is there a PG\n>> convention to avoid such ternary operator assignments, or is it simply\n>> a personal taste thing, or is there some other reason?\n\n> A simple grep on the sources should disabuse you of any idea that there\n> is such a convention. The code is littered with examples of the ?: operator.\n\nYeah. I happened not to write it that way here, but if I'd been reviewing\nsomeone else's code and they'd done it that way, I'd not have objected.\n\nIn the case at hand, I'd personally avoid a ternary op for the first\nassignment because then the line would run over 80 characters, and\nyou'd have to make decisions about where to break it. (We don't have\na standardized convention about that, and none of the alternatives\nlook very good to my eye.) Then it seemed to make sense to also\nwrite the second step as an \"if\" not a ternary op.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 29 Jul 2021 09:57:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Out-of-memory error reports in libpq"
},
{
"msg_contents": "On Thu, Jul 29, 2021 at 9:57 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> In the case at hand, I'd personally avoid a ternary op for the first\n> assignment because then the line would run over 80 characters, and\n> you'd have to make decisions about where to break it. (We don't have\n> a standardized convention about that, and none of the alternatives\n> look very good to my eye.)\n\nThis is exactly why I rarely use ?:\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 29 Jul 2021 12:04:01 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Out-of-memory error reports in libpq"
}
] |
[
{
"msg_contents": "IMO the PG code comments are not an appropriate place for leetspeak creativity.\n\nPSA a patch to replace a few examples that I recently noticed.\n\n\"up2date\" --> \"up-to-date\"\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Wed, 28 Jul 2021 09:39:02 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Replace l337sp34k in comments."
},
{
"msg_contents": "On Wed, Jul 28, 2021 at 09:39:02AM +1000, Peter Smith wrote:\n> IMO the PG code comments are not an appropriate place for leetspeak creativity.\n> \n> PSA a patch to replace a few examples that I recently noticed.\n> \n> \"up2date\" --> \"up-to-date\"\n\nAgreed that this is a bit cleaner to read, so done. Just note that\npgindent has been complaining about the format of some of the updated\ncomments.\n--\nMichael",
"msg_date": "Wed, 28 Jul 2021 10:32:25 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Replace l337sp34k in comments."
},
{
"msg_contents": "On Wed, Jul 28, 2021 at 11:32 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Jul 28, 2021 at 09:39:02AM +1000, Peter Smith wrote:\n> > IMO the PG code comments are not an appropriate place for leetspeak creativity.\n> >\n> > PSA a patch to replace a few examples that I recently noticed.\n> >\n> > \"up2date\" --> \"up-to-date\"\n>\n> Agreed that this is a bit cleaner to read, so done. Just note that\n> pgindent has been complaining about the format of some of the updated\n> comments.\n\nThanks for pushing!\n\nBTW, the commit comment [1] attributes most of these to a recent\npatch, but I think that is mistaken. AFAIK they are from when the\nfile was first introduced 8 years ago [2].\n\n------\n[1] https://github.com/postgres/postgres/commit/7b7fbe1e8bb4b2a244d1faa618789db411316e55\n[2] https://github.com/postgres/postgres/commit/b89e151054a05f0f6d356ca52e3b725dd0505e53#diff-034b6d4eaf36425e75d7a7087d09bd6c734dd9ea8398533559d537d13b6b9197\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 29 Jul 2021 09:08:23 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Replace l337sp34k in comments."
},
{
"msg_contents": "\nOn 7/27/21 7:39 PM, Peter Smith wrote:\n> IMO the PG code comments are not an appropriate place for leetspeak creativity.\n>\n> PSA a patch to replace a few examples that I recently noticed.\n>\n> \"up2date\" --> \"up-to-date\"\n>\n\nPersonally, I would have written this as just \"up to date\", I don't\nthink the hyphens are required.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 29 Jul 2021 06:22:57 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Replace l337sp34k in comments."
},
{
"msg_contents": "On Thu, 29 Jul 2021 at 11:22, Andrew Dunstan <andrew@dunslane.net> wrote:\n\n> Personally, I would have written this as just \"up to date\", I don't\n> think the hyphens are required.\n>\n\nFWIW Mirriam-Webster and the CED suggest \"up-to-date\" when before a noun,\nso the changes should be \"up-to-date answer\" but \"are up to date\".\n\nhttps://dictionary.cambridge.org/dictionary/english/up-to-date\n\nGeoff\n\nOn Thu, 29 Jul 2021 at 11:22, Andrew Dunstan <andrew@dunslane.net> wrote:\nPersonally, I would have written this as just \"up to date\", I don't\nthink the hyphens are required. FWIW Mirriam-Webster and the CED suggest \"up-to-date\" when before a noun, so the changes should be \"up-to-date answer\" but \"are up to date\".https://dictionary.cambridge.org/dictionary/english/up-to-dateGeoff",
"msg_date": "Thu, 29 Jul 2021 13:51:36 +0100",
"msg_from": "Geoff Winkless <pgsqladmin@geoff.dj>",
"msg_from_op": false,
"msg_subject": "Re: Replace l337sp34k in comments."
},
{
"msg_contents": "\nOn 7/29/21 8:51 AM, Geoff Winkless wrote:\n> On Thu, 29 Jul 2021 at 11:22, Andrew Dunstan <andrew@dunslane.net\n> <mailto:andrew@dunslane.net>> wrote:\n>\n> Personally, I would have written this as just \"up to date\", I don't\n> think the hyphens are required.\n>\n> \n> FWIW Mirriam-Webster and the CED suggest \"up-to-date\" when before a\n> noun, so the changes should be \"up-to-date answer\" but \"are up to date\".\n>\n> https://dictionary.cambridge.org/dictionary/english/up-to-date\n> <https://dictionary.cambridge.org/dictionary/english/up-to-date>\n>\n>\n\nInteresting, thanks. My (admittedly old) Concise OED only has the\nversion with spaces, while my (also old) Collins Concise has the\nhyphenated version. I learn something new every day, no matter how trivial.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 29 Jul 2021 11:11:57 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Replace l337sp34k in comments."
},
{
"msg_contents": "On 30/07/21 12:51 am, Geoff Winkless wrote:\n> On Thu, 29 Jul 2021 at 11:22, Andrew Dunstan <andrew@dunslane.net \n> <mailto:andrew@dunslane.net>> wrote:\n>\n> Personally, I would have written this as just \"up to date\", I don't\n> think the hyphens are required.\n>\n> FWIW Mirriam-Webster and the CED suggest \"up-to-date\" when before a \n> noun, so the changes should be \"up-to-date answer\" but \"are up to date\".\n>\n> https://dictionary.cambridge.org/dictionary/english/up-to-date \n> <https://dictionary.cambridge.org/dictionary/english/up-to-date>\n>\n> Geoff\n\nThat 'feels' right to me.\n\nThough in code, possibly it would be better to just use 'up-to-date' in \ncode for consistency and to make the it easier to grep?\n\nAs a minor aside: double quotes should be used for speech and single \nquotes for quoting!\n\n\nCheers,\nGavin\n\n\n\n",
"msg_date": "Fri, 30 Jul 2021 09:46:59 +1200",
"msg_from": "Gavin Flower <GavinFlower@archidevsys.co.nz>",
"msg_from_op": false,
"msg_subject": "Re: Replace l337sp34k in comments."
},
{
"msg_contents": "On Fri, Jul 30, 2021 at 09:46:59AM +1200, Gavin Flower wrote:\n> That 'feels' right to me.\n> \n> Though in code, possibly it would be better to just use 'up-to-date' in code\n> for consistency and to make the it easier to grep?\n\nThe change in llvmjit_expr.c may not look like an adjective though,\nwhich I admit can be a bit confusing. Still that does not look\ncompletely wrong to me either.\n--\nMichael",
"msg_date": "Fri, 30 Jul 2021 09:48:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Replace l337sp34k in comments."
},
{
"msg_contents": "On Thu, 29 Jul 2021 at 22:46, Gavin Flower\n<GavinFlower@archidevsys.co.nz> wrote:\n> Though in code, possibly it would be better to just use 'up-to-date' in\n> code for consistency and to make the it easier to grep?\n\nIf it's causing an issue, perhaps using a less syntactically\nproblematic synonym like \"current\" might be better?\n\n:)\n\nGeoff\n\n\n",
"msg_date": "Fri, 30 Jul 2021 09:05:53 +0100",
"msg_from": "Geoff Winkless <pgsqladmin@geoff.dj>",
"msg_from_op": false,
"msg_subject": "Re: Replace l337sp34k in comments."
},
{
"msg_contents": "On 30/07/21 8:05 pm, Geoff Winkless wrote:\n> On Thu, 29 Jul 2021 at 22:46, Gavin Flower\n> <GavinFlower@archidevsys.co.nz> wrote:\n>> Though in code, possibly it would be better to just use 'up-to-date' in\n>> code for consistency and to make the it easier to grep?\n> If it's causing an issue, perhaps using a less syntactically\n> problematic synonym like \"current\" might be better?\n>\n> :)\n>\n> Geoff\n\nOn thinking further...\n\nThe word 'current' means different things in different contexts. If I \nrefer to my current O/S it means the one I'm using now, but it may not \nbe current. The second use of 'current' is the meaning you are thinking \nof, but the first is not. Since people reading documented code are \nfocused on understanding technical aspects, they may miss this subtlety.\n\nI'm aware that standardisation may meet with some resistance, but being \nconsistent might reduce the conceptual impedance when reading the code. \nI'm just trying to reduce the potential for confusion.\n\n\nCheers,\nGavin\n\n\n\n",
"msg_date": "Sat, 31 Jul 2021 09:15:59 +1200",
"msg_from": "Gavin Flower <GavinFlower@archidevsys.co.nz>",
"msg_from_op": false,
"msg_subject": "Re: Replace l337sp34k in comments."
},
{
"msg_contents": "\n\n> 28 июля 2021 г., в 04:39, Peter Smith <smithpb2250@gmail.com> написал(а):\n> \n> IMO the PG code comments are not an appropriate place for leetspeak creativity.\n> \n> PSA a patch to replace a few examples that I recently noticed.\n> \n> \"up2date\" --> \"up-to-date\"\n\nFWIW, my 2 cents.\nI do not see much difference between up2date, up-to-date, up to date, current, recent, actual, last, newest, correct, fresh etc.\nI'm slightly leaning to 1337 version, but this can totally be ignored.\n\nAs a non-native speaker I'm a bit concerned by the fact that comment is copied 6 times. For me it's not a single bit easier to read comment then code. If this comment is that important, maybe refactor this assignment into function and document once?\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Sat, 31 Jul 2021 13:21:58 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Replace l337sp34k in comments."
},
{
"msg_contents": "On Sat, Jul 31, 2021 at 11:22 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> FWIW, my 2 cents.\n> I do not see much difference between up2date, up-to-date, up to date, current, recent, actual, last, newest, correct, fresh etc.\n\n+1.\n\nTo me it seems normal to debate wording/terminology with new code\ncomments, but that's about it. I find this zeal to change old code\ncomments misguided. It's okay if they're clearly wrong or have typos.\nAnything else is just hypercorrection. And in any case there is a very\nreal chance of making the overall situation worse rather than better.\nProbably in some subtle but important way.\n\nSee also: commit 8a47b775a16fb4f1e154c0f319a030498e123164\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sat, 31 Jul 2021 12:15:34 +0300",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Replace l337sp34k in comments."
},
{
"msg_contents": "On Saturday, July 31, 2021, Peter Geoghegan <pg@bowt.ie> wrote:\n\n> On Sat, Jul 31, 2021 at 11:22 AM Andrey Borodin <x4mmm@yandex-team.ru>\n> wrote:\n> > FWIW, my 2 cents.\n> > I do not see much difference between up2date, up-to-date, up to date,\n> current, recent, actual, last, newest, correct, fresh etc.\n>\n> +1.\n>\n> To me it seems normal to debate wording/terminology with new code\n> comments, but that's about it. I find this zeal to change old code\n> comments misguided.\n>\n\nMaybe in general I would agree but I agree that this warrants an\nexception. While maybe not explicitly stated the use of up2date as a term\nis against the de-facto style guide for our project and should be corrected\nregardless of how long it took to discover the violation. We fix other\nunimportant but obvious typos all the time and this is no different. We\ndon’t ask people to police this but we also don’t turn down well-written\npatches.\n\nDavid J.\n\nOn Saturday, July 31, 2021, Peter Geoghegan <pg@bowt.ie> wrote:On Sat, Jul 31, 2021 at 11:22 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> FWIW, my 2 cents.\n> I do not see much difference between up2date, up-to-date, up to date, current, recent, actual, last, newest, correct, fresh etc.\n\n+1.\n\nTo me it seems normal to debate wording/terminology with new code\ncomments, but that's about it. I find this zeal to change old code\ncomments misguided.\nMaybe in general I would agree but I agree that this warrants an exception. While maybe not explicitly stated the use of up2date as a term is against the de-facto style guide for our project and should be corrected regardless of how long it took to discover the violation. We fix other unimportant but obvious typos all the time and this is no different. We don’t ask people to police this but we also don’t turn down well-written patches.David J.",
"msg_date": "Sat, 31 Jul 2021 08:04:40 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Replace l337sp34k in comments."
},
{
"msg_contents": "Hi,\n\nOn 2021-07-31 12:15:34 +0300, Peter Geoghegan wrote:\n> On Sat, Jul 31, 2021 at 11:22 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> > FWIW, my 2 cents.\n> > I do not see much difference between up2date, up-to-date, up to date, current, recent, actual, last, newest, correct, fresh etc.\n> \n> +1.\n\n> To me it seems normal to debate wording/terminology with new code\n> comments, but that's about it. I find this zeal to change old code\n> comments misguided. It's okay if they're clearly wrong or have typos.\n> Anything else is just hypercorrection. And in any case there is a very\n> real chance of making the overall situation worse rather than better.\n> Probably in some subtle but important way.\n\nSame here. I find them quite distracting, even.\n\nIt's one thing for such patches to target blindly obvious typos etc, but\nthey often also end up including less clear cut changes, which cost a\nfair bit of time to review/judge.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 1 Aug 2021 14:10:16 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Replace l337sp34k in comments."
},
{
"msg_contents": "\nOn 8/1/21 5:10 PM, Andres Freund wrote:\n> Hi,\n>\n> On 2021-07-31 12:15:34 +0300, Peter Geoghegan wrote:\n>> On Sat, Jul 31, 2021 at 11:22 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n>>> FWIW, my 2 cents.\n>>> I do not see much difference between up2date, up-to-date, up to date, current, recent, actual, last, newest, correct, fresh etc.\n>> +1.\n>> To me it seems normal to debate wording/terminology with new code\n>> comments, but that's about it. I find this zeal to change old code\n>> comments misguided. It's okay if they're clearly wrong or have typos.\n>> Anything else is just hypercorrection. And in any case there is a very\n>> real chance of making the overall situation worse rather than better.\n>> Probably in some subtle but important way.\n> Same here. I find them quite distracting, even.\n>\n> It's one thing for such patches to target blindly obvious typos etc, but\n> they often also end up including less clear cut changes, which cost a\n> fair bit of time to review/judge.\n>\n\nI agree. Errors, ambiguities and typos should be fixed, but purely\nstylistic changes should not be made. In any case, I don't think we need\nto hold the code comments to the same standard as the docs. I think a\nlittle more informality is acceptable in code comments.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sun, 1 Aug 2021 17:27:43 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Replace l337sp34k in comments."
}
] |
[
{
"msg_contents": "Hi all,\n\nIn commit 3f90ec85 we are trying to postpone gathering partial paths for\ntopmost scan/join rel until we know the final targetlist, in order to\nallow more accurate costing of parallel paths. We do this by the\nfollowing code snippet in standard_join_search:\n\n+ /*\n+ * Except for the topmost scan/join rel, consider gathering\n+ * partial paths. We'll do the same for the topmost scan/join rel\n+ * once we know the final targetlist (see grouping_planner).\n+ */\n+ if (lev < levels_needed)\n+ generate_gather_paths(root, rel, false);\n\nThis change may cause a problem if the joinlist contains sub-joinlist\nnodes, in which case 'lev == levels_needed' does not necessarily imply\nit's the topmost for the final scan/join rel. It may only be the topmost\nscan/join rel for the subproblem. And then we would miss the Gather\npaths for this subproblem. It can be illustrated with the query below:\n\ncreate table foo(i int, j int);\ninsert into foo select i, i from generate_series(1,50000)i;\nanalyze foo;\n\nset max_parallel_workers_per_gather to 4;\nset parallel_setup_cost to 0;\nset parallel_tuple_cost to 0;\nset min_parallel_table_scan_size to 0;\n\n# explain (costs off) select * from foo a join foo b on a.i = b.i full join\nfoo c on b.i = c.i;\n QUERY PLAN\n----------------------------------------------------\n Hash Full Join\n Hash Cond: (b.i = c.i)\n -> Hash Join\n Hash Cond: (a.i = b.i)\n -> Gather\n Workers Planned: 4\n -> Parallel Seq Scan on foo a\n -> Hash\n -> Gather\n Workers Planned: 4\n -> Parallel Seq Scan on foo b\n -> Hash\n -> Gather\n Workers Planned: 4\n -> Parallel Seq Scan on foo c\n(15 rows)\n\nPlease note how we do the join for rel a and b. We run Gather above the\nparallel scan and then do the join above the Gather.\n\nThese two base rels are grouped in a subproblem because of the FULL\nJOIN. 
And due to the mentioned code change, we are unable to gather\npartial paths for their joinrel.\n\nIf we can somehow fix this problem, then we would be able to do better\nplanning by running parallel join first and then doing Gather above the\njoin.\n\n -> Gather\n Workers Planned: 4\n -> Parallel Hash Join\n Hash Cond: (a.i = b.i)\n -> Parallel Seq Scan on foo a\n -> Parallel Hash\n -> Parallel Seq Scan on foo b\n\nTo fix this problem, I'm thinking we can leverage 'root->all_baserels'\nto tell if we are at the topmost scan/join rel, something like:\n\n--- a/src/backend/optimizer/path/allpaths.c\n+++ b/src/backend/optimizer/path/allpaths.c\n@@ -3041,7 +3041,7 @@ standard_join_search(PlannerInfo *root, int\nlevels_needed, List *initial_rels)\n * partial paths. We'll do the same for the\ntopmost scan/join rel\n * once we know the final targetlist (see\ngrouping_planner).\n */\n- if (lev < levels_needed)\n+ if (!bms_equal(rel->relids, root->all_baserels))\n generate_useful_gather_paths(root, rel,\nfalse);\n\n\nAny thoughts?\n\nThanks\nRichard",
"msg_date": "Wed, 28 Jul 2021 15:42:15 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Problem about postponing gathering partial paths for topmost\n scan/join rel"
},
{
"msg_contents": "On Wed, Jul 28, 2021 at 3:42 PM Richard Guo <guofenglinux@gmail.com> wrote:\n\n> To fix this problem, I'm thinking we can leverage 'root->all_baserels'\n> to tell if we are at the topmost scan/join rel, something like:\n>\n> --- a/src/backend/optimizer/path/allpaths.c\n> +++ b/src/backend/optimizer/path/allpaths.c\n> @@ -3041,7 +3041,7 @@ standard_join_search(PlannerInfo *root, int\n> levels_needed, List *initial_rels)\n> * partial paths. We'll do the same for the\n> topmost scan/join rel\n> * once we know the final targetlist (see\n> grouping_planner).\n> */\n> - if (lev < levels_needed)\n> + if (!bms_equal(rel->relids, root->all_baserels))\n> generate_useful_gather_paths(root, rel,\n> false);\n>\n>\n> Any thoughts?\n>\n\nAttach a patch to include the fix described upthread. Would appreciate\nany comments on this topic.\n\nThanks\nRichard",
"msg_date": "Fri, 30 Jul 2021 15:14:45 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Problem about postponing gathering partial paths for topmost\n scan/join rel"
},
{
"msg_contents": "Richard Guo <guofenglinux@gmail.com> wrote:\n\n> On Wed, Jul 28, 2021 at 3:42 PM Richard Guo <guofenglinux@gmail.com> wrote:\n> \n> To fix this problem, I'm thinking we can leverage 'root->all_baserels'\n> to tell if we are at the topmost scan/join rel, something like:\n> \n> --- a/src/backend/optimizer/path/allpaths.c\n> +++ b/src/backend/optimizer/path/allpaths.c\n> @@ -3041,7 +3041,7 @@ standard_join_search(PlannerInfo *root, int levels_needed, List *initial_rels)\n> * partial paths. We'll do the same for the topmost scan/join rel\n> * once we know the final targetlist (see grouping_planner).\n> */\n> - if (lev < levels_needed)\n> + if (!bms_equal(rel->relids, root->all_baserels))\n> generate_useful_gather_paths(root, rel, false);\n> \n> Any thoughts?\n> \n> Attach a patch to include the fix described upthread. Would appreciate\n> any comments on this topic.\n\n\nI think I understand the idea but I'm not sure about the regression test. I\nsuspect that in the plan\n\nEXPLAIN (COSTS OFF)\nSELECT count(*) FROM tenk1 a JOIN tenk1 b ON a.two =3D b.two\n FULL JOIN tenk1 c ON b.two =3D c.two;\n QUERY PLAN \n\n------------------------------------------------------------\n Aggregate\n -> Hash Full Join\n Hash Cond: (b.two =3D c.two)\n -> Gather\n Workers Planned: 4\n -> Parallel Hash Join\n Hash Cond: (a.two =3D b.two)\n -> Parallel Seq Scan on tenk1 a\n -> Parallel Hash\n -> Parallel Seq Scan on tenk1 b\n -> Hash\n -> Gather\n Workers Planned: 4\n -> Parallel Seq Scan on tenk1 c\n\n\nthe Gather node is located below the \"Hash Full Join\" node only because that\nkind of join currently cannot be executed by parallel workers. 
If the parallel\n\"Hash Full Join\" gets implemented (I've noticed but not checked in detail\n[1]), it might break this test.\n\nI'd prefer a test that demonstrates that the Gather node at the top of the\n\"subproblem plan\" is useful purely from the *cost* perspective, rather than\ndue to executor limitation.\n\n\n[1] https://commitfest.postgresql.org/38/2903/\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Thu, 14 Jul 2022 16:02:58 +0200",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Problem about postponing gathering partial paths for topmost\n scan/join rel"
},
{
"msg_contents": "On Thu, Jul 14, 2022 at 10:02 PM Antonin Houska <ah@cybertec.at> wrote:\n\n> I'd prefer a test that demonstrates that the Gather node at the top of the\n> \"subproblem plan\" is useful purely from the *cost* perspective, rather than\n> due to executor limitation.\n\n\nThis patch provides an additional path (Gather atop of subproblem) which\nwas not available before. But your concern makes sense that we need to\nshow this new path is valuable from competing on cost with other paths.\n\nHow about we change to Nested Loop at the topmost? Something like:\n\nset join_collapse_limit to 2;\n\n# explain (costs off) select * from foo a join foo b on a.i = b.i join foo\nc on b.i > c.i;\n QUERY PLAN\n----------------------------------------------------\n Nested Loop\n Join Filter: (b.i > c.i)\n -> Gather\n Workers Planned: 4\n -> Parallel Hash Join\n Hash Cond: (a.i = b.i)\n -> Parallel Seq Scan on foo a\n -> Parallel Hash\n -> Parallel Seq Scan on foo b\n -> Materialize\n -> Gather\n Workers Planned: 4\n -> Parallel Seq Scan on foo c\n(13 rows)\n\nWithout the patch, the path which is Gather atop of subproblem is not\navailable, and we would get:\n\n# explain (costs off) select * from foo a join foo b on a.i = b.i join foo\nc on b.i > c.i;\n QUERY PLAN\n----------------------------------------------------\n Nested Loop\n Join Filter: (b.i > c.i)\n -> Hash Join\n Hash Cond: (a.i = b.i)\n -> Gather\n Workers Planned: 4\n -> Parallel Seq Scan on foo a\n -> Hash\n -> Gather\n Workers Planned: 4\n -> Parallel Seq Scan on foo b\n -> Materialize\n -> Gather\n Workers Planned: 4\n -> Parallel Seq Scan on foo c\n(15 rows)\n\nThanks\nRichard\n\nOn Thu, Jul 14, 2022 at 10:02 PM Antonin Houska <ah@cybertec.at> wrote:\nI'd prefer a test that demonstrates that the Gather node at the top of the\n\"subproblem plan\" is useful purely from the *cost* perspective, rather than\ndue to executor limitation.This patch provides an additional path (Gather atop of subproblem) whichwas 
not available before. But your concern makes sense that we need toshow this new path is valuable from competing on cost with other paths.How about we change to Nested Loop at the topmost? Something like:set join_collapse_limit to 2;# explain (costs off) select * from foo a join foo b on a.i = b.i join foo c on b.i > c.i; QUERY PLAN---------------------------------------------------- Nested Loop Join Filter: (b.i > c.i) -> Gather Workers Planned: 4 -> Parallel Hash Join Hash Cond: (a.i = b.i) -> Parallel Seq Scan on foo a -> Parallel Hash -> Parallel Seq Scan on foo b -> Materialize -> Gather Workers Planned: 4 -> Parallel Seq Scan on foo c(13 rows)Without the patch, the path which is Gather atop of subproblem is notavailable, and we would get:# explain (costs off) select * from foo a join foo b on a.i = b.i join foo c on b.i > c.i; QUERY PLAN---------------------------------------------------- Nested Loop Join Filter: (b.i > c.i) -> Hash Join Hash Cond: (a.i = b.i) -> Gather Workers Planned: 4 -> Parallel Seq Scan on foo a -> Hash -> Gather Workers Planned: 4 -> Parallel Seq Scan on foo b -> Materialize -> Gather Workers Planned: 4 -> Parallel Seq Scan on foo c(15 rows)ThanksRichard",
"msg_date": "Fri, 15 Jul 2022 16:03:56 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Problem about postponing gathering partial paths for topmost\n scan/join rel"
},
{
"msg_contents": "On Fri, Jul 15, 2022 at 4:03 PM Richard Guo <guofenglinux@gmail.com> wrote:\n\n>\n> On Thu, Jul 14, 2022 at 10:02 PM Antonin Houska <ah@cybertec.at> wrote:\n>\n>> I'd prefer a test that demonstrates that the Gather node at the top of the\n>> \"subproblem plan\" is useful purely from the *cost* perspective, rather\n>> than\n>> due to executor limitation.\n>\n>\n> This patch provides an additional path (Gather atop of subproblem) which\n> was not available before. But your concern makes sense that we need to\n> show this new path is valuable from competing on cost with other paths.\n>\n> How about we change to Nested Loop at the topmost? Something like:\n>\n\nMaybe a better example is that we use a small table 'c' to avoid the\nGather node above scanning 'c', so that the path of parallel nestloop is\npossible to be generated.\n\nset join_collapse_limit to 2;\n\n# explain (costs off) select * from a join b on a.i = b.i join c on b.i >\nc.i;\n QUERY PLAN\n------------------------------------------------\n Nested Loop\n Join Filter: (b.i > c.i)\n -> Seq Scan on c\n -> Gather\n Workers Planned: 4\n -> Parallel Hash Join\n Hash Cond: (a.i = b.i)\n -> Parallel Seq Scan on a\n -> Parallel Hash\n -> Parallel Seq Scan on b\n(10 rows)\n\nThanks\nRichard\n\nOn Fri, Jul 15, 2022 at 4:03 PM Richard Guo <guofenglinux@gmail.com> wrote:On Thu, Jul 14, 2022 at 10:02 PM Antonin Houska <ah@cybertec.at> wrote:\nI'd prefer a test that demonstrates that the Gather node at the top of the\n\"subproblem plan\" is useful purely from the *cost* perspective, rather than\ndue to executor limitation.This patch provides an additional path (Gather atop of subproblem) whichwas not available before. But your concern makes sense that we need toshow this new path is valuable from competing on cost with other paths.How about we change to Nested Loop at the topmost? 
Something like:Maybe a better example is that we use a small table 'c' to avoid theGather node above scanning 'c', so that the path of parallel nestloop ispossible to be generated.set join_collapse_limit to 2;# explain (costs off) select * from a join b on a.i = b.i join c on b.i > c.i; QUERY PLAN------------------------------------------------ Nested Loop Join Filter: (b.i > c.i) -> Seq Scan on c -> Gather Workers Planned: 4 -> Parallel Hash Join Hash Cond: (a.i = b.i) -> Parallel Seq Scan on a -> Parallel Hash -> Parallel Seq Scan on b(10 rows)ThanksRichard",
"msg_date": "Fri, 15 Jul 2022 17:00:13 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Problem about postponing gathering partial paths for topmost\n scan/join rel"
},
{
"msg_contents": "On Fri, Jul 15, 2022 at 5:00 PM Richard Guo <guofenglinux@gmail.com> wrote:\n\n> On Fri, Jul 15, 2022 at 4:03 PM Richard Guo <guofenglinux@gmail.com>\n> wrote:\n>\n>> On Thu, Jul 14, 2022 at 10:02 PM Antonin Houska <ah@cybertec.at> wrote:\n>>\n>>> I'd prefer a test that demonstrates that the Gather node at the top of\n>>> the\n>>> \"subproblem plan\" is useful purely from the *cost* perspective, rather\n>>> than\n>>> due to executor limitation.\n>>\n>>\n>> This patch provides an additional path (Gather atop of subproblem) which\n>> was not available before. But your concern makes sense that we need to\n>> show this new path is valuable from competing on cost with other paths.\n>>\n>> How about we change to Nested Loop at the topmost? Something like:\n>>\n>\n> Maybe a better example is that we use a small table 'c' to avoid the\n> Gather node above scanning 'c', so that the path of parallel nestloop is\n> possible to be generated.\n>\n\nUpdate the patch with the new test case.\n\nThanks\nRichard",
"msg_date": "Mon, 18 Jul 2022 15:13:05 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Problem about postponing gathering partial paths for topmost\n scan/join rel"
},
{
"msg_contents": "Richard Guo <guofenglinux@gmail.com> wrote:\n\n> On Fri, Jul 15, 2022 at 5:00 PM Richard Guo <guofenglinux@gmail.com> wrote:\n> \n> On Fri, Jul 15, 2022 at 4:03 PM Richard Guo <guofenglinux@gmail.com> wrote:\n> \n> On Thu, Jul 14, 2022 at 10:02 PM Antonin Houska <ah@cybertec.at> wrote:\n> \n> I'd prefer a test that demonstrates that the Gather node at the top of the\n> \"subproblem plan\" is useful purely from the *cost* perspective, rather than\n> due to executor limitation.\n> \n> This patch provides an additional path (Gather atop of subproblem) which\n> was not available before. But your concern makes sense that we need to\n> show this new path is valuable from competing on cost with other paths.\n> \n> How about we change to Nested Loop at the topmost? Something like:\n> \n> Maybe a better example is that we use a small table 'c' to avoid the\n> Gather node above scanning 'c', so that the path of parallel nestloop is\n> possible to be generated.\n> \n> Update the patch with the new test case.\n\nok, this makes sense to me. Just one minor suggestion: the command\n\n\talter table d_star reset (parallel_workers);\n\nis not necessary because it's immediately followed by\n\n\trollback;\n\nI'm going to set the CF entry to \"ready for committer'\".\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Mon, 18 Jul 2022 14:36:21 +0200",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Problem about postponing gathering partial paths for topmost\n scan/join rel"
},
{
"msg_contents": "Antonin Houska <ah@cybertec.at> writes:\n> I'm going to set the CF entry to \"ready for committer'\".\n\nI pushed this with some editorialization:\n\n* Grepping found another caller of generate_useful_gather_paths\nwith the exact same bug, in geqo_eval.c. (A wise man once said\nthat the most powerful bug-finding heuristic he knew was \"where\nelse did we make this same mistake?\")\n\n* I thought it best to make set_rel_pathlist() use an identically-\nworded test for the equivalent purpose for baserels. It's not\nactively broken, but examining bms_membership seems confusingly\ncomplicated, and I doubt it's faster than bms_equal either.\n\n* I omitted the test case because I didn't think it would buy us\nanything. It failed to detect the GEQO variant of the bug, and\nit would fail to detect the most likely future way of breaking\nthis, which is that I forget to change these instances of\nall_baserels next time I rebase [1].\n\n\t\t\tregards, tom lane\n\n[1] https://commitfest.postgresql.org/39/3755/\n\n\n",
"msg_date": "Sat, 30 Jul 2022 13:15:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Problem about postponing gathering partial paths for topmost\n scan/join rel"
}
] |
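The scenario in the thread above can be sketched end to end as follows. The thread only shows the EXPLAIN output, so the table definitions and row counts here are assumptions: 'a' and 'b' are made large enough that a parallel hash join between them is attractive, and 'c' is kept small so no Gather is placed above its scan — the case where the topmost join must pick up the subproblem's partial paths. The exact plan shape (and "Workers Planned") depends on costs and configuration, so it may differ from the plan quoted above.

```sql
-- Assumed setup (not shown in the thread): two large tables and one tiny one.
CREATE TABLE a (i int);
CREATE TABLE b (i int);
CREATE TABLE c (i int);
INSERT INTO a SELECT g FROM generate_series(1, 100000) g;
INSERT INTO b SELECT g FROM generate_series(1, 100000) g;
INSERT INTO c SELECT g FROM generate_series(1, 10) g;
ANALYZE a, b, c;

-- Force {a, b} to be planned as a separate subproblem, so the topmost
-- join has to use the subproblem's (partial) paths.
SET join_collapse_limit TO 2;

EXPLAIN (COSTS OFF)
SELECT * FROM a JOIN b ON a.i = b.i JOIN c ON b.i > c.i;
```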
[
{
"msg_contents": "While looking into issues with fairywren and pg_basebackup tests, I\ncreated a similar environment but with more modern Windows / msys2.\nBefore it even got to the test that failed on fairywren it failed the\nfirst TAP test for a variety of reasons, all connected to\nTestLib::perl2host.\n\nFirst, this function is in some cases returning paths for directories\nwith trailing slashes and or embedded double slashes. Both of these can\ncause problems, especially when written to a tablespace map file. Also,\nthe cygpath invocation is returning a path with backslashes whereas \"pwd\n-W' returns a path with forward slashes.\n\nSo the first attached patch rectifies these problems. It fixes issues\nwith doubles and trailing slashes and makes cygpath return a path with\nforward slashes just like the non-cygpath branch.\n\nHowever, there is another problem, which is that if called on a path\nthat includes a symlink, on the test platform I set up it actually\nresolves that link rather than just following it. The end result is that\nthe use of a shorter path via a symlink is effectively defeated. I\nhaven't found any way to stop this behaviour.\n\nThe second patch therefore adjusts the test to avoid calling perl2host\non such a path. It just calls perl2host on the symlink's parent, and\nthereafter uses that result.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 28 Jul 2021 09:31:05 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "fixing pg_basebackup tests for modern Windows/msys2"
},
{
"msg_contents": "\nOn 7/28/21 9:31 AM, Andrew Dunstan wrote:\n> While looking into issues with fairywren and pg_basebackup tests, I\n> created a similar environment but with more modern Windows / msys2.\n> Before it even got to the test that failed on fairywren it failed the\n> first TAP test for a variety of reasons, all connected to\n> TestLib::perl2host.\n>\n> First, this function is in some cases returning paths for directories\n> with trailing slashes and or embedded double slashes. Both of these can\n> cause problems, especially when written to a tablespace map file. Also,\n> the cygpath invocation is returning a path with backslashes whereas \"pwd\n> -W' returns a path with forward slashes.\n>\n> So the first attached patch rectifies these problems. It fixes issues\n> with doubles and trailing slashes and makes cygpath return a path with\n> forward slashes just like the non-cygpath branch.\n>\n> However, there is another problem, which is that if called on a path\n> that includes a symlink, on the test platform I set up it actually\n> resolves that link rather than just following it. The end result is that\n> the use of a shorter path via a symlink is effectively defeated. I\n> haven't found any way to stop this behaviour.\n>\n> The second patch therefore adjusts the test to avoid calling perl2host\n> on such a path. It just calls perl2host on the symlink's parent, and\n> thereafter uses that result.\n>\n>\n\n\nI've pushed these in master and REL_14_STABLE.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 29 Jul 2021 12:23:30 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: fixing pg_basebackup tests for modern Windows/msys2"
}
] |
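The path cleanup described in the thread above — collapse embedded double slashes, drop trailing slashes, and standardize on forward slashes the way `pwd -W` emits them — lives in Perl, in `TestLib::perl2host`. As an illustration only, the same normalization rules might be sketched in Python like this; the function name and the drive-root special case are my own choices, not part of the actual patch.

```python
import re


def normalize_msys_path(path: str) -> str:
    """Normalize a Windows/msys2-style path roughly the way the
    perl2host fix described above does."""
    # cygpath can emit backslashes; 'pwd -W' emits forward slashes,
    # so standardize on forward slashes.
    path = path.replace("\\", "/")
    # Collapse embedded double (or longer) slash runs: "a//b" -> "a/b".
    while "//" in path:
        path = path.replace("//", "/")
    # Strip trailing slashes -- a trailing slash written into a
    # tablespace map file is one of the problems noted above -- but
    # keep a bare "/" and a drive root like "C:/" intact.
    if len(path) > 1 and not re.fullmatch(r"[A-Za-z]:/", path):
        path = path.rstrip("/")
    return path
```

For example, `normalize_msys_path("C:\\msys64\\home\\pgrunner\\")` yields `"C:/msys64/home/pgrunner"`, matching the forward-slash form that the non-cygpath branch of `perl2host` produces.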