[ { "msg_contents": "Hi,\n\nGetConfigOptionValues function extracts the config parameters for the\ngiven variable irrespective of whether it results in noshow or not.\nBut the parent function show_all_settings ignores the values parameter\nif it results in noshow. It's unnecessary to fetch all the values\nduring noshow. So a return statement in GetConfigOptionValues() when\nnoshow is set to true is needed. Attached the patch for the same.\nPlease share your thoughts.\n\nThanks & Regards,\nNitin Jadhav\n\n\n", "msg_date": "Wed, 18 Jan 2023 13:21:05 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Improve GetConfigOptionValues function" }, { "msg_contents": "Attaching the patch.\n\nOn Wed, Jan 18, 2023 at 1:21 PM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n>\n> Hi,\n>\n> GetConfigOptionValues function extracts the config parameters for the\n> given variable irrespective of whether it results in noshow or not.\n> But the parent function show_all_settings ignores the values parameter\n> if it results in noshow. It's unnecessary to fetch all the values\n> during noshow. So a return statement in GetConfigOptionValues() when\n> noshow is set to true is needed. 
Attached the patch for the same.\n> Please share your thoughts.\n>\n> Thanks & Regards,\n> Nitin Jadhav", "msg_date": "Wed, 18 Jan 2023 13:23:30 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Improve GetConfigOptionValues function" }, { "msg_contents": "On Wed, Jan 18, 2023 at 1:24 PM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n>\n> Attaching the patch.\n>\n> On Wed, Jan 18, 2023 at 1:21 PM Nitin Jadhav\n> <nitinjadhavpostgres@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > GetConfigOptionValues function extracts the config parameters for the\n> > given variable irrespective of whether it results in noshow or not.\n> > But the parent function show_all_settings ignores the values parameter\n> > if it results in noshow. It's unnecessary to fetch all the values\n> > during noshow. So a return statement in GetConfigOptionValues() when\n> > noshow is set to true is needed. Attached the patch for the same.\n> > Please share your thoughts.\n\nYes, the existing caller isn't using the fetched values when noshow is\nset to true. However, I think returning from GetConfigOptionValues()\nwhen noshow is set to true without fetching values limits the use of\nthe function. What if someother caller wants to use the function to\nget the values with noshow passed in and use the values when noshow is\nset to true?\n\nAlso, do we gain anything with the patch? I mean, can\nshow_all_settings()/pg_settings/pg_show_all_settings() get faster, say\nwith a non-superuser without pg_read_all_settings predefined role or\nwith a superuser? 
I see there're about 6 GUC_NO_SHOW_ALL GUCs and 20\nGUC_SUPERUSER_ONLY GUCs, I'm not sure if it leads to some benefit with\nthe patch.\n\nHaving said above, I don't mind keeping the things the way they're right now.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 18 Jan 2023 13:46:50 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improve GetConfigOptionValues function" }, { "msg_contents": "Nitin Jadhav <nitinjadhavpostgres@gmail.com> writes:\n> GetConfigOptionValues function extracts the config parameters for the\n> given variable irrespective of whether it results in noshow or not.\n> But the parent function show_all_settings ignores the values parameter\n> if it results in noshow. It's unnecessary to fetch all the values\n> during noshow. So a return statement in GetConfigOptionValues() when\n> noshow is set to true is needed. Attached the patch for the same.\n> Please share your thoughts.\n\nI do not think this is an improvement: it causes GetConfigOptionValues\nto be making assumptions about how its results will be used. If\nshow_all_settings() were a big performance bottleneck, and there were\na lot of no-show values that we could optimize, then maybe the extra\ncoupling would be worthwhile. 
But I don't believe either of those\nthings.\n\nPossibly a better answer is to refactor into separate functions,\nalong the lines of\n\nstatic bool\nConfigOptionIsShowable(struct config_generic *conf)\n\nstatic void\nGetConfigOptionValues(struct config_generic *conf, const char **values)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 18 Jan 2023 11:14:47 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Improve GetConfigOptionValues function" }, { "msg_contents": "On Wed, Jan 18, 2023 at 9:44 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Possibly a better answer is to refactor into separate functions,\n> along the lines of\n>\n> static bool\n> ConfigOptionIsShowable(struct config_generic *conf)\n>\n> static void\n> GetConfigOptionValues(struct config_generic *conf, const char **values)\n\n+1 and ConfigOptionIsShowable() function can replace explicit showable\nchecks in two other places too.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 18 Jan 2023 22:04:31 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improve GetConfigOptionValues function" }, { "msg_contents": "> Yes, the existing caller isn't using the fetched values when noshow is\n> set to true. However, I think returning from GetConfigOptionValues()\n> when noshow is set to true without fetching values limits the use of\n> the function. What if someother caller wants to use the function to\n> get the values with noshow passed in and use the values when noshow is\n> set to true?\n\nI agree that it limits the use of the function but I feel it is good\nto focus on the existing use cases and modify the functions\naccordingly. In future, if such a use case occurs, then the author\nshould take care of modifying the required functions. 
The idea\nsuggested by Tom to refactor the function looks good as it aligns with\ncurrent use cases and it can be used in future cases as you were\nsuggesting.\n\n\n> Also, do we gain anything with the patch? I mean, can\n> show_all_settings()/pg_settings/pg_show_all_settings() get faster, say\n> with a non-superuser without pg_read_all_settings predefined role or\n> with a superuser? I see there're about 6 GUC_NO_SHOW_ALL GUCs and 20\n> GUC_SUPERUSER_ONLY GUCs, I'm not sure if it leads to some benefit with\n> the patch.\n\nAs the number of such parameters (GUC_NO_SHOW_ALL and\nGUC_SUPERUSER_ONLY) are less, we may not see improvements in\nperformance. We can treat it as a kind of refactoring.\n\nThanks & Regards,\nNitin Jadhav\n\nOn Wed, Jan 18, 2023 at 1:47 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, Jan 18, 2023 at 1:24 PM Nitin Jadhav\n> <nitinjadhavpostgres@gmail.com> wrote:\n> >\n> > Attaching the patch.\n> >\n> > On Wed, Jan 18, 2023 at 1:21 PM Nitin Jadhav\n> > <nitinjadhavpostgres@gmail.com> wrote:\n> > >\n> > > Hi,\n> > >\n> > > GetConfigOptionValues function extracts the config parameters for the\n> > > given variable irrespective of whether it results in noshow or not.\n> > > But the parent function show_all_settings ignores the values parameter\n> > > if it results in noshow. It's unnecessary to fetch all the values\n> > > during noshow. So a return statement in GetConfigOptionValues() when\n> > > noshow is set to true is needed. Attached the patch for the same.\n> > > Please share your thoughts.\n>\n> Yes, the existing caller isn't using the fetched values when noshow is\n> set to true. However, I think returning from GetConfigOptionValues()\n> when noshow is set to true without fetching values limits the use of\n> the function. 
What if someother caller wants to use the function to\n> get the values with noshow passed in and use the values when noshow is\n> set to true?\n>\n> Also, do we gain anything with the patch? I mean, can\n> show_all_settings()/pg_settings/pg_show_all_settings() get faster, say\n> with a non-superuser without pg_read_all_settings predefined role or\n> with a superuser? I see there're about 6 GUC_NO_SHOW_ALL GUCs and 20\n> GUC_SUPERUSER_ONLY GUCs, I'm not sure if it leads to some benefit with\n> the patch.\n>\n> Having said above, I don't mind keeping the things the way they're right now.\n>\n> --\n> Bharath Rupireddy\n> PostgreSQL Contributors Team\n> RDS Open Source Databases\n> Amazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 19 Jan 2023 14:26:26 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Improve GetConfigOptionValues function" }, { "msg_contents": "> Possibly a better answer is to refactor into separate functions,\n> along the lines of\n>\n> static bool\n> ConfigOptionIsShowable(struct config_generic *conf)\n>\n> static void\n> GetConfigOptionValues(struct config_generic *conf, const char **values)\n\nNice suggestion. Attached a patch for the same. Please share the\ncomments if any.\n\nThanks & Regards,\nNitin Jadhav\n\nOn Wed, Jan 18, 2023 at 9:44 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Nitin Jadhav <nitinjadhavpostgres@gmail.com> writes:\n> > GetConfigOptionValues function extracts the config parameters for the\n> > given variable irrespective of whether it results in noshow or not.\n> > But the parent function show_all_settings ignores the values parameter\n> > if it results in noshow. It's unnecessary to fetch all the values\n> > during noshow. So a return statement in GetConfigOptionValues() when\n> > noshow is set to true is needed. 
Attached the patch for the same.\n> > Please share your thoughts.\n>\n> I do not think this is an improvement: it causes GetConfigOptionValues\n> to be making assumptions about how its results will be used. If\n> show_all_settings() were a big performance bottleneck, and there were\n> a lot of no-show values that we could optimize, then maybe the extra\n> coupling would be worthwhile. But I don't believe either of those\n> things.\n>\n> Possibly a better answer is to refactor into separate functions,\n> along the lines of\n>\n> static bool\n> ConfigOptionIsShowable(struct config_generic *conf)\n>\n> static void\n> GetConfigOptionValues(struct config_generic *conf, const char **values)\n>\n> regards, tom lane", "msg_date": "Thu, 19 Jan 2023 15:26:34 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Improve GetConfigOptionValues function" }, { "msg_contents": "On Thu, Jan 19, 2023 at 3:27 PM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n>\n> > Possibly a better answer is to refactor into separate functions,\n> > along the lines of\n> >\n> > static bool\n> > ConfigOptionIsShowable(struct config_generic *conf)\n> >\n> > static void\n> > GetConfigOptionValues(struct config_generic *conf, const char **values)\n>\n> Nice suggestion. Attached a patch for the same. Please share the\n> comments if any.\n\nThe v2 patch looks good to me except the comment around\nConfigOptionIsShowable() which is too verbose. 
How about just \"Return\nwhether the GUC variable is visible or not.\"?\n\nI think you can add it to CF, if not done, to not lose track of it.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 23 Jan 2023 11:29:51 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improve GetConfigOptionValues function" }, { "msg_contents": "> The v2 patch looks good to me except the comment around\n> ConfigOptionIsShowable() which is too verbose. How about just \"Return\n> whether the GUC variable is visible or not.\"?\n\nSounds good. Updated in the v3 patch attached.\n\n\n> I think you can add it to CF, if not done, to not lose track of it.\n\nAdded https://commitfest.postgresql.org/42/4140/\n\nThanks & Regards,\nNitin Jadhav\n\nOn Mon, Jan 23, 2023 at 11:30 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Thu, Jan 19, 2023 at 3:27 PM Nitin Jadhav\n> <nitinjadhavpostgres@gmail.com> wrote:\n> >\n> > > Possibly a better answer is to refactor into separate functions,\n> > > along the lines of\n> > >\n> > > static bool\n> > > ConfigOptionIsShowable(struct config_generic *conf)\n> > >\n> > > static void\n> > > GetConfigOptionValues(struct config_generic *conf, const char **values)\n> >\n> > Nice suggestion. Attached a patch for the same. Please share the\n> > comments if any.\n>\n> The v2 patch looks good to me except the comment around\n> ConfigOptionIsShowable() which is too verbose. 
How about just \"Return\n> whether the GUC variable is visible or not.\"?\n>\n> I think you can add it to CF, if not done, to not lose track of it.\n>\n> --\n> Bharath Rupireddy\n> PostgreSQL Contributors Team\n> RDS Open Source Databases\n> Amazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 23 Jan 2023 15:29:16 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Improve GetConfigOptionValues function" }, { "msg_contents": "On Mon, Jan 23, 2023 at 3:29 PM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n>\n> > The v2 patch looks good to me except the comment around\n> > ConfigOptionIsShowable() which is too verbose. How about just \"Return\n> > whether the GUC variable is visible or not.\"?\n>\n> Sounds good. Updated in the v3 patch attached.\n>\n> > I think you can add it to CF, if not done, to not lose track of it.\n>\n> Added https://commitfest.postgresql.org/42/4140/\n\nLGTM. I've marked it RfC.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 23 Jan 2023 16:18:20 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improve GetConfigOptionValues function" }, { "msg_contents": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> LGTM. I've marked it RfC.\n\nAfter looking at this, it seemed to me that the factorization\nwasn't quite right after all: specifically, the new function\ncould be used in several more places if it confines itself to\nbeing a privilege check and doesn't consider GUC_NO_SHOW_ALL.\nSo more like the attached.\n\nYou could argue that the factorization is illusory since each\nof these additional call sites has an error message that knows\nexactly what the conditions are to succeed. 
But if we want to\ngo that direction then I'd be inclined to forget about the\npermissions-check function altogether and just test the\nconditions in-line everywhere.\n\nAlso, I intentionally dropped the GUC_NO_SHOW_ALL check in\nget_explain_guc_options, because it seems redundant given\nthe preceding GUC_EXPLAIN check. It's unlikely we'd ever have\na variable that's marked both GUC_EXPLAIN and GUC_NO_SHOW_ALL ...\nbut if we did, shouldn't the former take precedence here anyway?\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 23 Jan 2023 11:21:53 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Improve GetConfigOptionValues function" }, { "msg_contents": "On Mon, Jan 23, 2023 at 9:51 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> > LGTM. I've marked it RfC.\n>\n> After looking at this, it seemed to me that the factorization\n> wasn't quite right after all: specifically, the new function\n> could be used in several more places if it confines itself to\n> being a privilege check and doesn't consider GUC_NO_SHOW_ALL.\n> So more like the attached.\n\nThanks. It looks even cleaner now.\n\n> Also, I intentionally dropped the GUC_NO_SHOW_ALL check in\n> get_explain_guc_options, because it seems redundant given\n> the preceding GUC_EXPLAIN check. It's unlikely we'd ever have\n> a variable that's marked both GUC_EXPLAIN and GUC_NO_SHOW_ALL ...\n> but if we did, shouldn't the former take precedence here anyway?\n\nYou're right, but there's nothing that prevents users writing GUCs\nwith GUC_EXPLAIN and GUC_NO_SHOW_ALL. FWIW, I prefer retaining the\nbehaviour as-is i.e. 
we can have explicit if (conf->flags &\nGUC_NO_SHOW_ALL) continue; there in get_explain_guc_options().\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 24 Jan 2023 20:16:18 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improve GetConfigOptionValues function" }, { "msg_contents": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> On Mon, Jan 23, 2023 at 9:51 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Also, I intentionally dropped the GUC_NO_SHOW_ALL check in\n>> get_explain_guc_options, because it seems redundant given\n>> the preceding GUC_EXPLAIN check. It's unlikely we'd ever have\n>> a variable that's marked both GUC_EXPLAIN and GUC_NO_SHOW_ALL ...\n>> but if we did, shouldn't the former take precedence here anyway?\n\n> You're right, but there's nothing that prevents users writing GUCs\n> with GUC_EXPLAIN and GUC_NO_SHOW_ALL.\n\n\"Users\"? You do realize those flags are only settable by C code,\nright? Moreover, you haven't explained why it would be good that\nyou can't get at the behavior that a GUC is both shown in EXPLAIN\nand not shown in SHOW ALL. If you want \"not shown by either\",\nthat's already accessible by setting only the GUC_NO_SHOW_ALL\nflag. 
So I'd almost argue this is a bug fix, though I concede\nit's a bit hard to imagine why somebody would want that choice.\nStill, if we have two independent flags they should produce four\nbehaviors, not just three.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 24 Jan 2023 10:13:23 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Improve GetConfigOptionValues function" }, { "msg_contents": "> After looking at this, it seemed to me that the factorization\n> wasn't quite right after all: specifically, the new function\n> could be used in several more places if it confines itself to\n> being a privilege check and doesn't consider GUC_NO_SHOW_ALL.\n> So more like the attached.\n>\n> You could argue that the factorization is illusory since each\n> of these additional call sites has an error message that knows\n> exactly what the conditions are to succeed. But if we want to\n> go that direction then I'd be inclined to forget about the\n> permissions-check function altogether and just test the\n> conditions in-line everywhere.\n\nI am ok with the above changes. I thought of modifying the\nConfigOptionIsVisible function to take an extra argument, say\nvalidate_superuser_only. If this argument is true then it only\nconsiders GUC_SUPERUSER_ONLY check and return based on that. Otherwise\nit considers both GUC_SUPERUSER_ONLY and GUC_NO_SHOW_ALL and returns\nbased on that. I understand that this just complicates the function\nand has other disadvantages. Instead of testing the conditions\nin-line, I prefer the use of function as done in v4 patch as it\nreduces the code size.\n\nThanks & Regards,\nNitin Jadhav\n\nOn Mon, Jan 23, 2023 at 9:51 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> > LGTM. 
I've marked it RfC.\n>\n> After looking at this, it seemed to me that the factorization\n> wasn't quite right after all: specifically, the new function\n> could be used in several more places if it confines itself to\n> being a privilege check and doesn't consider GUC_NO_SHOW_ALL.\n> So more like the attached.\n>\n> You could argue that the factorization is illusory since each\n> of these additional call sites has an error message that knows\n> exactly what the conditions are to succeed. But if we want to\n> go that direction then I'd be inclined to forget about the\n> permissions-check function altogether and just test the\n> conditions in-line everywhere.\n>\n> Also, I intentionally dropped the GUC_NO_SHOW_ALL check in\n> get_explain_guc_options, because it seems redundant given\n> the preceding GUC_EXPLAIN check. It's unlikely we'd ever have\n> a variable that's marked both GUC_EXPLAIN and GUC_NO_SHOW_ALL ...\n> but if we did, shouldn't the former take precedence here anyway?\n>\n> regards, tom lane\n>\n\n\n", "msg_date": "Wed, 25 Jan 2023 17:36:22 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Improve GetConfigOptionValues function" }, { "msg_contents": ">>> Also, I intentionally dropped the GUC_NO_SHOW_ALL check in\n>>> get_explain_guc_options, because it seems redundant given\n>>> the preceding GUC_EXPLAIN check. It's unlikely we'd ever have\n>>> a variable that's marked both GUC_EXPLAIN and GUC_NO_SHOW_ALL ...\n>>> but if we did, shouldn't the former take precedence here anyway?\n>\n>> You're right, but there's nothing that prevents users writing GUCs\n>> with GUC_EXPLAIN and GUC_NO_SHOW_ALL.\n>\n> \"Users\"? You do realize those flags are only settable by C code,\n> right? Moreover, you haven't explained why it would be good that\n> you can't get at the behavior that a GUC is both shown in EXPLAIN\n> and not shown in SHOW ALL. 
If you want \"not shown by either\",\n> that's already accessible by setting only the GUC_NO_SHOW_ALL\n> flag. So I'd almost argue this is a bug fix, though I concede\n> it's a bit hard to imagine why somebody would want that choice.\n> Still, if we have two independent flags they should produce four\n> behaviors, not just three.\n\nI agree that the developer can use both GUC_NO_SHOW_ALL and\nGUC_EXPLAIN knowingly or unknowingly for a single GUC. If used by\nmistake then according to the existing code (without patch),\nGUC_NO_SHOW_ALL takes higher precedence whether it is marked first or\nlast in the code. I am more convinced with this behaviour as I feel it\nis safer than exposing the information which the developer might not\nhave intended.\n\nThanks & Regards,\nNitin Jadhav\n\nOn Tue, Jan 24, 2023 at 8:43 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> > On Mon, Jan 23, 2023 at 9:51 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Also, I intentionally dropped the GUC_NO_SHOW_ALL check in\n> >> get_explain_guc_options, because it seems redundant given\n> >> the preceding GUC_EXPLAIN check. It's unlikely we'd ever have\n> >> a variable that's marked both GUC_EXPLAIN and GUC_NO_SHOW_ALL ...\n> >> but if we did, shouldn't the former take precedence here anyway?\n>\n> > You're right, but there's nothing that prevents users writing GUCs\n> > with GUC_EXPLAIN and GUC_NO_SHOW_ALL.\n>\n> \"Users\"? You do realize those flags are only settable by C code,\n> right? Moreover, you haven't explained why it would be good that\n> you can't get at the behavior that a GUC is both shown in EXPLAIN\n> and not shown in SHOW ALL. If you want \"not shown by either\",\n> that's already accessible by setting only the GUC_NO_SHOW_ALL\n> flag. 
So I'd almost argue this is a bug fix, though I concede\n> it's a bit hard to imagine why somebody would want that choice.\n> Still, if we have two independent flags they should produce four\n> behaviors, not just three.\n>\n> regards, tom lane\n\n\n", "msg_date": "Wed, 25 Jan 2023 17:46:56 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Improve GetConfigOptionValues function" }, { "msg_contents": "Nitin Jadhav <nitinjadhavpostgres@gmail.com> writes:\n> I agree that the developer can use both GUC_NO_SHOW_ALL and\n> GUC_EXPLAIN knowingly or unknowingly for a single GUC. If used by\n> mistake then according to the existing code (without patch),\n> GUC_NO_SHOW_ALL takes higher precedence whether it is marked first or\n> last in the code. I am more convinced with this behaviour as I feel it\n> is safer than exposing the information which the developer might not\n> have intended.\n\nBoth of you are arguing as though GUC_NO_SHOW_ALL is a security\nproperty. It is not, or at least it's so trivially bypassable\nthat it's useless to consider it one. All it is is a de-clutter\nmechanism.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 25 Jan 2023 10:53:09 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Improve GetConfigOptionValues function" }, { "msg_contents": "On Tue, Jan 24, 2023 at 8:43 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> > On Mon, Jan 23, 2023 at 9:51 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Also, I intentionally dropped the GUC_NO_SHOW_ALL check in\n> >> get_explain_guc_options, because it seems redundant given\n> >> the preceding GUC_EXPLAIN check. 
It's unlikely we'd ever have\n> >> a variable that's marked both GUC_EXPLAIN and GUC_NO_SHOW_ALL ...\n> >> but if we did, shouldn't the former take precedence here anyway?\n>\n> > You're right, but there's nothing that prevents users writing GUCs\n> > with GUC_EXPLAIN and GUC_NO_SHOW_ALL.\n>\n> \"Users\"? You do realize those flags are only settable by C code,\n> right?\n\nI meant extensions here.\n\n> Moreover, you haven't explained why it would be good that\n> you can't get at the behavior that a GUC is both shown in EXPLAIN\n> and not shown in SHOW ALL. If you want \"not shown by either\",\n> that's already accessible by setting only the GUC_NO_SHOW_ALL\n> flag. So I'd almost argue this is a bug fix, though I concede\n> it's a bit hard to imagine why somebody would want that choice.\n> Still, if we have two independent flags they should produce four\n> behaviors, not just three.\n\nGot it. I think I should've looked at GUC_NO_SHOW_ALL and GUC_EXPLAIN\nas separate flags not depending on each other in any way, meaning,\nGUCs marked with GUC_NO_SHOW_ALL mustn't be shown in SHOW ALL and its\nfriends\npg_settings and pg_show_all_settings(), and GUCs marked with\nGUC_EXPLAIN must be shown in explain output. This understanding is\nclear and simple IMO.\n\nHaving said that, I have no objection to the v4 patch posted upthread.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 27 Jan 2023 09:35:41 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improve GetConfigOptionValues function" }, { "msg_contents": "> Both of you are arguing as though GUC_NO_SHOW_ALL is a security\n> property. It is not, or at least it's so trivially bypassable\n> that it's useless to consider it one. All it is is a de-clutter\n> mechanism.\n\nUnderstood. 
If that is the case, then I am ok with the patch.\n\nThanks & Regards,\nNitin Jadhav\n\nOn Wed, Jan 25, 2023 at 9:23 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Nitin Jadhav <nitinjadhavpostgres@gmail.com> writes:\n> > I agree that the developer can use both GUC_NO_SHOW_ALL and\n> > GUC_EXPLAIN knowingly or unknowingly for a single GUC. If used by\n> > mistake then according to the existing code (without patch),\n> > GUC_NO_SHOW_ALL takes higher precedence whether it is marked first or\n> > last in the code. I am more convinced with this behaviour as I feel it\n> > is safer than exposing the information which the developer might not\n> > have intended.\n>\n> Both of you are arguing as though GUC_NO_SHOW_ALL is a security\n> property. It is not, or at least it's so trivially bypassable\n> that it's useless to consider it one. All it is is a de-clutter\n> mechanism.\n>\n> regards, tom lane\n\n\n", "msg_date": "Fri, 27 Jan 2023 18:46:43 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Improve GetConfigOptionValues function" }, { "msg_contents": "Nitin Jadhav <nitinjadhavpostgres@gmail.com> writes:\n>> Both of you are arguing as though GUC_NO_SHOW_ALL is a security\n>> property. It is not, or at least it's so trivially bypassable\n>> that it's useless to consider it one. All it is is a de-clutter\n>> mechanism.\n\n> Understood. If that is the case, then I am ok with the patch.\n\nPushed v4, then.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 27 Jan 2023 12:14:22 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Improve GetConfigOptionValues function" } ]
[ { "msg_contents": "Hi all,\n\nA recent system update of a Debian SID host has begun to show me this\nissue:\n./src/bin/psql/create_help.pl: Bareword dir handle opened at line 47,\ncolumn 1. See pages 202,204 of PBP.\n([InputOutput::ProhibitBarewordDirHandles] Severity: 5)\n\nThis issue gets fixed here as of the attached.\nComments?\n--\nMichael", "msg_date": "Wed, 18 Jan 2023 17:50:47 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Issue with psql's create_help.pl under perlcritic" }, { "msg_contents": "\nOn 2023-01-18 We 03:50, Michael Paquier wrote:\n> Hi all,\n>\n> A recent system update of a Debian SID host has begun to show me this\n> issue:\n> ./src/bin/psql/create_help.pl: Bareword dir handle opened at line 47,\n> column 1. See pages 202,204 of PBP.\n> ([InputOutput::ProhibitBarewordDirHandles] Severity: 5)\n>\n> This issue gets fixed here as of the attached.\n> Comments?\n\nLooks fine. Interesting it's not caught by perlcritic on my Fedora 35\ninstance, nor my Ubuntu 22.04 instance.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 18 Jan 2023 08:43:01 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Issue with psql's create_help.pl under perlcritic" }, { "msg_contents": "On Wed, Jan 18, 2023 at 08:43:01AM -0500, Andrew Dunstan wrote:\n> Looks fine.\n\nThanks for double-checking, will apply shortly.\n\n> Interesting it's not caught by perlcritic on my Fedora 35\n> instance, nor my Ubuntu 22.04 instance.\n\nPerhaps that just shows up on the latest version of perlcritic? I use\n1.148 in the box where this issue shows up, for all the stable\nbranches of Postgres.\n--\nMichael", "msg_date": "Thu, 19 Jan 2023 09:33:59 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Issue with psql's create_help.pl under perlcritic" } ]
[ { "msg_contents": "Hi,\n\nI was playing around with splitting up the tablespace test in regress so\nthat I could use the tablespaces it creates in another test and happened\nto notice that the pg_class validity checks in type_sanity.sql are\nincomplete.\n\nIt seems that 8b08f7d4820fd did not update the pg_class tests in\ntype_sanity to include partitioned indexes and tables.\n\npatch attached.\nI only changed these few lines in type_sanity to be more correct; I\ndidn't change anything else in regress to actually exercise them (e.g.\nensuring a partitioned table is around when running type_sanity). It\nmight be worth moving type_sanity down in the parallel schedule?\n\nIt does seem a bit hard to remember to update these tests in\ntype_sanity.sql when adding some new value for a pg_class field. I\nwonder if there is a better way of testing this.\n\n- Melanie", "msg_date": "Wed, 18 Jan 2023 14:51:32 -0500", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": true, "msg_subject": "Small omission in type_sanity.sql" }, { "msg_contents": "Hi,\n\nOn 2023-01-18 14:51:32 -0500, Melanie Plageman wrote:\n> I only changed these few lines in type_sanity to be more correct; I\n> didn't change anything else in regress to actually exercise them (e.g.\n> ensuring a partitioned table is around when running type_sanity). It\n> might be worth moving type_sanity down in the parallel schedule?\n\nUnfortunately it's not entirely trivial to do that. Despite the comment at the\ntop of type_sanity.sql:\n-- None of the SELECTs here should ever find any matching entries,\n-- so the expected output is easy to maintain ;-).\n\nthere are actually a few queries in there that are expected to return\nobjects. And would return more at the end of the tests.\n\nThat doesn't seem great.\n\n\nTom, is there a reason we run the various sanity tests early-ish in the\nschedule? 
It does seem to reduce their effectiveness a bit...\n\n\nProblems:\n- shell types show up in a bunch of queries, e.g. \"Look for illegal values in\n pg_type fields.\" due to NOT t1.typisdefined\n- the omission of various relkinds noted by Melanie shows in \"Look for illegal\n values in pg_class fields\"\n- pg_attribute query doesn't know about dropped columns\n- \"Cross-check against pg_type entry\" is far too strict about legal combinations\n of typstorage\n\n\n\n> It does seem a bit hard to remember to update these tests in\n> type_sanity.sql when adding some new value for a pg_class field. I\n> wonder if there is a better way of testing this.\n\nAs evidenced by the above list of failures, moving the test to the end of the\nregression tests would help, if we can get there.\n\n\n\n> --- All tables and indexes should have an access method.\n> -SELECT c1.oid, c1.relname\n> -FROM pg_class as c1\n> -WHERE c1.relkind NOT IN ('S', 'v', 'f', 'c') and\n> - c1.relam = 0;\n> - oid | relname \n> ------+---------\n> +-- All tables and indexes except partitioned tables should have an access\n> +-- method.\n> +SELECT oid, relname, relkind, relam\n> +FROM pg_class\n> +WHERE relkind NOT IN ('S', 'v', 'f', 'c', 'p') and\n> + relam = 0;\n> + oid | relname | relkind | relam \n> +-----+---------+---------+-------\n> (0 rows)\n\nDon't think that one is right, a partitioned table doesn't have an AM.\n\n\n> --- Conversely, sequences, views, types shouldn't have them\n> -SELECT c1.oid, c1.relname\n> -FROM pg_class as c1\n> -WHERE c1.relkind IN ('S', 'v', 'f', 'c') and\n> - c1.relam != 0;\n> - oid | relname \n> ------+---------\n> +-- Conversely, sequences, views, types, and partitioned tables shouldn't have\n> +-- them\n> +SELECT oid, relname, relkind, relam\n> +FROM pg_class\n> +WHERE relkind IN ('S', 'v', 'f', 'c', 'p') and\n> + relam != 0;\n> + oid | relname | relkind | relam \n> +-----+---------+---------+-------\n> (0 rows)\n\nParticularly because you include them again here 
:)\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 27 Jan 2023 17:25:09 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Small omission in type_sanity.sql" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Tom, is there a reason we run the various sanity tests early-ish in the\n> schedule? It does seem to reduce their effectiveness a bit...\n\nOriginally, those tests were mainly needed to sanity-check the\nhand-maintained initial catalog data, so it made sense to run them\nearly. Since we taught genbki.pl to do a bunch more work, that's\nperhaps a bit less pressing.\n\nThere's at least one test that intentionally sets up a bogus btree\nopclass, which we'd have to drop again if we wanted to run the\nsanity checks later. Not sure what other issues might surface.\nYou could find out easily enough, of course ...\n\n> Problems:\n> - \"Cross-check against pg_type entry\" is far too strict about legal combinations\n> of typstorage\n\nPerhaps, but it's enforcing policy about what we want in the\ninitial catalog data, not what is possible to support. So\nthere's a bit of divergence of goals here too. Maybe we need\nto split up the tests into initial-data-only tests (run early)\nand tests that should hold for user-created objects too\n(run late)?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 27 Jan 2023 20:39:04 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Small omission in type_sanity.sql" }, { "msg_contents": "Hi,\n\nOn 2023-01-27 20:39:04 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Tom, is there a reason we run the various sanity tests early-ish in the\n> > schedule? 
It does seem to reduce their effectiveness a bit...\n> \n> Originally, those tests were mainly needed to sanity-check the\n> hand-maintained initial catalog data, so it made sense to run them\n> early.\n\nIt's also kinda useful to have some basic validity testing early on, because\nif there's something wrong with the catalog values, it'll cause lots of issues\nlater.\n\n\n> > Problems:\n> > - \"Cross-check against pg_type entry\" is far too strict about legal combinations\n> > of typstorage\n> \n> Perhaps, but it's enforcing policy about what we want in the\n> initial catalog data, not what is possible to support.\n\nTrue in generaly, but I don't think it matters much in this specific case. We\ndon't gain much by forbidding 'e' -> 'x' mismatches, given that we allow 'x'\n-> 'p'.\n\n\nThere's a lot more such cases in opr_sanity. There's a lot of tests in it that\nonly make sense for validating the initial catalog contents. It might be\nuseful to run a more lenient version of it later though.\n\n\n> So there's a bit of divergence of goals here too. Maybe we need to split up\n> the tests into initial-data-only tests (run early) and tests that should\n> hold for user-created objects too (run late)?\n\nYea, I think so. A bit worried about the duplication that might require.\n\nBut the *sanity tests also do also encode a lot of good cross-checks, that are\nsomewhat easy to break in code (and / or have been broken in the past), so I\nthink it's worth pursuing.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 27 Jan 2023 18:30:54 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Small omission in type_sanity.sql" }, { "msg_contents": "On 2023-Jan-27, Andres Freund wrote:\n\n> On 2023-01-27 20:39:04 -0500, Tom Lane wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> > > Tom, is there a reason we run the various sanity tests early-ish in the\n> > > schedule? 
It does seem to reduce their effectiveness a bit...\n> > \n> > Originally, those tests were mainly needed to sanity-check the\n> > hand-maintained initial catalog data, so it made sense to run them\n> > early.\n> \n> It's also kinda useful to have some basic validity testing early on, because\n> if there's something wrong with the catalog values, it'll cause lots of issues\n> later.\n\nWe can just list the tests twice in the schedule ...\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 9 Mar 2023 11:45:30 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Small omission in type_sanity.sql" } ]
[ { "msg_contents": "Just my luck, I had to dig into a two-\"character\" emoji that came to me\nas part of a Google Calendar entry --- here it is:\n\n\t👩🏼‍⚕️🩺\n\t\n\t libc\n\tUnicode UTF8 len\n\tU+1F469 f0 9f 91 a9 2 woman\n\tU+1F3FC f0 9f 8f bc 2 emoji modifier fitzpatrick type-3 (skin tone)\n\tU+200D e2 80 8d 0 zero width joiner (ZWJ)\n\tU+2695 e2 9a 95 1 staff with snake\n\tU+FE0F ef b8 8f 0 variation selector-16 (VS16) (previous character as emoji)\n\tU+1FA7A f0 9f a9 ba 2 stethoscope\n\nNow, in Debian 11 character apps like vi, I see:\n\n a woman(2) - a black box(2) - a staff with snake(1) - a stethoscope(2)\n\nDisplay widths are in parentheses. I also see '<200d>' in blue.\n\nIn current Firefox, I see a woman with a stethoscope around her neck,\nand then a stethoscope. Copying the Unicode string above into a browser\nURL bar should show you the same thing, thought it might be too small to\nsee.\n\nFor those looking for details on how these should be handled, see this\nfor an explanation of grapheme clusters that use things like skin tone\nmodifiers and zero-width joiners:\n\n\thttps://tonsky.me/blog/emoji/\n\nThese comments explain the confusion of the term character:\n\n\thttps://stackoverflow.com/questions/27331819/whats-the-difference-between-a-character-a-code-point-a-glyph-and-a-grapheme\n\nand I think this comment summarizes it well:\n\n\thttps://github.com/kovidgoyal/kitty/issues/3998#issuecomment-914807237\n\n\tThis is by design. wcwidth() is utterly broken. Any terminal or terminal\n\tapplication that uses it is also utterly broken. 
Forget about emoji\n\twcwidth() doesn't even work with combining characters, zero width\n\tjoiners, flags, and a whole bunch of other things.\n\nI decided to see how Postgres, without ICU, handles it:\n\n\tshow lc_ctype;\n\t lc_ctype\n\t-------------\n\t en_US.UTF-8\n\n\tselect octet_length('👩🏼‍⚕️🩺');\n\t octet_length\n\t--------------\n\t 21\n\t\n\tselect character_length('👩🏼‍⚕️🩺');\n\t character_length\n\t------------------\n\t 6\n\nThe octet_length() is verified as correct by counting the UTF8 bytes\nabove. I think character_length() is correct if we consider the number\nof Unicode characters, display and non-display.\n\nI then started looking at how Postgres computes and uses _display_\nwidth. The display width, when properly processed like by Firefox, is 4\n(two double-wide displayed characters.) Based on the libc display\nlengths above and incorrect displayed character lengths in Debian 11, it\nwould be 7.\n\nlibpq has PQdsplen(), which calls pg_encoding_dsplen(), which then calls\nthe per-encoding width function stored in pg_wchar_table.dsplen --- for\nUTF8, the function is pg_utf_dsplen().\n\nThere is no SQL API for display length, but PQdsplen() that can be\ncalled with a string by calling pg_wcswidth() the gdb debugger:\n\n\tpg_wcswidth(const char *pwcs, size_t len, int encoding)\n\tUTF8 encoding == 6\n\n\t(gdb) print (int)pg_wcswidth(\"abcd\", 4, 6)\n\t$8 = 4\n\t(gdb) print (int)pg_wcswidth(\"👩🏼‍⚕️🩺\", 21, 6))\n\t$9 = 7\n\nHere is the psql output:\n\n\tSELECT octet_length('👩🏼‍⚕️🩺'), '👩🏼‍⚕️🩺', character_length('👩🏼‍⚕️🩺');\n\t octet_length | ?column? 
| character_length\n\t--------------+----------+------------------\n\t 21 | 👩🏼‍⚕️🩺 | 6\n\nMore often called from psql are pg_wcssize() and pg_wcsformat(), which\nalso calls PQdsplen().\n\nI think the question is whether we want to report a string width that\nassumes the display doesn't understand the more complex UTF8\ncontrols/\"characters\" listed above.\n \ntsearch has p_isspecial() calls pg_dsplen() which also uses\npg_wchar_table.dsplen. p_isspecial() also has a small table of what it\ncalls \"strange_letter\",\n\nHere is a report about Unicode variation selector and combining\ncharacters from May, 2022:\n\n https://www.postgresql.org/message-id/flat/013f01d873bb%24ff5f64b0%24fe1e2e10%24%40ndensan.co.jp\n\nIs this something people want improved?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\nEmbrace your flaws. They make you human, rather than perfect,\nwhich you will never be.\n\n\n", "msg_date": "Wed, 18 Jan 2023 19:19:59 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Unicode grapheme clusters" }, { "msg_contents": "čt 19. 1. 2023 v 1:20 odesílatel Bruce Momjian <bruce@momjian.us> napsal:\n\n> Just my luck, I had to dig into a two-\"character\" emoji that came to me\n> as part of a Google Calendar entry --- here it is:\n>\n> 👩🏼‍⚕️🩺\n>\n> libc\n> Unicode UTF8 len\n> U+1F469 f0 9f 91 a9 2 woman\n> U+1F3FC f0 9f 8f bc 2 emoji modifier fitzpatrick type-3 (skin\n> tone)\n> U+200D e2 80 8d 0 zero width joiner (ZWJ)\n> U+2695 e2 9a 95 1 staff with snake\n> U+FE0F ef b8 8f 0 variation selector-16 (VS16) (previous\n> character as emoji)\n> U+1FA7A f0 9f a9 ba 2 stethoscope\n>\n> Now, in Debian 11 character apps like vi, I see:\n>\n> a woman(2) - a black box(2) - a staff with snake(1) - a stethoscope(2)\n>\n> Display widths are in parentheses. I also see '<200d>' in blue.\n>\n> In current Firefox, I see a woman with a stethoscope around her neck,\n> and then a stethoscope. 
Copying the Unicode string above into a browser\n> URL bar should show you the same thing, thought it might be too small to\n> see.\n>\n> For those looking for details on how these should be handled, see this\n> for an explanation of grapheme clusters that use things like skin tone\n> modifiers and zero-width joiners:\n>\n> https://tonsky.me/blog/emoji/\n>\n> These comments explain the confusion of the term character:\n>\n>\n> https://stackoverflow.com/questions/27331819/whats-the-difference-between-a-character-a-code-point-a-glyph-and-a-grapheme\n>\n> and I think this comment summarizes it well:\n>\n>\n> https://github.com/kovidgoyal/kitty/issues/3998#issuecomment-914807237\n>\n> This is by design. wcwidth() is utterly broken. Any terminal or\n> terminal\n> application that uses it is also utterly broken. Forget about emoji\n> wcwidth() doesn't even work with combining characters, zero width\n> joiners, flags, and a whole bunch of other things.\n>\n> I decided to see how Postgres, without ICU, handles it:\n>\n> show lc_ctype;\n> lc_ctype\n> -------------\n> en_US.UTF-8\n>\n> select octet_length('👩🏼‍⚕️🩺');\n> octet_length\n> --------------\n> 21\n>\n> select character_length('👩🏼‍⚕️🩺');\n> character_length\n> ------------------\n> 6\n>\n> The octet_length() is verified as correct by counting the UTF8 bytes\n> above. I think character_length() is correct if we consider the number\n> of Unicode characters, display and non-display.\n>\n> I then started looking at how Postgres computes and uses _display_\n> width. The display width, when properly processed like by Firefox, is 4\n> (two double-wide displayed characters.) 
Based on the libc display\n> lengths above and incorrect displayed character lengths in Debian 11, it\n> would be 7.\n>\n> libpq has PQdsplen(), which calls pg_encoding_dsplen(), which then calls\n> the per-encoding width function stored in pg_wchar_table.dsplen --- for\n> UTF8, the function is pg_utf_dsplen().\n>\n> There is no SQL API for display length, but PQdsplen() that can be\n> called with a string by calling pg_wcswidth() the gdb debugger:\n>\n> pg_wcswidth(const char *pwcs, size_t len, int encoding)\n> UTF8 encoding == 6\n>\n> (gdb) print (int)pg_wcswidth(\"abcd\", 4, 6)\n> $8 = 4\n> (gdb) print (int)pg_wcswidth(\"👩🏼‍⚕️🩺\", 21, 6))\n> $9 = 7\n>\n> Here is the psql output:\n>\n> SELECT octet_length('👩🏼‍⚕️🩺'), '👩🏼‍⚕️🩺',\n> character_length('👩🏼‍⚕️🩺');\n> octet_length | ?column? | character_length\n> --------------+----------+------------------\n> 21 | 👩🏼‍⚕️🩺 | 6\n>\n> More often called from psql are pg_wcssize() and pg_wcsformat(), which\n> also calls PQdsplen().\n>\n> I think the question is whether we want to report a string width that\n> assumes the display doesn't understand the more complex UTF8\n> controls/\"characters\" listed above.\n>\n> tsearch has p_isspecial() calls pg_dsplen() which also uses\n> pg_wchar_table.dsplen. p_isspecial() also has a small table of what it\n> calls \"strange_letter\",\n>\n> Here is a report about Unicode variation selector and combining\n> characters from May, 2022:\n>\n>\n> https://www.postgresql.org/message-id/flat/013f01d873bb%24ff5f64b0%24fe1e2e10%24%40ndensan.co.jp\n>\n> Is this something people want improved?\n>\n\nSurely it should be fixed. Unfortunately - all the terminals that I can use\ndon't support it. So at this moment it may be premature to fix it, because\nthe visual form will still be broken.\n\nRegards\n\nPavel\n\n\n> --\n> Bruce Momjian <bruce@momjian.us> https://momjian.us\n> EDB https://enterprisedb.com\n>\n> Embrace your flaws. 
They make you human, rather than perfect,\n> which you will never be.\n>\n>\n>\n\nčt 19. 1. 2023 v 1:20 odesílatel Bruce Momjian <bruce@momjian.us> napsal:Just my luck, I had to dig into a two-\"character\" emoji that came to me\nas part of a Google Calendar entry --- here it is:\n\n        👩🏼‍⚕️🩺\n\n                              libc\n        Unicode     UTF8      len\n        U+1F469  f0 9f 91 a9   2   woman\n        U+1F3FC  f0 9f 8f bc   2   emoji modifier fitzpatrick type-3 (skin tone)\n        U+200D   e2 80 8d      0   zero width joiner (ZWJ)\n        U+2695   e2 9a 95      1   staff with snake\n        U+FE0F   ef b8 8f      0   variation selector-16 (VS16) (previous character as emoji)\n        U+1FA7A  f0 9f a9 ba   2   stethoscope\n\nNow, in Debian 11 character apps like vi, I see:\n\n  a woman(2) - a black box(2) - a staff with snake(1) - a stethoscope(2)\n\nDisplay widths are in parentheses.  I also see '<200d>' in blue.\n\nIn current Firefox, I see a woman with a stethoscope around her neck,\nand then a stethoscope.  Copying the Unicode string above into a browser\nURL bar should show you the same thing, thought it might be too small to\nsee.\n\nFor those looking for details on how these should be handled, see this\nfor an explanation of grapheme clusters that use things like skin tone\nmodifiers and zero-width joiners:\n\n        https://tonsky.me/blog/emoji/\n\nThese comments explain the confusion of the term character:\n\n        https://stackoverflow.com/questions/27331819/whats-the-difference-between-a-character-a-code-point-a-glyph-and-a-grapheme\n\nand I think this comment summarizes it well:\n\n        https://github.com/kovidgoyal/kitty/issues/3998#issuecomment-914807237\n\n        This is by design. wcwidth() is utterly broken. Any terminal or terminal\n        application that uses it is also utterly broken. 
Forget about emoji\n        wcwidth() doesn't even work with combining characters, zero width\n        joiners, flags, and a whole bunch of other things.\n\nI decided to see how Postgres, without ICU, handles it:\n\n        show lc_ctype;\n          lc_ctype\n        -------------\n         en_US.UTF-8\n\n        select octet_length('👩🏼‍⚕️🩺');\n         octet_length\n        --------------\n                   21\n\n        select character_length('👩🏼‍⚕️🩺');\n         character_length\n        ------------------\n                        6\n\nThe octet_length() is verified as correct by counting the UTF8 bytes\nabove.  I think character_length() is correct if we consider the number\nof Unicode characters, display and non-display.\n\nI then started looking at how Postgres computes and uses _display_\nwidth.  The display width, when properly processed like by Firefox, is 4\n(two double-wide displayed characters.)  Based on the libc display\nlengths above and incorrect displayed character lengths in Debian 11, it\nwould be 7.\n\nlibpq has PQdsplen(), which calls pg_encoding_dsplen(), which then calls\nthe per-encoding width function stored in pg_wchar_table.dsplen --- for\nUTF8, the function is pg_utf_dsplen().\n\nThere is no SQL API for display length, but PQdsplen() that can be\ncalled with a string by calling pg_wcswidth() the gdb debugger:\n\n        pg_wcswidth(const char *pwcs, size_t len, int encoding)\n        UTF8 encoding == 6\n\n        (gdb) print (int)pg_wcswidth(\"abcd\", 4, 6)\n        $8 = 4\n        (gdb) print (int)pg_wcswidth(\"👩🏼‍⚕️🩺\", 21, 6))\n        $9 = 7\n\nHere is the psql output:\n\n        SELECT octet_length('👩🏼‍⚕️🩺'), '👩🏼‍⚕️🩺', character_length('👩🏼‍⚕️🩺');\n         octet_length | ?column? 
| character_length\n        --------------+----------+------------------\n                   21 | 👩🏼‍⚕️🩺  |                6\n\nMore often called from psql are pg_wcssize() and pg_wcsformat(), which\nalso calls PQdsplen().\n\nI think the question is whether we want to report a string width that\nassumes the display doesn't understand the more complex UTF8\ncontrols/\"characters\" listed above.\n\ntsearch has p_isspecial() calls pg_dsplen() which also uses\npg_wchar_table.dsplen.  p_isspecial() also has a small table of what it\ncalls \"strange_letter\",\n\nHere is a report about Unicode variation selector and combining\ncharacters from May, 2022:\n\n    https://www.postgresql.org/message-id/flat/013f01d873bb%24ff5f64b0%24fe1e2e10%24%40ndensan.co.jp\n\nIs this something people want improved?Surely it should be fixed. Unfortunately - all the terminals that I can use don't support it. So at this moment it may be premature to fix it, because the visual form will still be broken.RegardsPavel\n\n-- \n  Bruce Momjian  <bruce@momjian.us>        https://momjian.us\n  EDB                                      https://enterprisedb.com\n\nEmbrace your flaws.  They make you human, rather than perfect,\nwhich you will never be.", "msg_date": "Thu, 19 Jan 2023 14:44:57 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Unicode grapheme clusters" }, { "msg_contents": "On Thu, Jan 19, 2023 at 02:44:57PM +0100, Pavel Stehule wrote:\n> Surely it should be fixed. Unfortunately - all the terminals that I can use\n> don't support it. So at this moment it may be premature to fix it, because the\n> visual form will still be broken.\n\nYes, none of my terminal emulators handle grapheme clusters either. 
In\nfact, viewing this email messed up my screen and I had to use control-L\nto fix it.\n\nI think one big problem is that our Unicode library doesn't have any way\nI know of to query the display device to determine how it\nsupports/renders Unicode characters, so any display width we report\ncould be wrong.\n\nOddly, it seems grapheme clusters were added in Unicode 3.2, which came\nout in 2002:\n\n\thttps://www.unicode.org/reports/tr28/tr28-3.html\n\thttps://www.quora.com/What-is-graphemeCluster\n\nbut somehow I am only seeing studying them now.\n\nAnyway, I added a psql item for this so we don't forget about it:\n\n\thttps://wiki.postgresql.org/wiki/Todo#psql\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\nEmbrace your flaws. They make you human, rather than perfect,\nwhich you will never be.\n\n\n", "msg_date": "Thu, 19 Jan 2023 17:40:47 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: Unicode grapheme clusters" }, { "msg_contents": "This is how we've always documented it. Postgres treats code points as\n\"characters\" not graphemes.\n\nYou don't need to go to anything as esoteric as emojis to see this either.\nAccented characters like é have no canonical forms that are multiple code\npoints and in some character sets some accented characters can only be\nrepresented that way.\n\nBut I don't think there's any reason to consider changing e existing\nfunctions. They have to be consistent with substr and the other string\nmanipulation functions.\n\nWe could add new functions to work with graphemes but it might bring more\npain keeping it up to date....\n\nThis is how we've always documented it. Postgres treats code points as \"characters\" not graphemes.You don't need to go to anything as esoteric as emojis to see this either. 
Accented characters like é have no canonical forms that are multiple code points and in some character sets some accented characters can only be represented that way.But I don't think there's any reason to consider changing e existing functions. They have to be consistent with substr and the other string manipulation functions.We could add new functions to work with graphemes but it might bring more pain keeping it up to date....", "msg_date": "Thu, 19 Jan 2023 19:37:48 -0500", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: Unicode grapheme clusters" }, { "msg_contents": "On Thu, Jan 19, 2023 at 07:37:48PM -0500, Greg Stark wrote:\n> This is how we've always documented it. Postgres treats code points as\n> \"characters\" not graphemes.\n> \n> You don't need to go to anything as esoteric as emojis to see this either.\n> Accented characters like é have no canonical forms that are multiple code\n> points and in some character sets some accented characters can only be\n> represented that way.\n> \n> But I don't think there's any reason to consider changing e existing functions.\n> They have to be consistent with substr and the other string manipulation\n> functions.\n> \n> We could add new functions to work with graphemes but it might bring more pain\n> keeping it up to date....\n\nI am not sure what you are referring to above? character_length? I was\ntalking about display length, and psql uses that --- at some point, our\nlack of support for graphemes will cause psql to not align columns.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\nEmbrace your flaws. 
They make you human, rather than perfect,\nwhich you will never be.\n\n\n", "msg_date": "Thu, 19 Jan 2023 19:47:49 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: Unicode grapheme clusters" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> I am not sure what you are referring to above? character_length? I was\n> talking about display length, and psql uses that --- at some point, our\n> lack of support for graphemes will cause psql to not align columns.\n\nThat's going to happen regardless, as long as we can't be sure\nwhat the display will do with the characters --- and that's a\nproblem that will persist for a very long time.\n\nIdeally, yeah, it'd be great if all this stuff rendered perfectly;\nbut IMO it's so far outside mainstream usage of psql that it's\nnot something that could possibly repay the investment of time\nto get even a partial solution.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 19 Jan 2023 19:53:43 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Unicode grapheme clusters" }, { "msg_contents": "On Thu, Jan 19, 2023 at 07:53:43PM -0500, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > I am not sure what you are referring to above? character_length? 
I was\n> > talking about display length, and psql uses that --- at some point, our\n> > lack of support for graphemes will cause psql to not align columns.\n> \n> That's going to happen regardless, as long as we can't be sure\n> what the display will do with the characters --- and that's a\n> problem that will persist for a very long time.\n> \n> Ideally, yeah, it'd be great if all this stuff rendered perfectly;\n> but IMO it's so far outside mainstream usage of psql that it's\n> not something that could possibly repay the investment of time\n> to get even a partial solution.\n\nWe have a few options:\n\n* TODO item\n* document psql works that way\n* do nothing\n\nI think the big question is how common such cases will be in the future.\nThe report from 2022, and one from 2019 didn't seem to clearly outline\nthe issue so it would good to have something documented somewhere.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\nEmbrace your flaws. They make you human, rather than perfect,\nwhich you will never be.\n\n\n", "msg_date": "Thu, 19 Jan 2023 20:55:46 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: Unicode grapheme clusters" }, { "msg_contents": "pá 20. 1. 2023 v 2:55 odesílatel Bruce Momjian <bruce@momjian.us> napsal:\n\n> On Thu, Jan 19, 2023 at 07:53:43PM -0500, Tom Lane wrote:\n> > Bruce Momjian <bruce@momjian.us> writes:\n> > > I am not sure what you are referring to above? character_length? 
I\n> was\n> > > talking about display length, and psql uses that --- at some point, our\n> > > lack of support for graphemes will cause psql to not align columns.\n> >\n> > That's going to happen regardless, as long as we can't be sure\n> > what the display will do with the characters --- and that's a\n> > problem that will persist for a very long time.\n> >\n> > Ideally, yeah, it'd be great if all this stuff rendered perfectly;\n> > but IMO it's so far outside mainstream usage of psql that it's\n> > not something that could possibly repay the investment of time\n> > to get even a partial solution.\n>\n> We have a few options:\n>\n> * TODO item\n> * document psql works that way\n> * do nothing\n>\n> I think the big question is how common such cases will be in the future.\n> The report from 2022, and one from 2019 didn't seem to clearly outline\n> the issue so it would good to have something documented somewhere.\n>\n\nThere can be a note in psql documentation like \"Unicode grapheme clusters\nare not supported yet. It is not well supported by other necessary software\nlike terminal emulators and curses libraries\".\n\nI partially watch an progres in VTE - one of the widely used terminal libs,\nand I am very sceptical so there will be support in the next two years.\n\nMaybe the new microsoft terminal will give this area a new dynamic, but\ncurrently only few people on the planet are working on fixing or enhancing\nterminal's technologies. Unfortunately there is too much historical balast.\n\nRegards\n\nPavel\n\n\n> --\n> Bruce Momjian <bruce@momjian.us> https://momjian.us\n> EDB https://enterprisedb.com\n>\n> Embrace your flaws. They make you human, rather than perfect,\n> which you will never be.\n>\n>\n>\n\npá 20. 1. 2023 v 2:55 odesílatel Bruce Momjian <bruce@momjian.us> napsal:On Thu, Jan 19, 2023 at 07:53:43PM -0500, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > I am not sure what you are referring to above?  character_length?  
I was\n> > talking about display length, and psql uses that --- at some point, our\n> > lack of support for graphemes will cause psql to not align columns.\n> \n> That's going to happen regardless, as long as we can't be sure\n> what the display will do with the characters --- and that's a\n> problem that will persist for a very long time.\n> \n> Ideally, yeah, it'd be great if all this stuff rendered perfectly;\n> but IMO it's so far outside mainstream usage of psql that it's\n> not something that could possibly repay the investment of time\n> to get even a partial solution.\n\nWe have a few options:\n\n*  TODO item\n*  document psql works that way\n*  do nothing\n\nI think the big question is how common such cases will be in the future.\nThe report from 2022, and one from 2019 didn't seem to clearly outline\nthe issue so it would good to have something documented somewhere.There can be a note in psql documentation like \"Unicode grapheme clusters are not supported yet. It is not well supported by other necessary software like terminal emulators and curses libraries\".I partially watch an progres in VTE - one of the widely used terminal libs, and I am very sceptical so there will be support in the next two years. Maybe the new microsoft terminal will give this area a new dynamic, but currently only few people on the planet are working on fixing or enhancing terminal's technologies. Unfortunately there is too much historical balast.RegardsPavel\n\n-- \n  Bruce Momjian  <bruce@momjian.us>        https://momjian.us\n  EDB                                      https://enterprisedb.com\n\nEmbrace your flaws.  
They make you human, rather than perfect,\nwhich you will never be.", "msg_date": "Fri, 20 Jan 2023 06:06:46 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Unicode grapheme clusters" }, { "msg_contents": "On Fri, 20 Jan 2023 at 00:07, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n> I partially watch an progres in VTE - one of the widely used terminal libs, and I am very sceptical so there will be support in the next two years.\n>\n> Maybe the new microsoft terminal will give this area a new dynamic, but currently only few people on the planet are working on fixing or enhancing terminal's technologies. Unfortunately there is too much historical balast.\n\nFwiw this isn't really about terminal emulators. psql is also used to\ngenerate text files for reports or for display in various ways.\n\nI think it's worth using whatever APIs we have available to implement\nbetter alignment for grapheme clusters and just assume whatever will\neventually be used to display the output will display it \"properly\".\n\nI do not think it's worth trying to implement this ourselves if the\nlibraries aren't there yet. And I don't think it's worth trying to\nadapt to the current state of the current terminal. We don't know that\nthat's the only place the output will be viewed and it'll all be\nwasted effort when the terminals eventually implement full support.\n\n(If we were really crazy about this we could use terminal escape codes\nto query the current cursor position after emitting multicharacter\ngraphemes. But as I said, I don't even think that would be useful,\neven if there weren't other reasons it would be a bad idea)\n\n\n-- \ngreg\n\n\n", "msg_date": "Sat, 21 Jan 2023 11:20:39 -0500", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: Unicode grapheme clusters" }, { "msg_contents": "so 21. 1. 
2023 v 17:21 odesílatel Greg Stark <stark@mit.edu> napsal:\n\n> On Fri, 20 Jan 2023 at 00:07, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> >\n> > I partially watch an progres in VTE - one of the widely used terminal\n> libs, and I am very sceptical so there will be support in the next two\n> years.\n> >\n> > Maybe the new microsoft terminal will give this area a new dynamic, but\n> currently only few people on the planet are working on fixing or enhancing\n> terminal's technologies. Unfortunately there is too much historical balast.\n>\n> Fwiw this isn't really about terminal emulators. psql is also used to\n> generate text files for reports or for display in various ways.\n>\n> I think it's worth using whatever APIs we have available to implement\n> better alignment for grapheme clusters and just assume whatever will\n> eventually be used to display the output will display it \"properly\".\n>\n> I do not think it's worth trying to implement this ourselves if the\n> libraries aren't there yet. And I don't think it's worth trying to\n> adapt to the current state of the current terminal. We don't know that\n> that's the only place the output will be viewed and it'll all be\n> wasted effort when the terminals eventually implement full support.\n>\n> (If we were really crazy about this we could use terminal escape codes\n> to query the current cursor position after emitting multicharacter\n> graphemes. But as I said, I don't even think that would be useful,\n> even if there weren't other reasons it would be a bad idea)\n>\n\n+1\n\nPavel\n\n>\n>\n> --\n> greg\n>", "msg_date": "Sat, 21 Jan 2023 17:26:22 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Unicode grapheme clusters" }, { "msg_contents": "Greg Stark <stark@mit.edu> writes:\n> (If we were really crazy about this we could use terminal escape codes\n> to query the current cursor position after emitting multicharacter\n> graphemes. 
But as I said, I don't even think that would be useful,\n> even if there weren't other reasons it would be a bad idea)\n\nYeah, use of a pager would be enough to break that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 21 Jan 2023 11:30:08 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Unicode grapheme clusters" }, { "msg_contents": "On Sat, Jan 21, 2023 at 11:20:39AM -0500, Greg Stark wrote:\n> On Fri, 20 Jan 2023 at 00:07, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> >\n> > I partially watch an progres in VTE - one of the widely used terminal libs, and I am very sceptical so there will be support in the next two years.\n> >\n> > Maybe the new microsoft terminal will give this area a new dynamic, but currently only few people on the planet are working on fixing or enhancing terminal's technologies. Unfortunately there is too much historical balast.\n> \n> Fwiw this isn't really about terminal emulators. psql is also used to\n> generate text files for reports or for display in various ways.\n> \n> I think it's worth using whatever APIs we have available to implement\n> better alignment for grapheme clusters and just assume whatever will\n> eventually be used to display the output will display it \"properly\".\n> \n> I do not think it's worth trying to implement this ourselves if the\n> libraries aren't there yet. And I don't think it's worth trying to\n> adapt to the current state of the current terminal. We don't know that\n> that's the only place the output will be viewed and it'll all be\n> wasted effort when the terminals eventually implement full support.\n\nWell, as one of the URLs I quoted said:\n\n\tThis is by design. wcwidth() is utterly broken. Any terminal or\n\tterminal application that uses it is also utterly broken. 
Forget\n\tabout emoji wcwidth() doesn't even work with combining characters,\n\tzero width joiners, flags, and a whole bunch of other things.\n\nSo, either we have to find a function in the library that will do the\nlooping over the string for us, or we need to identify the special\nUnicode characters that create grapheme clusters and handle them in our\ncode.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\nEmbrace your flaws. They make you human, rather than perfect,\nwhich you will never be.\n\n\n", "msg_date": "Sat, 21 Jan 2023 12:37:30 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: Unicode grapheme clusters" }, { "msg_contents": "On Sat, Jan 21, 2023 at 12:37:30PM -0500, Bruce Momjian wrote:\n> Well, as one of the URLs I quoted said:\n> \n> \tThis is by design. wcwidth() is utterly broken. Any terminal or\n> \tterminal application that uses it is also utterly broken. Forget\n> \tabout emoji wcwidth() doesn't even work with combining characters,\n> \tzero width joiners, flags, and a whole bunch of other things.\n> \n> So, either we have to find a function in the library that will do the\n> looping over the string for us, or we need to identify the special\n> Unicode characters that create grapheme clusters and handle them in our\n> code.\n\nI just checked if wcswidth() would honor grapheme clusters, though\nwcwidth() does not, but it seems wcswidth() treats characters just like\nwcwidth():\n\n\t$ LANG=en_US.UTF-8 grapheme_test\n\twcswidth len=7\n\t\n\tbytes_consumed=4, wcwidth len=2\n\tbytes_consumed=4, wcwidth len=2\n\tbytes_consumed=3, wcwidth len=0\n\tbytes_consumed=3, wcwidth len=1\n\tbytes_consumed=3, wcwidth len=0\n\tbytes_consumed=4, wcwidth len=2\n\nC test program attached. This is on Debian 11.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\nEmbrace your flaws. 
They make you human, rather than perfect,\nwhich you will never be.", "msg_date": "Sat, 21 Jan 2023 13:12:57 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: Unicode grapheme clusters" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> I just checked if wcswidth() would honor grapheme clusters, though\n> wcwidth() does not, but it seems wcswidth() treats characters just like\n> wcwidth():\n\nWell, that's at least potentially fixable within libc, while wcwidth\nclearly can never do this right.\n\nProbably our long-term answer is to avoid depending on wcwidth\nand use wcswidth instead. But it's hard to get excited about\ndoing the legwork for that until popular libc implementations\nget it right.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 21 Jan 2023 13:17:27 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Unicode grapheme clusters" }, { "msg_contents": "On Sat, Jan 21, 2023 at 01:17:27PM -0500, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > I just checked if wcswidth() would honor grapheme clusters, though\n> > wcwidth() does not, but it seems wcswidth() treats characters just like\n> > wcwidth():\n> \n> Well, that's at least potentially fixable within libc, while wcwidth\n> clearly can never do this right.\n> \n> Probably our long-term answer is to avoid depending on wcwidth\n> and use wcswidth instead. But it's hard to get excited about\n> doing the legwork for that until popular libc implementations\n> get it right.\n\nAgreed.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\nEmbrace your flaws. 
They make you human, rather than perfect,\nwhich you will never be.\n\n\n", "msg_date": "Sat, 21 Jan 2023 13:18:25 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: Unicode grapheme clusters" }, { "msg_contents": "On Sat, 21 Jan 2023 at 13:17, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Probably our long-term answer is to avoid depending on wcwidth\n> and use wcswidth instead. But it's hard to get excited about\n> doing the legwork for that until popular libc implementations\n> get it right.\n\nHere's an interesting blog post about trying to do this in Rust:\n\nhttps://tomdebruijn.com/posts/rust-string-length-width-calculations/\n\nTL;DR... Even counting the number of graphemes isn't enough because\nterminals typically (but not always) display emoji graphemes using two\ncolumns.\n\nAt the end of the day Unicode kind of assumes a variable-width display\nwhere the rendering is handled by something that has access to the\nactual font metrics. So anything trying to line things up in columns\nin a way that works with any rendering system down the line using any\nfont is going to be making a best guess.\n\n-- \ngreg\n\n\n", "msg_date": "Tue, 24 Jan 2023 11:40:01 -0500", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: Unicode grapheme clusters" }, { "msg_contents": "On Tue, 24 Jan 2023 at 11:40, Greg Stark <stark@mit.edu> wrote:\n\n>\n> At the end of the day Unicode kind of assumes a variable-width display\n> where the rendering is handled by something that has access to the\n> actual font metrics. So anything trying to line things up in columns\n> in a way that works with any rendering system down the line using any\n> font is going to be making a best guess.\n>\n\nReally what is needed is another Unicode attribute: how many columns of a\nmonospaced display each character (or grapheme cluster) should take up. 
The\nstandard should include a precisely defined function that can take any\nsequence of characters and give back its width in monospaced display\ncharacter spaces. Typefaces should only qualify as monospaced if they\nrespect this standard-defined computation.\n\nNote that this is not actually a new thing: this was included in ASCII\nimplicitly, with a value of 1 for every character, and a value of n for\nevery n-character string. It has always been possible to line up values\ndisplayed on monospaced displays by adding spaces, and it is only the\nomission of this feature from Unicode which currently makes it impossible.", "msg_date": "Tue, 24 Jan 2023 11:47:32 -0500", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Unicode grapheme clusters" }, { "msg_contents": "On Tue, Jan 24, 2023 at 11:40:01AM -0500, Greg Stark wrote:\n> On Sat, 21 Jan 2023 at 13:17, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Probably our long-term answer is to avoid depending on wcwidth\n> > and use wcswidth instead. But it's hard to get excited about\n> > doing the legwork for that until popular libc implementations\n> > get it right.\n> \n> Here's an interesting blog post about trying to do this in Rust:\n> \n> https://tomdebruijn.com/posts/rust-string-length-width-calculations/\n> \n> TL;DR... Even counting the number of graphemes isn't enough because\n> terminals typically (but not always) display emoji graphemes using two\n> columns.\n> \n> At the end of the day Unicode kind of assumes a variable-width display\n> where the rendering is handled by something that has access to the\n> actual font metrics. So anything trying to line things up in columns\n> in a way that works with any rendering system down the line using any\n> font is going to be making a best guess.\n\nYes, good article, though I am still surprised this is not discussed\nmore often. Anyway, for psql, we assume a fixed width output device, so\nwe can just assume that for computation. 
You are right that Unicode\njust doesn't seem to consider fixed width output cases and doesn't\nprovide much guidance.\n\nBeyond psql, should we update our docs to say that character_length()\nfor Unicode returns the number of Unicode code points, and not\nnecessarily the number of displayed characters if grapheme clusters are\npresent?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\nEmbrace your flaws. They make you human, rather than perfect,\nwhich you will never be.\n\n\n", "msg_date": "Tue, 24 Jan 2023 14:20:32 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: Unicode grapheme clusters" } ]
[ { "msg_contents": "I had a conversation a while back with Heikki where he expressed that\nit was annoying that we negotiate SSL/TLS the way we do since it\nintroduces an extra round trip. Aside from the performance\noptimization I think accepting standard TLS connections would open the\ndoor to a number of other opportunities that would be worth it on\ntheir own.\n\nSo I took a look into what it would take to do and I think it would\nactually be quite feasible. The first byte of a standard TLS\nconnection can't look anything like the first byte of any flavour of\nPostgres startup packet because it would be the high order bits of the\nlength so unless we start having multi-megabyte startup packets....\n\nSo I put together a POC patch and it's working quite well and didn't\nrequire very much kludgery. Well, it required some but it's really not\nbad. I do have a bug I'm still trying to work out and the code isn't\nquite in committable form but I can send the POC patch.\n\nOther things it would open the door to in order from least\ncontroversial to most....\n\n* Hiding Postgres behind a standard SSL proxy terminating SSL without\nimplementing the Postgres protocol.\n\n* \"Service Mesh\" type tools that hide multiple services behind a\nsingle host/port (\"Service Mesh\" is just a new buzzword for \"proxy\").\n\n* Browser-based protocol implementations using websockets for things\nlike pgadmin or other tools to connect directly to postgres using\nPostgres wire protocol but using native SSL implementations.\n\n* Postgres could even implement an HTTP based version of its protocol\nand enable things like queries or browser based tools using straight\nup HTTP requests so they don't need to use websockets.\n\n* Postgres could implement other protocols to serve up data like\nstatus queries or monitoring metrics, using HTTP based standard\nprotocols instead of using our own protocol.\n\nIncidentally I find the logic in ProcessStartupPacket incredibly\nconfusing. 
It took me a while before I realized it's using tail\nrecursion to implement the startup logic. I think it would be way more\nstraightforward and extensible if it used a much more common iterative\nstyle. I think it would make it possible to keep more state than just\nssl_done and gss_done without changing the function signature every\ntime for example.\n\n--\ngreg\n\n\n", "msg_date": "Wed, 18 Jan 2023 22:15:24 -0500", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": true, "msg_subject": "Experiments with Postgres and SSL" }, { "msg_contents": "On Wed, Jan 18, 2023 at 7:16 PM Greg Stark <stark@mit.edu> wrote:\n>\n> So I took a look into what it would take to do and I think it would\n> actually be quite feasible. The first byte of a standard TLS\n> connection can't look anything like the first byte of any flavour of\n> Postgres startup packet because it would be the high order bits of the\n> length so unless we start having multi-megabyte startup packets....\n>\n\nThis is a fascinating idea! I like it a lot.\nBut..do we have to treat any unknown start sequence of bytes as a TLS\nconnection? Or is there some definite subset of possible first bytes\nthat clearly indicates that this is a TLS connection or not?\n\nBest regards, Andrey Borodin.\n\n\n", "msg_date": "Wed, 18 Jan 2023 21:45:15 -0800", "msg_from": "Andrey Borodin <amborodin86@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Experiments with Postgres and SSL" }, { "msg_contents": "On Thu, 19 Jan 2023 at 00:45, Andrey Borodin <amborodin86@gmail.com> wrote:\n\n> But..do we have to treat any unknown start sequence of bytes as a TLS\n> connection? Or is there some definite subset of possible first bytes\n> that clearly indicates that this is a TLS connection or not?\n\nAbsolutely not, there's only one MessageType that can initiate a\nconnection, ClientHello, so the initial byte has to be a specific\nvalue. 
(0x16)\n\nAnd probably to implement HTTP/Websocket it would probably only peek\nat the first byte and check for things like G(ET) and H(EAD) and so\non, possibly only over SSL but in theory it could be over any\nconnection if the request comes before the startup packet.\n\nPersonally I'm motivated by wanting to implement status and monitoring\ndata for things like Prometheus and the like. For that it would just\nbe simple GET queries to recognize. But tunneling pg wire protocol\nover websockets sounds cool but not really something I know a lot\nabout. I note that Neon is doing something similar with a proxy:\nhttps://neon.tech/blog/serverless-driver-for-postgres\n\n\n--\ngreg\n\n\n", "msg_date": "Thu, 19 Jan 2023 12:07:41 -0500", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": true, "msg_subject": "Re: Experiments with Postgres and SSL" }, { "msg_contents": "It would be great if PostgreSQL supported 'start with TLS', however, how\ncould clients activate the feature?\n\nI would like to refrain users from configuring the handshake mode, and I\nwould like to refrain from degrading performance when a new client talks to\nan old database.\n\nWhat if the server that supports 'fast TLS' added an extra notification in\ncase client connects with a classic TLS?\nThen a capable client could remember host:port and try with newer TLS\nappoach the next time it connects.\n\nIt would be transparent to the clients, and the users won't need to\nconfigure 'prefer classic or fast TLS'\nThe old clients could discard the notification.\n\nVladimir\n\n-- \nVladimir", "msg_date": "Thu, 19 Jan 2023 23:49:30 +0300", "msg_from": "Vladimir Sitnikov <sitnikov.vladimir@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Experiments with Postgres and SSL" }, { "msg_contents": "On Thu, 19 Jan 2023 at 15:49, Vladimir Sitnikov\n<sitnikov.vladimir@gmail.com> wrote:\n>\n> What if the server that supports 'fast TLS' added an extra notification in case client connects with a classic TLS?\n> Then a capable client could remember host:port and try with newer TLS appoach the next time it connects.\n>\n> It would be transparent to the clients, and the users won't need to configure 'prefer classic or fast TLS'\n> The old clients could discard the notification.\n\nHm. I hadn't really thought about the case of a new client connecting\nto an old server. I don't think it's worth implementing a code path in\nthe server like this as it would then become cruft that would be hard\nto ever get rid of.\n\nI think you can do the same thing, more or less, in the client. Like\nif the driver tries to connect via SSL and gets an error it remembers\nthat host/port and connects using negotiation in the future.\n\nIn practice though, by the time drivers support this it'll probably be\nfar enough in the future that they can just enable it and you can\ndisable it if you're connecting to an old server. The main benefit for\nthe near term is going to be clients that are specifically designed to\ntake advantage of it because it's necessary to enable the environment\nthey need -- like monitoring tools and proxies.\n\nI've attached the POC. It's not near committable, mainly because of\nthe lack of any proper interface to the added fields in Port. 
I\nactually had a whole API but ripped it out while debugging because it\nwasn't working out.\n\nBut here's an example of psql connecting to the same server via\nnegotiated SSL or through stunnel where stunnel establishes the SSL\nconnection and psql is just doing plain text:\n\nstark@hatter:~/src/postgresql$ ~/pgsql-sslhacked/bin/psql\n'postgresql://localhost:9432/postgres'\npsql (16devel)\nSSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384,\ncompression: off)\nType \"help\" for help.\n\npostgres=# select * from pg_stat_ssl;\n pid | ssl | version | cipher | bits | client_dn |\nclient_serial | issuer_dn\n-------+-----+---------+------------------------+------+-----------+---------------+-----------\n 48771 | t | TLSv1.3 | TLS_AES_256_GCM_SHA384 | 256 | |\n |\n(1 row)\n\npostgres=# \\q\nstark@hatter:~/src/postgresql$ ~/pgsql-sslhacked/bin/psql\n'postgresql://localhost:8999/postgres'\npsql (16devel)\nType \"help\" for help.\n\npostgres=# select * from pg_stat_ssl;\n pid | ssl | version | cipher | bits | client_dn |\nclient_serial | issuer_dn\n-------+-----+---------+------------------------+------+-----------+---------------+-----------\n 48797 | t | TLSv1.3 | TLS_AES_256_GCM_SHA384 | 256 | |\n |\n(1 row)\n\n\n-- \ngreg", "msg_date": "Thu, 19 Jan 2023 18:44:22 -0500", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": true, "msg_subject": "Re: Experiments with Postgres and SSL" }, { "msg_contents": "On Wed, Jan 18, 2023 at 7:16 PM Greg Stark <stark@mit.edu> wrote:\n> I had a conversation a while back with Heikki where he expressed that\n> it was annoying that we negotiate SSL/TLS the way we do since it\n> introduces an extra round trip. Aside from the performance\n> optimization I think accepting standard TLS connections would open the\n> door to a number of other opportunities that would be worth it on\n> their own.\n\nNice! 
I want this too, but for security reasons [1] -- I want to be\nable to turn off negotiated (explicit) TLS, to force (implicit)\nTLS-only mode.\n\n> Other things it would open the door to in order from least\n> controversial to most....\n>\n> * Hiding Postgres behind a standard SSL proxy terminating SSL without\n> implementing the Postgres protocol.\n\n+1\n\n> * \"Service Mesh\" type tools that hide multiple services behind a\n> single host/port (\"Service Mesh\" is just a new buzzword for \"proxy\").\n\nIf you want to multiplex protocols on a port, now is an excellent time\nto require clients to use ALPN on implicit-TLS connections. (There are\nno clients that can currently connect via implicit TLS, so you'll\nnever have another chance to force the issue without breaking\nbackwards compatibility.) That should hopefully make it harder to\nALPACA yourself or others [2].\n\nALPN doesn't prevent cross-port attacks though, and speaking of those...\n\n> * Browser-based protocol implementations using websockets for things\n> like pgadmin or other tools to connect directly to postgres using\n> Postgres wire protocol but using native SSL implementations.\n>\n> * Postgres could even implement an HTTP based version of its protocol\n> and enable things like queries or browser based tools using straight\n> up HTTP requests so they don't need to use websockets.\n>\n> * Postgres could implement other protocols to serve up data like\n> status queries or monitoring metrics, using HTTP based standard\n> protocols instead of using our own protocol.\n\nI see big red warning lights going off in my head -- in a previous\nlife, I got to fix vulnerabilities that resulted from bolting HTTP\nonto existing protocol servers. 
Not only do you opt into the browser\nsecurity model forever, you also gain the ability to speak for any\nother web server already running on the same host.\n\n(I know you have PG committers who are also HTTP experts, and I think\nyou were hacking on mod_perl well before I knew web servers existed.\nJust... please be careful. ;D )\n\n> Incidentally I find the logic in ProcessStartupPacket incredibly\n> confusing. It took me a while before I realized it's using tail\n> recursion to implement the startup logic. I think it would be way more\n> straightforward and extensible if it used a much more common iterative\n> style. I think it would make it possible to keep more state than just\n> ssl_done and gss_done without changing the function signature every\n> time for example.\n\n+1. The complexity of the startup logic, both client- and server-side,\nis a big reason why I want implicit TLS in the first place. That way,\nbugs in that code can't be exploited before the TLS handshake\ncompletes.\n\nThanks!\n--Jacob\n\n[1] https://www.postgresql.org/message-id/flat/fcc3ebeb7f05775b63f3207ed52a54ea5d17fb42.camel%40vmware.com\n[2] https://alpaca-attack.com/\n\n\n", "msg_date": "Thu, 19 Jan 2023 17:28:18 -0800", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Experiments with Postgres and SSL" }, { "msg_contents": ">I don't think it's worth implementing a code path in\n> the server like this as it would then become cruft that would be hard\n> to ever get rid of.\n\nDo you think the server can de-support the old code path soon?\n\n> I think you can do the same thing, more or less, in the client. 
Like\n> if the driver tries to connect via SSL and gets an error it remembers\n> that host/port and connects using negotiation in the future.\n\nWell, I doubt everybody would instantaneously upgrade to the database that\nsupports fast TLS,\nso there will be a timeframe when there will be a lot of old databases, and\nthe clients will be new.\nIn that case, going with \"try fast, ignore exception\" would degrade\nperformance for old databases.\n\nI see you suggest caching, however, \"degrading one of the cases\" might be\nmore painful than\n\"not improving one of the cases\".\n\nI would like to refrain from implementing \"parallel connect both ways\nand check which is faster\" in\nPG clients (e.g. https://en.wikipedia.org/wiki/Happy_Eyeballs ).\n\nJust wondering: do you consider back-porting the feature to all supported\nDB versions?\n\n> In practice though, by the time drivers support this it'll probably be\n> far enough in the future\n\nI think drivers release more often than the database, and we can get driver\nsupport even before the database releases.\nI'm from pgjdbc Java driver team, and I think it is unfair to suggest that\n\"driver support is only far enough in the future\".\n\nVladimir", "msg_date": "Fri, 20 Jan 2023 09:40:19 +0300", "msg_from": "Vladimir Sitnikov <sitnikov.vladimir@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Experiments with Postgres and SSL" }, { "msg_contents": "On Fri, 20 Jan 2023 at 01:41, Vladimir Sitnikov\n<sitnikov.vladimir@gmail.com> wrote:\n>\n> Do you think the server can de-support the old code path soon?\n\nI don't have any intention to de-support anything. I really only\npicture it being an option in environments where the client and server\nare all part of a stack controlled by a single group. 
User tools and\ngeneral purpose tools are better served by our current more flexible\nsetup.\n\n> Just wondering: do you consider back-porting the feature to all supported DB versions?\n\nI can't see that, no.\n\n> > In practice though, by the time drivers support this it'll probably be\n> > far enough in the future\n>\n> I think drivers release more often than the database, and we can get driver support even before the database releases.\n> I'm from pgjdbc Java driver team, and I think it is unfair to suggest that \"driver support is only far enough in the future\".\n\nInteresting. I didn't realize this would be so attractive to regular\ndriver authors. I did think of the Happy Eyeballs technique too but I\nagree I wouldn't want to go that way either :)\n\nI guess the server doesn't really have to do anything specific to do\nwhat you want. You could just hard code that servers newer than a\nspecific version would have this support. Or it could be done with a\n\"protocol option\" -- which wouldn't actually change any behaviour but\nwould be rejected if the server doesn't support \"fast ssl\" giving you\nthe feedback you expect without having much extra legacy complexity.\n\nI guess a lot depends on the way the driver works and the way the\napplication is structured. Applications that make a single connection\nor don't have shared state across connections wouldn't think this way.\nAnd interfaces like libpq would normally just leave it up to the\napplication to make choices like this. 
But I guess JVM based\napplications are more likely to have long-lived systems that make many\nconnections and also more likely to make it the driver's\nresponsibility to manage such things.\n\n\n\n--\ngreg\n\n\n", "msg_date": "Fri, 20 Jan 2023 11:08:32 -0500", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": true, "msg_subject": "Re: Experiments with Postgres and SSL" }, { "msg_contents": ">You could just hard code that servers newer than a\n> specific version would have this support\n\nSuppose PostgreSQL 21 implements \"fast TLS\"\nSuppose pgjdbc 43 supports \"fast TLS\"\nSuppose PgBouncer 1.17.0 does not support \"fast TLS\" yet\n\nIf pgjdbc connects to the DB via balancer, then the server would\nrespond with \"server_version=21\".\nThe balancer would forward \"server_version\", so the driver would\nassume \"fast TLS is supported\".\n\nIn practice, fast TLS can't be used in that configuration since the\nconnection will fail when the driver attempts to ask\n\"fast TLS\" from the PgBouncer.\n\n> Or it could be done with a \"protocol option\"\n\nWould you please clarify what you mean by \"protocol option\"?\n\n>I guess a lot depends on the way the driver works and the way the\n> application is structured\n\nThere are cases when applications pre-create connections on startup,\nso the faster connections are created the better.\nThe same case happens when the admin issues \"reset connection pool\",\nso it discards old connections and creates new ones.\nPeople rarely know all the knobs, so I would like to have a \"fast by\ndefault\" design (e.g. server sending a notification \"you may use fast\nmode the next time\")\nrather than \"keep old behaviour and require everybody to add fast=true\nto their configuration\" (e.g. 
users having to configure\n\"try_fast_tls_first=true\")\n\nVladimir\n\n\n", "msg_date": "Fri, 20 Jan 2023 20:11:56 +0300", "msg_from": "Vladimir Sitnikov <sitnikov.vladimir@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Experiments with Postgres and SSL" }, { "msg_contents": "On 20/01/2023 03:28, Jacob Champion wrote:\n> On Wed, Jan 18, 2023 at 7:16 PM Greg Stark <stark@mit.edu> wrote:\n>> * \"Service Mesh\" type tools that hide multiple services behind a\n>> single host/port (\"Service Mesh\" is just a new buzzword for \"proxy\").\n> \n> If you want to multiplex protocols on a port, now is an excellent time\n> to require clients to use ALPN on implicit-TLS connections. (There are\n> no clients that can currently connect via implicit TLS, so you'll\n> never have another chance to force the issue without breaking\n> backwards compatibility.) That should hopefully make it harder to\n> ALPACA yourself or others [2].\n\nGood idea. Do we want to just require the protocol to be \"postgres\", or \nperhaps \"postgres/3.0\"? Need to register that with IANA, I guess.\n\nWe implemented a protocol version negotiation mechanism in the libpq \nprotocol itself, how would this interact with it? If it's just \n\"postgres\", then I guess we'd still negotiate the protocol version and \nlist of extensions after the TLS handshake.\n\n>> Incidentally I find the logic in ProcessStartupPacket incredibly\n>> confusing. It took me a while before I realized it's using tail\n>> recursion to implement the startup logic. I think it would be way more\n>> straightforward and extensible if it used a much more common iterative\n>> style. I think it would make it possible to keep more state than just\n>> ssl_done and gss_done without changing the function signature every\n>> time for example.\n> \n> +1. The complexity of the startup logic, both client- and server-side,\n> is a big reason why I want implicit TLS in the first place. 
That way,\n> bugs in that code can't be exploited before the TLS handshake\n> completes.\n\n+1. We need to support explicit TLS for a long time, so we can't \nsimplify by just removing it. But let's refactor the code somehow, to \nmake it more clear.\n\nLooking at the patch, I think it accepts an SSLRequest packet even if \nimplicit TLS has already been established. That's surely wrong, and \nshows how confusing the code is. (Or I'm reading it incorrectly, which \nalso shows how confusing it is :-) )\n\nRegarding Vladimir's comments on how clients can migrate to this, I \ndon't have any great suggestions. To summarize, there are several options:\n\n- Add an \"fast_tls\" option that the user can enable if they know the \nserver supports it\n\n- First connect in old-fashioned way, and remember the server version. \nLater, if you reconnect to the same server, use implicit TLS if the \nserver version was high enough. This would be most useful for connection \npools.\n\n- Connect both ways at the same time, and continue with the fastest, \ni.e. \"happy eyeballs\"\n\n- Try implicit TLS first, and fall back to explicit TLS if it fails.\n\nFor libpq, we don't necessarily need to do anything right now. We can \nadd the implicit TLS support in a later version. Not having libpq \nsupport makes it hard to test the server codepath, though. Maybe just \ntest it with 'stunnel' or 'openssl s_client'.\n\n- Heikki\n\n\n\n", "msg_date": "Wed, 22 Feb 2023 14:26:40 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Experiments with Postgres and SSL" }, { "msg_contents": "On Wed, Feb 22, 2023 at 4:26 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> On 20/01/2023 03:28, Jacob Champion wrote:\n> > If you want to multiplex protocols on a port, now is an excellent time\n> > to require clients to use ALPN on implicit-TLS connections. 
(There are\n> > no clients that can currently connect via implicit TLS, so you'll\n> > never have another chance to force the issue without breaking\n> > backwards compatibility.) That should hopefully make it harder to\n> > ALPACA yourself or others [2].\n>\n> Good idea. Do we want to just require the protocol to be \"postgres\", or\n> perhaps \"postgres/3.0\"? Need to register that with IANA, I guess.\n\nUnless you plan to make the next minor protocol version fundamentally\nincompatible, I don't think there's much reason to add '.0'. (And even\nif that does happen, 'postgres/3.1' is still distinct from\n'postgres/3'. Or 'postgres' for that matter.) The Expert Review\nprocess might provide some additional guidance?\n\n> We implemented a protocol version negotiation mechanism in the libpq\n> protocol itself, how would this interact with it? If it's just\n> \"postgres\", then I guess we'd still negotiate the protocol version and\n> list of extensions after the TLS handshake.\n\nYeah. You could choose to replace major version negotiation completely\nwith ALPN, I suppose, but there might not be any maintenance benefit\nif you still have to support plaintext negotiation. Maybe there are\nperformance implications to handling the negotiation earlier vs.\nlater?\n\nNote that older versions of TLS will expose the ALPN in plaintext...\nbut that may not be a factor by the time a postgres/4 shows up, and if\nthe next protocol is incompatible then it may not be feasible to hide\nthe differences via transport encryption anyway.\n\n> Regarding Vladimir's comments on how clients can migrate to this, I\n> don't have any great suggestions. To summarize, there are several options:\n>\n> - Add an \"fast_tls\" option that the user can enable if they know the\n> server supports it\n\nI like that such an option could eventually be leveraged for a\npostgresqls:// URI scheme (which should not fall back, ever). 
There\nwould be other things we'd have to change first to make that a reality\n-- postgresqls://example.com?host=evil.local is problematic, for\nexample -- but it'd be really nice to have an HTTPS equivalent.\n\n--Jacob\n\n\n", "msg_date": "Wed, 22 Feb 2023 14:19:28 -0800", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Experiments with Postgres and SSL" }, { "msg_contents": "On Wed, 22 Feb 2023 at 07:27, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> On 20/01/2023 03:28, Jacob Champion wrote:\n> > On Wed, Jan 18, 2023 at 7:16 PM Greg Stark <stark@mit.edu> wrote:\n> >> * \"Service Mesh\" type tools that hide multiple services behind a\n> >> single host/port (\"Service Mesh\" is just a new buzzword for \"proxy\").\n> >\n> > If you want to multiplex protocols on a port, now is an excellent time\n> > to require clients to use ALPN on implicit-TLS connections. (There are\n> > no clients that can currently connect via implicit TLS, so you'll\n> > never have another chance to force the issue without breaking\n> > backwards compatibility.) That should hopefully make it harder to\n> > ALPACA yourself or others [2].\n>\n> Good idea. Do we want to just require the protocol to be \"postgres\", or\n> perhaps \"postgres/3.0\"? Need to register that with IANA, I guess.\n\nI had never heard of this before, it does seem useful. But if I\nunderstand it right it's entirely independent of this patch. We can\nadd it to all our Client/Server exchanges whether they're the initial\ndirect SSL connection or the STARTTLS negotiation?\n\n\n> We implemented a protocol version negotiation mechanism in the libpq\n> protocol itself, how would this interact with it? If it's just\n> \"postgres\", then I guess we'd still negotiate the protocol version and\n> list of extensions after the TLS handshake.\n>\n> >> Incidentally I find the logic in ProcessStartupPacket incredibly\n> >> confusing. 
It took me a while before I realized it's using tail\n> >> recursion to implement the startup logic. I think it would be way more\n> >> straightforward and extensible if it used a much more common iterative\n> >> style. I think it would make it possible to keep more state than just\n> >> ssl_done and gss_done without changing the function signature every\n> >> time for example.\n> >\n> > +1. The complexity of the startup logic, both client- and server-side,\n> > is a big reason why I want implicit TLS in the first place. That way,\n> > bugs in that code can't be exploited before the TLS handshake\n> > completes.\n>\n> +1. We need to support explicit TLS for a long time, so we can't\n> simplify by just removing it. But let's refactor the code somehow, to\n> make it more clear.\n>\n> Looking at the patch, I think it accepts an SSLRequest packet even if\n> implicit TLS has already been established. That's surely wrong, and\n> shows how confusing the code is. (Or I'm reading it incorrectly, which\n> also shows how confusing it is :-) )\n\nI'll double check it but I think I tested that that wasn't the case. I\nthink it accepts the SSL request packet and sends back an N which the\nclient libpq just interprets as the server not supporting SSL and does\nan unencrypted connection (which is tunneled over stunnel unbeknownst\nto libpq).\n\nI agree I would want to flatten this logic to an iterative approach\nbut having wrapped my head around it now I'm not necessarily rushing\nto do it now. The main advantage of flattening it would be to make it\neasy to support other protocol types which I think could be really\ninteresting. It would be much clearer to document the state machine if\nall the state is in one place and the code just loops through\nprocessing startup packets and going to a new state until the\nconnection is established. 
That's true now but you have to understand\nhow the state is passed in the function parameters and notice that all\nthe recursion is tail recursive (I think). And extending that state\nwould require extending the function signature which would get awkward\nquickly.\n\n> Regarding Vladimir's comments on how clients can migrate to this, I\n> don't have any great suggestions. To summarize, there are several options:\n>\n> - Add an \"fast_tls\" option that the user can enable if they know the\n> server supports it\n>\n> - First connect in old-fashioned way, and remember the server version.\n> Later, if you reconnect to the same server, use implicit TLS if the\n> server version was high enough. This would be most useful for connection\n> pools.\n\nVladimir pointed out that this doesn't necessarily work. The server\nmay be new enough to support it but it could be behind a proxy like\npgbouncer or something. The same would be true if the server reported\na \"connection option\" instead of depending on version.\n\n> - Connect both ways at the same time, and continue with the fastest,\n> i.e. \"happy eyeballs\"\n\nThat seems way too complex for us to bother with imho.\n\n> - Try implicit TLS first, and fall back to explicit TLS if it fails.\n\n> For libpq, we don't necessarily need to do anything right now. We can\n> add the implicit TLS support in a later version. Not having libpq\n> support makes it hard to test the server codepath, though. Maybe just\n> test it with 'stunnel' or 'openssl s_client'.\n\nI think we should have an option to explicitly enable it in psql, if\nonly for testing. And then wait five years and switch the default on\nit then. 
In the meantime users can just set it based on their setup.\nThat's not the way to the quickest adoption but imho the main\nadvantages of this option are the options it gives users, not the\nlatency improvement, so I'm not actually super concerned about\nadoption rate.\n\nI assume we'll keep the negotiated mode indefinitely because it can\nhandle any other protocols we might want. For instance, it currently\nhandles GSSAPI -- which raises the question, are we happy with GSSAPI\nhaving this extra round trip? Is there a similar change we could make\nfor it? My understanding is that GSSAPI is an abstract interface and\nthe actual protocol it's invoking could be anything so we can't make\nany assumptions about what the first packet looks like. Perhaps we can\ndo something about pipelining GSSAPI messages so if the negotiation\nfails the server just closes the connection but if it accepts it it\ndoes a similar trick with unreading the buffered data and processing\nit through the GSSAPI calls.\n\n\n-- \ngreg\n\n\n", "msg_date": "Tue, 28 Feb 2023 13:32:45 -0500", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": true, "msg_subject": "Re: Experiments with Postgres and SSL" }, { "msg_contents": "On Tue, Feb 28, 2023 at 10:33 AM Greg Stark <stark@mit.edu> wrote:\n> On Wed, 22 Feb 2023 at 07:27, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> > Good idea. Do we want to just require the protocol to be \"postgres\", or\n> > perhaps \"postgres/3.0\"? Need to register that with IANA, I guess.\n>\n> I had never heard of this before, it does seem useful. But if I\n> understand it right it's entirely independent of this patch.\n\nIt can be. If you want to use it in the strongest possible way,\nthough, you'd have to require its use by clients. 
Introducing that\nrequirement later would break existing ones, so I think it makes sense\nto do it at the same time as the initial implementation, if there's\ninterest.\n\n> We can\n> add it to all our Client/Server exchanges whether they're the initial\n> direct SSL connection or the STARTTLS negotiation?\n\nI'm not sure it would buy you anything during the STARTTLS-style\nopening. You already know what protocol you're speaking in that case.\n(So with the ALPACA example, the damage is already done.)\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Tue, 28 Feb 2023 11:02:42 -0800", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Experiments with Postgres and SSL" }, { "msg_contents": "Here's an updated patch for direct SSL connections.\n\nI've added libpq client support with a new connection parameter. This\nallows testing it easily with psql. It's still a bit hard to see\nwhat's going on though. I'm thinking it would be good to have libpq\nkeep a string which describes what negotiations were attempted and\nfailed and what was eventually accepted which psql could print with\nthe SSL message or expose some other way.\n\nIn the end I didn't see how adding an API for this really helped any\nmore than just saying the API is to stuff the unread data into the\nPort structure. So I just documented that. If anyone has any better\nidea...\n\nI added documentation for the libpq connection setting.\n\nOne thing, I *think* it's ok to replace the send(2) call with\nsecure_write() in the negotiation. It does mean it's possible for the\nconnection to fail with FATAL at that point instead of COMMERROR but I\ndon't think that's a problem.\n\nI haven't added tests. 
I'm not sure how to test this since to test it\nproperly means running the server with every permutation of ssl and\ngssapi configurations.\n\nIncidentally, some of the configuration combinations -- namely\nsslnegotiation=direct and default gssencmode and sslmode results in a\ncounter-intuitive behaviour. But I don't see a better option that\ndoesn't mean making the defaults less useful.", "msg_date": "Thu, 16 Mar 2023 16:00:28 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": true, "msg_subject": "Re: Experiments with Postgres and SSL" }, { "msg_contents": "Here's a first cut at ALPN support.\n\nCurrently it's using a hard coded \"Postgres/3.0\" protocol (hard coded\nboth in the client and the server...). And it's hard coded to be\nrequired for direct connections and supported but not required for\nregular connections.\n\nIIRC I put a variable labeled a \"GUC\" but forgot to actually make it a\nGUC. But I'm thinking of maybe removing that variable since I don't\nsee much of a use case for controlling this manually. I *think* ALPN\nis supported by all the versions of OpenSSL we support.\n\nThe other patches are unchanged (modulo a free() that I missed in the\nclient before). They still have the semi-open issues I mentioned in\nthe previous email.\n\n\n\n\n--\ngreg", "msg_date": "Mon, 20 Mar 2023 16:31:03 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": true, "msg_subject": "Re: Experiments with Postgres and SSL" }, { "msg_contents": "Sorry, checking the cfbot apparently I had a typo in the #ifndef USE_SSL case.", "msg_date": "Mon, 20 Mar 2023 16:35:38 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": true, "msg_subject": "Re: Experiments with Postgres and SSL" }, { "msg_contents": "On Mon, 20 Mar 2023 at 16:31, Greg Stark <stark@mit.edu> wrote:\n>\n> Here's a first cut at ALPN support.\n>\n> Currently it's using a hard coded \"Postgres/3.0\" protocol\n\nApparently that is explicitly disrecommended by the IETF folk. 
They\nwant something like \"TBD\" so people don't start using a string until\nit's been added to the registry. So I've changed this for now (to\n\"TBD-pgsql\")\n\nOk, I think this has pretty much everything I was hoping to do.\n\nThe one thing I'm not sure of is that it seems some codepaths in postmaster\nhave ereport(COMMERROR) followed by returning an error whereas other\ncodepaths just have ereport(FATAL). And I don't actually see much\nlogic in which do which. (I get the principle behind COMMERR it just\nseems like it doesn't really match the code).\n\nI realized I had exactly the infrastructure needed to allow pipelining\nthe SSL ClientHello like Neon wanted to do so I added that too. It's\nkind of redundant with direct SSL connections but seems like there may\nbe reasons to use that instead.\n\n\n\n-- \ngreg", "msg_date": "Fri, 31 Mar 2023 03:14:03 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": true, "msg_subject": "Re: Experiments with Postgres and SSL" }, { "msg_contents": "And the cfbot wants a small tweak", "msg_date": "Fri, 31 Mar 2023 03:59:49 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": true, "msg_subject": "Re: Experiments with Postgres and SSL" }, { "msg_contents": "On 31/03/2023 10:59, Greg Stark wrote:\n> IIRC I put a variable labeled a \"GUC\" but forgot to actually make it a\n> GUC. But I'm thinking of maybe removing that variable since I don't\n> see much of a use case for controlling this manually. I *think* ALPN\n> is supported by all the versions of OpenSSL we support.\n\n+1 on removing the variable. Let's make ALPN mandatory for direct SSL \nconnections, like Jacob suggested. And for old-style handshakes, accept \nand check ALPN if it's given.\n\nI don't see the point of the libpq 'sslalpn' option either. Let's send \nALPN always.\n\nAdmittedly having the options make testing different combinations of \nold and new clients and servers a little easier. But I don't think we
But I don't think we \nshould add options for the sake of backwards compatibility tests.\n\n> --- a/src/backend/libpq/pqcomm.c\n> +++ b/src/backend/libpq/pqcomm.c\n> @@ -1126,13 +1126,16 @@ pq_discardbytes(size_t len)\n> /* --------------------------------\n> * pq_buffer_has_data - is any buffered data available to read?\n> *\n> - * This will *not* attempt to read more data.\n> + * Actually returns the number of bytes in the buffer...\n> + *\n> + * This will *not* attempt to read more data. And reading up to that number of\n> + * bytes should not cause reading any more data either.\n> * --------------------------------\n> */\n> -bool\n> +size_t\n> pq_buffer_has_data(void)\n> {\n> - return (PqRecvPointer < PqRecvLength);\n> + return (PqRecvLength - PqRecvPointer);\n> }\n\nLet's rename the function.\n\n> \t\t/* push unencrypted buffered data back through SSL setup */\n> \t\tlen = pq_buffer_has_data();\n> \t\tif (len > 0)\n> \t\t{\n> \t\t\tbuf = palloc(len);\n> \t\t\tif (pq_getbytes(buf, len) == EOF)\n> \t\t\t\treturn STATUS_ERROR; /* shouldn't be possible */\n> \t\t\tport->raw_buf = buf;\n> \t\t\tport->raw_buf_remaining = len;\n> \t\t\tport->raw_buf_consumed = 0;\n> \t\t}\n> \n> \t\tAssert(pq_buffer_has_data() == 0);\n> \t\tif (secure_open_server(port) == -1)\n> \t\t{\n> \t\t\tereport(COMMERROR,\n> \t\t\t\t\t(errcode(ERRCODE_PROTOCOL_VIOLATION),\n> \t\t\t\t\t errmsg(\"SSL Protocol Error during direct SSL connection initiation\")));\n> \t\t\treturn STATUS_ERROR;\n> \t\t}\n> \n> \t\tif (port->raw_buf_remaining > 0)\n> \t\t{\n> \t\t\tereport(COMMERROR,\n> \t\t\t\t\t(errcode(ERRCODE_PROTOCOL_VIOLATION),\n> \t\t\t\t\t errmsg(\"received unencrypted data after SSL request\"),\n> \t\t\t\t\t errdetail(\"This could be either a client-software bug or evidence of an attempted man-in-the-middle attack.\")));\n> \t\t\treturn STATUS_ERROR;\n> \t\t}\n> \t\tif (port->raw_buf)\n> \t\t\tpfree(port->raw_buf);\n\nThis pattern is repeated in both callers of secure_open_server(). 
Could \nwe move this into secure_open_server() itself? That would feel pretty \nnatural, be-secure.c already contains the secure_raw_read() function \nthat reads the 'raw_buf' field.\n\n> const char *\n> PQsslAttribute(PGconn *conn, const char *attribute_name)\n> {\n> \t...\n> \n> \tif (strcmp(attribute_name, \"alpn\") == 0)\n> \t{\n> \t\tconst unsigned char *data;\n> \t\tunsigned int len;\n> \t\tstatic char alpn_str[256]; /* alpn doesn't support longer than 255 bytes */\n> \t\tSSL_get0_alpn_selected(conn->ssl, &data, &len);\n> \t\tif (data == NULL || len==0 || len > sizeof(alpn_str)-1)\n> \t\t\treturn NULL;\n> \t\tmemcpy(alpn_str, data, len);\n> \t\talpn_str[len] = 0;\n> \t\treturn alpn_str;\n> \t}\n\nUsing a static buffer doesn't look right. If you call PQsslAttribute on \ntwo different connections from two different threads concurrently, they \nwill write to the same buffer. I see that you copied it from the \n\"key_bits\" handling, but it has the same issue.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Tue, 4 Jul 2023 17:15:49 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Experiments with Postgres and SSL" }, { "msg_contents": "On Tue, Jul 04, 2023 at 05:15:49PM +0300, Heikki Linnakangas wrote:\n> I don't see the point of the libpq 'sslalpn' option either. Let's send ALPN\n> always.\n> \n> Admittedly having the options make testing different combinations of old\n> and new clients and servers a little easier. But I don't think we should add\n> options for the sake of backwards compatibility tests.\n\nHmm. I would actually argue in favor of having these with tests in\ncore to stress the previous SSL handshake protocol, as not having these\nparameters would mean that we rely only on major version upgrades in\nthe buildfarm to test the backward-compatible code path, making issues\nmuch harder to catch.
And we still need to maintain the\nbackward-compatible path for 10 years based on what pg_dump and\npg_upgrade need to support.\n--\nMichael", "msg_date": "Wed, 5 Jul 2023 08:33:07 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Experiments with Postgres and SSL" }, { "msg_contents": "On 05/07/2023 02:33, Michael Paquier wrote:\n> On Tue, Jul 04, 2023 at 05:15:49PM +0300, Heikki Linnakangas wrote:\n>> I don't see the point of the libpq 'sslalpn' option either. Let's send ALPN\n>> always.\n>>\n>> Admittedly having the options make testing different combinations of old\n>> and new clients and servers a little easier. But I don't think we should add\n>> options for the sake of backwards compatibility tests.\n> \n> Hmm. I would actually argue in favor of having these with tests in\n> core to stress the previous SSL handshake protocol, as not having these\n> parameters would mean that we rely only on major version upgrades in\n> the buildfarm to test the backward-compatible code path, making issues\n> much harder to catch. And we still need to maintain the\n> backward-compatible path for 10 years based on what pg_dump and\n> pg_upgrade need to support.\n\nOk, let's keep it.\n\nI started to review this again. There's a lot of little things to fix \nbefore this is ready for commit, but overall this looks pretty good. A \nfew notes / questions on the first two patches (in addition to the few \ncomments I made earlier):\n\nIf the client sends a TLS ClientHello directly, but the server does not \nsupport TLS, it just closes the connection. It would be nice to still \nsend some kind of an error to the client. Maybe a TLS alert packet? I \ndon't want to start implementing TLS, but I think a TLS alert packet \nwith a suitable error code would be just a constant.\n\nThe new CONNECTION_DIRECT_SSL_STARTUP state needs to be moved to end of \nthe enum.
We cannot change the integer values of existing enum \nvalues, or clients compiled with an old libpq version would mix up the states.\n\n> \t/*\n> \t * validate sslnegotiation option, default is \"postgres\" for the postgres\n> \t * style negotiated connection with an extra round trip but more options.\n> \t */\n\nWhat \"more options\" does the negotiated connection provide?\n\n> \tif (conn->sslnegotiation)\n> \t{\n> \t\tif (strcmp(conn->sslnegotiation, \"postgres\") != 0\n> \t\t\t&& strcmp(conn->sslnegotiation, \"direct\") != 0\n> \t\t\t&& strcmp(conn->sslnegotiation, \"requiredirect\") != 0)\n> \t\t{\n> \t\t\tconn->status = CONNECTION_BAD;\n> \t\t\tlibpq_append_conn_error(conn, \"invalid %s value: \\\"%s\\\"\",\n> \t\t\t\t\t\t\t\t\t\"sslnegotiation\", conn->sslnegotiation);\n> \t\t\treturn false;\n> \t\t}\n> \n> #ifndef USE_SSL\n> \t\tif (conn->sslnegotiation[0] != 'p') {\n> \t\t\tconn->status = CONNECTION_BAD;\n> \t\t\tlibpq_append_conn_error(conn, \"sslnegotiation value \\\"%s\\\" invalid when SSL support is not compiled in\",\n> \t\t\t\t\t\t\t\t\tconn->sslnegotiation);\n> \t\t\treturn false;\n> \t\t}\n> #endif\n> \t}\n\nAt the same time, the patch allows the combination of \"sslmode=disable\" \nand \"sslnegotiation=requiredirect\". Seems inconsistent to error out if \ncompiled without SSL support.\n\n> \telse\n> \t{\n> \t\tlibpq_append_conn_error(conn, \"sslnegotiation missing?\");\n> \t\treturn false;\n> \t}\n\nIn the other similar settings, like 'channel_binding' and 'sslcertmode', \nwe strdup() the compiled-in default if the option is NULL. I'm not sure \nif that's necessary, I think the compiled-in defaults should get filled \nin conninfo_add_defaults(). If so, then those other places could be \nturned into errors like this too. This seems to be a bit of a mess even
This seems to be a bit of a mess even \nbefore this patch.\n\nIn pg_conn struct:\n\n> + bool allow_direct_ssl_try; /* Try to make a direct SSL connection\n> + * without an \"SSL negotiation packet\" */\n> bool allow_ssl_try; /* Allowed to try SSL negotiation */\n> bool wait_ssl_try; /* Delay SSL negotiation until after\n> * attempting normal connection */\n\nIt's getting hard to follow what combinations of these booleans are \nvalid and what they're set to at different stages. I think it's time to \nturn all these into one enum, or something like that.\n\nI intend to continue reviewing this after Jan 8th. I'd still like to get \nthis into v17.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Sat, 30 Dec 2023 23:51:34 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Experiments with Postgres and SSL" }, { "msg_contents": "Some more comments on this:\n\n1. It feels weird that the combination of \"gssencmode=require \nsslnegotiation=direct\" combination is forbidden. Sure, the ssl \nnegotiation will never happen with gssencmode=require, so the \nsslnegotiation option has no effect. But by that token, should we also \nforbid the combination \"sslmode=disable sslnegotiation=direct\"? I think \nnot. The sslnegotiation option should mean \"if we are going to try SSL, \nshould we try it in direct or negotiated mode?\"\n\n2. Should we allow direct SSL only at the very beginning of a TCP \nconnection, or should we also allow it after we have requested GSS and \nthe server said no? Like this:\n\nClient: GSSENCRequest\nServer: 'N' (gss not supported)\nClient: TLS client Hello\n\nOn one hand, why not? It saves you a round-trip in this case too. If we \ndon't allow it, the client will have to send SSLRequest and wait for \nresponse, or reconnect to try direct SSL. 
On the other hand, flexibility \nis not necessarily a good thing in security-critical code like this.\n\nThe patch set is confused on whether that's allowed or not. The server \nrejects it. But if you use \"gssencmode=prefer \nsslnegotiation=requiredirect\", libpq will attempt to do it, and fail.\n\n3. With \"sslmode=verify-full sslnegotiation=direct\", if the direct SSL \nconnection fails because of a problem with the certificate, libpq will \ntry again in negotiated SSL mode. That seems pointless. If the server \nresponded to the direct TLS Client Hello message with a valid \nServerHello, that indicates that the server supports direct SSL. If \nanything goes wrong after that, retrying in negotiated mode is not going \nto help.\n\n4. The number of combinations of sslmode, gssencmode and sslnegotiation \nsettings is scary. And we have very few tests for them.\n\n\nAttached patch set addresses the above, but is very much WIP. I \nrefactored the state machine in libpq, to make the states and \ntransitions more clear. I think that helps, but it's still pretty \ncomplex. I'm all ears for ideas on how to simplify it further.\n\nI added a new test suite to test the different libpq options. See \nsrc/test/libpq_encryption. I think this is very much needed, but I'm \nstill not very happy with the implementation. Some combinations are \nstill impossible to test, like connecting to an older server that \ndoesn't support direct SSL, or having the server respond with 'N' to \nGSSEncRequest. I'd also like to check more details of each connection \nattempt, like how many TCP connections are established, to check for \nthings like 3. above. Maybe we need to add more logging to libpq or the \nserver and check the logs after each test.\n\nI'm tempted to implement a mock server from scratch that could easily be \ninstructed to accept/reject the connection at just the right places. But \nthat's a lot of work.\n\nI'm going to put this down for now. The attached patch set is even more
The attached patch set is even more \nraw than v6, but I'm including it here to \"save the work\".\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)", "msg_date": "Wed, 10 Jan 2024 10:30:49 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Experiments with Postgres and SSL" }, { "msg_contents": "I've been asked to take a look at this thread and review some patches,\nand the subject looks interesting enough, so here I am.\n\nOn Thu, 19 Jan 2023 at 04:16, Greg Stark <stark@mit.edu> wrote:\n> I had a conversation a while back with Heikki where he expressed that\n> it was annoying that we negotiate SSL/TLS the way we do since it\n> introduces an extra round trip. Aside from the performance\n> optimization I think accepting standard TLS connections would open the\n> door to a number of other opportunities that would be worth it on\n> their own.\n\nI agree that this would be very nice.\n\n> Other things it would open the door to in order from least\n> controversial to most....\n>\n> * Hiding Postgres behind a standard SSL proxy terminating SSL without\n> implementing the Postgres protocol.\n\nI think there is also the option \"hiding Postgres behind a standard\nSNI-based SSL router that does not terminate SSL\", as that's arguably\na more secure way to deploy any SSL service than SSL-terminating\nproxies.\n\n> * \"Service Mesh\" type tools that hide multiple services behind a\n> single host/port (\"Service Mesh\" is just a new buzzword for \"proxy\").\n\nPeople proxying PostgreSQL seems fine, and enabling better proxying\nseems reasonable.\n\n> * Browser-based protocol implementations using websockets for things\n> like pgadmin or other tools to connect directly to postgres using\n> Postgres wire protocol but using native SSL implementations.\n>\n> * Postgres could even implement an HTTP based version of its protocol\n> and enable things like queries or browser based tools using straight\n> up HTTP requests so they 
don't need to use websockets.\n>\n> * Postgres could implement other protocols to serve up data like\n> status queries or monitoring metrics, using HTTP based standard\n> protocols instead of using our own protocol.\n\nI don't think we should be trying to serve anything HTTP-like, even\nwith a ten-foot pole, on a port that we serve the PostgreSQL wire\nprotocol on.\n\nIf someone wants to multiplex the PostgreSQL wire protocol on the same\nport that serves HTTPS traffic, they're welcome to do so with their\nown proxy, but I'd rather we keep the PostgreSQL server's socket\nhandling fundamentally incapable of servicing protocols primarily used\nin web browsers on the same socket that handles normal psql data\nconnections.\n\nPostgreSQL may have its own host-based authentication with HBA, but\nI'd rather not have to depend on it to filter incoming connections\nbetween valid psql connections and people trying to grab the latest\nmonitoring statistics at some http endpoint - I'd rather use my trusty\nfirewall that can already limit access to specific ports very\nefficiently without causing undue load on the database server.\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Mon, 19 Feb 2024 14:14:23 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Experiments with Postgres and SSL" }, { "msg_contents": "On Wed, 10 Jan 2024 at 09:31, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> Some more comments on this:\n>\n> 1. It feels weird that the combination of \"gssencmode=require\n> sslnegotiation=direct\" is forbidden. Sure, the ssl\n> negotiation will never happen with gssencmode=require, so the\n> sslnegotiation option has no effect. But by that token, should we also\n> forbid the combination \"sslmode=disable sslnegotiation=direct\"? I think\n> not. 
The sslnegotiation option should mean \"if we are going to try SSL,\n> should we try it in direct or negotiated mode?\"\n\nI'm not sure about this either. The 'gssencmode' option is already\nquite weird in that it seems to override the \"require\"d priority of\n\"sslmode=require\", which it IMO really shouldn't.\n\n> 2. Should we allow direct SSL only at the very beginning of a TCP\n> connection, or should we also allow it after we have requested GSS and\n> the server said no? Like this:\n>\n> Client: GSSENCRequest\n> Server: 'N' (gss not supported)\n> Client: TLS client Hello\n>\n> On one hand, why not? It saves you a round-trip in this case too. If we\n> don't allow it, the client will have to send SSLRequest and wait for\n> response, or reconnect to try direct SSL. On the other hand, flexibility\n> is not necessarily a good thing in security-critical code like this.\n\nI think this should be \"no\".\nOnce we start accepting PostgreSQL protocol packets (such as the\nGSSENCRequest packet) I don't think we should start treating data\nstream corruption as attempted SSL connections.\n\n> The patch set is confused on whether that's allowed or not. The server\n> rejects it. But if you use \"gssencmode=prefer\n> sslnegotiation=requiredirect\", libpq will attempt to do it, and fail.\n\nThat should then be detected as an incorrect combination of flags in\npsql: you can't have direct-to-ssl and put something in front of it.\n\n> 3. With \"sslmode=verify-full sslnegotiation=direct\", if the direct SSL\n> connection fails because of a problem with the certificate, libpq will\n> try again in negotiated SSL mode. That seems pointless. If the server\n> responded to the direct TLS Client Hello message with a valid\n> ServerHello, that indicates that the server supports direct SSL. If\n> anything goes wrong after that, retrying in negotiated mode is not going\n> to help.\n\nThis makes sense.\n\n> 4. 
The number of combinations of sslmode, gssencmode and sslnegotiation\n> settings is scary. And we have very few tests for them.\n\nYeah, it's not great. We could easily automate this better though. I\nmean, can't we run the tests using a \"cube\" configuration, i.e. test\nevery combination of parameters? We would use a mapping function of\n(psql connection parameter values -> expectations), which would be\nalong the lines of the attached pl testfile. I feel it's a bit more\napproachable than the lists of manual option configurations, and makes\nit a bit easier to program the logic of which connection security\noption we should have used to connect.\nThe attached file would be a drop-in replacement; it's tested to work\nwith SSL only - without GSS - because I've been having issues getting\nGSS working on my machine.\n\n> I'm going to put this down for now. The attached patch set is even more\n> raw than v6, but I'm including it here to \"save the work\".\n\nv6 doesn't apply cleanly anymore after 774bcffe, but here are some notes:\n\nSeveral patches are still very much WIP. Reviewing them on a\npatch-by-patch basis is therefore nigh impossible; the specific\nreviews below are thus on changes that could be traced back to a\nspecific patch. A round of cleanup would be appreciated.\n\n> 0003: Direct SSL connections postmaster support\n> [...]\n> -extern bool pq_buffer_has_data(void);\n> +extern size_t pq_buffer_has_data(void);\n\nThis should probably be renamed to pq_buffer_remaining_data or such,\nif we change the signature like this.\n\n> + /* Read from the \"unread\" buffered data first. c.f. 
libpq-be.h */\n> + if (port->raw_buf_remaining > 0)\n> + {\n> + /* consume up to len bytes from the raw_buf */\n> + if (len > port->raw_buf_remaining)\n> + len = port->raw_buf_remaining;\n\nShouldn't we also try to read from the socket, instead of only\nconsuming bytes from the raw buffer if it contains bytes?\n\n> 0008: Allow pipelining data after ssl request\n> + /*\n> + * At this point we should have no data already buffered. If we do,\n> + * it was received before we performed the SSL handshake, so it wasn't\n> + * encrypted and indeed may have been injected by a man-in-the-middle.\n> + * We report this case to the client.\n> + */\n> + if (port->raw_buf_remaining > 0)\n> + ereport(FATAL,\n> + (errcode(ERRCODE_PROTOCOL_VIOLATION),\n> + errmsg(\"received unencrypted data after SSL request\"),\n> + errdetail(\"This could be either a client-software bug or evidence of an attempted man-in-the-middle attack.\")));\n\nWe currently don't support 0-RTT SSL connections because (among other\nreasons) we haven't yet imported many features from TLS1.3, but it\nseems reasonable that clients may want to use 0RTT (or, session\nresumption in 0 round trips), which would allow encrypted data after\nthe SSL startup packet.\nIt seems wise to add a note about this to these comments in\nProcessStartupPacket.\n\n> ALPN\n\nDoes the TLS ALPN spec allow protocol versions in the protocol tag? It\nwould be very useful to detect clients with new capabilities at the\nfirst connection, rather than having to wait for one round trip, and\nwould allow one avenue for changing the protocol version.\n\nApart from this, I didn't really find any serious problems in the sum\nof these patches. 
The intermediate states were not great though, with\nvarious broken states in between.\n\nKind regards,\n\nMatthias van de Meent", "msg_date": "Thu, 22 Feb 2024 00:43:11 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Experiments with Postgres and SSL" }, { "msg_contents": "On 22/02/2024 01:43, Matthias van de Meent wrote:\n> On Wed, 10 Jan 2024 at 09:31, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>> 4. The number of combinations of sslmode, gssencmode and sslnegotiation\n>> settings is scary. And we have very few tests for them.\n> \n> Yeah, it's not great. We could easily automate this better though. I\n> mean, can't we run the tests using a \"cube\" configuration, i.e. test\n> every combination of parameters? We would use a mapping function of\n> (psql connection parameter values -> expectations), which would be\n> along the lines of the attached pl testfile. I feel it's a bit more\n> approachable than the lists of manual option configurations, and makes\n> it a bit easier to program the logic of which connection security\n> option we should have used to connect.\n> The attached file would be a drop-in replacement; it's tested to work\n> with SSL only - without GSS - because I've been having issues getting\n> GSS working on my machine.\n\n+1 testing all combinations. I don't think the 'mapper' function \napproach in your version is much better than the original though. 
Maybe \nit would be better with just one 'mapper' function that contains all the \nrules, along the lines of: (This isn't valid perl, just pseudo-code)\n\nsub expected_outcome\n{\n my ($user, $sslmode, $negotiation, $gssmode) = @_;\n\n my @possible_outcomes = { 'plain', 'ssl', 'gss' }\n\n delete $possible_outcomes{'plain'} if $sslmode eq 'require';\n delete $possible_outcomes{'ssl'} if $sslmode eq 'disable';\n\n delete $possible_outcomes{'plain'} if $user eq 'ssluser';\n delete $possible_outcomes{'plain'} if $user eq 'gssuser';\n\n if $sslmode eq 'allow' {\n\t# move 'plain' before 'ssl' in the list\n }\n if $sslmode eq 'prefer' {\n\t# move 'ssl' before 'plain' in the list\n }\n\n # more rules here\n\n\n # If there are no outcomes left in $possible_outcomes, return 'fail'\n # If there's exactly one outcome left, return that.\n # If there's more, return the first one.\n}\n\n\nOr maybe a table that lists all the combinations and the expected \noutcome. Something like this:\n\n \tnossluser\tnogssuser\tssluser\tgssuser\t\t\nsslmode=require\tfail\t\t...\nsslmode=prefer\tplain\nsslmode=disable\tplain\n\n\nThe problem is that there are more than two dimensions. So maybe an \nexhaustive list like this:\n\nuser\t\tsslmode\t\tgssmode\t\toutcome\n\nnossluser\trequire\t\tdisable\t\tfail\nnossluser\tprefer\t\tdisable\t\tplain\nnossluser\tdisable\t\tdisable\t\tplain\nssluser\t\trequire\t\tdisable\t\tssl\n...\n\n\nI'm just throwing around ideas here, can you experiment with different \napproaches and see what looks best?\n\n>> ALPN\n> \n> Does the TLS ALPN spec allow protocol versions in the protocol tag? It\n> would be very useful to detect clients with new capabilities at the\n> first connection, rather than having to wait for one round trip, and\n> would allow one avenue for changing the protocol version.\n\nLooking at the list of registered ALPN tags [0], I can see \"http/0.9\"; \n\"http/1.0\" and \"http/1.1\". 
I think we'd want to changing the major \nprotocol version in a way that would introduce a new roundtrip, though.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Thu, 22 Feb 2024 19:02:51 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Experiments with Postgres and SSL" }, { "msg_contents": "On Thu, 22 Feb 2024 at 18:02, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> On 22/02/2024 01:43, Matthias van de Meent wrote:\n>> On Wed, 10 Jan 2024 at 09:31, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>>> 4. The number of combinations of sslmode, gssencmode and sslnegotiation\n>>> settings is scary. And we have very few tests for them.\n>>\n>> Yeah, it's not great. We could easily automate this better though. I\n>> mean, can't we run the tests using a \"cube\" configuration, i.e. test\n>> every combination of parameters? We would use a mapping function of\n>> (psql connection parameter values -> expectations), which would be\n>> along the lines of the attached pl testfile. I feel it's a bit more\n>> approachable than the lists of manual option configurations, and makes\n>> it a bit easier to program the logic of which connection security\n>> option we should have used to connect.\n>> The attached file would be a drop-in replacement; it's tested to work\n>> with SSL only - without GSS - because I've been having issues getting\n>> GSS working on my machine.\n>\n> +1 testing all combinations. I don't think the 'mapper' function\n> approach in your version is much better than the original though. Maybe\n> it would be better with just one 'mapper' function that contains all the\n> rules, along the lines of: (This isn't valid perl, just pseudo-code)\n>\n> sub expected_outcome\n> {\n[...]\n> }\n>\n> Or maybe a table that lists all the combinations and the expected\n> outcome. Something lieke this:\n[...]\n>\n> The problem is that there are more than two dimensions. 
So maybe an\n> exhaustive list like this:\n>\n> user sslmode gssmode outcome\n>\n> nossluser require disable fail\n> ...\n\n> I'm just throwing around ideas here, can you experiment with different\n> approaches and see what looks best?\n\nOne issue with exhaustive tables is that they would require a product\nof all options to be listed, and that'd require at least 432 rows to\nmanage: server_ssl 2 * server_gss 2 * users 3 * client_ssl 4 *\nclient_gss 3 * client_ssldirect 3 = 432 different states. I think the\nexpected_outcome version is easier in that regard.\n\nAttached an updated version using a single unified connection type\nvalidator using an approach similar to yours. Note that it does fail 8\ntests, all of which are attributed to the current handling of\n`sslmode=require gssencmode=prefer`: right now, we allow GSS in that\ncase, even though the user require-d sslmode.\n\nAn alternative check that does pass tests with the code of the patch\nis commented out, at lines 209-216.\n\n>>> ALPN\n>>\n>> Does the TLS ALPN spec allow protocol versions in the protocol tag? 
It\n>> would be very useful to detect clients with new capabilities at the\n>> first connection, rather than having to wait for one round trip, and\n>> would allow one avenue for changing the protocol version.\n>\n> Looking at the list of registered ALPN tags [0], I can see \"http/0.9\";\n> \"http/1.0\" and \"http/1.1\".\n\nAh, nice.\n\n> I think we'd want to changing the major\n> protocol version in a way that would introduce a new roundtrip, though.\n\nI don't think I understand what you meant here, could you correct the\nsentence or expand why we want to do that?\nNote that with ALPN you could negotiate postgres/3.0 or postgres/4.0\nduring the handshake, which could save round-trips.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)", "msg_date": "Wed, 28 Feb 2024 13:00:52 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Experiments with Postgres and SSL" }, { "msg_contents": "On 28/02/2024 14:00, Matthias van de Meent wrote:\n> I don't think I understand what you meant here, could you correct the\n> sentence or expand why we want to do that?\n> Note that with ALPN you could negotiate postgres/3.0 or postgres/4.0\n> during the handshake, which could save round-trips.\n\nSorry, I missed \"avoid\" there. I meant:\n\nI think we'd want to *avoid* changing the major protocol version in a \nway that would introduce a new roundtrip, though.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Wed, 28 Feb 2024 16:10:20 +0400", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Experiments with Postgres and SSL" }, { "msg_contents": "On Wed, Feb 28, 2024 at 4:10 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> I think we'd want to *avoid* changing the major protocol version in a\n> way that would introduce a new roundtrip, though.\n\nI'm starting to get up to speed with this patchset. 
So far I'm mostly\ntesting how it works; I have yet to take an in-depth look at the\nimplementation.\n\nI'll squint more closely at the MITM-protection changes in 0008 later.\nFirst impressions, though: it looks like that code has gotten much\nless straightforward, which I think is dangerous given the attack it's\npreventing. (Off-topic: I'm skeptical of future 0-RTT support. Our\nprotocol doesn't seem particularly replay-safe to me.)\n\nIf we're interested in ALPN negotiation in the future, we may also\nwant to look at GREASE [1] to keep those options open in the presence\nof third-party implementations. Unfortunately OpenSSL doesn't do this\nautomatically yet.\n\nIf we don't have a reason not to, it'd be good to follow the strictest\nrecommendations from [2] to avoid cross-protocol attacks. (For anyone\ncurrently running web servers and Postgres on the same host, they\nreally don't want browsers \"talking\" to their Postgres servers.) That\nwould mean checking the negotiated ALPN on both the server and client\nside, and failing if it's not what we expect.\n\nI'm not excited about the proliferation of connection options. I don't\nhave a lot of ideas on how to fix it, though, other than to note that\nthe current sslnegotiation option names are very unintuitive to me:\n- \"postgres\": only legacy handshakes\n- \"direct\": might be direct... or maybe legacy\n- \"requiredirect\": only direct handshakes... unless other options are\nenabled and then we fall back again to legacy? How many people willing\nto break TLS compatibility with old servers via \"requiredirect\" are\ngoing to be okay with lazy fallback to GSS or otherwise?\n\nHeikki mentioned possibly hard-coding a TLS alert if direct SSL is\nattempted without server TLS support. I think that's a cool idea, but\nwithout an official \"TLS not supported\" alert code (which, honestly,\nwould be strange to standardize) I'm kinda -0.5 on it. 
If the client\ntells me about a handshake_failure or similar, I'm going to start\ninvestigating protocol versions and ciphersuites; I'm not going to\nthink to myself that maybe the server lacks TLS support altogether.\n(Plus, we need to have a good error message when connecting to older\nservers anyway. I think we should be able to key off of the EOF coming\nback from OpenSSL; it'd be a good excuse to give that part of the code\nsome love.)\n\nFor the record, I'm adding some one-off tests for this feature to a\nlocal copy of my OAuth pytest suite, which is designed to do the kinds\nof testing you're running into trouble with. It's not in any way\nviable for a PG17 commit, but if you're interested I can make the\npatches available.\n\n--Jacob\n\n[1] https://www.rfc-editor.org/rfc/rfc8701.html\n[2] https://alpaca-attack.com/libs.html\n\n\n", "msg_date": "Fri, 1 Mar 2024 13:49:12 -0800", "msg_from": "Jacob Champion <jacob.champion@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Experiments with Postgres and SSL" }, { "msg_contents": "On 01/03/2024 23:49, Jacob Champion wrote:\n> On Wed, Feb 28, 2024 at 4:10 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>> I think we'd want to *avoid* changing the major protocol version in a\n>> way that would introduce a new roundtrip, though.\n> \n> I'm starting to get up to speed with this patchset. So far I'm mostly\n> testing how it works; I have yet to take an in-depth look at the\n> implementation.\n\nThank you!\n\n> I'll squint more closely at the MITM-protection changes in 0008 later.\n> First impressions, though: it looks like that code has gotten much\n> less straightforward, which I think is dangerous given the attack it's\n> preventing. (Off-topic: I'm skeptical of future 0-RTT support. Our\n> protocol doesn't seem particularly replay-safe to me.)\n\nLet's drop that patch. 
AFAICS it's not needed by the rest of the patches.\n\n> If we're interested in ALPN negotiation in the future, we may also\n> want to look at GREASE [1] to keep those options open in the presence\n> of third-party implementations. Unfortunately OpenSSL doesn't do this\n> automatically yet.\n\nCan you elaborate? Do we need to do something extra in the server to be \ncompatible with GREASE?\n\n> If we don't have a reason not to, it'd be good to follow the strictest\n> recommendations from [2] to avoid cross-protocol attacks. (For anyone\n> currently running web servers and Postgres on the same host, they\n> really don't want browsers \"talking\" to their Postgres servers.) That\n> would mean checking the negotiated ALPN on both the server and client\n> side, and failing if it's not what we expect.\n\nHmm, I thought that's what the patches do. But looking closer, libpq \nis not checking that ALPN was used. We should add that. Am I right?\n\n> I'm not excited about the proliferation of connection options. I don't\n> have a lot of ideas on how to fix it, though, other than to note that\n> the current sslnegotiation option names are very unintuitive to me:\n> - \"postgres\": only legacy handshakes\n> - \"direct\": might be direct... or maybe legacy\n> - \"requiredirect\": only direct handshakes... unless other options are\n> enabled and then we fall back again to legacy? How many people willing\n> to break TLS compatibility with old servers via \"requiredirect\" are\n> going to be okay with lazy fallback to GSS or otherwise?\n\nYeah, this is my biggest complaint about all this. Not so much the names \nof the options, but the number of combinations of different options, and \nhow we're going to test them all. I don't have any great solutions, \nexcept adding a lot of tests to cover them, like Matthias did.\n\n> Heikki mentioned possibly hard-coding a TLS alert if direct SSL is\n> attempted without server TLS support. 
I think that's a cool idea, but\n> without an official \"TLS not supported\" alert code (which, honestly,\n> would be strange to standardize) I'm kinda -0.5 on it. If the client\n> tells me about a handshake_failure or similar, I'm going to start\n> investigating protocol versions and ciphersuites; I'm not going to\n> think to myself that maybe the server lacks TLS support altogether.\n\nAgreed.\n\n> (Plus, we need to have a good error message when connecting to older\n> servers anyway. I think we should be able to key off of the EOF coming\n> back from OpenSSL; it'd be a good excuse to give that part of the code\n> some love.)\n\nHmm, if OpenSSL sends ClientHello and the server responds with a \nPostgres error packet, OpenSSL will presumably consume the error packet \nor at least part of it. But with our custom BIO, we can peek at the \nserver response before handing it to OpenSSL.\n\nIf it helps, we could backport a nicer error message to old server \nversions, similar to what we did with SCRAM in commit 96d0f988b1.\n\n> For the record, I'm adding some one-off tests for this feature to a\n> local copy of my OAuth pytest suite, which is designed to do the kinds\n> of testing you're running into trouble with. It's not in any way\n> viable for a PG17 commit, but if you're interested I can make the\n> patches available.\n\nYes please, it would be nice to see what tests you've performed, and \nhave it archived.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Mon, 4 Mar 2024 17:29:41 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Experiments with Postgres and SSL" }, { "msg_contents": "I hope I didn't joggle your elbow reviewing this, Jacob, but I spent \nsome time rebasing and fixing various little things:\n\n- Incorporated Matthias's test changes\n\n- Squashed the client, server and documentation patches. 
Not much point \nin keeping them separate, as one requires the other, and if you're only \ninterested e.g. in the server parts, just look at src/backend.\n\n- Squashed some of my refactorings with the main patches, because I'm \ncertain enough that they're desirable. I kept the last libpq state \nmachine refactoring separate though. I'm pretty sure we need a \nrefactoring like that, but I'm not 100% sure about the details.\n\n- Added some comments to the new state machine logic in fe-connect.c.\n\n- Removed the XXX comments about TLS alerts.\n\n- Removed the \"Allow pipelining data after ssl request\" patch\n\n- Reordered the patches so that the first two patches add tests for \ndifferent combinations of sslmode, gssencmode and server support. That \ncould be committed separately, without the rest of the patches. A later \npatch expands the tests for the new sslnegotiation option.\n\n\nThe tests are still not distinguishing whether a connection was \nestablished in direct or negotiated mode. So if we e.g. had a bug that \naccidentally disabled direct SSL connection completely and always used \nnegotiated mode, the tests would still pass. I'd like to see some tests \nthat would catch that.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)", "msg_date": "Tue, 5 Mar 2024 16:08:54 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Experiments with Postgres and SSL" }, { "msg_contents": "On Tue, Mar 5, 2024 at 6:09 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> I hope I didn't joggle your elbow reviewing this\n\nNope, not at all!\n\n> The tests are still not distinguishing whether a connection was\n> established in direct or negotiated mode. So if we e.g. had a bug that\n> accidentally disabled direct SSL connection completely and always used\n> negotiated mode, the tests would still pass. 
I'd like to see some tests\n> that would catch that.\n\n+1\n\nOn Mon, Mar 4, 2024 at 7:29 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> On 01/03/2024 23:49, Jacob Champion wrote:\n> > I'll squint more closely at the MITM-protection changes in 0008 later.\n> > First impressions, though: it looks like that code has gotten much\n> > less straightforward, which I think is dangerous given the attack it's\n> > preventing. (Off-topic: I'm skeptical of future 0-RTT support. Our\n> > protocol doesn't seem particularly replay-safe to me.)\n>\n> Let's drop that patch. AFAICS it's not needed by the rest of the patches.\n\nOkay, sounds good.\n\n> > If we're interested in ALPN negotiation in the future, we may also\n> > want to look at GREASE [1] to keep those options open in the presence\n> > of third-party implementations. Unfortunately OpenSSL doesn't do this\n> > automatically yet.\n>\n> Can you elaborate?\n\nSure: now that we're letting middleboxes and proxies inspect and react\nto connections based on ALPN, it's possible that some intermediary\nmight incorrectly fixate on the \"postgres\" ID (or whatever we choose\nin the end), and shut down connections that carry additional protocols\nrather than ignoring them. That would prevent future graceful upgrades\nwhere the client sends both \"postgres/X\" and \"postgres/X+1\". While\nthat wouldn't be our fault, it'd be cold comfort to whoever has that\nmiddlebox.\n\nGREASE is a set of reserved protocol IDs that you can add randomly to\nyour ALPN list, so any middleboxes that fail to follow the rules will\njust break outright rather than silently proliferating. (Hence the\npun: GREASE keeps the joints in the pipe from rusting into place.) The\nRFC goes into more detail about how to do it. 
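Concretely, the reserved GREASE ALPN protocol IDs in RFC 8701 are the sixteen two-octet values 0x0A0A, 0x1A1A, ..., 0xFAFA. A client-side sketch of offering one (hypothetical, not code from the patch set; b"postgres" here is just the placeholder ID from the discussion above, not necessarily the final choice):

```python
import random

def grease_alpn_id(rng=random) -> bytes:
    """Pick one of the sixteen reserved GREASE ALPN protocol IDs from
    RFC 8701: two identical octets whose low nibble is 0xA."""
    octet = 0x0A | (rng.randrange(16) << 4)
    return bytes([octet, octet])

def alpn_offer() -> list:
    """A GREASE-ing client offers a random reserved ID alongside the
    real one; a compliant peer must simply ignore the entry it does
    not recognize, so any middlebox that chokes on it is broken."""
    return [grease_alpn_id(), b"postgres"]
```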
And I don't know if it's\nnecessary for a v1, but it'd be something to keep in mind.\n\n> Do we need to do something extra in the server to be\n> compatible with GREASE?\n\nNo, I think that as long as we use OpenSSL's APIs correctly on the\nserver side, we'll be compatible by default. This would be a\nclient-side implementation, to push random GREASE strings into the\nALPN list. (There is a risk that if/when OpenSSL finally starts\nsupporting this transparently, we'd need to remove it from our code.)\n\n> > If we don't have a reason not to, it'd be good to follow the strictest\n> > recommendations from [2] to avoid cross-protocol attacks. (For anyone\n> > currently running web servers and Postgres on the same host, they\n> > really don't want browsers \"talking\" to their Postgres servers.) That\n> > would mean checking the negotiated ALPN on both the server and client\n> > side, and failing if it's not what we expect.\n>\n> Hmm, I thought that's what the patches do. But looking closer, libpq\n> is not checking that ALPN was used. We should add that. Am I right?\n\nRight. Also, it looks like the server isn't failing the TLS handshake\nitself, but instead just dropping the connection after the handshake.\nIn a cross-protocol attack, there's a danger that the client (which is\nnot speaking our protocol) could still treat the server as\nauthoritative in that situation.\n\n> > I'm not excited about the proliferation of connection options. I don't\n> > have a lot of ideas on how to fix it, though, other than to note that\n> > the current sslnegotiation option names are very unintuitive to me:\n> > - \"postgres\": only legacy handshakes\n> > - \"direct\": might be direct... or maybe legacy\n> > - \"requiredirect\": only direct handshakes... unless other options are\n> > enabled and then we fall back again to legacy? 
How many people willing\n> > to break TLS compatibility with old servers via \"requiredirect\" are\n> > going to be okay with lazy fallback to GSS or otherwise?\n>\n> Yeah, this is my biggest complaint about all this. Not so much the names\n> of the options, but the number of combinations of different options, and\n> how we're going to test them all. I don't have any great solutions,\n> except adding a lot of tests to cover them, like Matthias did.\n\nThe default gssencmode=prefer is especially problematic if I'm trying\nto use sslnegotiation=requiredirect for security. It'll appear to work\nat first, but if somehow I get a credential cache into my environment,\nlibpq will suddenly fall back to plaintext negotiation :(\n\n> > (Plus, we need to have a good error message when connecting to older\n> > servers anyway. I think we should be able to key off of the EOF coming\n> > back from OpenSSL; it'd be a good excuse to give that part of the code\n> > some love.)\n>\n> Hmm, if OpenSSL sends ClientHello and the server responds with a\n> Postgres error packet, OpenSSL will presumably consume the error packet\n> or at least part of it. But with our custom BIO, we can peek at the\n> server response before handing it to OpenSSL.\n\nI don't think an error packet is going to come back with the\ncurrently-shipped implementations. IIUC, COMMERROR packets are\nswallowed instead of emitted before authentication completes. So I see\nEOFs when trying to connect to older servers. 
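To make the peeking idea concrete, the first octet that comes back after a direct ClientHello already separates the cases. A sketch (hypothetical; not the custom-BIO code from the patch set):

```python
def classify_first_byte(reply: bytes) -> str:
    """Classify the first octet a client reads back after sending a
    TLS ClientHello to a Postgres port (sketch, not libpq code)."""
    if not reply:
        return "eof"              # older server closed the connection
    first = reply[0]
    if first == 0x16:             # TLS record type: handshake
        return "tls-handshake"    # ServerHello: direct SSL is supported
    if first == 0x15:             # TLS record type: alert
        return "tls-alert"
    if first == ord("E"):
        return "postgres-error"   # an ErrorResponse, if one were ever sent
    return "unknown"
```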
Do you know of any\nsituations where we'd see an actual error message on the wire?\n\n> If it helps, we could backport a nicer error message to old server\n> versions, similar to what we did with SCRAM in commit 96d0f988b1.\n\nThat might be nice regardless, instead of pushing \"invalid length of\nstartup packet\" into the logs.\n\n> > For the record, I'm adding some one-off tests for this feature to a\n> > local copy of my OAuth pytest suite, which is designed to do the kinds\n> > of testing you're running into trouble with. It's not in any way\n> > viable for a PG17 commit, but if you're interested I can make the\n> > patches available.\n>\n> Yes please, it would be nice to see what tests you've performed, and\n> have it archived.\n\nI've cleaned it up a bit and put it up at [1]. (If you want, I can\nattach the GitHub-generated ZIP, so the mailing list has a snapshot.)\n\nThese include happy-path tests for direct SSL, some failure modes, and\nan example test that combines the GSS and SSL negotiation paths. So\nthere might be test bugs, but with the v8 patchset, I see the\nfollowing failures:\n\n> FAILED client/test_tls.py::test_direct_ssl_without_alpn - AssertionError: client sent unexpected data\n\nI.e. 
the client doesn't disconnect if the server doesn't select our protocol.\n\n> FAILED client/test_tls.py::test_direct_ssl_failed_negotiation[direct-True] - AssertionError: Regex pattern did not match.\n> FAILED client/test_tls.py::test_direct_ssl_failed_negotiation[requiredirect-False] - AssertionError: Regex pattern did not match.\n> FAILED client/test_tls.py::test_gssapi_negotiation - AssertionError: Regex pattern did not match.\n\nThese are complaining about the \"SSL SYSCALL error: EOF detected\"\nerror messages that the client returns.\n\n> FAILED server/test_tls.py::test_direct_ssl_without_alpn[no application protocols] - Failed: DID NOT RAISE <class 'ssl.SSLError'>\n> FAILED server/test_tls.py::test_direct_ssl_without_alpn[incorrect application protocol] - Failed: DID NOT RAISE <class 'ssl.SSLError'>\n\nI.e. the server allows the handshake to complete without a proper ALPN\nselection.\n\nThanks,\n--Jacob\n\n[1] https://github.com/jchampio/pg-pytest-suite\n\n\n", "msg_date": "Tue, 5 Mar 2024 10:57:31 -0800", "msg_from": "Jacob Champion <jacob.champion@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Experiments with Postgres and SSL" }, { "msg_contents": "I keep forgetting -- attached is the diff I'm carrying to plug\nlibpq_encryption into Meson. 
(The current patchset has a meson.build\nfor it, but it's not connected.)\n\n--Jacob", "msg_date": "Tue, 5 Mar 2024 14:30:40 -0800", "msg_from": "Jacob Champion <jacob.champion@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Experiments with Postgres and SSL" }, { "msg_contents": "On Tue, 5 Mar 2024 at 15:08, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> I hope I didn't joggle your elbow reviewing this, Jacob, but I spent\n> some time rebase and fix various little things:\n\nWith the recent changes to backend startup committed by you, this\npatchset has gotten major apply failures.\n\nCould you provide a new version of the patchset so that it can be\nreviewed in the context of current HEAD?\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Thu, 28 Mar 2024 12:15:27 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Experiments with Postgres and SSL" }, { "msg_contents": "On 28/03/2024 13:15, Matthias van de Meent wrote:\n> On Tue, 5 Mar 2024 at 15:08, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>>\n>> I hope I didn't joggle your elbow reviewing this, Jacob, but I spent\n>> some time rebase and fix various little things:\n> \n> With the recent changes to backend startup committed by you, this\n> patchset has gotten major apply failures.\n> \n> Could you provide a new version of the patchset so that it can be\n> reviewed in the context of current HEAD?\n\nHere you are.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)", "msg_date": "Thu, 28 Mar 2024 14:37:07 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Experiments with Postgres and SSL" }, { "msg_contents": "On Thu, 28 Mar 2024, 13:37 Heikki Linnakangas, <hlinnaka@iki.fi> wrote:\n>\n> On 28/03/2024 13:15, Matthias van de Meent wrote:\n> > On Tue, 5 Mar 2024 at 15:08, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> >>\n> >> I hope I 
didn't joggle your elbow reviewing this, Jacob, but I spent\n> >> some time rebase and fix various little things:\n> >\n> > With the recent changes to backend startup committed by you, this\n> > patchset has gotten major apply failures.\n> >\n> > Could you provide a new version of the patchset so that it can be\n> > reviewed in the context of current HEAD?\n>\n> Here you are.\n\nSorry for the delay. I've run some tests and didn't find any specific\nissues in the patchset.\n\nI did get sidetracked on trying to further improve the test suite,\nwhere I was trying to find out how to use Test::More::subtests, but\nhave now decided it's not worth the lost time now vs adding this as a\nfeature in 17.\n\nSome remaining comments:\n\npatches 0001/0002: not reviewed in detail.\n\nPatch 0003:\n\nThe read size in secure_raw_read is capped to port->raw_buf_remaining\nif the raw buf has any data. While the user will probably call into\nthis function again, I think that's a waste of cycles.\n\npq_buffer_has_data now doesn't have any protections against\ndesynchronized state between PqRecvLength and PqRecvPointer. 
An\nAssert(PqRecvLength >= PqRecvPointer) to that value would be\nappreciated.\n\n(in backend_startup.c)\n> + elog(LOG, \"Detected direct SSL handshake\");\n\nI think this should be gated at a lower log level, or a GUC, as this\nwould easily DOS a logfile by bulk sending of SSL handshake bytes.\n\n0004:\n\nbackend_startup.c\n> + if (!ssl_enable_alpn)\n> + {\n> + elog(WARNING, \"Received direct SSL connection without ssl_enable_alpn enabled\");\n\nThis is too verbose, too.\n\n> + if (!port->alpn_used)\n> + {\n> + ereport(COMMERROR,\n> + (errcode(ERRCODE_PROTOCOL_VIOLATION),\n> + errmsg(\"Received direct SSL connection request without required ALPN protocol negotiation extension\")));\n\nIf ssl_enable_alpn is disabled, we shouldn't report a COMMERROR when\nthe client does indeed not have alpn enabled.\n\n0005:\n\nAs mentioned above, I'd have loved to use subtests here for the cube()\nof tests, but I got in too much of a rabbit hole to get that done.\n\n0006:\n\nIn CONNECTION_FAILED, we use connection_failed() to select whether we\nneed a new connection or stop trying altogether, but that function's\ndescription states:\n\n> + * Out-of-line portion of the CONNECTION_FAILED() macro\n> + *\n> + * Returns true, if we should retry the connection with different encryption method.\n\nWhich to me reads like we should reuse the connection, and try a\ndifferent method on that same connection. 
Maybe we can improve the\nwording to something like\n+ * Returns true, if we should reconnect with a different encryption method.\nto make the reconnect part more clear.\n\nIn select_next_encryption_method, there are several copies of this pattern:\n\nif ((remaining_methods & ENC_METHOD) != 0)\n{\n conn->current_enc_method = ENC_METHOD;\n return true;\n}\n\nI think a helper macro would reduce the verbosity of the scaffolding,\nlike in the attached SELECT_NEXT_METHOD.diff.txt.\n\n\nKind regards,\n\nMatthias van de Meent", "msg_date": "Thu, 4 Apr 2024 13:08:08 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Experiments with Postgres and SSL" }, { "msg_contents": "Committed this. Thank you to everyone involved!\n\nOn 04/04/2024 14:08, Matthias van de Meent wrote:\n> Patch 0003:\n> \n> The read size in secure_raw_read is capped to port->raw_buf_remaining\n> if the raw buf has any data. While the user will probably call into\n> this function again, I think that's a waste of cycles.\n\nHmm, yeah, I suppose we could read more data in the same call. It seems \nsimpler not to. The case that \"raw_buf_remaining > 0\" is very rare.\n\n> pq_buffer_has_data now doesn't have any protections against\n> desynchronized state between PqRecvLength and PqRecvPointer. 
Maybe we can improve the\n> wording to something like\n> + * Returns true, if we should reconnect with a different encryption method.\n> to make the reconnect part more clear.\n\nChanged to \"Returns true, if we should reconnect and retry with a \ndifferent encryption method\".\n\n> In select_next_encryption_method, there are several copies of this pattern:\n> \n> if ((remaining_methods & ENC_METHOD) != 0)\n> {\n> conn->current_enc_method = ENC_METHOD;\n> return true;\n> }\n> \n> I think a helper macro would reduce the verbosity of the scaffolding,\n> like in the attached SELECT_NEXT_METHOD.diff.txt.\n\nApplied.\n\nIn addition to the above, I made heavy changes to the tests. I wanted to \ntest not just the outcome (SSL, GSSAPI, plaintext, or fail), but also \nthe steps and reconnections needed to get there. To facilitate that, I \nrewrote how the expected outcome was represented in the test script. It \nnow uses a table-driven approach, with a line for each test iteration, \nie. for each different combination of options that are tested.\n\nI then added some more logging, so that whenever the server receives an \nSSLRequest or GSSENCRequest packet, it logs a line. That's controlled by \na new not-in-sample GUC (\"trace_connection_negotiation\"), intended only \nfor the test and debugging. The test scrapes the log for the lines that \nit prints, and the expected output includes a compact trace of expected \nevents. For example, the expected output for \"user=testuser \ngssencmode=prefer sslmode=prefer sslnegotiation=direct\", when GSS and \nSSL are both disabled in the server, looks like this:\n\n# USER GSSENCMODE SSLMODE SSLNEGOTIATION EVENTS -> OUTCOME\ntestuser prefer prefer direct connect, \ndirectsslreject, reconnect, sslreject, authok -> plain\n\nThat means, we expect libpq to first try direct SSL, which is rejected \nby the server. It should then reconnect and attempt traditional \nnegotiated SSL, which is also rejected. 
Finally, it should try plaintext \nauthentication, without reconnecting, which succeeds.\n\nThat actually revealed a couple of slightly bogus behaviors with the \ncurrent code. Here's one example:\n\n# XXX: libpq retries the connection unnecessarily in this case:\nnogssuser require allow connect, gssaccept, authfail, \nreconnect, gssaccept, authfail -> fail\n\nThat means, with \"gssencmode=require sslmode=allow\", if the server \naccepts the GSS encryption but refuses the connection at authentication, \nlibpq will reconnect and go through the same motions again. The second \nattempt is pointless, we know it's going to fail. The refactoring to the \nlibpq state machine fixed that issue as a side-effect.\n\nI removed the server ssl_enable_alpn and libpq sslalpn options. The idea \nwas that they could be useful for testing, but we didn't actually have \nany tests that would use them, and you get the same result by testing \nwith an older server or client version. I'm open to adding them back if \nwe also add tests that benefit from them, but they were pretty pointless \nas they were.\n\nOne important open item now is that we need to register a proper ALPN \nprotocol ID with IANA.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Mon, 8 Apr 2024 04:25:54 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Experiments with Postgres and SSL" }, { "msg_contents": "Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> Committed this. Thank you to everyone involved!\n\nLooks like perlcritic isn't too happy with the test code:\nkoel and crake say\n\n./src/test/libpq_encryption/t/001_negotiate_encryption.pl: Return value of flagged function ignored - chmod at line 138, column 2. See pages 208,278 of PBP. ([InputOutput::RequireCheckedSyscalls] Severity: 5)\n./src/test/libpq_encryption/t/001_negotiate_encryption.pl: Return value of flagged function ignored - open at line 184, column 1. 
See pages 208,278 of PBP. ([InputOutput::RequireCheckedSyscalls] Severity: 5)\n\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 07 Apr 2024 21:28:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Experiments with Postgres and SSL" }, { "msg_contents": "On 08/04/2024 04:28, Tom Lane wrote:\n> Heikki Linnakangas <hlinnaka@iki.fi> writes:\n>> Committed this. Thank you to everyone involved!\n> \n> Looks like perlcritic isn't too happy with the test code:\n> koel and crake say\n> \n> ./src/test/libpq_encryption/t/001_negotiate_encryption.pl: Return value of flagged function ignored - chmod at line 138, column 2. See pages 208,278 of PBP. ([InputOutput::RequireCheckedSyscalls] Severity: 5)\n> ./src/test/libpq_encryption/t/001_negotiate_encryption.pl: Return value of flagged function ignored - open at line 184, column 1. See pages 208,278 of PBP. ([InputOutput::RequireCheckedSyscalls] Severity: 5)\n\nFixed, thanks.\n\nI'll make a note in my personal TODO list to add perlcritic to cirrus CI \nif possible. 
I rely heavily on that nowadays to catch issues before the \nbuildfarm.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Mon, 8 Apr 2024 04:40:00 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Experiments with Postgres and SSL" }, { "msg_contents": "On 08/04/2024 04:25, Heikki Linnakangas wrote:\n> One important open item now is that we need to register a proper ALPN\n> protocol ID with IANA.\n\nI sent a request for that: \nhttps://mailarchive.ietf.org/arch/msg/tls-reg-review/9LWPzQfOpbc8dTT7vc9ahNeNaiw/\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Mon, 8 Apr 2024 11:38:57 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Experiments with Postgres and SSL" }, { "msg_contents": "On 08.04.24 10:38, Heikki Linnakangas wrote:\n> On 08/04/2024 04:25, Heikki Linnakangas wrote:\n>> One important open item now is that we need to register a proper ALPN\n>> protocol ID with IANA.\n> \n> I sent a request for that: \n> https://mailarchive.ietf.org/arch/msg/tls-reg-review/9LWPzQfOpbc8dTT7vc9ahNeNaiw/\n\nWhy did you ask for \"pgsql\"? The IANA protocol name for port 5432 is \n\"postgres\". This seems confusing.\n\n\n\n", "msg_date": "Wed, 24 Apr 2024 22:51:29 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: Experiments with Postgres and SSL" }, { "msg_contents": "On 01.03.24 22:49, Jacob Champion wrote:\n> If we're interested in ALPN negotiation in the future, we may also\n> want to look at GREASE [1] to keep those options open in the presence\n> of third-party implementations. Unfortunately OpenSSL doesn't do this\n> automatically yet.\n> \n> If we don't have a reason not to, it'd be good to follow the strictest\n> recommendations from [2] to avoid cross-protocol attacks. 
(For anyone\n> currently running web servers and Postgres on the same host, they\n> really don't want browsers \"talking\" to their Postgres servers.) That\n> would mean checking the negotiated ALPN on both the server and client\n> side, and failing if it's not what we expect.\n\nI've been reading up on ALPN. There is another thread that is \ndiscussing PostgreSQL protocol version negotiation, and ALPN also has \n\"protocol negotiation\" in the name and there is some discussion in this \nthread about the granularity of the protocol names.\n\nI'm concerned that there appears to be some confusion over whether ALPN \nis a performance feature or a security feature. RFC 7301 appears to be \npretty clear that it's for performance, not for security.\n\nLooking at the ALPACA attack, I'm not convinced that it's very relevant \nfor PostgreSQL. It's basically just a case of, you connected to the \nwrong server. And web browsers routinely open additional connections \nbased on what data they have previously received, and they liberally \nsend along session cookies to those new connections, so I understand \nthat this can be a problem. But I don't see how ALPN is a good defense. \n It can help only if all other possible services other than http \nimplement it and say, you're a web browser, go away. And what if the \nrogue server is in fact a web server, then it doesn't help at all. I \nguess there could be some common configurations where there is a web \nserver, and ftp server, and some mail servers running on the same TLS \nend point. But in how many cases is there also a PostgreSQL server \nrunning on the same end point? The page about ALPACA also suggests SNI \nas a mitigation, which seems more sensible, because the burden is then \non the client to do the right thing, and not on all other servers to \nsend away clients doing the wrong thing. 
And of course libpq already \nsupports SNI.\n\nFor the protocol negotiation aspect, how does this work if the wrapped \nprotocol already has a version negotiation system? For example, various \nHTTP versions are registered as separate protocols for ALPN. What if \nALPN says it's HTTP/1.0 but the actual HTTP requests specify 1.1, or \nvice versa? What is the actual mechanism where the performance benefits \n(saving round-trips) are created? I haven't caught up with HTTP 2 and \nso on, so maybe there are additional things at play there, but it is not \nfully explained in the RFCs. I suppose PostgreSQL would keep its \ninternal protocol version negotiation in any case, but then what do we \nneed ALPN on top for?\n\n\n\n", "msg_date": "Wed, 24 Apr 2024 22:57:08 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: Experiments with Postgres and SSL" }, { "msg_contents": "On Wed, Apr 24, 2024 at 1:57 PM Peter Eisentraut <peter@eisentraut.org> wrote:\n> I'm concerned that there appears to be some confusion over whether ALPN\n> is a performance feature or a security feature. RFC 7301 appears to be\n> pretty clear that it's for performance, not for security.\n\nIt was also designed to give benefits for more complex topologies\n(proxies, cert selection, etc.), but yeah, this is a mitigation\ntechnique that just uses what is already widely implemented.\n\n> Looking at the ALPACA attack, I'm not convinced that it's very relevant\n> for PostgreSQL. It's basically just a case of, you connected to the\n> wrong server.\n\nI think that's an oversimplification. This prevents active MITM, where\nan adversary has connected you to the wrong server.\n\n> But I don't see how ALPN is a good defense.\n> It can help only if all other possible services other than http\n> implement it and say, you're a web browser, go away.\n\nWhy? An ALPACA-aware client will fail the connection if the server\ndoesn't advertise the correct protocol. 
An ALPACA-aware server will\nfail the handshake if the client doesn't advertise the correct\nprotocol. They protect themselves, and their peers, without needing\ntheir peers to understand.\n\n> And what if the\n> rogue server is in fact a web server, then it doesn't help at all.\n\nIt's not a rogue server; the attack is using other friendly services\nagainst you. If you're able to set up an attacker-controlled server,\nusing the same certificate as the valid server, on a host covered by\nthe cert, I think it's game over for many other reasons.\n\nIf you mean that you can't prevent an attacker from redirecting one\nweb server's traffic to another (friendly) web server that's running\non the same host, that's correct. Web admins who care would need to\nimplement countermeasures, like Origin header filtering or something?\nI don't think we have a similar concept to that -- it'd be nice! --\nbut we don't need to have one in order to provide protection for the\nother network protocols we exist next to.\n\n> I\n> guess there could be some common configurations where there is a web\n> server, and ftp server, and some mail servers running on the same TLS\n> end point. But in how many cases is there also a PostgreSQL server\n> running on the same end point?\n\nNot only have I seen those cohosted, I've deployed such setups myself.\nIsn't that basically cPanel's MO, and a standard setup for <shared web\nhosting provider here>? (It's been a while and I don't have a setup\nhandy to double-check, sorry; feel free to push back if they don't do\nthat anymore.)\n\nA quick search for \"running web server and Postgres on the same host\"\nseems to yield plenty of conversations. 
Some of those conversations\nsay \"don't do it\", but of course others do not :) Some actively\nencourage it for simplicity.\n\n> The page about ALPACA also suggests SNI\n> as a mitigation, which seems more sensible, because the burden is then\n> on the client to do the right thing, and not on all other servers to\n> send away clients doing the wrong thing. And of course libpq already\n> supports SNI.\n\nThat mitigates a different attack. From the ALPACA site [1]:\n\n> Implementing these [ALPN] countermeasures is effective in preventing cross-protocol attacks irregardless of hostnames and ports used for application servers.\n> ...\n> Implementing these [SNI] countermeasures is effective in preventing same-protocol attacks on servers with different hostnames, as well as cross-protocol attacks on servers with different hostnames even if the ALPN countermeasures can not be implemented.\n\nSNI is super useful; it's just not always enough. And a strict SNI\ncheck would also be good to do, but it doesn't seem imperative to tie\nit to this feature, since same-protocol attacks were already possible\nAFAICT. It's the cross-protocol attacks that are new, made possible by\nthe new handshake.\n\n> For the protocol negotiation aspect, how does this work if the wrapped\n> protocol already has a version negotiation system? For example, various\n> HTTP versions are registered as separate protocols for ALPN. What if\n> ALPN says it's HTTP/1.0 but the actual HTTP requests specify 1.1, or\n> vice versa?\n\nIf a client or server incorrectly negotiates a protocol and then\nstarts speaking a different one, then it's just protocol-dependent\nwhether that works or not. HTTP/1.0 and HTTP/1.1 would still be\ncross-compatible in some cases. 
The others, not so much.\n\n> What is the actual mechanism where the performance benefits\n> (saving round-trips) are created?\n\nThe negotiation gets done as part of the TLS handshake, which had to\nbe done anyway.\n\n> I haven't caught up with HTTP 2 and\n> so on, so maybe there are additional things at play there, but it is not\n> fully explained in the RFCs.\n\nPractically speaking, HTTP/2 is negotiated via ALPN in the real world,\nat least last I checked. I don't think browsers ever supported the\nplaintext h2c:// scheme. There's also an in-band `Upgrade: h2c` path\ndefined that does not use ALPN at all, but again I don't think any\nbrowsers use it.\n\n> I suppose PostgreSQL would keep its\n> internal protocol version negotiation in any case, but then what do we\n> need ALPN on top for?\n\nThat is entirely up to us. If there's a 4.0 protocol that's completely\nincompatible at the network level (multiplexing? QUIC?) then issuing a\nnew ALPN would probably be useful.\n\nThanks,\n--Jacob\n\n[1] https://alpaca-attack.com/libs.html\n\n\n", "msg_date": "Wed, 24 Apr 2024 17:57:21 -0700", "msg_from": "Jacob Champion <jacob.champion@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Experiments with Postgres and SSL" }, { "msg_contents": "On 24/04/2024 23:51, Peter Eisentraut wrote:\n> On 08.04.24 10:38, Heikki Linnakangas wrote:\n>> On 08/04/2024 04:25, Heikki Linnakangas wrote:\n>>> One important open item now is that we need to register a proper ALPN\n>>> protocol ID with IANA.\n>>\n>> I sent a request for that:\n>> https://mailarchive.ietf.org/arch/msg/tls-reg-review/9LWPzQfOpbc8dTT7vc9ahNeNaiw/\n> \n> Why did you ask for \"pgsql\"? The IANA protocol name for port 5432 is\n> \"postgres\". This seems confusing.\n\nOh, I was not aware of that. According to [1], it's actually \n\"postgresql\". 
The ALPN registration has not been approved yet, so I'll \nreply on the ietf thread to point that out.\n\n[1] \nhttps://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.txt\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Thu, 25 Apr 2024 10:07:54 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Experiments with Postgres and SSL" } ]
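The fallback behavior settled on in the thread above — try the most preferred encryption method first and, on rejection, reconnect and try the next one — is driven by a bitmask of remaining methods, condensed with the SELECT_NEXT_METHOD helper macro that was proposed and applied. The following is an illustrative, self-contained sketch of that selection step only; the `EncryptionMethod`/`Conn` types, the `ENC_*` names, and the preference order here are assumptions made for the example and are not libpq's actual definitions:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Illustrative model of "pick the next encryption method to try".
 * All names below are invented for this sketch, not real libpq code.
 */
typedef enum
{
	ENC_ERROR = 0,
	ENC_PLAINTEXT = 1 << 0,
	ENC_GSSAPI = 1 << 1,
	ENC_NEGOTIATED_SSL = 1 << 2,
	ENC_DIRECT_SSL = 1 << 3
} EncryptionMethod;

typedef struct Conn
{
	int			allowed_enc_methods;	/* what the connection options permit */
	int			failed_enc_methods;		/* what we already tried and gave up on */
	EncryptionMethod current_enc_method;
} Conn;

/*
 * Helper macro in the spirit of SELECT_NEXT_METHOD from the thread: it
 * replaces the repeated "if ((remaining_methods & X) != 0) { ... }"
 * scaffolding.  Note that it deliberately references the local variable
 * 'remaining_methods' of the enclosing function.
 */
#define SELECT_NEXT_METHOD(conn, method) \
	do { \
		if ((remaining_methods & (method)) != 0) \
		{ \
			(conn)->current_enc_method = (method); \
			return true; \
		} \
	} while (0)

bool
select_next_encryption_method(Conn *conn)
{
	int			remaining_methods;

	remaining_methods = conn->allowed_enc_methods & ~conn->failed_enc_methods;

	/* Most preferred first (an assumed order, for illustration only). */
	SELECT_NEXT_METHOD(conn, ENC_DIRECT_SSL);
	SELECT_NEXT_METHOD(conn, ENC_NEGOTIATED_SSL);
	SELECT_NEXT_METHOD(conn, ENC_GSSAPI);
	SELECT_NEXT_METHOD(conn, ENC_PLAINTEXT);

	/* Nothing left to try: the connection attempt as a whole fails. */
	conn->current_enc_method = ENC_ERROR;
	return false;
}
```

After a rejected attempt, the caller would mark `conn->failed_enc_methods |= conn->current_enc_method` and call the function again, reconnecting in between — mirroring the `connect, directsslreject, reconnect, sslreject, authok -> plain` traces in the committed test's expected output.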
[ { "msg_contents": "pg_stat_progress_copy was added in v14 (8a4f618e7, 9d2d45700).\n\nBut if a command JOINs file_fdw tables, the progress report gets bungled\nup. This will warn/assert during file_fdw tests.\n\ndiff --git a/src/backend/utils/activity/backend_progress.c b/src/backend/utils/activity/backend_progress.c\nindex 6743e68cef6..7abcb4f60db 100644\n--- a/src/backend/utils/activity/backend_progress.c\n+++ b/src/backend/utils/activity/backend_progress.c\n@@ -10,6 +10,7 @@\n */\n #include \"postgres.h\"\n \n+#include \"commands/progress.h\"\n #include \"port/atomics.h\"\t\t/* for memory barriers */\n #include \"utils/backend_progress.h\"\n #include \"utils/backend_status.h\"\n@@ -105,6 +106,20 @@ pgstat_progress_end_command(void)\n \tif (beentry->st_progress_command == PROGRESS_COMMAND_INVALID)\n \t\treturn;\n \n+// This currently fails file_fdw tests, since pgstat_progress evidently fails\n+// to support simultaneous copy commands, as happens during JOIN.\n+\t/* bytes progress is not available in all cases */\n+\tif (beentry->st_progress_command == PROGRESS_COMMAND_COPY &&\n+\t\t\tbeentry->st_progress_param[PROGRESS_COPY_BYTES_TOTAL] > 0)\n+\t{\n+\t\tvolatile int64 *a = beentry->st_progress_param;\n+\t\tif (a[PROGRESS_COPY_BYTES_PROCESSED] > a[PROGRESS_COPY_BYTES_TOTAL])\n+\t\t\telog(WARNING, \"PROGRESS_COPY_BYTES_PROCESSED %ld %ld\",\n+\t\t\t\t\ta[PROGRESS_COPY_BYTES_PROCESSED],\n+\t\t\t\t\ta[PROGRESS_COPY_BYTES_TOTAL]);\n+\t\t// Assert(a[PROGRESS_COPY_BYTES_PROCESSED] <= a[PROGRESS_COPY_BYTES_TOTAL]);\n+\t}\n+\n \tPGSTAT_BEGIN_WRITE_ACTIVITY(beentry);\n \tbeentry->st_progress_command = PROGRESS_COMMAND_INVALID;\n \tbeentry->st_progress_command_target = InvalidOid;\n\n\n", "msg_date": "Wed, 18 Jan 2023 23:47:03 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "bug: copy progress reporting of backends which run multiple COPYs" }, { "msg_contents": "On Thu, 19 Jan 2023 at 06:47, Justin Pryzby 
<pryzby@telsasoft.com> wrote:\n>\n> pg_stat_progress_copy was added in v14 (8a4f618e7, 9d2d45700).\n>\n> But if a command JOINs file_fdw tables, the progress report gets bungled\n> up. This will warn/assert during file_fdw tests.\n\nI don't know what to do with that other than disabling COPY progress\nreporting for file_fdw, i.e. calls to BeginCopyFrom that don't supply\na pstate. This is probably the best option, because a table backed by\nfile_fdw would also interfere with COPY TO's progress reporting.\n\nAttached a patch that solves this specific issue in a\nbinary-compatible way. I'm not super happy about relying on behavior\nof callers of BeginCopyFrom (assuming that users that run copy\nconcurrently will not provide a ParseState* to BeginCopyFrom), but it\nis what it is.\n\nKind regards,\n\nMatthias van de Meent", "msg_date": "Sat, 21 Jan 2023 01:51:28 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: bug: copy progress reporting of backends which run multiple COPYs" }, { "msg_contents": "On Fri, Jan 20, 2023 at 4:51 PM Matthias van de Meent <\nboekewurm+postgres@gmail.com> wrote:\n\n> On Thu, 19 Jan 2023 at 06:47, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > pg_stat_progress_copy was added in v14 (8a4f618e7, 9d2d45700).\n> >\n> > But if a command JOINs file_fdw tables, the progress report gets bungled\n> > up. This will warn/assert during file_fdw tests.\n>\n> I don't know what to do with that other than disabling COPY progress\n> reporting for file_fdw, i.e. calls to BeginCopyFrom that don't supply\n> a pstate. This is probably the best option, because a table backed by\n> file_fdw would also interfere with COPY TO's progress reporting.\n>\n> Attached a patch that solves this specific issue in a\n> binary-compatible way. 
I'm not super happy about relying on behavior\n> of callers of BeginCopyFrom (assuming that users that run copy\n> concurrently will not provide a ParseState* to BeginCopyFrom), but it\n> is what it is.\n>\n> Kind regards,\n>\n> Matthias van de Meent\n>\nHi,\nIn `BeginCopyFrom`, I see the following :\n\n if (pstate)\n {\n cstate->range_table = pstate->p_rtable;\n cstate->rteperminfos = pstate->p_rteperminfos;\n\nIs it possible to check range_table / rteperminfos so that we don't\nintroduce the bool field ?\n\nCheers", "msg_date": "Fri, 20 Jan 2023 17:03:43 -0800", "msg_from": "Ted Yu <yuzhihong@gmail.com>", "msg_from_op": false, "msg_subject": "Re: bug: copy progress reporting of backends which run multiple COPYs" }, { "msg_contents": "On Sat, 21 Jan 2023 at 02:04, Ted Yu <yuzhihong@gmail.com> wrote:\n>\n>\n>\n> On Fri, Jan 20, 2023 at 4:51 PM Matthias van de Meent <boekewurm+postgres@gmail.com> wrote:\n>>\n>> Attached a patch that solves this specific issue in a\n>> binary-compatible way. I'm not super happy about relying on behavior\n>> of callers of BeginCopyFrom (assuming that users that run copy\n>> concurrently will not provide a ParseState* to BeginCopyFrom), but it\n>> is what it is.\n>\n> Is it possible to check range_table / rteperminfos so that we don't introduce the bool field ?\n\nI think yes, but I'm not sure we can depend on rteperminfos to be set,\nand the same for p_rtable. 
Adding a well-named field provides a much\nbetter experience in my opinion.\n\nIf someone were opposed to adding that field in backbranches I'm fine\nwith using one of these instead, assuming additional clear\ndocumentation is added as well.\n\n- Matthias\n\n\n", "msg_date": "Sat, 21 Jan 2023 02:22:12 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: bug: copy progress reporting of backends which run multiple COPYs" }, { "msg_contents": "On Sat, Jan 21, 2023 at 01:51:28AM +0100, Matthias van de Meent wrote:\n> On Thu, 19 Jan 2023 at 06:47, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > pg_stat_progress_copy was added in v14 (8a4f618e7, 9d2d45700).\n> >\n> > But if a command JOINs file_fdw tables, the progress report gets bungled\n> > up. This will warn/assert during file_fdw tests.\n> \n> I don't know what to do with that other than disabling COPY progress\n> reporting for file_fdw, i.e. calls to BeginCopyFrom that don't supply\n> a pstate. This is probably the best option, because a table backed by\n> file_fdw would also interfere with COPY TO's progress reporting.\n> \n> Attached a patch that solves this specific issue in a\n> binary-compatible way. I'm not super happy about relying on behavior\n> of callers of BeginCopyFrom (assuming that users that run copy\n> concurrently will not provide a ParseState* to BeginCopyFrom), but it\n> is what it is.\n\nThanks for looking. Maybe another option is to avoid progress reporting\nin 2nd and later CopyFrom() if another COPY was already running in that\nbackend.\n\nWould you do anything different in the master branch, with no\ncompatibility constraints ? 
I think the progress reporting would still\nbe limited to one row per backend, not one per CopyFrom().\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 20 Jan 2023 19:28:02 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: bug: copy progress reporting of backends which run multiple COPYs" }, { "msg_contents": "On Sat, 21 Jan 2023 at 02:28, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Sat, Jan 21, 2023 at 01:51:28AM +0100, Matthias van de Meent wrote:\n> > On Thu, 19 Jan 2023 at 06:47, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > >\n> > > pg_stat_progress_copy was added in v14 (8a4f618e7, 9d2d45700).\n> > >\n> > > But if a command JOINs file_fdw tables, the progress report gets bungled\n> > > up. This will warn/assert during file_fdw tests.\n> >\n> > I don't know what to do with that other than disabling COPY progress\n> > reporting for file_fdw, i.e. calls to BeginCopyFrom that don't supply\n> > a pstate. This is probably the best option, because a table backed by\n> > file_fdw would also interfere with COPY TO's progress reporting.\n> >\n> > Attached a patch that solves this specific issue in a\n> > binary-compatible way. I'm not super happy about relying on behavior\n> > of callers of BeginCopyFrom (assuming that users that run copy\n> > concurrently will not provide a ParseState* to BeginCopyFrom), but it\n> > is what it is.\n>\n> Thanks for looking. Maybe another option is to avoid progress reporting\n> in 2nd and later CopyFrom() if another COPY was already running in that\n> backend.\n\nLet me think about it. I think it would work, but I'm not sure that's\na great option - it adds backend state that we would need to add\ncleanup handles for. But then again, COPY ... 
TO could use TRIGGER to\ntrigger actual COPY FROM statements, which would also break progress\nreporting in a vanilla instance without extensions.\n\nI'm not sure what the right thing to do is here.\n\n> Would you do anything different in the master branch, with no\n> compatibility constraints ? I think the progress reporting would still\n> be limited to one row per backend, not one per CopyFrom().\n\nI think I would at least introduce another parameter to BeginCopyFrom\nfor progress reporting (instead of relying on pstate != NULL), like\nhow we have a bit in reindex_index's params->options that specifies\nwhether we want progress reporting (which is unset for parallel\nworkers iirc).\n\n- Matthias\n\n\n", "msg_date": "Sat, 21 Jan 2023 02:45:40 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: bug: copy progress reporting of backends which run multiple COPYs" }, { "msg_contents": "On Sat, Jan 21, 2023 at 02:45:40AM +0100, Matthias van de Meent wrote:\n> Let me think about it. I think it would work, but I'm not sure that's\n> a great option - it adds backend state that we would need to add\n> cleanup handles for. But then again, COPY ... TO could use TRIGGER to\n> trigger actual COPY FROM statements, which would also break progress\n> reporting in a vanilla instance without extensions.\n> \n> I'm not sure what the right thing to do is here.\n\nSimply disabling COPY reporting for file_fdw does not sound appealing\nto me, because that can be really useful for users. As long as you\nrely on two code paths that call the progress reporting separately,\nthings are doomed with the current infrastructure. For example,\nthinking about some fancy cases, you could create an index that\nuses a function as an expression, a function that includes utility\ncommands that do progress reporting. 
Things would equally go wrong in the\nprogress view.\n\nWhat are the assertions/warnings that get triggered in file_fdw?\nJoining together file_fdw with a plain COPY is surely a fancy case,\neven if COPY TO would allow that.\n\n>> Would you do anything different in the master branch, with no\n>> compatibility constraints ? I think the progress reporting would still\n>> be limited to one row per backend, not one per CopyFrom().\n> \n> I think I would at least introduce another parameter to BeginCopyFrom\n> for progress reporting (instead of relying on pstate != NULL), like\n> how we have a bit in reindex_index's params->options that specifies\n> whether we want progress reporting (which is unset for parallel\n> workers iirc).\n\nThis makes sense, independently of the discussion about what should be\ndone with cross-runs of these APIs. This could be extended with\nuser-controllable options for each one of them. It does not take care\nof the root of the problem, just bypasses it.\n--\nMichael", "msg_date": "Sat, 28 Jan 2023 11:55:05 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: bug: copy progress reporting of backends which run multiple COPYs" }, { "msg_contents": "On Sat, Jan 21, 2023 at 02:45:40AM +0100, Matthias van de Meent wrote:\n> > Would you do anything different in the master branch, with no\n> > compatibility constraints ? 
I think the progress reporting would still\n> > be limited to one row per backend, not one per CopyFrom().\n> \n> I think I would at least introduce another parameter to BeginCopyFrom\n> for progress reporting (instead of relying on pstate != NULL), like\n> how we have a bit in reindex_index's params->options that specifies\n> whether we want progress reporting (which is unset for parallel\n> workers iirc).\n\nThis didn't get fixed for v16, and it seems unlikely that it'll be\naddressed in back branches.\n\nBut while I was reviewing forgotten threads, it occurred to me to raise\nthe issue in time to fix it for v17.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 7 May 2024 07:27:54 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: bug: copy progress reporting of backends which run multiple COPYs" }, { "msg_contents": "On Tue, May 07, 2024 at 07:27:54AM -0500, Justin Pryzby wrote:\n> This didn't get fixed for v16, and it seems unlikely that it'll be\n> addressed in back branches.\n> \n> But while I was reviewing forgotten threads, it occurred to me to raise\n> the issue in time to fix it for v17.\n\nThanks for the reminder.\n\nFWIW, I'm rather annoyed by the fact that we rely on the ParseState to\ndecide if reporting should happen or not. file_fdw tells, even if\nthat's accidental, that status reporting can be useful if working on a\nsingle table. So, shutting down the whole reporting if a caller if\nBeginCopyFrom() does not give a ParseState is too heavy-handed, IMO.\n\nThe addition of report_progress in the COPY FROM state data is a good\nidea, though isn't that something we should give as an argument of\nBeginCopyFrom() instead if sticking this knowledge in COPY FROM?\n\nDifferent idea: could it be something worth controlling with a\nquery-level option? 
It would then be possible to provide the same\nlevel of control for COPY TO which has reporting paths, given the\napplication control over the reporting even with file_fdw, and store\nthe value controlling the reporting in CopyFormatOptions. I am\nwondering if there would be a case where someone wants to do a COPY\nbut hide entirely the reporting done.\n\nThe query-level option has some appeal.\n--\nMichael", "msg_date": "Wed, 8 May 2024 10:12:28 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: bug: copy progress reporting of backends which run multiple COPYs" }, { "msg_contents": "On Tue, May 7, 2024 at 9:12 PM Michael Paquier <michael@paquier.xyz> wrote:\n> FWIW, I'm rather annoyed by the fact that we rely on the ParseState to\n> decide if reporting should happen or not. file_fdw tells, even if\n> that's accidental, that status reporting can be useful if working on a\n> single table. So, shutting down the whole reporting if a caller if\n> BeginCopyFrom() does not give a ParseState is too heavy-handed, IMO.\n\nI think you're hoping for too much. The progress reporting\ninfrastructure is fundamentally designed around the idea that there\ncan only be one progress-reporting operation in progress at a time.\nFor COPY, that is, I believe, true, but for file_fdw, it's false. If\nwe want to do any kind of progress reporting from within plannable\nqueries, we need some totally different and much more complex\ninfrastructure that can report progress for, probably, each plan node\nindividually. I think it's likely a mistake to try to shoehorn cases\nlike this into the infrastructure\nthat we have today. 
It will just encourage people to try to use the\ncurrent infrastructure in ways that are less and less like what it was\nactually designed to do; whereas what we should be doing if we want\nthis kind of functionality, at least IMHO, is building infrastructure\nthat's actually fit for purpose.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 8 May 2024 10:07:15 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: bug: copy progress reporting of backends which run multiple COPYs" }, { "msg_contents": "On Wed, May 08, 2024 at 10:07:15AM -0400, Robert Haas wrote:\n> I think you're hoping for too much. The progress reporting\n> infrastructure is fundamentally designed around the idea that there\n> can only be one progress-reporting operation in progress at a time.\n> For COPY, that is, I believe, true, but for file_fdw, it's false. If\n> we want to do any kind of progress reporting from within plannable\n> queries, we need some totally different and much more complex\n> infrastructure that can report progress for, probably, each plan node\n> individually. I think it's likely a mistake to try to shoehorn cases\n> like this into the infrastructure\n> that we have today. It will just encourage people to try to use the\n> current infrastructure in ways that are less and less like what it was\n> actually designed to do; whereas what we should be doing if we want\n> this kind of functionality, at least IMHO, is building infrastructure\n> that's actually fit for purpose.\n\nHmm. OK. I have been looking around for cases out there where\nBeginCopyFrom() could be called with a pstate where the reporting\ncould matter, and could not find anything worth worrying about. It\nstill makes me a bit uneasy to not have a separate argument in the\nfunction to control that. 
Now, if you, Justin and Matthias agree with\nthis approach, I won't stand in the way either.\n--\nMichael", "msg_date": "Thu, 9 May 2024 08:57:32 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: bug: copy progress reporting of backends which run multiple COPYs" } ]
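The API shape discussed in this thread — gating COPY progress reporting on an explicit flag passed to BeginCopyFrom() instead of inferring it from pstate != NULL — can be illustrated with a minimal stand-alone C sketch. All names here (CopyProgress, begin_copy_from, and so on) are hypothetical stand-ins, not the actual PostgreSQL API; the real backend reports through the pgstat_progress_* machinery into a single per-backend slot, which is exactly why a nested COPY (e.g. one driven by file_fdw inside a query) can clobber the outer command's row.

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for the single per-backend progress slot behind
 * pg_stat_progress_copy: one row per backend, so two concurrent
 * COPYs in one backend would otherwise overwrite each other. */
typedef struct CopyProgress
{
    bool active;
    long tuples_processed;
} CopyProgress;

static CopyProgress backend_progress;   /* one slot per backend */

/* Sketch of a COPY FROM state that records whether this particular
 * copy owns the progress slot. */
typedef struct CopyFromState
{
    bool report_progress;
} CopyFromState;

static CopyFromState
begin_copy_from(bool report_progress)
{
    CopyFromState cstate = { .report_progress = false };

    /*
     * Only claim the slot when reporting was requested and nobody holds
     * it yet; a nested COPY then leaves the outer command's row alone.
     */
    if (report_progress && !backend_progress.active)
    {
        backend_progress.active = true;
        backend_progress.tuples_processed = 0;
        cstate.report_progress = true;
    }
    return cstate;
}

static void
copy_from_one_tuple(CopyFromState *cstate)
{
    if (cstate->report_progress)
        backend_progress.tuples_processed++;
}

static void
end_copy_from(CopyFromState *cstate)
{
    if (cstate->report_progress)
        backend_progress.active = false;    /* release the slot */
}
```

With this shape, a caller such as file_fdw simply never claims the slot (or passes report_progress = false), and the outer COPY's counters stay consistent — the same effect as the reindex_index params->options bit mentioned above, without overloading the meaning of a NULL ParseState.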
[ { "msg_contents": "Dear hackers, my good friend Hou Jiaxing and I have implemented a version of the code that supports multiple integer range conditions in the in condition control of the for loop statement in the plpgsql procedural language. A typical example is as follows:\r\n\r\npostgres=# do $$\r\ndeclare\r\n i int := 10;\r\nbegin\r\n for i in 1..10 by 3, reverse i+10..i+1 by 3 loop\r\n raise info '%', i;\r\n end loop;\r\nend $$;\r\nINFO: 1\r\nINFO: 4\r\nINFO: 7\r\nINFO: 10\r\nINFO: 20\r\nINFO: 17\r\nINFO: 14\r\nINFO: 11\r\ndo\r\npostgres=#\r\n\r\nHope to get your feedback, thank you!\r\n\r\n\r\n\r\n2903807914@qq.com", "msg_date": "Thu, 19 Jan 2023 17:23:05 +0800", "msg_from": "\"2903807914@qq.com\" <2903807914@qq.com>", "msg_from_op": true, "msg_subject": "Support plpgsql multi-range in conditional control" }, { "msg_contents": "Hi\n\n\nčt 19. 1. 2023 v 10:23 odesílatel 2903807914@qq.com <2903807914@qq.com>\nnapsal:\n\n> Dear hackers, my good friend Hou Jiaxing and I have implemented a version\n> of the code that supports multiple integer range conditions in the in\n> condition control of the for loop statement in the plpgsql procedural\n> language. A typical example is as follows:\n>\n> postgres=# do $$\n> declare\n> i int := 10;\n> begin\n> for i in 1..10 by 3, reverse i+10..i+1 by 3 loop\n> raise info '%', i;\n> end loop;\n> end $$;\n> INFO: 1\n> INFO: 4\n> INFO: 7\n> INFO: 10\n> INFO: 20\n> INFO: 17\n> INFO: 14\n> INFO: 11\n> do\n> postgres=#\n>\n> Hope to get your feedback, thank you!\n>\n\nI don't like it. The original design of ADA language is to be a safe and\nsimple language. Proposed design is in 100% inversion.\n\nWhat use case it should to support?\n\nRegards\n\nPavel\n\n\n>\n> ------------------------------\n> 2903807914@qq.com\n>\n\nHičt 19. 1. 
2023 v 10:23 odesílatel 2903807914@qq.com <2903807914@qq.com> napsal:\nDear hackers, my good friend Hou Jiaxing and I have implemented a version of the code that supports multiple integer range conditions in the in condition control of the for loop statement in the plpgsql procedural language. A typical example is as follows:postgres=# do $$declare    i int := 10;begin    for i in 1..10 by 3, reverse i+10..i+1 by 3 loop       raise info '%', i;    end loop;end $$;INFO: 1INFO: 4INFO: 7INFO: 10INFO: 20INFO: 17INFO: 14INFO: 11dopostgres=#Hope to get your feedback, thank you!I don't like it. The original design of ADA language is to be a safe and simple language. Proposed design is in 100% inversion.What use case it should to support?RegardsPavel \n\n2903807914@qq.com", "msg_date": "Thu, 19 Jan 2023 14:04:41 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Support plpgsql multi-range in conditional control" }, { "msg_contents": "Hello, thank you very much for your reply. But I think you may have misunderstood what we have done. \r\n\r\nWhat we do this time is that we can use multiple range ranges (condition_iterator) after in. Previously, we can only use such an interval [lower, upper] after in, but in some scenarios, we may need a list: condition_ iterator[,condition_iterator ...]\r\n\r\ncondition_iterator:\r\n[ REVERSE ] expression .. expression [ BY expression ] \r\n\r\nThanks again!\r\n\r\n\r\nsongjinzhou (2903807914@qq.com)\r\n \r\nFrom: Pavel Stehule\r\nDate: 2023-01-19 21:04\r\nTo: 2903807914@qq.com\r\nCC: pgsql-hackers; 1276576182\r\nSubject: Re: Support plpgsql multi-range in conditional control\r\nHi\r\n\r\n\r\nčt 19. 1. 
2023 v 10:23 odesílatel 2903807914@qq.com <2903807914@qq.com> napsal:\r\nDear hackers, my good friend Hou Jiaxing and I have implemented a version of the code that supports multiple integer range conditions in the in condition control of the for loop statement in the plpgsql procedural language. A typical example is as follows:\r\n\r\npostgres=# do $$\r\ndeclare\r\n i int := 10;\r\nbegin\r\n for i in 1..10 by 3, reverse i+10..i+1 by 3 loop\r\n raise info '%', i;\r\n end loop;\r\nend $$;\r\nINFO: 1\r\nINFO: 4\r\nINFO: 7\r\nINFO: 10\r\nINFO: 20\r\nINFO: 17\r\nINFO: 14\r\nINFO: 11\r\ndo\r\npostgres=#\r\n\r\nHope to get your feedback, thank you!\r\n\r\nI don't like it. The original design of ADA language is to be a safe and simple language. Proposed design is in 100% inversion.\r\n\r\nWhat use case it should to support?\r\n\r\nRegards\r\n\r\nPavel\r\n \r\n\r\n\r\n\r\n2903807914@qq.com\r\n\n\nHello, thank you very much for your reply. But I think you may have misunderstood what we have done. What we do this time is that we can use multiple range ranges (condition_iterator) after in. Previously, we can only use such an interval [lower, upper] after in, but in some scenarios, we may need a list: condition_ iterator[,condition_iterator ...]condition_iterator:[ REVERSE ] expression .. expression [ BY expression ] \nThanks again!\nsongjinzhou (2903807914@qq.com)\n From: Pavel StehuleDate: 2023-01-19 21:04To: 2903807914@qq.comCC: pgsql-hackers; 1276576182Subject: Re: Support plpgsql multi-range in conditional controlHičt 19. 1. 2023 v 10:23 odesílatel 2903807914@qq.com <2903807914@qq.com> napsal:\nDear hackers, my good friend Hou Jiaxing and I have implemented a version of the code that supports multiple integer range conditions in the in condition control of the for loop statement in the plpgsql procedural language. 
A typical example is as follows:postgres=# do $$declare    i int := 10;begin    for i in 1..10 by 3, reverse i+10..i+1 by 3 loop       raise info '%', i;    end loop;end $$;INFO: 1INFO: 4INFO: 7INFO: 10INFO: 20INFO: 17INFO: 14INFO: 11dopostgres=#Hope to get your feedback, thank you!I don't like it. The original design of ADA language is to be a safe and simple language. Proposed design is in 100% inversion.What use case it should to support?RegardsPavel \n\n2903807914@qq.com", "msg_date": "Thu, 19 Jan 2023 22:19:54 +0800", "msg_from": "\"2903807914@qq.com\" <2903807914@qq.com>", "msg_from_op": true, "msg_subject": "Re: Re: Support plpgsql multi-range in conditional control" }, { "msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> čt 19. 1. 2023 v 10:23 odesílatel 2903807914@qq.com <2903807914@qq.com>\n> napsal:\n>> Dear hackers, my good friend Hou Jiaxing and I have implemented a version\n>> of the code that supports multiple integer range conditions in the in\n>> condition control of the for loop statement in the plpgsql procedural\n>> language. A typical example is as follows:\n\n> I don't like it. The original design of ADA language is to be a safe and\n> simple language. Proposed design is in 100% inversion.\n\nYeah, I'm pretty dubious about this too. plpgsql's FOR-loop syntax is\nalready badly overloaded, to the point where it's hard to separate\nthe true intent of a statement. We have very ad-hoc rules in there\nlike \"if the first thing after IN is a var of type refcursor, then\nit's FOR-IN-cursor, otherwise it couldn't possibly be that\". (So\nmuch for functions returning refcursor, for example.) Similarly the\n\"FOR x IN m..n\" syntax has a shaky assumption that \"..\" couldn't\npossibly appear in mainline SQL. 
If you make any sort of syntax\nerror you're likely to get a very unintelligible complaint --- or\nworse, it might take it and do something you did not expect.\n\nI fear that allowing more complexity in \"FOR x IN m..n\" will make\nthose problems even worse. The proposed patch gives comma a special\nstatus akin to \"..\"'s, but comma definitely *can* appear within SQL\nexpressions --- admittedly, it should only appear within parentheses,\nbut now you're reliant on the user keeping their parenthesization\nstraight in order to avoid going off into the weeds. I think this\nchange increases the chances of confusion with FOR-IN-SELECT as well.\n\nIf there were a compelling use-case for what you suggest then\nmaybe it'd be worth accepting those risks. But I share Pavel's\nopinion that there's little use-case. We've not heard a request\nfor such a feature before, AFAIR.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 19 Jan 2023 10:23:37 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Support plpgsql multi-range in conditional control" }, { "msg_contents": "čt 19. 1. 2023 v 15:20 odesílatel 2903807914@qq.com <2903807914@qq.com>\nnapsal:\n\n> Hello, thank you very much for your reply. But I think you may have\n> misunderstood what we have done.\n>\n> What we do this time is that we can use multiple range ranges\n> (condition_iterator) after in. Previously, we can only use such an interval\n> [lower, upper] after in, but in some scenarios, we may need a list: *condition_\n> iterator[,condition_iterator ...]*\n>\n> condition_iterator:\n> [ REVERSE ] expression .. 
expression [ BY expression ]\n>\n\nthen you can use second outer for over an array or just while cycle\n\nReards\n\nPavel\n\n\n>\n> Thanks again!\n> ------------------------------\n> songjinzhou (2903807914@qq.com)\n>\n>\n> *From:* Pavel Stehule <pavel.stehule@gmail.com>\n> *Date:* 2023-01-19 21:04\n> *To:* 2903807914@qq.com\n> *CC:* pgsql-hackers <pgsql-hackers@lists.postgresql.org>; 1276576182\n> <1276576182@qq.com>\n> *Subject:* Re: Support plpgsql multi-range in conditional control\n> Hi\n>\n>\n> čt 19. 1. 2023 v 10:23 odesílatel 2903807914@qq.com <2903807914@qq.com>\n> napsal:\n>\n>> Dear hackers, my good friend Hou Jiaxing and I have implemented a version\n>> of the code that supports multiple integer range conditions in the in\n>> condition control of the for loop statement in the plpgsql procedural\n>> language. A typical example is as follows:\n>>\n>> postgres=# do $$\n>> declare\n>> i int := 10;\n>> begin\n>> for i in 1..10 by 3, reverse i+10..i+1 by 3 loop\n>> raise info '%', i;\n>> end loop;\n>> end $$;\n>> INFO: 1\n>> INFO: 4\n>> INFO: 7\n>> INFO: 10\n>> INFO: 20\n>> INFO: 17\n>> INFO: 14\n>> INFO: 11\n>> do\n>> postgres=#\n>>\n>> Hope to get your feedback, thank you!\n>>\n>\n> I don't like it. The original design of ADA language is to be a safe and\n> simple language. Proposed design is in 100% inversion.\n>\n> What use case it should to support?\n>\n> Regards\n>\n> Pavel\n>\n>\n>>\n>> ------------------------------\n>> 2903807914@qq.com\n>>\n>\n\nčt 19. 1. 2023 v 15:20 odesílatel 2903807914@qq.com <2903807914@qq.com> napsal:\nHello, thank you very much for your reply. But I think you may have misunderstood what we have done. What we do this time is that we can use multiple range ranges (condition_iterator) after in. Previously, we can only use such an interval [lower, upper] after in, but in some scenarios, we may need a list: condition_ iterator[,condition_iterator ...]condition_iterator:[ REVERSE ] expression .. 
expression [ BY expression ] then you can use second outer for over an array or just while cycleReardsPavel \nThanks again!\nsongjinzhou (2903807914@qq.com)\n From: Pavel StehuleDate: 2023-01-19 21:04To: 2903807914@qq.comCC: pgsql-hackers; 1276576182Subject: Re: Support plpgsql multi-range in conditional controlHičt 19. 1. 2023 v 10:23 odesílatel 2903807914@qq.com <2903807914@qq.com> napsal:\nDear hackers, my good friend Hou Jiaxing and I have implemented a version of the code that supports multiple integer range conditions in the in condition control of the for loop statement in the plpgsql procedural language. A typical example is as follows:postgres=# do $$declare    i int := 10;begin    for i in 1..10 by 3, reverse i+10..i+1 by 3 loop       raise info '%', i;    end loop;end $$;INFO: 1INFO: 4INFO: 7INFO: 10INFO: 20INFO: 17INFO: 14INFO: 11dopostgres=#Hope to get your feedback, thank you!I don't like it. The original design of ADA language is to be a safe and simple language. Proposed design is in 100% inversion.What use case it should to support?RegardsPavel \n\n2903807914@qq.com", "msg_date": "Thu, 19 Jan 2023 16:54:43 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Re: Support plpgsql multi-range in conditional control" }, { "msg_contents": "čt 19. 1. 2023 v 16:54 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n>\n>\n> čt 19. 1. 2023 v 15:20 odesílatel 2903807914@qq.com <2903807914@qq.com>\n> napsal:\n>\n>> Hello, thank you very much for your reply. But I think you may have\n>> misunderstood what we have done.\n>>\n>> What we do this time is that we can use multiple range ranges\n>> (condition_iterator) after in. Previously, we can only use such an interval\n>> [lower, upper] after in, but in some scenarios, we may need a list: *condition_\n>> iterator[,condition_iterator ...]*\n>>\n>> condition_iterator:\n>> [ REVERSE ] expression .. 
expression [ BY expression ]\n>>\n>\n> then you can use second outer for over an array or just while cycle\n>\n\nI wrote simple example:\n\ncreate type range_expr as (r int4range, s int);\n\ndo\n$$\ndeclare re range_expr;\nbegin\n foreach re in array ARRAY[('[10, 20]', 1), ('[100, 200]', 10)]\n loop\n for i in lower(re.r) .. upper(re.r) by re.s\n loop\n raise notice '%', i;\n end loop;\n end loop;\nend;\n$$;\n\nBut just I don't know what is wrong on\n\nbegin\n for i in 10..20\n loop\n raise notice '%', i;\n end loop;\n\n for i in 100 .. 200 by 10\n loop\n raise notice '%', i;\n end loop;\nend;\n\nand if there are some longer bodies you should use function or procedure.\nAny different cycle is separated. PLpgSQL (like PL/SQL or ADA) are verbose\nlanguages. There is no goal to have short, heavy code.\n\nRegards\n\nPavel\n\nčt 19. 1. 2023 v 16:54 odesílatel Pavel Stehule <pavel.stehule@gmail.com> napsal:čt 19. 1. 2023 v 15:20 odesílatel 2903807914@qq.com <2903807914@qq.com> napsal:\nHello, thank you very much for your reply. But I think you may have misunderstood what we have done. What we do this time is that we can use multiple range ranges (condition_iterator) after in. Previously, we can only use such an interval [lower, upper] after in, but in some scenarios, we may need a list: condition_ iterator[,condition_iterator ...]condition_iterator:[ REVERSE ] expression .. expression [ BY expression ] then you can use second outer for over an array or just while cycleI wrote simple example:create type range_expr as (r int4range, s int);do $$declare re range_expr;begin  foreach re in array ARRAY[('[10, 20]', 1), ('[100, 200]', 10)]  loop    for i in lower(re.r) .. upper(re.r) by re.s    loop      raise notice '%', i;    end loop;  end loop;end;$$;But just I don't know what is wrong on begin  for i in 10..20  loop    raise notice '%', i;  end loop;  for i in 100 .. 
200 by 10  loop    raise notice '%', i;  end loop;end;and if there are some longer bodies you should use function or procedure. Any different cycle is separated. PLpgSQL (like PL/SQL or ADA) are verbose languages. There is no goal to have short, heavy code.RegardsPavel", "msg_date": "Thu, 19 Jan 2023 17:17:24 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Re: Support plpgsql multi-range in conditional control" }, { "msg_contents": "Hello, Pavel Stehule:\r\n\r\nThank you very much for your verification. The test cases you provided work well here:\r\n\r\n\r\n\r\nFor your second example, we can easily merge, as follows:\r\n\r\n\r\n\r\nFor scenarios that can be merged, we can choose to use this function to reduce code redundancy; If the operations performed in the loop are different, you can still select the previous use method, as follows:\r\n\r\n\r\n\r\nIn response to Tom's question about cursor and the case of in select: I don't actually allow such syntax here. The goal is simple: we only expand the range of integers after in, and other cases remain the same.\r\nThank you again for your ideas. Such a discussion is very meaningful!\r\n\r\n\r\n\r\n\r\nsongjinzhou(2903807914@qq.com)\r\n \r\nFrom: Pavel Stehule\r\nDate: 2023-01-20 00:17\r\nTo: 2903807914@qq.com\r\nCC: pgsql-hackers\r\nSubject: Re: Re: Support plpgsql multi-range in conditional control\r\n\r\n\r\nčt 19. 1. 2023 v 16:54 odesílatel Pavel Stehule <pavel.stehule@gmail.com> napsal:\r\n\r\n\r\nčt 19. 1. 2023 v 15:20 odesílatel 2903807914@qq.com <2903807914@qq.com> napsal:\r\nHello, thank you very much for your reply. But I think you may have misunderstood what we have done. \r\n\r\nWhat we do this time is that we can use multiple range ranges (condition_iterator) after in. 
Previously, we can only use such an interval [lower, upper] after in, but in some scenarios, we may need a list: condition_ iterator[,condition_iterator ...]\r\n\r\ncondition_iterator:\r\n[ REVERSE ] expression .. expression [ BY expression ] \r\n\r\nthen you can use second outer for over an array or just while cycle\r\n\r\nI wrote simple example:\r\n\r\ncreate type range_expr as (r int4range, s int);\r\n\r\ndo \r\n$$\r\ndeclare re range_expr;\r\nbegin\r\n foreach re in array ARRAY[('[10, 20]', 1), ('[100, 200]', 10)]\r\n loop\r\n for i in lower(re.r) .. upper(re.r) by re.s\r\n loop\r\n raise notice '%', i;\r\n end loop;\r\n end loop;\r\nend;\r\n$$;\r\n\r\nBut just I don't know what is wrong on \r\n\r\nbegin\r\n for i in 10..20\r\n loop\r\n raise notice '%', i;\r\n end loop;\r\n\r\n for i in 100 .. 200 by 10\r\n loop\r\n raise notice '%', i;\r\n end loop;\r\nend;\r\n\r\nand if there are some longer bodies you should use function or procedure. Any different cycle is separated. PLpgSQL (like PL/SQL or ADA) are verbose languages. There is no goal to have short, heavy code.\r\n\r\nRegards\r\n\r\nPavel", "msg_date": "Fri, 20 Jan 2023 11:25:48 +0800", "msg_from": "songjinzhou <2903807914@qq.com>", "msg_from_op": false, "msg_subject": "Re: Re: Support plpgsql multi-range in conditional control" }, { "msg_contents": "Hi\n\npá 20. 1. 2023 v 4:25 odesílatel songjinzhou <2903807914@qq.com> napsal:\n\n> Hello, Pavel Stehule:\n>\n> Thank you very much for your verification. The test cases you provided\n> work well here:\n>\n>\n>\n> For your second example, we can easily merge, as follows:\n>\n>\n>\n> For scenarios that can be merged, we can choose to use this function to\n> reduce code redundancy; If the operations performed in the loop are\n> different, you can still select the previous use method, as follows:\n>\n>\n>\n> In response to Tom's question about cursor and the case of in select: I\n> don't actually allow such syntax here. 
The goal is simple: we only expand\n> the range of integers after in, and other cases remain the same.\n> Thank you again for your ideas. Such a discussion is very meaningful!\n>\n>\n> ------------------------------\n> songjinzhou(2903807914@qq.com)\n>\n>\n> *From:* Pavel Stehule <pavel.stehule@gmail.com>\n> *Date:* 2023-01-20 00:17\n> *To:* 2903807914@qq.com\n> *CC:* pgsql-hackers <pgsql-hackers@lists.postgresql.org>\n> *Subject:* Re: Re: Support plpgsql multi-range in conditional control\n>\n>\n> čt 19. 1. 2023 v 16:54 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\n> napsal:\n>\n>>\n>>\n>> čt 19. 1. 2023 v 15:20 odesílatel 2903807914@qq.com <2903807914@qq.com>\n>> napsal:\n>>\n>>> Hello, thank you very much for your reply. But I think you may have\n>>> misunderstood what we have done.\n>>>\n>>> What we do this time is that we can use multiple range ranges\n>>> (condition_iterator) after in. Previously, we can only use such an interval\n>>> [lower, upper] after in, but in some scenarios, we may need a list: *condition_\n>>> iterator[,condition_iterator ...]*\n>>>\n>>> condition_iterator:\n>>> [ REVERSE ] expression .. expression [ BY expression ]\n>>>\n>>\n>> then you can use second outer for over an array or just while cycle\n>>\n>\n> I wrote simple example:\n>\n> create type range_expr as (r int4range, s int);\n>\n> do\n> $$\n> declare re range_expr;\n> begin\n> foreach re in array ARRAY[('[10, 20]', 1), ('[100, 200]', 10)]\n> loop\n> for i in lower(re.r) .. upper(re.r) by re.s\n> loop\n> raise notice '%', i;\n> end loop;\n> end loop;\n> end;\n> $$;\n>\n> But just I don't know what is wrong on\n>\n> begin\n> for i in 10..20\n> loop\n> raise notice '%', i;\n> end loop;\n>\n> for i in 100 .. 200 by 10\n> loop\n> raise notice '%', i;\n> end loop;\n> end;\n>\n> and if there are some longer bodies you should use function or procedure.\n> Any different cycle is separated. PLpgSQL (like PL/SQL or ADA) are verbose\n> languages. 
There is no goal to have short, heavy code.\n>\n> Regards\n>\n> Pavel\n>\n>\nMaybe you didn't understand my reply. Without some significant real use\ncase, I am strongly against the proposed feature and merging your patch to\nupstream. I don't see any reason to enhance language with this feature.\n\nRegards\n\nPavel", "msg_date": "Fri, 20 Jan 2023 05:28:02 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Re: Support plpgsql multi-range in conditional control" }, { "msg_contents": "Hello, this usage scenario is from Oracle's PL/SQL language (I have been doing the function development of PL/SQL language for some time). I think this patch is very practical and will expand our for loop scenario. In short, I look forward to your reply.\r\n\r\nHappy Chinese New Year!\r\n\r\n\r\n\r\nsongjinzhou(2903807914@qq.com)\r\n\r\nMaybe you didn't understand my reply. Without some significant real use case, I am strongly against the proposed feature and merging your patch to upstream. I don't see any reason to enhance language with this feature.\r\n\r\nRegards\r\n\r\nPavel\r\n\r\n\r\n\n\nHello, this usage scenario is from Oracle's PL/SQL language (I have been doing the function development of PL/SQL language for some time). I think this patch is very practical and will expand our for loop scenario. In short, I look forward to your reply.Happy Chinese New Year!\n\nsongjinzhou(2903807914@qq.com)\nMaybe you didn't understand my reply. Without some significant real use case, I am strongly against the proposed feature and merging your patch to upstream. I don't see any reason to enhance language with this feature.RegardsPavel", "msg_date": "Wed, 25 Jan 2023 22:18:17 +0800", "msg_from": "songjinzhou <2903807914@qq.com>", "msg_from_op": false, "msg_subject": "Re: Re: Support plpgsql multi-range in conditional control" }, { "msg_contents": "Hi\n\n\nst 25. 1. 
2023 v 15:18 odesílatel songjinzhou <2903807914@qq.com> napsal:\n\n> Hello, this usage scenario is from Oracle's PL/SQL language (I have been\n> doing the function development of PL/SQL language for some time). I think\n> this patch is very practical and will expand our for loop scenario. In\n> short, I look forward to your\n>\n\nI don't see any real usage. PL/SQL doesn't support proposed syntax.\n\nRegards\n\nPavel\n\n\n\n> reply.\n>\n> Happy Chinese New Year!\n>\n> ------------------------------\n> songjinzhou(2903807914@qq.com)\n>\n>\n> Maybe you didn't understand my reply. Without some significant real use\n> case, I am strongly against the proposed feature and merging your patch to\n> upstream. I don't see any reason to enhance language with this feature.\n>\n> Regards\n>\n> Pavel\n>\n>\n>\n\nHist 25. 1. 2023 v 15:18 odesílatel songjinzhou <2903807914@qq.com> napsal:\nHello, this usage scenario is from Oracle's PL/SQL language (I have been doing the function development of PL/SQL language for some time). I think this patch is very practical and will expand our for loop scenario. In short, I look forward to yourI don't see any real usage. PL/SQL doesn't support proposed syntax.RegardsPavel  reply.Happy Chinese New Year!\n\nsongjinzhou(2903807914@qq.com)\nMaybe you didn't understand my reply. Without some significant real use case, I am strongly against the proposed feature and merging your patch to upstream. I don't see any reason to enhance language with this feature.RegardsPavel", "msg_date": "Wed, 25 Jan 2023 15:21:26 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Re: Support plpgsql multi-range in conditional control" }, { "msg_contents": "Hello, this is the target I refer to. 
At present, our patch supports this usage, so I later thought of developing this patch.\r\n\r\n\r\n\r\n\r\nsongjinzhou(2903807914@qq.com)\r\n \r\nFrom: Pavel Stehule\r\nDate: 2023-01-25 22:21\r\nTo: songjinzhou\r\nCC: pgsql-hackers\r\nSubject: Re: Re: Support plpgsql multi-range in conditional control\r\nHi\r\n\r\n\r\nst 25. 1. 2023 v 15:18 odesílatel songjinzhou <2903807914@qq.com> napsal:\r\nHello, this usage scenario is from Oracle's PL/SQL language (I have been doing the function development of PL/SQL language for some time). I think this patch is very practical and will expand our for loop scenario. In short, I look forward to your\r\n\r\nI don't see any real usage. PL/SQL doesn't support proposed syntax.\r\n\r\nRegards\r\n\r\nPavel\r\n\r\n \r\nreply.\r\n\r\nHappy Chinese New Year!\r\n\r\n\r\n\r\nsongjinzhou(2903807914@qq.com)\r\n\r\nMaybe you didn't understand my reply. Without some significant real use case, I am strongly against the proposed feature and merging your patch to upstream. I don't see any reason to enhance language with this feature.\r\n\r\nRegards\r\n\r\nPavel", "msg_date": "Wed, 25 Jan 2023 22:39:25 +0800", "msg_from": "songjinzhou <2903807914@qq.com>", "msg_from_op": false, "msg_subject": "Re: Re: Support plpgsql multi-range in conditional control" }, { "msg_contents": "Hi\n\n\nst 25. 1. 2023 v 15:39 odesílatel songjinzhou <2903807914@qq.com> napsal:\n\n> Hello, this is the target I refer to. At present, our patch supports this\n> usage, so I later thought of developing this patch.\n>\n>\n> ------------------------------\n> songjinzhou(2903807914@qq.com)\n>\n>\n> *From:* Pavel Stehule <pavel.stehule@gmail.com>\n> *Date:* 2023-01-25 22:21\n> *To:* songjinzhou <2903807914@qq.com>\n> *CC:* pgsql-hackers <pgsql-hackers@lists.postgresql.org>\n> *Subject:* Re: Re: Support plpgsql multi-range in conditional control\n> Hi\n>\n>\nok, I was wrong, PL/SQL supports this syntax. But what is the real use\ncase? 
This is an example from the book.\n\nRegards\n\nPavel\n\n\n> st 25. 1. 2023 v 15:18 odesílatel songjinzhou <2903807914@qq.com> napsal:\n>\n>> Hello, this usage scenario is from Oracle's PL/SQL language (I have been\n>> doing the function development of PL/SQL language for some time). I think\n>> this patch is very practical and will expand our for loop scenario. In\n>> short, I look forward to your\n>>\n>\n> I don't see any real usage. PL/SQL doesn't support proposed syntax.\n>\n> Regards\n>\n> Pavel\n>\n>\n>\n>> reply.\n>>\n>> Happy Chinese New Year!\n>>\n>> ------------------------------\n>> songjinzhou(2903807914@qq.com)\n>>\n>>\n>> Maybe you didn't understand my reply. Without some significant real use\n>> case, I am strongly against the proposed feature and merging your patch to\n>> upstream. I don't see any reason to enhance language with this feature.\n>>\n>> Regards\n>>\n>> Pavel\n>>\n>>\n>>", "msg_date": "Wed, 25 Jan 2023 16:24:14 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Re: Support plpgsql multi-range in conditional control" }, { "msg_contents": "Hello, my personal understanding is that you can use multiple iterative controls (as a merge) in a fo loop, otherwise we can only separate these iterative controls, but in fact, they may do the same thing.\r\n\r\n\r\n\r\nsongjinzhou(2903807914@qq.com)\r\n \r\nFrom: Pavel Stehule\r\nDate: 2023-01-25 23:24\r\nTo: songjinzhou\r\nCC: pgsql-hackers\r\nSubject: Re: Re: Support plpgsql multi-range in conditional control\r\nHi\r\n\r\n\r\nst 25. 1. 2023 v 15:39 odesílatel songjinzhou <2903807914@qq.com> napsal:\r\nHello, this is the target I refer to. 
At present, our patch supports this usage, so I later thought of developing this patch.\r\n\r\n\r\n\r\n\r\nsongjinzhou(2903807914@qq.com)\r\n \r\nFrom: Pavel Stehule\r\nDate: 2023-01-25 22:21\r\nTo: songjinzhou\r\nCC: pgsql-hackers\r\nSubject: Re: Re: Support plpgsql multi-range in conditional control\r\nHi\r\n\r\n\r\nok, I was wrong, PL/SQL supports this syntax. But what is the real use case? This is an example from the book. \r\n\r\nRegards\r\n\r\nPavel\r\n\r\n\r\nst 25. 1. 2023 v 15:18 odesílatel songjinzhou <2903807914@qq.com> napsal:\r\nHello, this usage scenario is from Oracle's PL/SQL language (I have been doing the function development of PL/SQL language for some time). I think this patch is very practical and will expand our for loop scenario. In short, I look forward to your\r\n\r\nI don't see any real usage. PL/SQL doesn't support proposed syntax.\r\n\r\nRegards\r\n\r\nPavel\r\n\r\n \r\nreply.\r\n\r\nHappy Chinese New Year!\r\n\r\n\r\n\r\nsongjinzhou(2903807914@qq.com)\r\n\r\nMaybe you didn't understand my reply. Without some significant real use case, I am strongly against the proposed feature and merging your patch to upstream. I don't see any reason to enhance language with this feature.\r\n\r\nRegards\r\n\r\nPavel", "msg_date": "Wed, 25 Jan 2023 23:39:00 +0800", "msg_from": "songjinzhou <2903807914@qq.com>", "msg_from_op": false, "msg_subject": "Re: Re: Support plpgsql multi-range in conditional control" }, { "msg_contents": "Hi\n\nst 25. 1. 2023 v 16:39 odesílatel songjinzhou <2903807914@qq.com> napsal:\n\n> Hello, my personal understanding is that you can use multiple iterative\n> controls (as a merge) in a fo loop, otherwise we can only separate these\n> iterative controls, but in fact, they may do the same thing.\n>\n\n1. please, don't use top posting in this mailing list\nhttps://en.wikipedia.org/wiki/Posting_styl\n\n2. I understand the functionality, but I don't think there is a real\nnecessity to support this functionality. 
Not in this static form, and just\nfor integer type.\n\nPostgres has a nice generic type \"multirange\". I can imagine some iterator\nover the value of multirange, but I cannot imagine the necessity of a\nricher iterator over just integer range. So the question is, what is the\nreal possible use case of this proposed functionality?\n\nRegards\n\nPavel\n\n\n\n>\n> ------------------------------\n> songjinzhou(2903807914@qq.com)\n>\n>\n> *From:* Pavel Stehule <pavel.stehule@gmail.com>\n> *Date:* 2023-01-25 23:24\n> *To:* songjinzhou <2903807914@qq.com>\n> *CC:* pgsql-hackers <pgsql-hackers@lists.postgresql.org>\n> *Subject:* Re: Re: Support plpgsql multi-range in conditional control\n> Hi\n>\n>\n> st 25. 1. 2023 v 15:39 odesílatel songjinzhou <2903807914@qq.com> napsal:\n>\n>> Hello, this is the target I refer to. At present, our patch supports this\n>> usage, so I later thought of developing this patch.\n>>\n>>\n>> ------------------------------\n>> songjinzhou(2903807914@qq.com)\n>>\n>>\n>> *From:* Pavel Stehule <pavel.stehule@gmail.com>\n>> *Date:* 2023-01-25 22:21\n>> *To:* songjinzhou <2903807914@qq.com>\n>> *CC:* pgsql-hackers <pgsql-hackers@lists.postgresql.org>\n>> *Subject:* Re: Re: Support plpgsql multi-range in conditional control\n>> Hi\n>>\n>>\n> ok, I was wrong, PL/SQL supports this syntax. But what is the real use\n> case? This is an example from the book.\n>\n> Regards\n>\n> Pavel\n>\n>\n>> st 25. 1. 2023 v 15:18 odesílatel songjinzhou <2903807914@qq.com> napsal:\n>>\n>>> Hello, this usage scenario is from Oracle's PL/SQL language (I have been\n>>> doing the function development of PL/SQL language for some time). I think\n>>> this patch is very practical and will expand our for loop scenario. In\n>>> short, I look forward to your\n>>>\n>>\n>> I don't see any real usage. 
PL/SQL doesn't support proposed syntax.\n>>\n>> Regards\n>>\n>> Pavel\n>>\n>>\n>>\n>>> reply.\n>>>\n>>> Happy Chinese New Year!\n>>>\n>>> ------------------------------\n>>> songjinzhou(2903807914@qq.com)\n>>>\n>>>\n>>> Maybe you didn't understand my reply. Without some significant real use\n>>> case, I am strongly against the proposed feature and merging your patch to\n>>> upstream. I don't see any reason to enhance language with this feature.\n>>>\n>>> Regards\n>>>\n>>> Pavel\n>>>\n>>>\n>>>", "msg_date": "Wed, 25 Jan 2023 16:50:49 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Re: Support plpgsql multi-range in conditional control" }, { "msg_contents": ">Hi\r\n\r\n>st 25. 1. 2023 v 16:39 odesílatel songjinzhou <2903807914@qq.com> napsal: Hello, my personal understanding is that you can use multiple iterative controls (as a merge) in a fo loop, otherwise we can only separate these iterative controls, but in fact, they may do the same thing.\r\n\r\n>1. please, don't use top posting in this mailing list https://en.wikipedia.org/wiki/Posting_styl\r\n\r\n>2. I understand the functionality, but I don't think there is a real necessity to support this functionality. Not in this static form, and just for integer type.\r\n\r\n>Postgres has a nice generic type \"multirange\". I can imagine some iterator over the value of multirange, but I cannot imagine the necessity of a richer iterator over just integer range. So the question is, what is the real possible use case of this proposed functionality? \r\n\r\n1. I'm very sorry that my personal negligence has caused obstacles to your reading. Thank you for your reminding.\r\n2. With regard to the use of this function, my understanding is relatively simple: there are many for loops that may do the same things. 
We can reduce our sql redundancy by merging iterative control; It is also more convenient to understand and read logically.\r\n\r\nAs follows, we can only repeat the for statement before we use such SQL:\r\n\r\nbegin\r\nfor i in 10..20 loop\r\nraise notice '%', i; -- Things to do\r\nend loop;\r\n\r\nfor i in 100 .. 200 by 10 loop\r\nraise notice '%', i; -- Things to do\r\nend loop;\r\nend;\r\n\r\nBut now we can simplify it as follows:\r\n\r\nbegin\r\nfor i in 10..20, 100 .. 200 by 10 loop\r\nraise notice '%', i; -- Things to do\r\nend loop;\r\nend;\r\n\r\nAlthough we can only use integer iterative control here, this is just a horizontal expansion of the previous logic. Thank you very much for your reply. I am very grateful!\r\n\r\n---\r\n\r\nsongjinzhou(2903807914@qq.com)\r\n\n\n>Hi>st 25. 1. 2023 v 16:39 odesílatel songjinzhou <2903807914@qq.com> napsal: Hello, my personal understanding is that you can use multiple iterative controls (as a merge) in a fo loop, otherwise we can only separate these iterative controls, but in fact, they may do the same thing.>1. please, don't use top posting in this mailing list https://en.wikipedia.org/wiki/Posting_styl>2. I understand the functionality, but I don't think there is a real necessity to support this functionality.  Not in this static form, and just for integer type.>Postgres has a nice generic type \"multirange\". I can imagine some iterator over the value of multirange, but I cannot imagine the necessity of a richer iterator over just integer range. So the question is, what is the real possible use case of this proposed functionality? 1. I'm very sorry that my personal negligence has caused obstacles to your reading. Thank you for your reminding.2. With regard to the use of this function, my understanding is relatively simple: there are many for loops that may do the same things. 
We can reduce our sql redundancy by merging iterative control; It is also more convenient to understand and read logically.As follows, we can only repeat the for statement before we use such SQL:begin for i in 10..20 loop raise notice '%', i; -- Things to do end loop; for i in 100 .. 200 by 10 loop raise notice '%', i; -- Things to do end loop;end;But now we can simplify it as follows:begin for i in 10..20, 100 .. 200 by 10 loop raise notice '%', i; -- Things to do end loop;end;Although we can only use integer iterative control here, this is just a horizontal expansion of the previous logic. Thank you very much for your reply. I am very grateful!---songjinzhou(2903807914@qq.com)", "msg_date": "Thu, 26 Jan 2023 00:22:06 +0800", "msg_from": "songjinzhou <2903807914@qq.com>", "msg_from_op": false, "msg_subject": "Re: Re: Support plpgsql multi-range in conditional control" }, { "msg_contents": "st 25. 1. 2023 v 17:22 odesílatel songjinzhou <2903807914@qq.com> napsal:\n\n>\n> >Hi\n>\n> >st 25. 1. 2023 v 16:39 odesílatel songjinzhou <2903807914@qq.com>\n> napsal: Hello, my personal understanding is that you can use multiple\n> iterative controls (as a merge) in a fo loop, otherwise we can only\n> separate these iterative controls, but in fact, they may do the same thing.\n>\n> >1. please, don't use top posting in this mailing list\n> https://en.wikipedia.org/wiki/Posting_styl\n>\n> >2. I understand the functionality, but I don't think there is a real\n> necessity to support this functionality. Not in this static form, and just\n> for integer type.\n>\n> >Postgres has a nice generic type \"multirange\". I can imagine some\n> iterator over the value of multirange, but I cannot imagine the necessity\n> of a richer iterator over just integer range. So the question is, what is\n> the real possible use case of this proposed functionality?\n>\n> 1. I'm very sorry that my personal negligence has caused obstacles to your\n> reading. Thank you for your reminding.\n> 2. 
With regard to the use of this function, my understanding is relatively\n> simple: there are many for loops that may do the same things. We can reduce\n> our sql redundancy by merging iterative control; It is also more convenient\n> to understand and read logically.\n>\n> As follows, we can only repeat the for statement before we use such SQL:\n>\n> begin\n> for i in 10..20 loop\n> raise notice '%', i; -- Things to do\n> end loop;\n>\n> for i in 100 .. 200 by 10 loop\n> raise notice '%', i; -- Things to do\n> end loop;\n> end;\n>\n> But now we can simplify it as follows:\n>\n> begin\n> for i in 10..20, 100 .. 200 by 10 loop\n> raise notice '%', i; -- Things to do\n> end loop;\n> end;\n>\n> Although we can only use integer iterative control here, this is just a\n> horizontal expansion of the previous logic. Thank you very much for your\n> reply. I am very grateful!\n>\n>\nUnfortunately, this is not a real use case - this is not an example from\nthe real world.\n\nRegards\n\nPavel\n\n\n\n\n> ---\n>\n> songjinzhou(2903807914@qq.com)\n>\n>\n\nst 25. 1. 2023 v 17:22 odesílatel songjinzhou <2903807914@qq.com> napsal:\n>Hi>st 25. 1. 2023 v 16:39 odesílatel songjinzhou <2903807914@qq.com> napsal: Hello, my personal understanding is that you can use multiple iterative controls (as a merge) in a fo loop, otherwise we can only separate these iterative controls, but in fact, they may do the same thing.>1. please, don't use top posting in this mailing list https://en.wikipedia.org/wiki/Posting_styl>2. I understand the functionality, but I don't think there is a real necessity to support this functionality.  Not in this static form, and just for integer type.>Postgres has a nice generic type \"multirange\". I can imagine some iterator over the value of multirange, but I cannot imagine the necessity of a richer iterator over just integer range. So the question is, what is the real possible use case of this proposed functionality? 1. 
I'm very sorry that my personal negligence has caused obstacles to your reading. Thank you for your reminding.2. With regard to the use of this function, my understanding is relatively simple: there are many for loops that may do the same things. We can reduce our sql redundancy by merging iterative control; It is also more convenient to understand and read logically.As follows, we can only repeat the for statement before we use such SQL:begin for i in 10..20 loop raise notice '%', i; -- Things to do end loop; for i in 100 .. 200 by 10 loop raise notice '%', i; -- Things to do end loop;end;But now we can simplify it as follows:begin for i in 10..20, 100 .. 200 by 10 loop raise notice '%', i; -- Things to do end loop;end;Although we can only use integer iterative control here, this is just a horizontal expansion of the previous logic. Thank you very much for your reply. I am very grateful!Unfortunately, this is not a real use case - this is not an example from the real world. RegardsPavel---songjinzhou(2903807914@qq.com)", "msg_date": "Wed, 25 Jan 2023 18:02:07 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Re: Support plpgsql multi-range in conditional control" }, { "msg_contents": "On Wed, 25 Jan 2023 at 12:02, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n\n>\n>\n> st 25. 1. 2023 v 17:22 odesílatel songjinzhou <2903807914@qq.com> napsal:\n>\n>>\n>> As follows, we can only repeat the for statement before we use such SQL:\n>>\n>> begin\n>> for i in 10..20 loop\n>> raise notice '%', i; -- Things to do\n>> end loop;\n>>\n>> for i in 100 .. 200 by 10 loop\n>> raise notice '%', i; -- Things to do\n>> end loop;\n>> end;\n>>\n>> But now we can simplify it as follows:\n>>\n>> begin\n>> for i in 10..20, 100 .. 200 by 10 loop\n>> raise notice '%', i; -- Things to do\n>> end loop;\n>> end;\n>>\n>> Although we can only use integer iterative control here, this is just a\n>> horizontal expansion of the previous logic. 
Thank you very much for your\n>> reply. I am very grateful!\n>>\n>>\n> Unfortunately, this is not a real use case - this is not an example from\n> the real world.\n>\n\nAnd anyway, this is already supported using generate_series() and UNION:\n\nodyssey=> do $$ declare i int; begin for i in select generate_series (10,\n20) union all select generate_series (100, 200, 10) do loop raise notice\n'i=%', i; end loop; end;$$;\nNOTICE: i=10\nNOTICE: i=11\nNOTICE: i=12\nNOTICE: i=13\nNOTICE: i=14\nNOTICE: i=15\nNOTICE: i=16\nNOTICE: i=17\nNOTICE: i=18\nNOTICE: i=19\nNOTICE: i=20\nNOTICE: i=100\nNOTICE: i=110\nNOTICE: i=120\nNOTICE: i=130\nNOTICE: i=140\nNOTICE: i=150\nNOTICE: i=160\nNOTICE: i=170\nNOTICE: i=180\nNOTICE: i=190\nNOTICE: i=200\nDO\nodyssey=>\n\nThe existing x..y notation is just syntactic sugar for a presumably common\ncase (although I’m dubious how often one really loops through a range of\nnumbers — surely in a database looping through a query result is\noverwhelmingly dominant?); I don’t think you’ll find much support around\nhere for adding more syntax possibilities to the loop construct.\n\n", "msg_date": "Wed, 25 Jan 2023 17:17:59 -0500", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Re: Support plpgsql multi-range in conditional control" } ]
[ { "msg_contents": "In [1] I noticed a bit of a poor usage of appendStringInfoString which\njust appends 4 spaces in a loop, one for each indent level of the\njsonb. It should be better just to use appendStringInfoSpaces and\njust append all the spaces in one go rather than appending 4 spaces in\na loop. That'll save having to check enlargeStringInfo() once for each\nloop.\n\nI'm aiming this mostly as a cleanup patch, but after looking at the\nappendStringInfoSpaces code, I thought it could be done a bit more\nefficiently by using memset instead of using the while loop that keeps\ntrack of 2 counters. memset has the option of doing more than a char\nat a time, which should be useful for larger numbers of spaces.\n\nIt does seem a bit faster when appending 8 chars at least going by the\nattached spaces.c file.\n\nWith -O1\n$ ./spaces\nwhile 0.536577 seconds\nmemset 0.326532 seconds\n\nHowever, I'm not really expecting much of a performance increase from\nthis change. I do at least want to make sure I've not made anything\nworse, so I used pgbench to run:\n\nselect jsonb_pretty(row_to_json(pg_class)::jsonb) from pg_class;\n\nperf top says:\n\nMaster:\n 0.96% postgres [.] add_indent.part.0\n\nPatched\n 0.25% postgres [.] add_indent.part.0\n\nI can't really detect a certain enough TPS change over the noise. I\nexpect it might become more significant with more complex json that\nhas more than a single indent level.\n\nI could only find 1 other instance where we use appendStringInfoString\nto append spaces. 
I've adjusted that one too.\n\nDavid\n\n[1] https://postgr.es/m/CAApHDvrrFNSm8dF24tmYOZpvo-R5ZP+0FoqVo2XcYhRftehoRQ@mail.gmail.com", "msg_date": "Thu, 19 Jan 2023 22:44:36 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Use appendStringInfoSpaces more" }, { "msg_contents": "On Thu, Jan 19, 2023 at 8:45 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> In [1] I noticed a bit of a poor usage of appendStringInfoString which\n> just appends 4 spaces in a loop, one for each indent level of the\n> jsonb. It should be better just to use appendStringInfoSpaces and\n> just append all the spaces in one go rather than appending 4 spaces in\n> a loop. That'll save having to check enlargeStringInfo() once for each\n> loop.\n>\n\nShould the add_indent function also have a check to avoid making\nunnecessary calls to appendStringInfoSpaces when the level is 0?\n\ne.g.\nif (indent)\n{\n appendStringInfoCharMacro(out, '\\n');\n if (level > 0)\n appendStringInfoSpaces(out, level * 4);\n }\n\nV.\n\nif (indent)\n{\n appendStringInfoCharMacro(out, '\\n');\n appendStringInfoSpaces(out, level * 4);\n }\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 20 Jan 2023 08:23:19 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Use appendStringInfoSpaces more" }, { "msg_contents": "Peter Smith <smithpb2250@gmail.com> writes:\n> Should the add_indent function also have a check to avoid making\n> unnecessary calls to appendStringInfoSpaces when the level is 0?\n\nSeems like unnecessary extra notation, seeing that appendStringInfoSpaces\nwill fall out quickly for a zero argument.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 19 Jan 2023 16:25:43 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Use appendStringInfoSpaces more" }, { "msg_contents": "On Fri, 20 Jan 2023 at 10:25, Tom Lane <tgl@sss.pgh.pa.us> 
wrote:\n>\n> Peter Smith <smithpb2250@gmail.com> writes:\n> > Should the add_indent function also have a check to avoid making\n> > unnecessary calls to appendStringInfoSpaces when the level is 0?\n>\n> Seems like unnecessary extra notation, seeing that appendStringInfoSpaces\n> will fall out quickly for a zero argument.\n\nYeah agreed. As far as I see it, the level will only be 0 before the\nfirst WJB_BEGIN_OBJECT and those appear to be the first thing in the\ndocument, so we'll only indent level 0 once and everything else will\nbe > 0. So, it also seems to me that the additional check is more\nlikely to cost more than it would save.\n\nDavid\n\n\n", "msg_date": "Fri, 20 Jan 2023 12:41:29 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Use appendStringInfoSpaces more" }, { "msg_contents": "On Fri, 20 Jan 2023 at 10:23, Peter Smith <smithpb2250@gmail.com> wrote:\n> Should the add_indent function also have a check to avoid making\n> unnecessary calls to appendStringInfoSpaces when the level is 0?\n\nAlthough I didn't opt to do that, thank you for having a look.\n\nI do think the patch is trivially simple and nobody seems against it,\nso I've now pushed it.\n\nDavid\n\n\n", "msg_date": "Fri, 20 Jan 2023 13:09:44 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Use appendStringInfoSpaces more" } ]
[ { "msg_contents": "Hi,\n\nThere's a few places in the code that try to format a variable definition like this\n\n ReorderBufferChange *next_change =\n dlist_container(ReorderBufferChange, node, next);\n\nbut pgindent turns that into\n\n ReorderBufferChange *next_change =\n dlist_container(ReorderBufferChange, node, next);\n\neven though the same pattern works, and is used fairly widely for assignments\n\n\tamroutine->amparallelvacuumoptions =\n\t\tVACUUM_OPTION_PARALLEL_BULKDEL;\n\nParticularly when variable and/or types names are longer, it's sometimes hard\nto fit enough into one line to use a different style. E.g., the code I'm\ncurrently hacking on has\n\n RWConflict possibleUnsafeConflict = dlist_container(RWConflictData, inLink, iter.cur);\n\nThere's simply no way to make break that across lines that doesn't either\nviolate the line length limit or makes pgindent do odd things:\n\ntoo long line:\n RWConflict possibleUnsafeConflict = dlist_container(RWConflictData,\n inLink,\n iter.cur);\n\npgindent will move start of second line:\n RWConflict possibleUnsafeConflict =\n dlist_container(RWConflictData, inLink, iter.cur);\n\nI know I can leave the variable initially uninitialized and then do a separate\nassignment, but that's not a great fix. And sometimes other initializations\nwant to access the variable alrady.\n\n\nDo others dislike this as well?\n\nI assume we'd again have to dive into pg_bsd_indent's code to fix it :(\n\nAnd even if we were to figure out how, would it be worth the\nreindent-all-branches pain? 
I'd say yes, but...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 19 Jan 2023 17:31:37 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "pgindent vs variable declaration across multiple lines" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> There's a few places in the code that try to format a variable definition like this\n\n> ReorderBufferChange *next_change =\n> dlist_container(ReorderBufferChange, node, next);\n\n> but pgindent turns that into\n\n> ReorderBufferChange *next_change =\n> dlist_container(ReorderBufferChange, node, next);\n\nYeah, that's bugged me too. I suspect that the triggering factor is\nuse of a typedef name within the assigned expression, but I've not\ntried to run it to ground.\n\n> I assume we'd again have to dive into pg_bsd_indent's code to fix it :(\n\nYeah :-(. That's enough of a rat's nest that I've not really wanted to.\nBut I'd support applying such a fix if someone can figure it out.\n\n> And even if we were to figure out how, would it be worth the\n> reindent-all-branches pain? I'd say yes, but...\n\nWhat reindent-all-branches pain? 
We haven't done an all-branches\nreindent in the past, even for pgindent fixes that touched far more\ncode than this would (assuming that the proposed fix doesn't have\nother side-effects).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 19 Jan 2023 20:43:44 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgindent vs variable declaration across multiple lines" }, { "msg_contents": "Hi,\n\nOn 2023-01-19 20:43:44 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > There's a few places in the code that try to format a variable definition like this\n> \n> > ReorderBufferChange *next_change =\n> > dlist_container(ReorderBufferChange, node, next);\n> \n> > but pgindent turns that into\n> \n> > ReorderBufferChange *next_change =\n> > dlist_container(ReorderBufferChange, node, next);\n> \n> Yeah, that's bugged me too. I suspect that the triggering factor is\n> use of a typedef name within the assigned expression, but I've not\n> tried to run it to ground.\n\nIt's not that - it happens even with just\n int frak =\n 1;\n\nsince it doesn't happen for plain assignments, I think it's somehow related to\ncode dealing with variable declarations.\n\n\n> > I assume we'd again have to dive into pg_bsd_indent's code to fix it :(\n> \n> Yeah :-(. That's enough of a rat's nest that I've not really wanted to.\n> But I'd support applying such a fix if someone can figure it out.\n\nIt's pretty awful code :(\n\n\n> > And even if we were to figure out how, would it be worth the\n> > reindent-all-branches pain? I'd say yes, but...\n> \n> What reindent-all-branches pain? We haven't done an all-branches\n> reindent in the past, even for pgindent fixes that touched far more\n> code than this would (assuming that the proposed fix doesn't have\n> other side-effects).\n\nOh. I thought we had re-indented the other branches when we modified\npg_bsd_indent substantially in the past, to reduce backpatching pain. 
But I\nguess we just discussed that option, but didn't end up pursuing it.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 19 Jan 2023 17:59:49 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: pgindent vs variable declaration across multiple lines" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2023-01-19 20:43:44 -0500, Tom Lane wrote:\n>> What reindent-all-branches pain? We haven't done an all-branches\n>> reindent in the past, even for pgindent fixes that touched far more\n>> code than this would (assuming that the proposed fix doesn't have\n>> other side-effects).\n\n> Oh. I thought we had re-indented the other branches when we modified\n> pg_bsd_intent substantially in the past, to reduce backpatching pain. But I\n> guess we just discussed that option, but didn't end up pursuing it.\n\nYeah, we did discuss it, but never did it --- I think the convincing\nargument not to was that major reformatting would be very painful\nfor people maintaining forks, and we shouldn't put them through that\nto track minor releases.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 19 Jan 2023 21:07:14 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgindent vs variable declaration across multiple lines" }, { "msg_contents": "Hi,\n\nOn 2023-01-19 17:59:49 -0800, Andres Freund wrote:\n> On 2023-01-19 20:43:44 -0500, Tom Lane wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> > > There's a few places in the code that try to format a variable definition like this\n> >\n> > > ReorderBufferChange *next_change =\n> > > dlist_container(ReorderBufferChange, node, next);\n> >\n> > > but pgindent turns that into\n> >\n> > > ReorderBufferChange *next_change =\n> > > dlist_container(ReorderBufferChange, node, next);\n> >\n> > Yeah, that's bugged me too. 
I suspect that the triggering factor is\n> > use of a typedef name within the assigned expression, but I've not\n> > tried to run it to ground.\n>\n> It's not that - it happens even with just\n> int frak =\n> 1;\n>\n> since it doesn't happen for plain assignments, I think it's somehow related to\n> code dealing with variable declarations.\n\nAnother fun one: pgindent turns\n\n\treturn (instr_time) {t.QuadPart};\ninto\n\treturn (struct instr_time)\n\t{\n\t\tt.QuadPart\n\t};\n\nObviously it can be dealt with with a local variable, but ...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 20 Jan 2023 15:12:01 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: pgindent vs variable declaration across multiple lines" }, { "msg_contents": "On Thu, Jan 19, 2023 at 8:31 PM Andres Freund <andres@anarazel.de> wrote:\n> I know I can leave the variable initially uninitialized and then do a separate\n> assignment, but that's not a great fix.\n\nThat's what I do.\n\nIf you pick names for all of your data types that are very very long\nand wordy then you don't feel as bad about this, because you were\ngonna need a line break anyway. :-)\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 20 Jan 2023 18:50:56 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgindent vs variable declaration across multiple lines" }, { "msg_contents": "On Fri, Jan 20, 2023 at 2:43 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > There's a few places in the code that try to format a variable definition like this\n>\n> > ReorderBufferChange *next_change =\n> > dlist_container(ReorderBufferChange, node, next);\n>\n> > but pgindent turns that into\n>\n> > ReorderBufferChange *next_change =\n> > dlist_container(ReorderBufferChange, node, next);\n>\n> Yeah, that's bugged me too. 
I suspect that the triggering factor is\n> use of a typedef name within the assigned expression, but I've not\n> tried to run it to ground.\n>\n> > I assume we'd again have to dive into pg_bsd_indent's code to fix it :(\n>\n> Yeah :-(. That's enough of a rat's nest that I've not really wanted to.\n> But I'd support applying such a fix if someone can figure it out.\n\nThis may be a clue: the place where declarations are treated\ndifferently seems to be (strangely) in io.c:\n\n ps.ind_stmt = ps.in_stmt & ~ps.in_decl; /* next line should be\n * indented if we have not\n * completed this stmt and if\n * we are not in the middle of\n * a declaration */\n\nIf you just remove \"& ~ps.in_decl\" then it does the desired thing for\nthat new code in predicate.c, but it also interferes with declarations\nwith commas, ie int i, j; where i and j currently line up, now j just\ngets one indentation level. It's probably not the right way to do it\nbut you can fix that with a last token kluge, something like:\n\n#include \"indent_codes.h\"\n\n ps.ind_stmt = ps.in_stmt && (!ps.in_decl || ps.last_token != comma);\n\nThat improves a lot of code in our tree IMHO but of course there is\nother collateral damage...\n\n\n", "msg_date": "Sun, 22 Jan 2023 07:57:15 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgindent vs variable declaration across multiple lines" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Fri, Jan 20, 2023 at 2:43 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Yeah :-(. 
That's enough of a rat's nest that I've not really wanted to.\n>> But I'd support applying such a fix if someone can figure it out.\n\n> This may be a clue: the place where declarations are treated\n> differently seems to be (strangely) in io.c:\n\n> ps.ind_stmt = ps.in_stmt & ~ps.in_decl; /* next line should be\n> * indented if we have not\n> * completed this stmt and if\n> * we are not in the middle of\n> * a declaration */\n\n> If you just remove \"& ~ps.in_decl\" then it does the desired thing for\n> that new code in predicate.c, but it also interferes with declarations\n> with commas, ie int i, j; where i and j currently line up, now j just\n> gets one indentation level. It's probably not the right way to do it\n> but you can fix that with a last token kluge, something like:\n> #include \"indent_codes.h\"\n> ps.ind_stmt = ps.in_stmt && (!ps.in_decl || ps.last_token != comma);\n> That improves a lot of code in our tree IMHO but of course there is\n> other collateral damage...\n\nI spent some more time staring at this and came up with what seems like\na workable patch, based on the idea that what we want to indent is\nspecifically initialization expressions. pg_bsd_indent does have some\nunderstanding of that: ps.block_init is true within such an expression,\nand then ps.block_init_level is the brace nesting depth inside it.\nIf you just enable ind_stmt based on block_init then you get a bunch\nof unwanted additional indentation inside struct initializers, but\nit seems to work okay if you restrict it to not happen inside braces.\nMore importantly, it doesn't change anything we don't want changed.\n\nProposed patch for pg_bsd_indent attached. I've also attached a diff\nrepresenting the delta between what current pg_bsd_indent wants to do\nto HEAD and what this would do. 
All the changes it wants to make look\ngood, although I can't say whether there are other places it's failing\nto change that we'd like it to.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 22 Jan 2023 17:34:52 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgindent vs variable declaration across multiple lines" }, { "msg_contents": "Hi,\n\nOn 2023-01-22 17:34:52 -0500, Tom Lane wrote:\n> I spent some more time staring at this and came up with what seems like\n> a workable patch, based on the idea that what we want to indent is\n> specifically initialization expressions.\n\nThat's awesome. Thanks for doing that.\n\n\n> Proposed patch for pg_bsd_indent attached. I've also attached a diff\n> representing the delta between what current pg_bsd_indent wants to do\n> to HEAD and what this would do. All the changes it wants to make look\n> good, although I can't say whether there are other places it's failing\n> to change that we'd like it to.\n\nI think it's a significant improvement, even if it turns out that there's\nother cases it misses.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 22 Jan 2023 14:40:34 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: pgindent vs variable declaration across multiple lines" }, { "msg_contents": "On Mon, Jan 23, 2023 at 11:34 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I spent some more time staring at this and came up with what seems like\n> a workable patch, based on the idea that what we want to indent is\n> specifically initialization expressions. 
pg_bsd_indent does have some\n> understanding of that: ps.block_init is true within such an expression,\n> and then ps.block_init_level is the brace nesting depth inside it.\n> If you just enable ind_stmt based on block_init then you get a bunch\n> of unwanted additional indentation inside struct initializers, but\n> it seems to work okay if you restrict it to not happen inside braces.\n> More importantly, it doesn't change anything we don't want changed.\n\nNice! LGTM now that I know about block_init.\n\n\n", "msg_date": "Mon, 23 Jan 2023 12:47:01 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgindent vs variable declaration across multiple lines" }, { "msg_contents": "\nOn 2023-01-22 Su 17:34, Tom Lane wrote:\n> I've also attached a diff\n> representing the delta between what current pg_bsd_indent wants to do\n> to HEAD and what this would do. All the changes it wants to make look\n> good, although I can't say whether there are other places it's failing\n> to change that we'd like it to.\n>\n> \t\t\t\n\n\nChanges look good. There are a handful of places where I think the code\nwould be slightly more readable if a leading typecast were moved to the\nsecond line.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 24 Jan 2023 10:20:30 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: pgindent vs variable declaration across multiple lines" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2023-01-22 Su 17:34, Tom Lane wrote:\n>> I've also attached a diff\n>> representing the delta between what current pg_bsd_indent wants to do\n>> to HEAD and what this would do. All the changes it wants to make look\n>> good, although I can't say whether there are other places it's failing\n>> to change that we'd like it to.\n\n> Changes look good. 
There are a handful of places where I think the code\n> would be slightly more readable if a leading typecast were moved to the\n> second line.\n\nPossibly, but that's the sort of decision that pgindent leaves to human\njudgment I think. It'll reflow comment blocks across lines, but I don't\nrecall having seen it move line breaks within code.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 24 Jan 2023 10:33:35 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgindent vs variable declaration across multiple lines" }, { "msg_contents": "Now that pg_bsd_indent is in our tree, we can format this as a\npatch against Postgres sources. I'll stick it in the March CF\nso we don't forget about it.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 12 Feb 2023 13:24:31 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgindent vs variable declaration across multiple lines" } ]
[ { "msg_contents": "Hi hackers,\n\nEXPLAIN ANALYZE for parallel Bitmap Heap Scans currently only reports \nthe number of heap blocks processed by the leader. It's missing the \nper-worker stats. The attached patch adds that functionality in the \nspirit of e.g. Sort or Memoize. Here is a simple test case and the \nEXPLAIN ANALYZE output with and without the patch:\n\ncreate table foo(col0 int, col1 int);\ninsert into foo select generate_series(1, 1000, 0.001), \ngenerate_series(1000, 2000, 0.001);\ncreate index idx0 on foo(col0);\ncreate index idx1 on foo(col1);\nset parallel_tuple_cost = 0;\nset parallel_setup_cost = 0;\nexplain (analyze, costs off, timing off) select * from foo where col0 > \n900 or col1 = 1;\n\nWith the patch:\n\n  Gather (actual rows=99501 loops=1)\n    Workers Planned: 2\n    Workers Launched: 2\n    ->  Parallel Bitmap Heap Scan on foo (actual rows=33167 loops=3)\n          Recheck Cond: ((col0 > 900) OR (col1 = 1))\n          Heap Blocks: exact=98\n          Worker 0:  Heap Blocks: exact=171 lossy=0\n          Worker 1:  Heap Blocks: exact=172 lossy=0\n          ->  BitmapOr (actual rows=0 loops=1)\n                ->  Bitmap Index Scan on idx0 (actual rows=99501 loops=1)\n                      Index Cond: (col0 > 900)\n                ->  Bitmap Index Scan on idx1 (actual rows=0 loops=1)\n                      Index Cond: (col1 = 1)\n\nWithout the patch:\n\n  Gather (actual rows=99501 loops=1)\n    Workers Planned: 2\n    Workers Launched: 2\n    ->  Parallel Bitmap Heap Scan on foo (actual rows=33167 loops=3)\n          Recheck Cond: ((col0 > 900) OR (col1 = 1))\n          Heap Blocks: exact=91\n          ->  BitmapOr (actual rows=0 loops=1)\n                ->  Bitmap Index Scan on idx0 (actual rows=99501 loops=1)\n                      Index Cond: (col0 > 900)\n                ->  Bitmap Index Scan on idx1 (actual rows=0 loops=1)\n                      Index Cond: (col1 = 1)\n\nSo in total the parallel Bitmap Heap Scan actually 
processed 441 heap \nblocks instead of just 91.\n\nNow two variable length arrays (VLA) would be needed, one for the \nsnapshot and one for the stats. As this obviously doesn't work, I now \nuse a single, big VLA and added functions to retrieve pointers to the \nrespective fields. I'm using MAXALIGN() to make sure the latter field is \naligned properly. Am I doing this correctly? I'm not entirely sure \naround alignment conventions and requirements of other platforms.\n\nI couldn't find existing tests that exercise the EXPLAIN ANALYZE output \nof specific nodes. I could only find a few basic smoke tests for EXPLAIN \nANALYZE with parallel nodes in parallel_select.sql. Do we want tests for \nthe changed functionality? If so I could right away also add tests for \nEXPLAIN ANALYZE including other parallel nodes.\n\nThank you for your feedback.\n\n-- \nDavid Geier\n(ServiceNow)", "msg_date": "Fri, 20 Jan 2023 09:34:26 +0100", "msg_from": "David Geier <geidav.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Parallel Bitmap Heap Scan reports per-worker stats in EXPLAIN ANALYZE" }, { "msg_contents": "Hi,\n\nOn 1/20/23 09:34, David Geier wrote:\n> EXPLAIN ANALYZE for parallel Bitmap Heap Scans currently only reports \n> the number of heap blocks processed by the leader. It's missing the \n> per-worker stats. The attached patch adds that functionality in the \n> spirit of e.g. Sort or Memoize. Here is a simple test case and the \n> EXPLAIN ANALYZE output with and without the patch:\n\nAttached is a rebased version of the patch. I would appreciate someone \ntaking a look.\n\nAs background: the change doesn't come out of thin air. We repeatedly \ntook wrong conclusions in our query analysis because we assumed that the \nreported block counts include the workers.\n\nIf no one objects I would also register the patch at the commit fest. 
\nThe patch is passing cleanly on CI.\n\n-- \nDavid Geier\n(ServiceNow)", "msg_date": "Tue, 21 Feb 2023 13:02:35 +0100", "msg_from": "David Geier <geidav.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel Bitmap Heap Scan reports per-worker stats in EXPLAIN\n ANALYZE" }, { "msg_contents": "> On Tue, Feb 21, 2023 at 01:02:35PM +0100, David Geier wrote:\n> Hi,\n>\n> On 1/20/23 09:34, David Geier wrote:\n> > EXPLAIN ANALYZE for parallel Bitmap Heap Scans currently only reports\n> > the number of heap blocks processed by the leader. It's missing the\n> > per-worker stats. The attached patch adds that functionality in the\n> > spirit of e.g. Sort or Memoize. Here is a simple test case and the\n> > EXPLAIN ANALYZE output with and without the patch:\n>\n> Attached is a rebased version of the patch. I would appreciate someone\n> taking a look.\n>\n> As background: the change doesn't come out of thin air. We repeatedly took\n> wrong conclusions in our query analysis because we assumed that the reported\n> block counts include the workers.\n>\n> If no one objects I would also register the patch at the commit fest. The\n> patch is passing cleanly on CI.\n\nThanks for the patch.\n\nThe idea sounds reasonable to me, but I have to admit snapshot_and_stats\nimplementation looks awkward. Maybe it would be better to have a\nseparate structure field for both stats and snapshot, which will be set\nto point to a corresponding place in the shared FAM e.g. when the worker\nis getting initialized? 
shm_toc_allocate mentions BUFFERALIGN to handle\npossibility of some atomic operations needing it, so I guess that would\nhave to be an alignment in this case as well.\n\nProbably another option would be to allocate two separate pieces of\nshared memory, which resolves questions like proper alignment, but\nannoyingly will require an extra lookup and a new key.\n\n\n", "msg_date": "Fri, 17 Mar 2023 21:14:37 +0100", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Bitmap Heap Scan reports per-worker stats in EXPLAIN\n ANALYZE" }, { "msg_contents": "Hi Dmitry,\n\nThanks for looking at the patch and sorry for the long line.\n\nOn 3/17/23 21:14, Dmitry Dolgov wrote:\n> The idea sounds reasonable to me, but I have to admit snapshot_and_stats\n> implementation looks awkward. Maybe it would be better to have a\n> separate structure field for both stats and snapshot, which will be set\n> to point to a corresponding place in the shared FAM e.g. when the worker\n> is getting initialized? shm_toc_allocate mentions BUFFERALIGN to handle\n> possibility of some atomic operations needing it, so I guess that would\n> have to be an alignment in this case as well.\n>\n> Probably another option would be to allocate two separate pieces of\n> shared memory, which resolves questions like proper alignment, but\n> annoyingly will require an extra lookup and a new key.\n\nI considered the other options and it seems to me none of them is \nparticularly superior. All of them have pros and cons with the cons \nmostly outweighing the pros. Let me quickly elaborate:\n\n1. Use multiple shm_toc entries: Shared state is split into multiple \npieces. Extra pointers in BitmapHeapScanState needed to point at the \nsplit out data. BitmapHeapScanState has already a shared_info member, \nwhich is not a pointer to the shared memory but a pointer to the leader \nlocal data allocated used to store the instrumentation data from the \nworkers. 
This is confusing but at least consistent with how its done in \nother places (e.g. execSort.c, nodeHash.c, nodeIncrementalSort.c). \nHaving another pointer there which points to the shared memory makes it \neven more confusing. If we go this way we would have e.g. \nshared_info_copy and shared_info members in BitmapHeapScanState.\n\n2. Store two extra pointers to the shared FAM entries in \nBitmapHeapScanState: IMHO, that is the better alternative of (1) as it \ndoesn't need an extra TOC entry but comes with the same confusion of \nmultiple pointers to SharedBitmapHeapScanInfo in BitmapHeapScanState. \nBut maybe that's not too bad?\n\n3. Solution in initial patch (use two functions to obtain pointers \nwhere/when needed): Avoids the need for another pointer in \nBitmapHeapScanState at the cost / ugliness of having to call the helper \nfunctions.\n\nAnother, not yet discussed, option I can see work is:\n\n4. Allocate a fixed amount of memory for the instrumentation stats based \non MAX_PARALLEL_WORKER_LIMIT: MAX_PARALLEL_WORKER_LIMIT is 1024 and used \nas the limit of the max_parallel_workers GUC. This way \nMAX_PARALLEL_WORKER_LIMIT * sizeof(BitmapHeapScanInstrumentation) = 1024 \n* 8 = 8192 bytes would be allocated. To cut this down in half we could \nadditionally change the type of lossy_pages and exact_pages from long to \nuint32. Only possibly needed memory would have to get initialized, the \nremaining unused memory would remain untouched to not waste cycles.\n\nMy first preference is the new option (4). My second preference is \noption (1). What's your take?\n\n-- \nDavid Geier\n(ServiceNow)\n\n\n\n", "msg_date": "Wed, 20 Sep 2023 15:42:43 +0200", "msg_from": "David Geier <geidav.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel Bitmap Heap Scan reports per-worker stats in EXPLAIN\n ANALYZE" }, { "msg_contents": "> On Wed, Sep 20, 2023 at 03:42:43PM +0200, David Geier wrote:\n> Another, not yet discussed, option I can see work is:\n>\n> 4. 
Allocate a fixed amount of memory for the instrumentation stats based on\n> MAX_PARALLEL_WORKER_LIMIT: MAX_PARALLEL_WORKER_LIMIT is 1024 and used as the\n> limit of the max_parallel_workers GUC. This way MAX_PARALLEL_WORKER_LIMIT *\n> sizeof(BitmapHeapScanInstrumentation) = 1024 * 8 = 8192 bytes would be\n> allocated. To cut this down in half we could additionally change the type of\n> lossy_pages and exact_pages from long to uint32. Only possibly needed memory\n> would have to get initialized, the remaining unused memory would remain\n> untouched to not waste cycles.\n\nI'm not sure that it would be acceptable -- if I understand correctly it\nwould be 8192 bytes per parallel bitmap heap scan node, and could be\nnoticeable in the worst case scenario with too many connections and too\nmany such nodes in every query.\n\nI find the original approach with an offset not that bad, after all\nthere is something similar going on in other places, e.g. parallel heap\nscan also has phs_snapshot_off (although the rest is fixed sized). My\ncommentary above in the thread was mostly about the cosmetic side.\nGiving snapshot_and_stats a decent name and maybe even ditching the\naccess functions, using instead only the offset field couple of times,\nand it would look better to me.\n\n\n", "msg_date": "Fri, 6 Oct 2023 17:21:59 +0200", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Bitmap Heap Scan reports per-worker stats in EXPLAIN\n ANALYZE" }, { "msg_contents": "> EXPLAIN ANALYZE for parallel Bitmap Heap Scans currently only reports\n> the number of heap blocks processed by the leader. It's missing the\n> per-worker stats.\n\n\nHi David,\n\nAccording to the docs[1]: \"In a parallel bitmap heap scan, one process is\nchosen as the leader. That process performs a scan of one or more indexes\nand builds a bitmap indicating which table blocks need to be visited. 
These\nblocks are then divided among the cooperating processes as in a parallel\nsequential scan.\"\n\nMy understanding is that the \"Heap Blocks\" statistic is only reporting\nblocks for the bitmap (i.e. not the subsequent scan). As such, I think it\nis correct that the workers do not report additional exact heap blocks.\n\n\n> explain (analyze, costs off, timing off) select * from foo where col0 >\n> 900 or col1 = 1;\n>\n\nIn your example, if you add the buffers and verbose parameters, do the\nworker reported buffers numbers report what you are looking for?\n\ni.e. explain (analyze, buffers, verbose, costs off, timing off) select *\nfrom foo where col0 > 900 or col1 = 1;\n\n—\nMichael Christofides\nFounder, pgMustard <https://pgmustard.com/>\n\n[1]:\nhttps://www.postgresql.org/docs/current/parallel-plans.html#PARALLEL-SCANS
", "msg_date": "Mon, 16 Oct 2023 17:29:37 +0100", "msg_from": "Michael Christofides <michael@pgmustard.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Bitmap Heap Scan reports per-worker stats in EXPLAIN\n ANALYZE" }, { "msg_contents": "On Mon, Oct 16, 2023 at 12:31 PM Michael Christofides\n<michael@pgmustard.com> wrote:\n> According to the docs[1]: \"In a parallel bitmap heap scan, one process is chosen as the leader. That process performs a scan of one or more indexes and builds a bitmap indicating which table blocks need to be visited. These blocks are then divided among the cooperating processes as in a parallel sequential scan.\"\n>\n> My understanding is that the \"Heap Blocks\" statistic is only reporting blocks for the bitmap (i.e. not the subsequent scan). As such, I think it is correct that the workers do not report additional exact heap blocks.\n\nI think you're wrong about that. The bitmap index scans are what scan\nthe indexes and build the bitmap. The bitmap heap scan node is what\nscans the heap i.e. the table, and that is what is divided across the\nworkers.\n\nOn the patch itself, snapshot_and_stats doesn't strike me as a great\nname. If we added three more variable-length things would we call the\nmember snapshot_and_stats_and_pink_and_orange_and_blue? 
Probably\nbetter to pick a name that is somehow more generic.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 17 Oct 2023 13:57:22 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Bitmap Heap Scan reports per-worker stats in EXPLAIN\n ANALYZE" }, { "msg_contents": "On Fri, Jan 20, 2023 at 2:04 PM David Geier <geidav.pg@gmail.com> wrote:\n>\n> Hi hackers,\n>\n> EXPLAIN ANALYZE for parallel Bitmap Heap Scans currently only reports\n> the number of heap blocks processed by the leader. It's missing the\n> per-worker stats. The attached patch adds that functionality in the\n> spirit of e.g. Sort or Memoize. Here is a simple test case and the\n> EXPLAIN ANALYZE output with and without the patch:\n>\n\n> With the patch:\n>\n> Gather (actual rows=99501 loops=1)\n> Workers Planned: 2\n> Workers Launched: 2\n> -> Parallel Bitmap Heap Scan on foo (actual rows=33167 loops=3)\n> Recheck Cond: ((col0 > 900) OR (col1 = 1))\n> Heap Blocks: exact=98\n> Worker 0: Heap Blocks: exact=171 lossy=0\n> Worker 1: Heap Blocks: exact=172 lossy=0\n\n\nelse\n {\n+ if (planstate->stats.exact_pages > 0)\n+ appendStringInfo(es->str, \" exact=%ld\", planstate->stats.exact_pages);\n+ if (planstate->stats.lossy_pages > 0)\n+ appendStringInfo(es->str, \" lossy=%ld\", planstate->stats.lossy_pages);\n appendStringInfoChar(es->str, '\\n');\n }\n }\n....\n+ for (int n = 0; n < planstate->shared_info->num_workers; n++)\n+ {\n....\n+ \"Heap Blocks: exact=\"UINT64_FORMAT\" lossy=\" INT64_FORMAT\"\\n\", +\nsi->exact_pages, si->lossy_pages);\n\nShouldn't we use the same format for reporting exact and lossy pages\nfor the actual backend and the worker? 
I mean here for the backend you\nare showing lossy pages only if it is > 0 whereas for workers we are\nshowing 0 lossy pages as well.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 4 Nov 2023 14:22:01 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Bitmap Heap Scan reports per-worker stats in EXPLAIN\n ANALYZE" }, { "msg_contents": "Hi David,\n\nDo you plan to work continue working on this patch? I did take a look,\nand on the whole it looks reasonable - it modifies the right places etc.\n\nI think there are two things that may need an improvement:\n\n1) Storing variable-length data in ParallelBitmapHeapState\n\nI agree with Robert the snapshot_and_stats name is not great. I see\nDmitry mentioned phs_snapshot_off as used by ParallelTableScanDescData -\nthe reasons are somewhat different (phs_snapshot_off exists because we\ndon't know which exact struct will be allocated), while here we simply\nneed to allocate two variable-length pieces of memory. But it seems like\nit would work nicely for this. That is, we could try adding an offset\nfor each of those pieces of memory:\n\n - snapshot_off\n - stats_off\n\nI don't like the GetSharedSnapshotData name very much, it seems very\nclose to GetSnapshotData - quite confusing, I think.\n\nDmitry also suggested we might add a separate piece of shared memory. I\ndon't quite see how that would work for ParallelBitmapHeapState, but I\ndoubt it'd be simpler than having two offsets. I don't think the extra\ncomplexity (paid by everyone) would be worth it just to make EXPLAIN\nANALYZE work.\n\n\n2) Leader vs. 
worker counters\n\nIt seems to me this does nothing to add the per-worker values from \"Heap\nBlocks\" into the leader, which means we get stuff like this:\n\n Heap Blocks: exact=102 lossy=10995\n Worker 0: actual time=50.559..209.773 rows=215253 loops=1\n Heap Blocks: exact=207 lossy=19354\n Worker 1: actual time=50.543..211.387 rows=162934 loops=1\n Heap Blocks: exact=161 lossy=14636\n\nI think this is wrong / confusing, and inconsistent with what we do for\nother nodes. It's also inconsistent with how we deal e.g. with BUFFERS,\nwhere we *do* add the values to the leader:\n\n Heap Blocks: exact=125 lossy=10789\n Buffers: shared hit=11 read=45420\n Worker 0: actual time=51.419..221.904 rows=150437 loops=1\n Heap Blocks: exact=136 lossy=13541\n Buffers: shared hit=4 read=13541\n Worker 1: actual time=56.610..222.469 rows=229738 loops=1\n Heap Blocks: exact=209 lossy=20655\n Buffers: shared hit=4 read=20655\n\nHere it's not entirely obvious, because leader participates in the\nexecution, but once we disable leader participation, it's clearer:\n\n Buffers: shared hit=7 read=45421\n Worker 0: actual time=28.540..247.683 rows=309112 loops=1\n Heap Blocks: exact=282 lossy=27806\n Buffers: shared hit=4 read=28241\n Worker 1: actual time=24.290..251.993 rows=190815 loops=1\n Heap Blocks: exact=188 lossy=17179\n Buffers: shared hit=3 read=17180\n\nNot only is \"Buffers\" clearly a sum of per-worker stats, but the \"Heap\nBlocks\" simply disappeared because the leader does nothing and we don't\nprint zeros.\n\n\n3) I'm not sure dealing with various EXPLAIN flags may not be entirely\ncorrect. 
Consider this:\n\nEXPLAIN (ANALYZE):\n\n -> Parallel Bitmap Heap Scan on t (...)\n Recheck Cond: (a < 5000)\n Rows Removed by Index Recheck: 246882\n Worker 0: Heap Blocks: exact=168 lossy=15648\n Worker 1: Heap Blocks: exact=302 lossy=29337\n\nEXPLAIN (ANALYZE, VERBOSE):\n\n -> Parallel Bitmap Heap Scan on public.t (...)\n Recheck Cond: (t.a < 5000)\n Rows Removed by Index Recheck: 246882\n Worker 0: actual time=35.067..300.882 rows=282108 loops=1\n Heap Blocks: exact=257 lossy=25358\n Worker 1: actual time=32.827..302.224 rows=217819 loops=1\n Heap Blocks: exact=213 lossy=19627\n\nEXPLAIN (ANALYZE, BUFFERS):\n\n -> Parallel Bitmap Heap Scan on t (...)\n Recheck Cond: (a < 5000)\n Rows Removed by Index Recheck: 246882\n Buffers: shared hit=7 read=45421\n Worker 0: Heap Blocks: exact=236 lossy=21870\n Worker 1: Heap Blocks: exact=234 lossy=23115\n\nEXPLAIN (ANALYZE, VERBOSE, BUFFERS):\n\n -> Parallel Bitmap Heap Scan on public.t (...)\n Recheck Cond: (t.a < 5000)\n Rows Removed by Index Recheck: 246882\n Buffers: shared hit=7 read=45421\n Worker 0: actual time=28.265..260.381 rows=261264 loops=1\n Heap Blocks: exact=260 lossy=23477\n Buffers: shared hit=3 read=23478\n Worker 1: actual time=28.224..261.627 rows=238663 loops=1\n Heap Blocks: exact=210 lossy=21508\n Buffers: shared hit=4 read=21943\n\nWhy should the per-worker buffer info be shown when combined with the\nVERBOSE flag, and not just with BUFFERS, when the patch shows the\nper-worker info always?\n\n\n4) Now that I think about this, isn't the *main* problem really that we\ndon't display the sum of the per-worker stats (which I think is wrong)?\nI mean, we already can get the worker details with VERBOSE, right? So the\nonly reason to display that by default seems to be that the values in\n\"Heap Blocks\" are from the leader only.\n\nBTW doesn't this also suggest some of the code added to explain.c may\nnot be quite necessary? 
Wouldn't it be enough to just \"extend\" the\nexisting code printing per-worker stats. (I haven't tried, so maybe I'm\nwrong and we need the new code.)\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sat, 17 Feb 2024 23:31:09 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Bitmap Heap Scan reports per-worker stats in EXPLAIN\n ANALYZE" }, { "msg_contents": "On Sat, Feb 17, 2024 at 5:31 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> Hi David,\n>\n> Do you plan to work continue working on this patch? I did take a look,\n> and on the whole it looks reasonable - it modifies the right places etc.\n\nI haven't started reviewing this patch yet, but I just ran into the\nbehavior that it fixes and was very outraged. +10000 to fixing this.\n\n- Melanie\n\n\n", "msg_date": "Fri, 1 Mar 2024 17:44:38 -0500", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Bitmap Heap Scan reports per-worker stats in EXPLAIN\n ANALYZE" }, { "msg_contents": "On 18/02/2024 00:31, Tomas Vondra wrote:\n> Do you plan to work continue working on this patch? I did take a look,\n> and on the whole it looks reasonable - it modifies the right places etc.\n\n+1\n\n> I think there are two things that may need an improvement:\n> \n> 1) Storing variable-length data in ParallelBitmapHeapState\n> \n> I agree with Robert the snapshot_and_stats name is not great. I see\n> Dmitry mentioned phs_snapshot_off as used by ParallelTableScanDescData -\n> the reasons are somewhat different (phs_snapshot_off exists because we\n> don't know which exact struct will be allocated), while here we simply\n> need to allocate two variable-length pieces of memory. But it seems like\n> it would work nicely for this. 
That is, we could try adding an offset\n> for each of those pieces of memory:\n> \n> - snapshot_off\n> - stats_off\n> \n> I don't like the GetSharedSnapshotData name very much, it seems very\n> close to GetSnapshotData - quite confusing, I think.\n> \n> Dmitry also suggested we might add a separate piece of shared memory. I\n> don't quite see how that would work for ParallelBitmapHeapState, but I\n> doubt it'd be simpler than having two offsets. I don't think the extra\n> complexity (paid by everyone) would be worth it just to make EXPLAIN\n> ANALYZE work.\n\nI just removed phs_snapshot_data in commit 84c18acaf6. I thought that \nwould make this moot, but now that I rebased this, there are stills some \naesthetic questions on how best to represent this.\n\nIn all the other node types that use shared instrumentation like this, \nthe pattern is as follows: (using Memoize here as an example, but it's \nsimilar for Sort, IncrementalSort, Agg and Hash)\n\n/* ----------------\n *\t Shared memory container for per-worker memoize information\n * ----------------\n */\ntypedef struct SharedMemoizeInfo\n{\n\tint\t\t\tnum_workers;\n\tMemoizeInstrumentation sinstrument[FLEXIBLE_ARRAY_MEMBER];\n} SharedMemoizeInfo;\n\n/* this struct is backend-private */\ntypedef struct MemoizeState\n{\n\tScanState\tss;\t\t\t\t/* its first field is NodeTag */\n\t...\n\tMemoizeInstrumentation stats;\t/* execution statistics */\n\tSharedMemoizeInfo *shared_info; /* statistics for parallel workers */\n} MemoizeState;\n\nWhile the scan is running, the node updates its private data in \nMemoizeState->stats. At the end of a parallel scan, the worker process \ncopies the MemoizeState->stats to MemoizeState->shared_info->stats, \nwhich lives in shared memory. The leader process copies \nMemoizeState->shared_info->stats to its own backend-private copy, which \nit then stores in its MemoizeState->shared_info, replacing the pointer \nto the shared memory with a pointer to the private copy. 
That happens in \nExecMemoizeRetrieveInstrumentation().\n\nThis is a little different for parallel bitmap heap scans, because a \nbitmap heap scan keeps some other data in shared memory too, not just \ninstrumentation data. Also, the naming is inconsistent: the equivalent \nof SharedMemoizeInfo is actually called ParallelBitmapHeapState. I think \nwe should rename it to SharedBitmapHeapInfo, to make it clear that it \nlives in shared memory, but I digress.\n\nWe could now put the new stats at the end of ParallelBitmapHeapState as \na varlen field. But I'm not sure that's a good idea. In \nExecBitmapHeapRetrieveInstrumentation(), would we make a backend-private \ncopy of the whole ParallelBitmapHeapState struct, even though the other \nfields don't make sense after the shared memory is released? Sounds \nconfusing. Or we could introduce a separate struct for the stats, and \ncopy just that:\n\ntypedef struct SharedBitmapHeapInstrumentation\n{\n\tint\t\t\tnum_workers;\n\tBitmapHeapScanInstrumentation sinstrument[FLEXIBLE_ARRAY_MEMBER];\n} SharedBitmapHeapInstrumentation;\n\ntypedef struct BitmapHeapScanState\n{\n\tScanState\tss;\t\t\t\t/* its first field is NodeTag */\n\t...\n\tSharedBitmapHeapInstrumentation sinstrument;\n} BitmapHeapScanState;\n\nthat compiles, at least with my compiler, but I find it weird to have a \nvariable-length inner struct embedded in an outer struct like that.\n\nLong story short, I think it's still better to store \nParallelBitmapHeapInstrumentationInfo separately in the DSM chunk, not \nas part of ParallelBitmapHeapState. 
Attached patch does that, rebased \nover current master.\n\n\nI didn't address any of the other things that you, Tomas, pointed out, \nbut I think they're valid concerns.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)", "msg_date": "Thu, 14 Mar 2024 17:30:30 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Parallel Bitmap Heap Scan reports per-worker stats in EXPLAIN\n ANALYZE" }, { "msg_contents": "On Thu, Mar 14, 2024 at 05:30:30PM +0200, Heikki Linnakangas wrote:\n> On 18/02/2024 00:31, Tomas Vondra wrote:\n> > Do you plan to work continue working on this patch? I did take a look,\n> > and on the whole it looks reasonable - it modifies the right places etc.\n> \n> +1\n> \n> > I think there are two things that may need an improvement:\n> > \n> > 1) Storing variable-length data in ParallelBitmapHeapState\n> > \n> > I agree with Robert the snapshot_and_stats name is not great. I see\n> > Dmitry mentioned phs_snapshot_off as used by ParallelTableScanDescData -\n> > the reasons are somewhat different (phs_snapshot_off exists because we\n> > don't know which exact struct will be allocated), while here we simply\n> > need to allocate two variable-length pieces of memory. But it seems like\n> > it would work nicely for this. That is, we could try adding an offset\n> > for each of those pieces of memory:\n> > \n> > - snapshot_off\n> > - stats_off\n> > \n> > I don't like the GetSharedSnapshotData name very much, it seems very\n> > close to GetSnapshotData - quite confusing, I think.\n> > \n> > Dmitry also suggested we might add a separate piece of shared memory. I\n> > don't quite see how that would work for ParallelBitmapHeapState, but I\n> > doubt it'd be simpler than having two offsets. I don't think the extra\n> > complexity (paid by everyone) would be worth it just to make EXPLAIN\n> > ANALYZE work.\n> \n> I just removed phs_snapshot_data in commit 84c18acaf6. 
I thought that would\n> make this moot, but now that I rebased this, there are stills some aesthetic\n> questions on how best to represent this.\n> \n> In all the other node types that use shared instrumentation like this, the\n> pattern is as follows: (using Memoize here as an example, but it's similar\n> for Sort, IncrementalSort, Agg and Hash)\n> \n> /* ----------------\n> *\t Shared memory container for per-worker memoize information\n> * ----------------\n> */\n> typedef struct SharedMemoizeInfo\n> {\n> \tint\t\t\tnum_workers;\n> \tMemoizeInstrumentation sinstrument[FLEXIBLE_ARRAY_MEMBER];\n> } SharedMemoizeInfo;\n> \n> /* this struct is backend-private */\n> typedef struct MemoizeState\n> {\n> \tScanState\tss;\t\t\t\t/* its first field is NodeTag */\n> \t...\n> \tMemoizeInstrumentation stats;\t/* execution statistics */\n> \tSharedMemoizeInfo *shared_info; /* statistics for parallel workers */\n> } MemoizeState;\n> \n> While the scan is running, the node updates its private data in\n> MemoizeState->stats. At the end of a parallel scan, the worker process\n> copies the MemoizeState->stats to MemoizeState->shared_info->stats, which\n> lives in shared memory. The leader process copies\n> MemoizeState->shared_info->stats to its own backend-private copy, which it\n> then stores in its MemoizeState->shared_info, replacing the pointer to the\n> shared memory with a pointer to the private copy. That happens in\n> ExecMemoizeRetrieveInstrumentation().\n> \n> This is a little different for parallel bitmap heap scans, because a bitmap\n> heap scan keeps some other data in shared memory too, not just\n> instrumentation data. Also, the naming is inconsistent: the equivalent of\n> SharedMemoizeInfo is actually called ParallelBitmapHeapState. 
I think we\n> should rename it to SharedBitmapHeapInfo, to make it clear that it lives in\n> shared memory, but I digress.\n\nFWIW, if we merge a BHS streaming read user like the one I propose in\n[1] (not as a pre-condition to this but just as something to make you\nmore comfortable with these names), the ParallelBitmapHeapState will\nbasically only contain the shared iterator and the coordination state\nfor accessing it and could be named as such.\n\nThen if you really wanted to be consistent with Memoize, you could name\nthe instrumentation SharedBitmapHeapInfo. But, personally I prefer the\nname you gave it: SharedBitmapHeapInstrumentation. I think that would\nhave been a better name for SharedMemoizeInfo since num_workers is\nreally just used as the length of the array of instrumentation info.\n\n> We could now put the new stats at the end of ParallelBitmapHeapState as a\n> varlen field. But I'm not sure that's a good idea. In\n> ExecBitmapHeapRetrieveInstrumentation(), would we make a backend-private\n> copy of the whole ParallelBitmapHeapState struct, even though the other\n> fields don't make sense after the shared memory is released? Sounds\n> confusing. Or we could introduce a separate struct for the stats, and copy\n> just that:\n> \n> typedef struct SharedBitmapHeapInstrumentation\n> {\n> \tint\t\t\tnum_workers;\n> \tBitmapHeapScanInstrumentation sinstrument[FLEXIBLE_ARRAY_MEMBER];\n> } SharedBitmapHeapInstrumentation;\n> \n> typedef struct BitmapHeapScanState\n> {\n> \tScanState\tss;\t\t\t\t/* its first field is NodeTag */\n> \t...\n> \tSharedBitmapHeapInstrumentation sinstrument;\n> } BitmapHeapScanState;\n> \n> that compiles, at least with my compiler, but I find it weird to have a\n> variable-length inner struct embedded in an outer struct like that.\n\nIn the attached patch, BitmapHeapScanState->sinstrument is a pointer,\nthough. 
Or are you proposing the above as an alternative that you\ndecided not to go with?\n\n> Long story short, I think it's still better to store\n> ParallelBitmapHeapInstrumentationInfo separately in the DSM chunk, not as\n> part of ParallelBitmapHeapState. Attached patch does that, rebased over\n> current master.\n\nThe approach in the attached patch looks good to me.\n\n> I didn't address any of the other things that you, Tomas, pointed out, but I\n> think they're valid concerns.\n\nI'll send a separate review of these issues that are still present in\nyour patch as well.\n\n- Melanie\n\n[1] https://www.postgresql.org/message-id/flat/CAAKRu_ZwCwWFeL_H3ia26bP2e7HiKLWt0ZmGXPVwPO6uXq0vaA%40mail.gmail.com\n\n\n", "msg_date": "Thu, 14 Mar 2024 16:00:06 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Bitmap Heap Scan reports per-worker stats in EXPLAIN\n ANALYZE" }, { "msg_contents": "On 14/03/2024 22:00, Melanie Plageman wrote:\n> On Thu, Mar 14, 2024 at 05:30:30PM +0200, Heikki Linnakangas wrote:\n>> typedef struct SharedBitmapHeapInstrumentation\n>> {\n>> \tint\t\t\tnum_workers;\n>> \tBitmapHeapScanInstrumentation sinstrument[FLEXIBLE_ARRAY_MEMBER];\n>> } SharedBitmapHeapInstrumentation;\n>>\n>> typedef struct BitmapHeapScanState\n>> {\n>> \tScanState\tss;\t\t\t\t/* its first field is NodeTag */\n>> \t...\n>> \tSharedBitmapHeapInstrumentation sinstrument;\n>> } BitmapHeapScanState;\n>>\n>> that compiles, at least with my compiler, but I find it weird to have a\n>> variable-length inner struct embedded in an outer struct like that.\n> \n> In the attached patch, BitmapHeapScanState->sinstrument is a pointer,\n> though. 
Or are you proposing the above as an alternative that you\n> decided not to go with?\n\nRight, the above is what I contemplated at first but decided it was a \nbad idea.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Thu, 14 Mar 2024 22:59:50 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Parallel Bitmap Heap Scan reports per-worker stats in EXPLAIN\n ANALYZE" }, { "msg_contents": "Hi Tomas,\n>\n> On Sat, Feb 17, 2024 at 2:31 PM Tomas Vondra <\ntomas.vondra@enterprisedb.com> wrote:\n> Hi David,\n>\n> Do you plan to work continue working on this patch? I did take a look,\n> and on the whole it looks reasonable - it modifies the right places etc.\n>\n> I think there are two things that may need an improvement:\n>\n> 1) Storing variable-length data in ParallelBitmapHeapState\n>\n> I agree with Robert the snapshot_and_stats name is not great. I see\n> Dmitry mentioned phs_snapshot_off as used by ParallelTableScanDescData -\n> the reasons are somewhat different (phs_snapshot_off exists because we\n> don't know which exact struct will be allocated), while here we simply\n> need to allocate two variable-length pieces of memory. But it seems like\n> it would work nicely for this. That is, we could try adding an offset\n> for each of those pieces of memory:\n>\n> - snapshot_off\n> - stats_off\n>\n> I don't like the GetSharedSnapshotData name very much, it seems very\n> close to GetSnapshotData - quite confusing, I think.\n>\n> Dmitry also suggested we might add a separate piece of shared memory. I\n> don't quite see how that would work for ParallelBitmapHeapState, but I\n> doubt it'd be simpler than having two offsets. I don't think the extra\n> complexity (paid by everyone) would be worth it just to make EXPLAIN\n> ANALYZE work.\n\nThis issue is now gone after Heikki's fix.\n\n> 2) Leader vs. 
worker counters\n>\n> It seems to me this does nothing to add the per-worker values from \"Heap\n> Blocks\" into the leader, which means we get stuff like this:\n>\n> Heap Blocks: exact=102 lossy=10995\n> Worker 0: actual time=50.559..209.773 rows=215253 loops=1\n> Heap Blocks: exact=207 lossy=19354\n> Worker 1: actual time=50.543..211.387 rows=162934 loops=1\n> Heap Blocks: exact=161 lossy=14636\n>\n> I think this is wrong / confusing, and inconsistent with what we do for\n> other nodes. It's also inconsistent with how we deal e.g. with BUFFERS,\n> where we *do* add the values to the leader:\n>\n> Heap Blocks: exact=125 lossy=10789\n> Buffers: shared hit=11 read=45420\n> Worker 0: actual time=51.419..221.904 rows=150437 loops=1\n> Heap Blocks: exact=136 lossy=13541\n> Buffers: shared hit=4 read=13541\n> Worker 1: actual time=56.610..222.469 rows=229738 loops=1\n> Heap Blocks: exact=209 lossy=20655\n> Buffers: shared hit=4 read=20655\n>\n> Here it's not entirely obvious, because leader participates in the\n> execution, but once we disable leader participation, it's clearer:\n>\n> Buffers: shared hit=7 read=45421\n> Worker 0: actual time=28.540..247.683 rows=309112 loops=1\n> Heap Blocks: exact=282 lossy=27806\n> Buffers: shared hit=4 read=28241\n> Worker 1: actual time=24.290..251.993 rows=190815 loops=1\n> Heap Blocks: exact=188 lossy=17179\n> Buffers: shared hit=3 read=17180\n>\n> Not only is \"Buffers\" clearly a sum of per-worker stats, but the \"Heap\n> Blocks\" simply disappeared because the leader does nothing and we don't\n> print zeros.\n\nHeap Blocks is specific to Bitmap Heap Scan. It seems that node specific\nstats\ndo not aggregate workers' stats into leaders for some existing nodes. 
For\nexample,\nMemorize node for Hits, Misses, etc\n\n -> Nested Loop (actual rows=166667 loops=3)\n -> Parallel Seq Scan on t (actual rows=33333 loops=3)\n -> Memoize (actual rows=5 loops=100000)\n Cache Key: t.j\n Cache Mode: logical\n Hits: 32991 Misses: 5 Evictions: 0 Overflows: 0 Memory\nUsage: 2kB\n Worker 0: Hits: 33551 Misses: 5 Evictions: 0 Overflows:\n0 Memory Usage: 2kB\n Worker 1: Hits: 33443 Misses: 5 Evictions: 0 Overflows:\n0 Memory Usage: 2kB\n -> Index Scan using uj on u (actual rows=5 loops=15)\n Index Cond: (j = t.j)\n\nSort, HashAggregate also do the same stuff.\n\n> 3) I'm not sure dealing with various EXPLAIN flags may not be entirely\n> correct. Consider this:\n>\n> EXPLAIN (ANALYZE):\n>\n> -> Parallel Bitmap Heap Scan on t (...)\n> Recheck Cond: (a < 5000)\n> Rows Removed by Index Recheck: 246882\n> Worker 0: Heap Blocks: exact=168 lossy=15648\n> Worker 1: Heap Blocks: exact=302 lossy=29337\n>\n> EXPLAIN (ANALYZE, VERBOSE):\n>\n> -> Parallel Bitmap Heap Scan on public.t (...)\n> Recheck Cond: (t.a < 5000)\n> Rows Removed by Index Recheck: 246882\n> Worker 0: actual time=35.067..300.882 rows=282108 loops=1\n> Heap Blocks: exact=257 lossy=25358\n> Worker 1: actual time=32.827..302.224 rows=217819 loops=1\n> Heap Blocks: exact=213 lossy=19627\n>\n> EXPLAIN (ANALYZE, BUFFERS):\n>\n> -> Parallel Bitmap Heap Scan on t (...)\n> Recheck Cond: (a < 5000)\n> Rows Removed by Index Recheck: 246882\n> Buffers: shared hit=7 read=45421\n> Worker 0: Heap Blocks: exact=236 lossy=21870\n> Worker 1: Heap Blocks: exact=234 lossy=23115\n>\n> EXPLAIN (ANALYZE, VERBOSE, BUFFERS):\n>\n> -> Parallel Bitmap Heap Scan on public.t (...)\n> Recheck Cond: (t.a < 5000)\n> Rows Removed by Index Recheck: 246882\n> Buffers: shared hit=7 read=45421\n> Worker 0: actual time=28.265..260.381 rows=261264 loops=1\n> Heap Blocks: exact=260 lossy=23477\n> Buffers: shared hit=3 read=23478\n> Worker 1: actual time=28.224..261.627 rows=238663 loops=1\n> Heap Blocks: exact=210 
lossy=21508\n> Buffers: shared hit=4 read=21943\n>\n> Why should the per-worker buffer info be shown when combined with the\n> VERBOSE flag, and not just with BUFFERS, when the patch shows the\n> per-worker info always?\n>\n\nIt seems that the general explain print framework requires VERBOSE mode to\nshow per-worker stats; for example, that is how per-worker Buffers and JIT\nstats are printed. However, some parallel-aware nodes always print their\nnode-specific worker blocks, which is why we see worker blocks without\nbuffers stats in non-VERBOSE mode. Several existing nodes have the same\nissue as this patch: Memoize, Sort, and HashAggregate.\n\n> 4) Now that I think about this, isn't the *main* problem really that we\n> don't display the sum of the per-worker stats (which I think is wrong)?\n> I mean, we already can get the worker details VERBOSEm right? So the\n> only reason to display that by default seems to be that it the values in\n> \"Heap Blocks\" are from the leader only.\n\nIf we print the aggregate Heap Blocks in the 'leader' block and show worker\nstats only in VERBOSE mode, does it look better? That would match how the\ngeneral framework prints worker stats, but it differs from some existing\nnodes that also print worker stats; as mentioned above, those are the\nMemoize, Sort, and HashAggregate nodes. By the way, is that also a problem\nfor those nodes?\n\n> BTW doesn't this also suggest some of the code added to explain.c may\n> not be quite necessary? Wouldn't it be enough to just \"extend\" the\n> existing code printing per-worker stats. (I haven't tried, so maybe I'm\n> wrong and we need the new code.)\n\nWe need the new code because these are node-specific stats, and we call the\nworker print function to lay out the explain plan. I think the problem is\nin which mode we should show worker blocks. 
This\nis discussed in the above section.\n\nv3 failed on master, attached a rebased version.\n\nRegards,\nDonghang Lin\n(ServiceNow)", "msg_date": "Sun, 24 Mar 2024 23:28:52 -0700", "msg_from": "Donghang Lin <donghanglin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Bitmap Heap Scan reports per-worker stats in EXPLAIN\n ANALYZE" }, { "msg_contents": "On Mon, Mar 25, 2024 at 2:29 AM Donghang Lin <donghanglin@gmail.com> wrote:\n>\n>\n> > On Sat, Feb 17, 2024 at 2:31 PM Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n> > 2) Leader vs. worker counters\n> >\n> > It seems to me this does nothing to add the per-worker values from \"Heap\n> > Blocks\" into the leader, which means we get stuff like this:\n> >\n> > Heap Blocks: exact=102 lossy=10995\n> > Worker 0: actual time=50.559..209.773 rows=215253 loops=1\n> > Heap Blocks: exact=207 lossy=19354\n> > Worker 1: actual time=50.543..211.387 rows=162934 loops=1\n> > Heap Blocks: exact=161 lossy=14636\n> >\n> > I think this is wrong / confusing, and inconsistent with what we do for\n> > other nodes. It's also inconsistent with how we deal e.g. 
with BUFFERS,\n> > where we *do* add the values to the leader:\n> >\n> > Heap Blocks: exact=125 lossy=10789\n> > Buffers: shared hit=11 read=45420\n> > Worker 0: actual time=51.419..221.904 rows=150437 loops=1\n> > Heap Blocks: exact=136 lossy=13541\n> > Buffers: shared hit=4 read=13541\n> > Worker 1: actual time=56.610..222.469 rows=229738 loops=1\n> > Heap Blocks: exact=209 lossy=20655\n> > Buffers: shared hit=4 read=20655\n> >\n> > Here it's not entirely obvious, because leader participates in the\n> > execution, but once we disable leader participation, it's clearer:\n> >\n> > Buffers: shared hit=7 read=45421\n> > Worker 0: actual time=28.540..247.683 rows=309112 loops=1\n> > Heap Blocks: exact=282 lossy=27806\n> > Buffers: shared hit=4 read=28241\n> > Worker 1: actual time=24.290..251.993 rows=190815 loops=1\n> > Heap Blocks: exact=188 lossy=17179\n> > Buffers: shared hit=3 read=17180\n> >\n> > Not only is \"Buffers\" clearly a sum of per-worker stats, but the \"Heap\n> > Blocks\" simply disappeared because the leader does nothing and we don't\n> > print zeros.\n>\n> Heap Blocks is specific to Bitmap Heap Scan. It seems that node specific stats\n> do not aggregate workers' stats into leaders for some existing nodes. For example,\n> Memorize node for Hits, Misses, etc\n>\n> -> Nested Loop (actual rows=166667 loops=3)\n> -> Parallel Seq Scan on t (actual rows=33333 loops=3)\n> -> Memoize (actual rows=5 loops=100000)\n> Cache Key: t.j\n> Cache Mode: logical\n> Hits: 32991 Misses: 5 Evictions: 0 Overflows: 0 Memory Usage: 2kB\n> Worker 0: Hits: 33551 Misses: 5 Evictions: 0 Overflows: 0 Memory Usage: 2kB\n> Worker 1: Hits: 33443 Misses: 5 Evictions: 0 Overflows: 0 Memory Usage: 2kB\n> -> Index Scan using uj on u (actual rows=5 loops=15)\n> Index Cond: (j = t.j)\n>\n> Sort, HashAggregate also do the same stuff.\n>\n> > 3) I'm not sure dealing with various EXPLAIN flags may not be entirely\n> > correct. 
Consider this:\n> >\n> > EXPLAIN (ANALYZE):\n> >\n> > -> Parallel Bitmap Heap Scan on t (...)\n> > Recheck Cond: (a < 5000)\n> > Rows Removed by Index Recheck: 246882\n> > Worker 0: Heap Blocks: exact=168 lossy=15648\n> > Worker 1: Heap Blocks: exact=302 lossy=29337\n> >\n> > EXPLAIN (ANALYZE, VERBOSE):\n> >\n> > -> Parallel Bitmap Heap Scan on public.t (...)\n> > Recheck Cond: (t.a < 5000)\n> > Rows Removed by Index Recheck: 246882\n> > Worker 0: actual time=35.067..300.882 rows=282108 loops=1\n> > Heap Blocks: exact=257 lossy=25358\n> > Worker 1: actual time=32.827..302.224 rows=217819 loops=1\n> > Heap Blocks: exact=213 lossy=19627\n> >\n> > EXPLAIN (ANALYZE, BUFFERS):\n> >\n> > -> Parallel Bitmap Heap Scan on t (...)\n> > Recheck Cond: (a < 5000)\n> > Rows Removed by Index Recheck: 246882\n> > Buffers: shared hit=7 read=45421\n> > Worker 0: Heap Blocks: exact=236 lossy=21870\n> > Worker 1: Heap Blocks: exact=234 lossy=23115\n> >\n> > EXPLAIN (ANALYZE, VERBOSE, BUFFERS):\n> >\n> > -> Parallel Bitmap Heap Scan on public.t (...)\n> > Recheck Cond: (t.a < 5000)\n> > Rows Removed by Index Recheck: 246882\n> > Buffers: shared hit=7 read=45421\n> > Worker 0: actual time=28.265..260.381 rows=261264 loops=1\n> > Heap Blocks: exact=260 lossy=23477\n> > Buffers: shared hit=3 read=23478\n> > Worker 1: actual time=28.224..261.627 rows=238663 loops=1\n> > Heap Blocks: exact=210 lossy=21508\n> > Buffers: shared hit=4 read=21943\n> >\n> > Why should the per-worker buffer info be shown when combined with the\n> > VERBOSE flag, and not just with BUFFERS, when the patch shows the\n> > per-worker info always?\n> >\n>\n> It seems that the general explain print framework requires verbose mode to show per worker stats.\n> For example, how Buffers hits, JIT are printed. While in some specific nodes which involves parallelism,\n> they always show worker blocks. This is why we see that some worker blocks don't have buffers\n> stats in non verbose mode. 
There are several existing nodes have the same issue as what this\n> patch does: memorize, sort, hashaggregate.\n\nI don't think passing explain the BUFFERS option should impact what is\nshown for bitmap heap scan lossy/exact. BUFFERS has to do with buffer\nusage. \"Heap Blocks\" here is actually more accurately \"Bitmap Blocks\".\n1) it is not heap specific, so at least changing it to \"Table Blocks\"\nwould be better and 2) the meaning is about whether or not the blocks\nwere represented lossily in the bitmap -- yes, those blocks that we\nare talking about are table blocks (in contrast to index blocks), but\nthe important part is the bitmap.\n\nSo, BUFFERS shouldn't cause this info to show.\n\nAs for whether or not the leader number should be inclusive of all the\nworker numbers, if there is a combination of options in which\nper-worker stats are not displayed, then the leader count should be\ninclusive of the worker numbers. However, if the worker numbers are\nalways displayed, I think it is okay for the leader number to only\ndisplay its own count. Though I agree with Tomas that that can be\nconfusing when parallel_leader_participation is off. You end up\nwithout a topline number.\n\nAs for whether or not per-worker stats should be displayed by default\nor only with VERBOSE, it sounds like there are two different\nprecedents. I don't have a strong feeling one way or the other.\nWhichever is most consistent.\nDonghang, could you list again which plan nodes and explain options\nalways print per-worker stats and which only do with the VERBOSE\noption?\n\nI think there is an issue with the worker counts on rescan though. 
I\nwas playing around with this patch with one of the regression test\nsuite tables:\n\ndrop table if exists tenk1;\ndrop table if exists tenk2;\nCREATE TABLE tenk1 (\n unique1 int4,\n unique2 int4,\n two int4,\n four int4,\n ten int4,\n twenty int4,\n hundred int4,\n thousand int4,\n twothousand int4,\n fivethous int4,\n tenthous int4,\n odd int4,\n even int4,\n stringu1 name,\n stringu2 name,\n string4 name\n) with (autovacuum_enabled = false);\n\nCOPY tenk1 FROM '/[source directory]/src/test/regress/data/tenk.data';\n\nCREATE TABLE tenk2 AS SELECT * FROM tenk1;\nCREATE INDEX tenk1_hundred ON tenk1 USING btree(hundred int4_ops);\nVACUUM ANALYZE tenk1;\nVACUUM ANALYZE tenk2;\n\nset enable_seqscan to off;\nset enable_indexscan to off;\nset enable_hashjoin to off;\nset enable_mergejoin to off;\nset enable_material to off;\nset parallel_setup_cost=0;\nset parallel_tuple_cost=0;\nset min_parallel_table_scan_size=0;\nset max_parallel_workers_per_gather=2;\nset parallel_leader_participation = off;\nexplain (analyze, costs off, verbose)\n select count(*) from tenk1, tenk2 where tenk1.hundred > 1 and\ntenk2.thousand=0;\n\nI don't think the worker counts are getting carried across rescans.\nFor this query, with parallel_leader_participation, you can see the\nleader has a high number of exact heap blocks (there are 30 rescans,\nso it is the sum across all of those). But the workers have low\ncounts. That might be normal because maybe they did less work, but if\nyou set parallel_leader_participation = off, you see the worker\nnumbers are still small. I did some logging and I do see workers with\ncounts of lossy/exact not making it into the final count. 
I haven't\nhad time to debug more, but it is worth looking into.\n\nparallel_leader_participation = on\n-----------------------------------------------------------------------------------------------\n Aggregate (actual time=62.321..63.178 rows=1 loops=1)\n Output: count(*)\n -> Nested Loop (actual time=2.058..55.202 rows=98000 loops=1)\n -> Seq Scan on public.tenk2 (actual time=0.182..3.699 rows=10 loops=1)\n Output: tenk2.unique1 ...\n Filter: (tenk2.thousand = 0)\n Rows Removed by Filter: 9990\n -> Gather (actual time=1.706..4.142 rows=9800 loops=10)\n Workers Planned: 2\n Workers Launched: 2\n -> Parallel Bitmap Heap Scan on public.tenk1 (actual\ntime=0.365..0.958 rows=3267 loops=30)\n Recheck Cond: (tenk1.hundred > 1)\n Heap Blocks: exact=1801\n Worker 0: actual time=0.033..0.414 rows=1993 loops=10\n Heap Blocks: exact=78 lossy=0\n Worker 1: actual time=0.032..0.550 rows=2684 loops=10\n Heap Blocks: exact=86 lossy=0\n -> Bitmap Index Scan on tenk1_hundred (actual\ntime=0.972..0.972 rows=9800 loops=10)\n Index Cond: (tenk1.hundred > 1)\n\nparallel_leader_participation = off\n-----------------------------------------------------------------------------------------------\n Aggregate (actual time=84.502..84.977 rows=1 loops=1)\n Output: count(*)\n -> Nested Loop (actual time=6.185..77.085 rows=98000 loops=1)\n -> Seq Scan on public.tenk2 (actual time=0.182..3.709 rows=10 loops=1)\n Output: tenk2.unique1...\n Filter: (tenk2.thousand = 0)\n Rows Removed by Filter: 9990\n -> Gather (actual time=5.265..6.355 rows=9800 loops=10)\n Workers Planned: 2\n Workers Launched: 2\n -> Parallel Bitmap Heap Scan on public.tenk1 (actual\ntime=0.951..1.863 rows=4900 loops=20)\n Recheck Cond: (tenk1.hundred > 1)\n Worker 0: actual time=0.794..1.705 rows=4909 loops=10\n Heap Blocks: exact=168 lossy=0\n Worker 1: actual time=1.108..2.021 rows=4891 loops=10\n Heap Blocks: exact=177 lossy=0\n -> Bitmap Index Scan on tenk1_hundred (actual\ntime=1.024..1.024 rows=9800 loops=10)\n Index 
Cond: (tenk1.hundred > 1)\n Worker 1: actual time=1.024..1.024\nrows=9800 loops=10\n\n- Melanie\n\n\n", "msg_date": "Mon, 25 Mar 2024 17:11:20 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Bitmap Heap Scan reports per-worker stats in EXPLAIN\n ANALYZE" }, { "msg_contents": "On Mon, Mar 25, 2024 at 2:11 PM Melanie Plageman <melanieplageman@gmail.com>\nwrote:\n> As for whether or not per-worker stats should be displayed by default\n> or only with VERBOSE, it sounds like there are two different\n> precedents. I don't have a strong feeling one way or the other.\n> Whichever is most consistent.\n> Donghang, could you list again which plan nodes and explain options\n> always print per-worker stats and which only do with the VERBOSE\n> option?\n\nI took a look at explain.c where workers info is printed out.\n\nThese works for every parallel aware nodes:\nBuffers stats print for workers with VERBOSE and BUFFERS\nWAL stats print for workers with VERBOSE and WAL\nJIT stats print for workers with VERBOSE and COSTS\nTiming print for workers with VERBOSE and TIMING\nRows and loops print for workers with VERBOSE\n\nSome specific nodes:\nSort / Incremental Sort / Hash / HashAggregate / Memorize and Bitmap Heap\nScan (this patch) nodes\nalways print their specific stats for workers.\n\n> I did some logging and I do see workers with\n> counts of lossy/exact not making it into the final count. I haven't\n> had time to debug more, but it is worth looking into.\n\nIndeed, rescan overrides previous scan stats in workers.\nAttach v5 with v4 plus the fix to aggregate the counts.\n\nRegards,\nDonghang Lin\n(ServiceNow)", "msg_date": "Wed, 27 Mar 2024 00:03:08 -0700", "msg_from": "Donghang Lin <donghanglin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Bitmap Heap Scan reports per-worker stats in EXPLAIN\n ANALYZE" }, { "msg_contents": "Hi! 
Thank you for your work on this issue!\n\nYour patch required a little revision. I did this and attached the patch.\n\nAlso, I think you should add some clarification to the comments about \nprinting 'exact' and 'loosy' pages in show_hashagg_info function, which \nyou get from planstate->stats, whereas previously it was output only \nfrom planstate. Perhaps it is enough to mention this in the comment to \nthe commit.\n\nI mean this place:\n\ndiff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c\nindex 926d70afaf..02251994c6 100644\n--- a/src/backend/commands/explain.c\n+++ b/src/backend/commands/explain.c\n@@ -3467,26 +3467,57 @@ show_hashagg_info(AggState *aggstate, \nExplainState *es)\n  static void\n  show_tidbitmap_info(BitmapHeapScanState *planstate, ExplainState *es)\n  {\n+    Assert(es->analyze);\n+\n      if (es->format != EXPLAIN_FORMAT_TEXT)\n      {\n          ExplainPropertyInteger(\"Exact Heap Blocks\", NULL,\n-                               planstate->exact_pages, es);\n+                               planstate->stats.exact_pages, es);\n          ExplainPropertyInteger(\"Lossy Heap Blocks\", NULL,\n-                               planstate->lossy_pages, es);\n+                               planstate->stats.lossy_pages, es);\n      }\n      else\n      {\n-        if (planstate->exact_pages > 0 || planstate->lossy_pages > 0)\n+        if (planstate->stats.exact_pages > 0 || \nplanstate->stats.lossy_pages > 0)\n          {\n              ExplainIndentText(es);\n              appendStringInfoString(es->str, \"Heap Blocks:\");\n-            if (planstate->exact_pages > 0)\n-                appendStringInfo(es->str, \" exact=%ld\", \nplanstate->exact_pages);\n-            if (planstate->lossy_pages > 0)\n-                appendStringInfo(es->str, \" lossy=%ld\", \nplanstate->lossy_pages);\n+            if (planstate->stats.exact_pages > 0)\n+                appendStringInfo(es->str, \" exact=%ld\", 
\nplanstate->stats.exact_pages);\n+            if (planstate->stats.lossy_pages > 0)\n+                appendStringInfo(es->str, \" lossy=%ld\", \nplanstate->stats.lossy_pages);\n              appendStringInfoChar(es->str, '\\n');\n          }\n      }\n+\n+    if (planstate->pstate != NULL)\n+    {\n+        for (int n = 0; n < planstate->sinstrument->num_workers; n++)\n+        {\n+            BitmapHeapScanInstrumentation *si = \n&planstate->sinstrument->sinstrument[n];\n+\n+            if (si->exact_pages == 0 && si->lossy_pages == 0)\n+                continue;\n+\n+            if (es->workers_state)\n+                ExplainOpenWorker(n, es);\n+\n+            if (es->format == EXPLAIN_FORMAT_TEXT)\n+            {\n+                ExplainIndentText(es);\n+                appendStringInfo(es->str, \"Heap Blocks: exact=%ld \nlossy=%ld\\n\",\n+                         si->exact_pages, si->lossy_pages);\n+            }\n+            else\n+            {\n+                ExplainPropertyInteger(\"Exact Heap Blocks\", NULL, \nsi->exact_pages, es);\n+                ExplainPropertyInteger(\"Lossy Heap Blocks\", NULL, \nsi->lossy_pages, es);\n+            }\n+\n+            if (es->workers_state)\n+                ExplainCloseWorker(n, es);\n+        }\n+    }\n  }\n\nI suggest some code refactoring (diff.diff.no-cfbot file) that allows \nyou to improve your code.\n\n-- \nRegards,\nAlena Rybakina\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Tue, 9 Apr 2024 00:16:56 +0300", "msg_from": "Alena Rybakina <lena.ribackina@yandex.ru>", "msg_from_op": false, "msg_subject": "Re: Parallel Bitmap Heap Scan reports per-worker stats in EXPLAIN\n ANALYZE" }, { "msg_contents": "Hi,\r\n\r\nThanks for working it! I'm interested in this feature, so I'd like to participate in the\r\npatch review. Though I've just started looking at the patch, I have two comments\r\nabout the v6 patch. 
(And I want to confirm the thread is active.)\r\n\r\n\r\n1) Unify the print format of leader and worker\r\n\r\nIn show_tidbitmap_info(), the number of exact/lossy blocks of the leader and workers\r\nare printed. I think the printed format should be the same. Currently, the leader does not\r\nprint the blocks of exact/lossy with a value of 0, but the workers could even if it is 0.\r\n\r\nIMHO, it's better to print both exact/lossy blocks if at least one of the numbers of\r\nexact/lossy blocks is greater than 0. After all, the print logic is redundant for leader\r\nand workers, but I thought it would be better to make it a common function.\r\n\r\n2) Move es->workers_state check\r\n\r\nIn show_tidbitmap_info(), ExplainOpenWorker() and ExplainCloseWorker() are called\r\nafter checking es->workers_state is not NULL. However, es->workers_state seems to be\r\nable to be NULL only for the Gather node (I see ExplainPrintPlan()). Also, reading the\r\ncomments, there is a description that each worker's information needs to be hidden\r\nwhen printing the plan.\r\n\r\nEven if es->workers_state becomes NULL in BitmapHeapScan node in the future,\r\nI think that workers' information (Heap Blocks) should not be printed. Therefore,\r\nI think the es->workers_state check should be moved to the place of \r\n\"if (planstate->pstate != NULL)\" like ExplainNode(), shouldn't it?\r\n\r\nIIUC, we need to correct show_sort_info() and so on too…\r\n\r\nRegards,\r\n--\r\nMasahiro Ikeda\r\nNTT DATA CORPORATION\r\n\r\n", "msg_date": "Wed, 26 Jun 2024 10:22:23 +0000", "msg_from": "<Masahiro.Ikeda@nttdata.com>", "msg_from_op": false, "msg_subject": "RE: Parallel Bitmap Heap Scan reports per-worker stats in EXPLAIN\n ANALYZE" }, { "msg_contents": "Hi,\n\nThanks for working on it! I'm interested in this feature, so I'd like to participate\nin the patch review. Though I've just started looking at the patch, I have two\ncomments about the v6 patch. 
(And I want to confirm the thread is active.)\n\n\n1) Unify the print format of leader and worker\n\nIn show_tidbitmap_info(), the number of exact/lossy blocks of the leader and\nworkers are printed. I think the printed format should be the same. Currently, the\nleader does not print the blocks of exact/lossy with a value of 0, but the workers\ncould even if it is 0.\n\nIMHO, it's better to print both exact/lossy blocks if at least one of the numbers of\nexact/lossy blocks is greater than 0. After all, the print logic is redundant for leader\nand workers, but I thought it would be better to make it a common function.\n\n\n2) Move es->workers_state check\n\nIn show_tidbitmap_info(), ExplainOpenWorker() and ExplainCloseWorker() are called\nafter checking es->workers_state is not NULL. However, es->workers_state seems to be\nable to be NULL only for the Gather node (I see ExplainPrintPlan()). Also, reading the\ncomments, there is a description that each worker's information needs to be hidden when\nprinting the plan.\n\nEven if es->workers_state becomes NULL in BitmapHeapScan node in the future, I think\nthat workers' information (Heap Blocks) should not be printed. Therefore, I think the\nes->workers_state check should be moved to the place of \"if (planstate->pstate != NULL)\"\nlike ExplainNode(), shouldn't it?\n\nIIUC, we need to correct show_sort_info() and so on too…\n\nRegards,\n--\nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 26 Jun 2024 10:27:09 +0000", "msg_from": "<Masahiro.Ikeda@nttdata.com>", "msg_from_op": false, "msg_subject": "RE: Parallel Bitmap Heap Scan reports per-worker stats in EXPLAIN\n ANALYZE" }, { "msg_contents": "On Wed, 26 Jun 2024 at 22:22, <Masahiro.Ikeda@nttdata.com> wrote:\n> 1) Unify the print format of leader and worker\n>\n> In show_tidbitmap_info(), the number of exact/loosy blocks of the leader and workers\n> are printed. I think the printed format should be same. 
Currently, the leader does not\n> print the blocks of exact/lossy with a value of 0, but the workers could even if it is 0.\n\nI agree with this. The two should match. I've fixed that in the attached.\n\nI also made a pass over the patch, and I also changed:\n\n1. Fixed up a few outdated comments in execnodes.h.\n2. Added a comment in ExecEndBitmapHeapScan() to explain why we += the\nstats rather than memcpy the BitmapHeapScanInstrumentation.\n3. A bunch of other comments.\n4. updated typedefs.list and ran pgindent.\n\nFor #2, I was surprised at this. I think there's probably a bug in the\nMemoize stats code for the same reason. I've not looked into that yet.\nI find it a little bit strange that we're showing stats for Worker N\nwhen that worker could have been made up from possibly hundreds of\ndifferent parallel workers in the case where the Gather/GatherMerge\nnode is rescanned and the worker gets shut down at the end of each\nGather and fresh ones started up on rescan. I do agree that we need to\naccumulate the totals from previous scans as that's what the\nnon-parallel version does.\n\nMany people have been hacking on this and I'm wondering who should be\nlisted as authors. I plan to put David Geier first. Should anyone\nelse be listed there?\n\nI've attached the rebased v5 patch with part of Alena's changes from\nthe diff.diff.no-cfbot file. 
I left the following one off as it looks\nwrong.\n\n- ptr += MAXALIGN(sizeof(ParallelBitmapHeapState));\n+ ptr += size;\n\nThat would make ptr point to the end of the allocation.\n\nI'd like to commit this patch soon, so if anyone wants to give it a\nfinal look, can they do so before next week?\n\nDavid", "msg_date": "Fri, 5 Jul 2024 01:59:26 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Bitmap Heap Scan reports per-worker stats in EXPLAIN\n ANALYZE" }, { "msg_contents": "On Fri, 5 Jul 2024 at 01:59, David Rowley <dgrowleyml@gmail.com> wrote:\n> I also made a pass over the patch, and I also changed:\n>\n> 1. Fixed up a few outdated comments in execnodes.h.\n> 2. Added a comment in ExecEndBitmapHeapScan() to explain why we += the\n> stats rather than memcpy the BitmapHeapScanInstrumentation.\n> 3. A bunch of other comments.\n> 4. updated typedefs.list and ran pgindent.\n\nOne other thing I think we should do while on this topic is move away\nfrom using \"long\" as a data type for storing the number of exact and\nlossy pages. The problem is that sizeof(long) on 64-bit MSVC is 32\nbits. A signed 32-bit type isn't large enough to store anything more\nthan 16TBs worth of 8k pages.\n\nI propose we change these to uint64 while causing churn in this area,\nprobably as a follow-on patch. I think a uint32 isn't wide enough as\nyou could exceed the limit with rescans.\n\nDavid\n\n\n", "msg_date": "Fri, 5 Jul 2024 12:52:28 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Bitmap Heap Scan reports per-worker stats in EXPLAIN\n ANALYZE" }, { "msg_contents": "On Fri, 5 Jul 2024 at 12:52, David Rowley <dgrowleyml@gmail.com> wrote:\n> I propose we change these to uint64 while causing churn in this area,\n> probably as a follow-on patch. 
I think a uint32 isn't wide enough as\n> you could exceed the limit with rescans.\n\nI wondered how large a query it would take to cause this problem. I tried:\n\ncreate table a (a int);\ninsert into a select x%1000 from generate_Series(1,1500000)x;\ncreate index on a(a);\nvacuum freeze analyze a;\n\nset enable_hashjoin=0;\nset enable_mergejoin=0;\nset enable_indexscan=0;\nset max_parallel_workers_per_gather=0;\n\nexplain (analyze, costs off, timing off, summary off)\nselect count(*) from a a1 inner join a a2 on a1.a=a2.a;\n\nAfter about 15 mins, the trimmed output from Linux is:\n\n Aggregate (actual rows=1 loops=1)\n -> Nested Loop (actual rows=2250000000 loops=1)\n -> Seq Scan on a a1 (actual rows=1500000 loops=1)\n -> Bitmap Heap Scan on a a2 (actual rows=1500 loops=1500000)\n Recheck Cond: (a1.a = a)\n Heap Blocks: exact=2250000000\n -> Bitmap Index Scan on a_a_idx (actual rows=1500 loops=1500000)\n Index Cond: (a = a1.a)\n\nWhereas, on MSVC, due to sizeof(long) == 4, it's:\n\n Aggregate (actual rows=1 loops=1)\n -> Nested Loop (actual rows=2250000000 loops=1)\n -> Seq Scan on a a1 (actual rows=1500000 loops=1)\n -> Bitmap Heap Scan on a a2 (actual rows=1500 loops=1500000)\n Recheck Cond: (a1.a = a)\n -> Bitmap Index Scan on a_a_idx (actual rows=1500 loops=1500000)\n Index Cond: (a = a1.a)\n\nNotice the \"Heap Blocks: exact=2250000000\" is missing on Windows.\nThis is because it wrapped around to a negative value and\nshow_tidbitmap_info() only shows > 0 values.\n\nI feel this is a good enough justification to increase the width of\nthose counters to uint64, so I'll do that too.\n\nDavid\n\n\n", "msg_date": "Mon, 8 Jul 2024 12:19:58 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Bitmap Heap Scan reports per-worker stats in EXPLAIN\n ANALYZE" }, { "msg_contents": "On Mon, 8 Jul 2024 at 12:19, David Rowley <dgrowleyml@gmail.com> wrote:\n> Notice the \"Heap Blocks: exact=2250000000\" is missing on 
Windows.\n> This is because it wrapped around to a negative value and\n> show_tidbitmap_info() only shows > 0 values.\n>\n> I feel this is a good enough justification to increase the width of\n> those counters to uint64, so I'll do that too.\n\nI pushed the widening of the types first as I saw some code in the\nEXPLAIN patch which assumed var == 0 is the negator of var > 0. I\ncouldn't bring myself to commit that knowing it was wrong and also\ncouldn't bring myself to write <= 0 knowing I was about to make that\nlook like a weird thing to write for an unsigned type.\n\nDavid\n\n\n", "msg_date": "Mon, 8 Jul 2024 14:47:33 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Bitmap Heap Scan reports per-worker stats in EXPLAIN\n ANALYZE" }, { "msg_contents": "On Sun, 18 Feb 2024 at 11:31, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> 2) Leader vs. worker counters\n>\n> It seems to me this does nothing to add the per-worker values from \"Heap\n> Blocks\" into the leader, which means we get stuff like this:\n>\n> Heap Blocks: exact=102 lossy=10995\n> Worker 0: actual time=50.559..209.773 rows=215253 loops=1\n> Heap Blocks: exact=207 lossy=19354\n> Worker 1: actual time=50.543..211.387 rows=162934 loops=1\n> Heap Blocks: exact=161 lossy=14636\n>\n> I think this is wrong / confusing, and inconsistent with what we do for\n> other nodes.\n\nAre you able to share which other nodes that you mean here?\n\nI used the following to compare to Sort and Memoize, and as far as I\nsee, the behaviour matches with the attached v8 patch.\n\nIs there some inconsistency here that I'm not seeing?\n\ncreate table mill (a int);\ncreate index on mill(a);\ninsert into mill select x%1000 from generate_Series(1,10000000)x;\nvacuum analyze mill;\ncreate table big (a int primary key);\ninsert into big select x from generate_series(1,10000000)x;\ncreate table probe (a int);\ninsert into probe select 1 from 
generate_Series(1,1000000);\nanalyze big;\nanalyze probe;\n\nset parallel_tuple_cost=0;\nset parallel_setup_cost=0;\nset enable_indexscan=0;\n\n-- compare Parallel Bitmap Heap Scan with Memoize and Sort.\n\n-- each includes \"Worker N:\" with stats for the operation.\nexplain (analyze) select * from mill where a < 100;\nexplain (analyze) select * from big b inner join probe p on b.a=p.a;\nexplain (analyze) select * from probe order by a;\n\n-- each includes \"Worker N:\" with stats for the operation\n-- also includes actual time and rows for each worker.\nexplain (analyze, verbose) select * from mill where a < 100;\nexplain (analyze, verbose) select * from big b inner join probe p on b.a=p.a;\nexplain (analyze, verbose) select * from probe order by a;\n\n-- each includes \"Worker N:\" with stats for the operation\n-- shows a single total buffers which includes leader and worker buffers.\nexplain (analyze, buffers) select * from mill where a < 100;\nexplain (analyze, buffers) select * from big b inner join probe p on b.a=p.a;\nexplain (analyze, buffers) select * from probe order by a;\n\n-- each includes \"Worker N:\" with stats for the operation\n-- also includes actual time and rows for each worker.\n-- shows a single total buffers which includes leader and worker buffers.\n-- shows buffer counts for each worker process\nexplain (analyze, buffers, verbose) select * from mill where a < 100;\nexplain (analyze, buffers, verbose) select * from big b inner join\nprobe p on b.a=p.a;\nexplain (analyze, buffers, verbose) select * from probe order by a;\n\nIf we did want to adjust things to show the totals for each worker\nrather than the stats for the leader, what would Sort Method show if\none worker spilled to disk and another did not?\n\nDavid", "msg_date": "Mon, 8 Jul 2024 15:43:01 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Bitmap Heap Scan reports per-worker stats in EXPLAIN\n ANALYZE" }, { 
"msg_contents": "On Mon, 8 Jul 2024 at 15:43, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Sun, 18 Feb 2024 at 11:31, Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n> > 2) Leader vs. worker counters\n> >\n> > It seems to me this does nothing to add the per-worker values from \"Heap\n> > Blocks\" into the leader, which means we get stuff like this:\n> >\n> > Heap Blocks: exact=102 lossy=10995\n> > Worker 0: actual time=50.559..209.773 rows=215253 loops=1\n> > Heap Blocks: exact=207 lossy=19354\n> > Worker 1: actual time=50.543..211.387 rows=162934 loops=1\n> > Heap Blocks: exact=161 lossy=14636\n> >\n> > I think this is wrong / confusing, and inconsistent with what we do for\n> > other nodes.\n>\n> Are you able to share which other nodes that you mean here?\n\nI did the analysis on this and out of the node types that have\nparallel instrumentation (per ExecParallelRetrieveInstrumentation()),\nParallel Hash is the only node that does anything different from the\nothers. Looking at the loop inside show_hash_info(), you can see it\ntakes the Max() of each property. There's some discussion in [1] about\nwhy this came about. In particular [2].\n\nI see no reason to copy the odd one out here, so I'm planning on going\nahead with the patch that has Bitmap Heap Scan copy what the majority\nof other nodes do. 
I think we should consider aligning Parallel Hash\nwith the other Parallel node behaviour.\n\nI've attached the (roughly done) schema and queries I used to obtain\nthe plans to do this analysis.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/flat/20200323165059.GA24950%40alvherre.pgsql\n[2] https://www.postgresql.org/message-id/31321.1586549487%40sss.pgh.pa.us", "msg_date": "Tue, 9 Jul 2024 11:51:23 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Bitmap Heap Scan reports per-worker stats in EXPLAIN\n ANALYZE" }, { "msg_contents": "On Tue, 9 Jul 2024 at 11:51, David Rowley <dgrowleyml@gmail.com> wrote:\n> I think we should consider aligning Parallel Hash\n> with the other Parallel node behaviour.\n\nI looked at that and quickly realised that it makes sense that\nParallel Hash does something different here. All the workers are\ncontributing to building the same hash table, so they're all going to\nshow the same set of values, provided they managed to help building\nit.\n\nWe're able to tell how much each worker helped according to EXPLAIN\n(ANALYZE, VERBOSE)'s Worker N: rows=n output. I don't think there's\nanything else not already shown that would be interesting to know per\nworker.\n\nDavid\n\n\n", "msg_date": "Tue, 9 Jul 2024 14:44:23 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Bitmap Heap Scan reports per-worker stats in EXPLAIN\n ANALYZE" } ]
[ { "msg_contents": "Hi Team,\n\nHope you guys are doing good.\n\nWe are facing below issue with read replica we did work arounds by setting hot_standby_feedback, max_standby_streaming_delay and max_standby_archive_delay, which indeed caused adverse effects on primary DB and storage. As our DB is nearly 6 TB which runs as AWS Postgres RDS.\n\nEven the below error occurs on tables where vacuum is disabled and no DML operations are permitted. Will there be any chances to see row versions being changed even if vacuum is disabled.\nPlease advise.\n\n2023-01-13 07:20:12 UTC:10.64.103.75(61096):ubpreplica@ubprdb01:[17707]:ERROR: canceling statement due to conflict with recovery\n2023-01-13 07:20:12 UTC:10.64.103.75(61096):ubpreplica@ubprdb01:[17707]:DETAIL: User query might have needed to see row versions that must be removed.\n\nThanks & Regards,\nAbhishek P\n\n\n\n\n\n\n\n\n\n\n\nHi Team,\n \nHope you guys are doing good.\n \nWe are facing below issue with read replica we did work arounds by setting hot_standby_feedback, max_standby_streaming_delay and max_standby_archive_delay, which indeed caused adverse effects on primary DB and storage. As our DB is nearly\n 6 TB which runs as AWS Postgres RDS. \n \nEven the below error occurs on tables where vacuum is disabled and no DML operations are permitted. 
Will there be any chances to see row versions being changed even if vacuum is disabled.\nPlease advise.\n \n2023-01-13 07:20:12 UTC:10.64.103.75(61096):ubpreplica@ubprdb01:[17707]:ERROR:  canceling statement due to conflict with recovery\n2023-01-13 07:20:12 UTC:10.64.103.75(61096):ubpreplica@ubprdb01:[17707]:DETAIL:  User query might have needed to see row versions that must be removed.\n \nThanks & Regards,\nAbhishek P", "msg_date": "Fri, 20 Jan 2023 08:56:49 +0000", "msg_from": "Abhishek Prakash <abhishek.prakash08@infosys.com>", "msg_from_op": true, "msg_subject": "***Conflict with recovery error***" }, { "msg_contents": "On Fri, 2023-01-20 at 08:56 +0000, Abhishek Prakash wrote:\n> We are facing below issue with read replica we did work arounds by setting\n> hot_standby_feedback, max_standby_streaming_delay and max_standby_archive_delay,\n> which indeed caused adverse effects on primary DB and storage. As our DB is\n> nearly 6 TB which runs as AWS Postgres RDS. \n>  \n> Even the below error occurs on tables where vacuum is disabled and no DML\n> operations are permitted. 
Will there be any chances to see row versions\n> being changed even if vacuum is disabled.\n> Please advise.\n>  \n> 2023-01-13 07:20:12 UTC:10.64.103.75(61096):ubpreplica@ubprdb01:[17707]:ERROR:  canceling statement due to conflict with recovery\n> 2023-01-13 07:20:12 UTC:10.64.103.75(61096):ubpreplica@ubprdb01:[17707]:DETAIL:  User query might have needed to see row versions that must be removed.\n\nIt could be HOT chain pruning or an anti-wraparound autovacuum (which runs\neven if autovacuum is disabled).\nDisabling autovacuum is not a smart idea to begin with.\n\nYour best bet is to set \"max_standby_streaming_delay = -1\".\n\nMore reading:\nhttps://www.cybertec-postgresql.com/en/streaming-replication-conflicts-in-postgresql/\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Fri, 20 Jan 2023 10:55:48 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: ***Conflict with recovery error***" }, { "msg_contents": "Hi Laurenz,\n\nThanks for your reply.\nWe had set max_standby_streaming_delay = -1, but faced storage issues nearly 3.5 TB of storage was consumed. \n\nRegards,\nAbhishek P\n\n-----Original Message-----\nFrom: Laurenz Albe <laurenz.albe@cybertec.at> \nSent: Friday, January 20, 2023 3:26 PM\nTo: Abhishek Prakash <abhishek.prakash08@infosys.com>; pgsql-general@lists.postgresql.org; pgsql-hackers@lists.postgresql.org; usergroups@postgresql.org\nSubject: Re: ***Conflict with recovery error***\n\n[**EXTERNAL EMAIL**]\n\nOn Fri, 2023-01-20 at 08:56 +0000, Abhishek Prakash wrote:\n> We are facing below issue with read replica we did work arounds by \n> setting hot_standby_feedback, max_standby_streaming_delay and \n> max_standby_archive_delay, which indeed caused adverse effects on \n> primary DB and storage. As our DB is nearly 6 TB which runs as AWS Postgres RDS.\n>\n> Even the below error occurs on tables where vacuum is disabled and no \n> DML operations are permitted. 
Will there be any chances to see row \n> versions being changed even if vacuum is disabled.\n> Please advise.\n>\n> 2023-01-13 07:20:12 \n> UTC:10.64.103.75(61096):ubpreplica@ubprdb01:[17707]:ERROR: canceling \n> statement due to conflict with recovery\n> 2023-01-13 07:20:12 UTC:10.64.103.75(61096):ubpreplica@ubprdb01:[17707]:DETAIL: User query might have needed to see row versions that must be removed.\n\nIt could be HOT chain pruning or an anti-wraparound autovacuum (which runs even if autovacuum is disabled).\nDisabling autovacuum is not a smart idea to begin with.\n\nYour best bet is to set \"max_standby_streaming_delay = -1\".\n\nMore reading:\nhttps://apc01.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.cybertec-postgresql.com%2Fen%2Fstreaming-replication-conflicts-in-postgresql%2F&data=05%7C01%7Cabhishek.prakash08%40infosys.com%7Ce50f15f9ec4a497669a208dafacc8a3c%7C63ce7d592f3e42cda8ccbe764cff5eb6%7C0%7C0%7C638098053794261389%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000%7C%7C%7C&sdata=%2FlYwVKhkjP23vza5yhuJfw6mcOYynDVbNIhnKRBwUu4%3D&reserved=0\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Fri, 20 Jan 2023 09:59:12 +0000", "msg_from": "Abhishek Prakash <abhishek.prakash08@infosys.com>", "msg_from_op": true, "msg_subject": "RE: ***Conflict with recovery error***" }, { "msg_contents": "On Fri, 2023-01-20 at 09:59 +0000, Abhishek Prakash wrote:\n> We had set max_standby_streaming_delay = -1, but faced storage issues\n> nearly 3.5 TB of storage was consumed.\n\nThen either don't run queries that take that long or run fewer data\nmodifications on the primary.\n\nOr invest in a few more TB disk storage.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Fri, 20 Jan 2023 11:08:49 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: ***Conflict with recovery error***" } ]
[ { "msg_contents": "In the script below, the presence of an IN clause forces the internal\ncomponents of the UNION ALL clause to fully compute even though they are\nfully optimizable. = ANY doesn't have this issue, so I wonder if there is\nany opportunity to convert the 'slow' variant (see below) to the 'fast'\nvariant. thank you!\n\nmerlin\n\n\ndrop table a cascade;\ndrop table b cascade;\ndrop table c cascade;\n\ncreate table a (a_id int primary key);\ncreate table b (b_id int primary key, a_id int references a);\ncreate table c (c_id int primary key, b_id int references b);\n\n\ninsert into a select s from generate_series(1, 50000) s;\ninsert into b select s, (s % 50000 ) + 1 from generate_series(1, 100000) s;\ninsert into c select s, (s % 100000 ) + 1 from generate_series(1, 1000000)\ns;\n\ncreate index on b (a_id, b_id);\ncreate index on c (b_id, c_id);\n\nanalyze a;\nanalyze b;\nanalyze c;\n\n\ncreate temp table d (a_id int);\ninsert into d values (99);\ninsert into d values (999);\ninsert into d values (9999);\nanalyze d;\n\ncreate or replace view v as\nselect * from a join b using(a_id) join c using(b_id)\nunion all select * from a join b using(a_id) join c using(b_id);\n\nexplain analyze select * from v where a_id in (select a_id from d); --\nthis is slow\nexplain analyze select * from v where a_id = any(array(select a_id from\nd)); -- this is fast\n\nIn the script below, the presence of an IN clause forces the internal components of the UNION ALL clause to fully compute even though they are fully optimizable.  = ANY doesn't have this issue, so I wonder if there is any opportunity to convert the 'slow' variant (see below) to the 'fast' variant.    
thank you!merlindrop table a cascade;drop table b cascade;drop table c cascade;create table a (a_id int  primary key);create table b (b_id int primary key, a_id int references a);create table c (c_id int primary key, b_id int references b);insert into a select s from generate_series(1, 50000) s;insert into b select s, (s % 50000 ) + 1 from generate_series(1, 100000) s;insert into c select s, (s % 100000 ) + 1 from generate_series(1, 1000000) s;create index on b (a_id, b_id);create index on c (b_id, c_id);analyze a;analyze b;analyze c;create temp table d (a_id int);insert into d values (99);insert into d values (999);insert into d values (9999);analyze d;create or replace view v as select * from a join b using(a_id) join c using(b_id)union all select * from a join b using(a_id) join c using(b_id);explain analyze select * from v where a_id in (select a_id from d);   -- this is slowexplain analyze select * from v where a_id = any(array(select a_id from d)); -- this is fast", "msg_date": "Fri, 20 Jan 2023 16:24:31 -0600", "msg_from": "Merlin Moncure <mmoncure@gmail.com>", "msg_from_op": true, "msg_subject": "feature request: IN clause optimized through append nodes with UNION\n ALL" } ]
[ { "msg_contents": "Hi,\n\nWe have code like this in libpqrcv_connect():\n\n\tconn = palloc0(sizeof(WalReceiverConn));\n\tconn->streamConn = PQconnectStartParams(keys, vals,\n\t\t\t\t\t\t\t\t\t\t\t /* expand_dbname = */ true);\n\tif (PQstatus(conn->streamConn) == CONNECTION_BAD)\n\t{\n\t\t*err = pchomp(PQerrorMessage(conn->streamConn));\n\t\treturn NULL;\n\t}\n\n [try to establish connection]\n\n\tif (PQstatus(conn->streamConn) != CONNECTION_OK)\n\t{\n\t\t*err = pchomp(PQerrorMessage(conn->streamConn));\n\t\treturn NULL;\n\t}\n\n\nAm I missing something, or are we leaking the libpq connection in case of\nerrors?\n\nIt doesn't matter really for walreceiver, since it will exit anyway, but we\nalso use libpqwalreceiver for logical replication, where it might?\n\n\nSeems pretty clear that we should do a PQfinish() before returning NULL? I\nlean towards thinking that this isn't worth backpatching given the current\nuses of libpq, but I could easily be convinced otherwise.\n\n\nNoticed while taking another look through [1].\n\nGreetings,\n\nAndres Freund\n\n[1] https://www.postgresql.org/message-id/20220925232237.p6uskba2dw6fnwj2%40awork3.anarazel.de\n\n\n", "msg_date": "Fri, 20 Jan 2023 17:12:37 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "libpqrcv_connect() leaks PGconn" }, { "msg_contents": "Hi,\n\nOn 2023-01-20 17:12:37 -0800, Andres Freund wrote:\n> We have code like this in libpqrcv_connect():\n> \n> \tconn = palloc0(sizeof(WalReceiverConn));\n> \tconn->streamConn = PQconnectStartParams(keys, vals,\n> \t\t\t\t\t\t\t\t\t\t\t /* expand_dbname = */ true);\n> \tif (PQstatus(conn->streamConn) == CONNECTION_BAD)\n> \t{\n> \t\t*err = pchomp(PQerrorMessage(conn->streamConn));\n> \t\treturn NULL;\n> \t}\n> \n> [try to establish connection]\n> \n> \tif (PQstatus(conn->streamConn) != CONNECTION_OK)\n> \t{\n> \t\t*err = pchomp(PQerrorMessage(conn->streamConn));\n> \t\treturn NULL;\n> \t}\n> \n> \n> Am I missing something, or 
are we leaking the libpq connection in case of\n> errors?\n> \n> It doesn't matter really for walreceiver, since it will exit anyway, but we\n> also use libpqwalreceiver for logical replication, where it might?\n> \n> \n> Seems pretty clear that we should do a PQfinish() before returning NULL? I\n> lean towards thinking that this isn't worth backpatching given the current\n> uses of libpq, but I could easily be convinced otherwise.\n> \n\nIt's a bit worse than I earlier thought: We use walrv_connect() during CREATE\nSUBSCRIPTION. One can easily exhaust file descriptors right now. So I think\nwe need to fix this.\n\nI also noticed the following in libpqrcv_connect, added in 11da97024abb:\n\n\tif (logical)\n\t{\n\t\tPGresult *res;\n\n\t\tres = libpqrcv_PQexec(conn->streamConn,\n\t\t\t\t\t\t\t ALWAYS_SECURE_SEARCH_PATH_SQL);\n\t\tif (PQresultStatus(res) != PGRES_TUPLES_OK)\n\t\t{\n\t\t\tPQclear(res);\n\t\t\tereport(ERROR,\n\t\t\t\t\t(errmsg(\"could not clear search path: %s\",\n\t\t\t\t\t\t\tpchomp(PQerrorMessage(conn->streamConn)))));\n\t\t}\n\t\tPQclear(res);\n\t}\n\n\nWhich doesn't seem quite right? The comment for the function says:\n\n * Returns NULL on error and fills the err with palloc'ed error message.\n\nwhich this doesn't do. Of course we don't expect this to fail, but network\nissues etc could still lead us to hit this case. In this case we'll actually\nhave an open libpq connection around that we'll leak.\n\n\nThe attached patch fixes both issues.\n\n\n\nIt seems we don't have any tests for creating a subscription that fails during\nconnection establishment? That doesn't seem optimal - I guess there may have\nbeen concern around portability of the error messages? I think we can control\nfor that in a tap test, by failing to connect due to a non-existent database,\nthen the error is under our control. Whereas e.g. 
an invalid hostname would\ncontain an error from gai_strerror().\n\n\nGreetings,\n\nAndres Freund", "msg_date": "Fri, 20 Jan 2023 18:50:37 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: libpqrcv_connect() leaks PGconn" }, { "msg_contents": "On Fri, Jan 20, 2023 at 06:50:37PM -0800, Andres Freund wrote:\n> On 2023-01-20 17:12:37 -0800, Andres Freund wrote:\n> > We have code like this in libpqrcv_connect():\n> > \n> > \tconn = palloc0(sizeof(WalReceiverConn));\n> > \tconn->streamConn = PQconnectStartParams(keys, vals,\n> > \t\t\t\t\t\t\t\t\t\t\t /* expand_dbname = */ true);\n> > \tif (PQstatus(conn->streamConn) == CONNECTION_BAD)\n> > \t{\n> > \t\t*err = pchomp(PQerrorMessage(conn->streamConn));\n> > \t\treturn NULL;\n> > \t}\n> > \n> > [try to establish connection]\n> > \n> > \tif (PQstatus(conn->streamConn) != CONNECTION_OK)\n> > \t{\n> > \t\t*err = pchomp(PQerrorMessage(conn->streamConn));\n> > \t\treturn NULL;\n> > \t}\n> > \n> > \n> > Am I missing something, or are we leaking the libpq connection in case of\n> > errors?\n> > \n> > It doesn't matter really for walreceiver, since it will exit anyway, but we\n> > also use libpqwalreceiver for logical replication, where it might?\n> > \n> > \n> > Seems pretty clear that we should do a PQfinish() before returning NULL? I\n> > lean towards thinking that this isn't worth backpatching given the current\n> > uses of libpq, but I could easily be convinced otherwise.\n> > \n> \n> It's bit worse than I earlier thought: We use walrv_connect() during CREATE\n> SUBSCRIPTION. One can easily exhaust file descriptors right now. So I think\n> we need to fix this.\n> \n> I also noticed the following in libpqrcv_connect, added in 11da97024abb:\n\n> The attached patch fixes both issues.\n\nLooks good. I'm not worried about a superuser hosing their own session via\nCREATE SUBSCRIPTION failures in a loop. 
At the same time, this fix is plenty\nsafe to back-patch.\n\n> I seems we don't have any tests for creating a subscription that fails during\n> connection establishment? That doesn't seem optimal - I guess there may have\n> been concern around portability of the error messages?\n\nPerhaps. We have various (non-subscription) tests using \"\\set VERBOSITY\nsqlstate\" for that problem. If even the sqlstate varies, a DO block is the\nnext level of error swallowing.\n\n> I think we can control\n> for that in a tap test, by failing to connect due to a non-existant database,\n> then the error is under our control. Whereas e.g. an invalid hostname would\n> contain an error from gai_strerror().\n\nThat sounds fine.\n\n\n", "msg_date": "Sat, 21 Jan 2023 08:16:42 -0800", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: libpqrcv_connect() leaks PGconn" }, { "msg_contents": "Hi,\n\nOn 2023-01-21 08:16:42 -0800, Noah Misch wrote:\n> On Fri, Jan 20, 2023 at 06:50:37PM -0800, Andres Freund wrote:\n> > On 2023-01-20 17:12:37 -0800, Andres Freund wrote:\n> > > We have code like this in libpqrcv_connect():\n> > It's bit worse than I earlier thought: We use walrv_connect() during CREATE\n> > SUBSCRIPTION. One can easily exhaust file descriptors right now. So I think\n> > we need to fix this.\n> > \n> > I also noticed the following in libpqrcv_connect, added in 11da97024abb:\n> \n> > The attached patch fixes both issues.\n> \n> Looks good. I'm not worried about a superuser hosing their own session via\n> CREATE SUBSCRIPTION failures in a loop. At the same time, this fix is plenty\n> safe to back-patch.\n\nYea, I'm not worried about it from a security perspective and more from a\nusability perspective (but even there not terribly). File descriptors that\nleaked, particularly when not reserved (AcquireExternalFD() etc), can lead to\nweird problems down the line. 
And I think it's not that rare to need a few\nattempts at getting the connection string, permissions, etc right.\n\nThanks for looking at the fix!\n\n\n> > I seems we don't have any tests for creating a subscription that fails during\n> > connection establishment? That doesn't seem optimal - I guess there may have\n> > been concern around portability of the error messages?\n> \n> Perhaps. We have various (non-subscription) tests using \"\\set VERBOSITY\n> sqlstate\" for that problem. If even the sqlstate varies, a DO block is the\n> next level of error swallowing.\n\nThat's a good trick I need to remember. And the errcode for an invalid\nconnection string luckily differs from the one for a not working one.\n\n\nI think found an even easier way - port=-1 is rejected during PQconnectPoll()\nand will never even open a socket. That'd make it reasonable for the test to\nhappen in subscription.sql, instead of a tap test, I think (faster, easier to\nmaintain). It may be that we'll one day move that error into the\nPQconninfoParse() phase, but I don't think we need to worry about it now.\n\nAny reason not to go for that?\n\nIf not, I'll add a test for an invalid conninfo and a non-working connection\nstring to subscription.sql.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 21 Jan 2023 12:04:53 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: libpqrcv_connect() leaks PGconn" }, { "msg_contents": "On Sat, Jan 21, 2023 at 12:04:53PM -0800, Andres Freund wrote:\n> On 2023-01-21 08:16:42 -0800, Noah Misch wrote:\n> > On Fri, Jan 20, 2023 at 06:50:37PM -0800, Andres Freund wrote:\n> > > I seems we don't have any tests for creating a subscription that fails during\n> > > connection establishment? That doesn't seem optimal - I guess there may have\n> > > been concern around portability of the error messages?\n> > \n> > Perhaps. 
We have various (non-subscription) tests using \"\\set VERBOSITY\n> > sqlstate\" for that problem. If even the sqlstate varies, a DO block is the\n> > next level of error swallowing.\n> \n> That's a good trick I need to remember. And the errcode for an invalid\n> connection string luckily differs from the one for a not working one.\n> \n> \n> I think found an even easier way - port=-1 is rejected during PQconnectPoll()\n> and will never even open a socket. That'd make it reasonable for the test to\n> happen in subscription.sql, instead of a tap test, I think (faster, easier to\n> maintain). It may be that we'll one day move that error into the\n> PQconninfoParse() phase, but I don't think we need to worry about it now.\n> \n> Any reason not to go for that?\n\nNo, a port=-1 test in subscription.sql sounds ideal.\n\n> If not, I'll add a test for an invalid conninfo and a non-working connection\n> string to subscription.sql.\n\n\n", "msg_date": "Sat, 21 Jan 2023 23:14:08 -0800", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: libpqrcv_connect() leaks PGconn" }, { "msg_contents": "\nOn 2023-01-21 23:14:08 -0800, Noah Misch wrote:\n> On Sat, Jan 21, 2023 at 12:04:53PM -0800, Andres Freund wrote:\n> > On 2023-01-21 08:16:42 -0800, Noah Misch wrote:\n> > > On Fri, Jan 20, 2023 at 06:50:37PM -0800, Andres Freund wrote:\n> > > > I seems we don't have any tests for creating a subscription that fails during\n> > > > connection establishment? That doesn't seem optimal - I guess there may have\n> > > > been concern around portability of the error messages?\n> > > \n> > > Perhaps. We have various (non-subscription) tests using \"\\set VERBOSITY\n> > > sqlstate\" for that problem. If even the sqlstate varies, a DO block is the\n> > > next level of error swallowing.\n> > \n> > That's a good trick I need to remember. 
And the errcode for an invalid\n> > connection string luckily differs from the one for a not working one.\n> > \n> > \n> > I think found an even easier way - port=-1 is rejected during PQconnectPoll()\n> > and will never even open a socket. That'd make it reasonable for the test to\n> > happen in subscription.sql, instead of a tap test, I think (faster, easier to\n> > maintain). It may be that we'll one day move that error into the\n> > PQconninfoParse() phase, but I don't think we need to worry about it now.\n> > \n> > Any reason not to go for that?\n> \n> No, a port=-1 test in subscription.sql sounds ideal.\n\nCool. Thanks for the review - pushed that way.\n\n\n", "msg_date": "Mon, 23 Jan 2023 18:58:17 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: libpqrcv_connect() leaks PGconn" } ]
[ { "msg_contents": "Hi,\n\nDue to [1] I thought it'd be a good idea to write an isolation test for\ntesting postgres_fdw interruptability during connection establishment.\n\nI was able to make that work - but unfortunately doing so requires preventing\na login from completing. The only way I could see to achieve that is to lock\none of the important tables. I ended up with\n step s2_hang_logins { LOCK pg_db_role_setting; }\n\nHowever, I'm a bit worried that that might cause problems. It'll certainly\nblock progress in concurrent tests, given it's a shared relation. But locking\nrelevant non-shared relations causes more problems, because it'll e.g. prevent\nquerying pg_stat_activity.\n\nDoes anybody see another way to cause a login to hang as part of an isolation\ntest, in a controllable manner? Or, if not, do you think we can get away with\nlocking pg_db_role_setting?\n\n\nThe other complexity is that isolationtester won't see the wait edge going\nthrough postgres_fdw. My approach for that is to do that one wait in a DO\nblock loop, matching on application_name = 'isolation/interrupt/s1'.\n\nI don't think we can teach isolationtester to understand such edges. I guess\nwe could teach it to wait for certain wait events though? But I'm not sure how\ngenerally useful that is. IIRC Tom concluded in the past that it didn't get us\nvery far.\n\n\nThe test currently tests one termination case, because isolationtester will\njust fail the next permutation if a connection is gone. I don't see an issue\nfixing that?\n\n\nI attached my current WIP patch for the test.\n\n\nNote that the test will only work with the patches from [1] applied.\n\n\nGreetings,\n\nAndres Freund\n\n[1] https://www.postgresql.org/message-id/20220925232237.p6uskba2dw6fnwj2%40awork3.anarazel.de", "msg_date": "Sat, 21 Jan 2023 13:38:49 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "isolation test for postgres_fdw interruptability" } ]
[ { "msg_contents": "The attached patch adds GUCs to control the use of the abbreviated keys\noptimization when sorting. Also, I changed the TRUST_STRXFRM from a\n#define into a GUC.\n\nOne reason for these GUCs is to make it easier to diagnose any issues\nthat come up with my collation work. Another is that I observed cases\nwith ICU where the abbreviated keys optimization resulted in a ~8-10%\nregression, and it would be good to have some basic control over it.\n\nI made them developer options because they are more about diagnosing\nand I don't expect users to change these in production. If the issues\nwith abbreviated keys get more complex (where maybe we need to consider\ncosting each provider?), we can make it more user-facing.\n\nThis is fairly simple, so I plan to commit soon.\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS", "msg_date": "Sat, 21 Jan 2023 17:16:01 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "GUCs to control abbreviated sort keys" }, { "msg_contents": "On Sat, Jan 21, 2023 at 05:16:01PM -0800, Jeff Davis wrote:\n\n> + <varlistentry id=\"guc-sort-abbreviated-keys\" xreflabel=\"sort_abbreviated_keys\">\n> + <term><varname>sort_abbreviated_keys</varname> (<type>boolean</type>)\n> + <indexterm>\n> + <primary><varname>sort_abbreviated_keys</varname> configuration parameter</primary>\n> + </indexterm>\n> + </term>\n> + <listitem>\n> + <para>\n> + Enables or disables the use of abbreviated sort keys, an optimization,\n> + if applicable. The default is <literal>true</literal>. Disabling may\n\nI think \"an optimization, if applicable\" is either too terse, or somehow\nwrong. Maybe:\n\n| Enables or disables the use of abbreviated keys, a sort optimization...\n\n> + optimization could return wrong results. 
Set to\n> + <literal>true</literal> if certain that <function>strxfrm()</function>\n> + can be trusted.\n\n\"if you are certain\"; or \"if it is ...\"\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 24 Jan 2023 19:43:49 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: GUCs to control abbreviated sort keys" }, { "msg_contents": "On Sat, Jan 21, 2023 at 8:16 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> This is fairly simple, so I plan to commit soon.\n\nI find it a bit premature to include this comment in the very first\nemail.... what if other people don't like the idea?\n\nI would like to hear about the cases where abbreviated keys resulted\nin a regression.\n\nI'd also like to know whether there's a realistic possibility that\nmaking this a run-time test could itself result in a regression.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 24 Jan 2023 21:42:14 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: GUCs to control abbreviated sort keys" }, { "msg_contents": "On Tue, 2023-01-24 at 21:42 -0500, Robert Haas wrote:\n> I find it a bit premature to include this comment in the very first\n> email.... what if other people don't like the idea?\n\nThe trust_strxfrm GUC was pulled from the larger collation refactoring\npatch, which has been out for a while. The sort_abbreviated_keys GUC is\nnew, and I posted these both in a new thread because they started to\nlook independently useful.\n\nIf someone doesn't like the idea, they are free to comment, like in\nevery other case (though this patch doesn't seem very controversial to\nme?). 
I suppose the wording was off-putting, so I'll choose different\nwords next time.\n\n> I would like to hear about the cases where abbreviated keys resulted\n> in a regression.\n\nI want to be clear that this is not a general criticism of the\nabbreviated keys optimization, nor a comprehensive analysis of its\nperformance.\n\nI am highlighting this case because the existence of a single non-\ncontrived case or regression suggests that we may want to explore\nfurther and tweak heuristics. That's quite natural when the heuristics\nare based on a complex dependency like a collation provider. The\nsort_abbreviated_keys GUC makes that kind of exploration and tweaking a\nlot easier.\n\nBuilt with meson on linux, gcc 11.3.0, opt -O3. Times are the middle of\nthree runs, taken from the sort operator's \"first returned tuple\" time\nin EXPLAIN ANALYZE. Total runtime (as reported in EXPLAIN ANALYZE) is\npretty much the same story, but I think there was slightly more noise\nin that number.\n\n$ perl text_generator.pl 10000000 10 > /tmp/strings.txt\n\nCREATE TABLE s (t TEXT);\nCOPY s FROM '/tmp/strings.txt';\nVACUUM FREEZE s;\nCHECKPOINT;\nSET work_mem='10GB';\nSET max_parallel_workers = 0;\nSET max_parallel_workers_per_gather = 0;\n\nSET sort_abbreviated_keys = false;\nEXPLAIN ANALYZE SELECT t FROM s ORDER BY t COLLATE \"en-US-x-icu\";\n-- 20875ms\n\nSET sort_abbreviated_keys = true;\nEXPLAIN ANALYZE SELECT t FROM s ORDER BY t COLLATE \"en-US-x-icu\";\n-- 22931ms\n\nRegression for abbreviated keys optimization in this case: 9.8%\n\n> I'd also like to know whether there's a realistic possibility that\n> making this a run-time test could itself result in a regression.\n\nThe sort_abbreviated_keys branch is happening after\ntuplesort_begin_common (which creates memory contexts, etc.) and before\npreparing the sort keys (which involves catalog lookups). 
The\ntrust_strxfrm branch is happening in the type-specific sort support\nfunction, which needs to be looked up in the catalog before being\ncalled (using V1 calling convention).\n\nIt doesn't look likely that a single branch in that path will have a\nperf impact. Do you have a more specific concern?\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS", "msg_date": "Wed, 25 Jan 2023 13:16:44 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: GUCs to control abbreviated sort keys" }, { "msg_contents": "On Tue, 2023-01-24 at 19:43 -0600, Justin Pryzby wrote:\n> I think \"an optimization, if applicable\" is either too terse, or\n> somehow\n> wrong.  Maybe:\n> \n> > Enables or disables the use of abbreviated keys, a sort\n> > optimization...\n\nDone.\n\n> > +        optimization could return wrong results. Set to\n> > +        <literal>true</literal> if certain that\n> > <function>strxfrm()</function>\n> > +        can be trusted.\n> \n> \"if you are certain\"; or \"if it is ...\"\n\nDone.\n\nThank you, rebased patch attached.\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS", "msg_date": "Wed, 25 Jan 2023 13:30:07 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: GUCs to control abbreviated sort keys" }, { "msg_contents": "On 25.01.23 22:16, Jeff Davis wrote:\n> I am highlighting this case because the existence of a single non-\n> contrived case or regression suggests that we may want to explore\n> further and tweak heuristics. That's quite natural when the heuristics\n> are based on a complex dependency like a collation provider. The\n> sort_abbreviated_keys GUC makes that kind of exploration and tweaking a\n> lot easier.\n\nMaybe an easier way to enable or disable it in the source code with a \n#define would serve this. Making it a GUC right away seems a bit \nheavy-handed. 
Further exploration and tweaking might well require \nfurther source code changes, so relying on a source code level toggle \nwould seem appropriate.\n\n\n\n", "msg_date": "Thu, 26 Jan 2023 22:39:30 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: GUCs to control abbreviated sort keys" }, { "msg_contents": "On Thu, 2023-01-26 at 22:39 +0100, Peter Eisentraut wrote:\n> Maybe an easier way to enable or disable it in the source code with a\n> #define would serve this.  Making it a GUC right away seems a bit \n> heavy-handed.  Further exploration and tweaking might well require \n> further source code changes, so relying on a source code level toggle\n> would seem appropriate.\n\nI am using these GUCs for testing the various collation paths in my\ncollation refactoring branch.\n\nI find them pretty useful, and when I saw a regression, I thought\nothers might think it was useful, too. But if not I'll just leave them\nin my branch and withdraw from this thread.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n", "msg_date": "Thu, 26 Jan 2023 15:29:30 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: GUCs to control abbreviated sort keys" }, { "msg_contents": "On Wed, Jan 25, 2023 at 4:16 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> $ perl text_generator.pl 10000000 10 > /tmp/strings.txt\n>\n> CREATE TABLE s (t TEXT);\n> COPY s FROM '/tmp/strings.txt';\n> VACUUM FREEZE s;\n> CHECKPOINT;\n> SET work_mem='10GB';\n> SET max_parallel_workers = 0;\n> SET max_parallel_workers_per_gather = 0;\n>\n> SET sort_abbreviated_keys = false;\n> EXPLAIN ANALYZE SELECT t FROM s ORDER BY t COLLATE \"en-US-x-icu\";\n> -- 20875ms\n>\n> SET sort_abbreviated_keys = true;\n> EXPLAIN ANALYZE SELECT t FROM s ORDER BY t COLLATE \"en-US-x-icu\";\n> -- 22931ms\n>\n> Regression for abbreviated keys optimization in this case: 9.8%\n\nThat's interesting. 
Do you have any idea why this happens?\n\nI've been a bit busy the last few days so haven't had a chance to look\nat the test case until now. It seems like it's just a lorum ipsum\ngenerator, except that each line is made to contain a random number of\nwords, and certain letters from the Latin alphabet are replaced with\nother symbols. But why is that a problem for abbreviated keys? The\nmost obvious way for things to go wrong is for the first 8 bytes of\nthe strxfrm() blob to be very low-entropy, but it's not really clear\nto me what about your test case would make that more likely. I guess\nanother explanation could be if having a few non-ASCII characters\nmixed into the string makes strxfrm() a lot slower.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 27 Jan 2023 09:14:34 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: GUCs to control abbreviated sort keys" }, { "msg_contents": "On Thu, Jan 26, 2023 at 3:29 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> On Thu, 2023-01-26 at 22:39 +0100, Peter Eisentraut wrote:\n> > Maybe an easier way to enable or disable it in the source code with a\n> > #define would serve this. Making it a GUC right away seems a bit\n> > heavy-handed. Further exploration and tweaking might well require\n> > further source code changes, so relying on a source code level toggle\n> > would seem appropriate.\n>\n> I am using these GUCs for testing the various collation paths in my\n> collation refactoring branch.\n\nI'm fine with adding the GUC as a developer option. I think that there\nis zero chance of the changes to tuplesort.c having appreciable\noverhead.\n\n> I find them pretty useful, and when I saw a regression, I thought\n> others might think it was useful, too. But if not I'll just leave them\n> in my branch and withdraw from this thread.\n\nI cannot recreate the issue you describe. 
With abbreviated keys, your\nexact test case takes 00:16.620 on my system. Without abbreviated\nkeys, it takes 00:21.255.\n\nTo me it appears to be a moderately good case for abbreviated keys,\nthough certainly not as good as some cases that I've seen -- ~3x\nimprovements are common enough.\n\nAs a point of reference, the same test case with the C collation and\nwith abbreviated keys takes 00:10.822. When I look at the \"trace_sort\"\noutput for the C collation with abbreviated keys, and compare it to\nthe equivalent \"trace_sort\" output for the original \"en-US-x-icu\"\ncollation from your test case, it is clear that the overhead of\ngenerating collated abbreviated keys within ICU is relatively high --\nthe initial scan of the table (which is where we generate all\nabbreviated keys here) takes 4.45 seconds in the ICU case, and only\n1.65 seconds in the \"C\" locale case. I think that you should look into\nthat same difference on your own system, so that we can compare and\ncontrast.\n\nThe underlying issue might well have something to do with the ICU\nversion you're using, or some other detail of your environment. I'm\nusing Debian unstable here. Postgres links to the system ICU, which is\nICU 72.\n\nIt's not impossible that the perl program you wrote produces\nnon-deterministic output, which should be controlled for, since it\nmight just be significant. 
I see this on my system, having run the\nperl program as outlined in your test case:\n\n$ ls -l /tmp/strings.txt\n-rw-r--r-- 1 pg pg 431886574 Jan 27 11:13 /tmp/strings.txt\n$ sha1sum /tmp/strings.txt\n22f60dc12527c215c8e3992e49d31dc531261a83 /tmp/strings.txt\n\nDoes that match what you see on your system?\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Fri, 27 Jan 2023 11:41:25 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: GUCs to control abbreviated sort keys" }, { "msg_contents": "On Fri, 2023-01-27 at 11:41 -0800, Peter Geoghegan wrote:\n> I cannot recreate the issue you describe.\n\nInteresting. For my test:\n\nglibc 2.35 ICU 70.1\ngcc 11.3.0 LLVM 14.0.0\n\n> It's not impossible that the perl program you wrote produces\n> non-deterministic output\n\nIt is non-deterministic, but I tried with two generated files, and got\nsimilar results.\n\nRight now I suspect the ICU version might be the reason. I'll try with\n72.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n", "msg_date": "Fri, 27 Jan 2023 12:34:13 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: GUCs to control abbreviated sort keys" }, { "msg_contents": "On Fri, Jan 27, 2023 at 12:34 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> It is non-deterministic, but I tried with two generated files, and got\n> similar results.\n\nJeff and I coordinated off-list. It turned out that the\nnondeterministic nature of the program to generate test data was\nbehind my initial inability to recreate Jeff's results. Once Jeff\nprovided me with the exact data that he saw the problem with, I was\nable to recreate the problematic case for abbreviated keys.\n\nIt turns out that this was due to aborting abbreviation way too late\nin the process. It would happen relatively late in the process, when\nmore than 50% of all tuples had already had abbreviations generated by\nICU. 
This was a marginal case for abbreviated keys, which is precisely\nwhy it only happened this long into the process. That factor is also\nlikely why I couldn't recreate the problem at first, even though I had\ntest data that was substantially the same as the data required to show\nthe problem.\n\nAttached patch fixes the issue. It teaches varstr_abbrev_abort to do\nsomething similar to every other abbreviated keys abort function: stop\nestimating cardinality entirely (give up on giving up) once there are\na certain number of distinct abbreviated keys, regardless of any other\nfactor.\n\nThis is very closely based on existing code from numeric_abbrev_abort,\nthough I use a cutoff of 10k rather than a cutoff of 100k. This\ndifference is justified by the special considerations for text, where\nwe authoritative comparisons have further optimizations such as\nstrcoll caching and the memcmp equality fast path. It's also required\nto actually fix the test case at hand -- 100k isn't enough to avoid\nthe performance issue Jeff reported.\n\nI think that this should be committed to HEAD only.\n\n-- \nPeter Geoghegan", "msg_date": "Mon, 6 Feb 2023 10:44:50 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: GUCs to control abbreviated sort keys" }, { "msg_contents": "On Fri, Jan 27, 2023 at 12:29 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> I am using these GUCs for testing the various collation paths in my\n> collation refactoring branch.\n\nSpeaking of testing, has anyone ever tried porting Tom's random test\nprogram[1] to ICU?\n\n[1] https://www.postgresql.org/message-id/31913.1458747836@sss.pgh.pa.us\n\n\n", "msg_date": "Tue, 7 Feb 2023 08:22:58 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: GUCs to control abbreviated sort keys" } ]
[ { "msg_contents": "Hi,\n\nOn platforms where we support 128bit integers, we could accelerate division\nwhen the number of digits in the divisor is larger than 8 and less than or\nequal to 16 digits, i.e. when the divisor that fits in a 64-bit integer but would\nnot fit in a 32-bit integer.\n\nThis patch adds div_var_int64(), which is similar to the existing div_var_int(),\nbut accepts a 64-bit divisor instead of a 32-bit divisor.\n\nThe new function is used within div_var() and div_var_fast().\n\nTo measure the effect, we need a volatile wrapper function for numeric_div(),\nto avoid it being cached since it's immutable:\n\nCREATE OR REPLACE FUNCTION numeric_div_volatile(numeric,numeric)\nRETURNS numeric LANGUAGE internal AS 'numeric_div';\n\nWe can then use generate_series() to measure the execution time for lots of\nexecutions. This does not account for the overhead of generate_series() and\ncount(), but that's okay since the overhead is the same in both measurements,\nso the relative difference is still correct.\n\n--\n-- Division when the divisor is 8 digits should be unchanged:\n--\nEXPLAIN ANALYZE\nSELECT count(numeric_div_volatile(\n repeat('1',131071)::numeric,\n repeat('3',8)::numeric\n)) FROM generate_series(1,1e4);\n-- Execution Time: 1633.722 ms (HEAD)\n-- Execution Time: 1680.228 ms (div_var_int64.patch)\n\n--\n-- Division when the divisor is 9 digits should be faster:\n--\nEXPLAIN ANALYZE\nSELECT count(numeric_div_volatile(\n repeat('1',131071)::numeric,\n repeat('3',9)::numeric\n)) FROM generate_series(1,1e4);\n-- Execution Time: 5444.755 ms (HEAD)\n-- Execution Time: 1604.967 ms (div_var_int64.patch)\n\n--\n-- Division when the divisor is 16 digits should also be faster:\n--\nEXPLAIN ANALYZE\nSELECT count(numeric_div_volatile(\n repeat('1',131071)::numeric,\n repeat('3',16)::numeric\n)) FROM generate_series(1,1e4);\n-- Execution Time: 6072.683 ms (HEAD)\n-- Execution Time: 3215.686 ms (div_var_int64.patch)\n\n--\n-- Division when the divisor is 17 
digits should be unchanged:\n--\nEXPLAIN ANALYZE\nSELECT count(numeric_div_volatile(\n repeat('1',131071)::numeric,\n repeat('3',17)::numeric\n)) FROM generate_series(1,1e4);\n-- Execution Time: 6948.150 ms (HEAD)\n-- Execution Time: 7010.544 ms (div_var_int64.patch)\n\n--\n-- Same tests as above, but with a single digit dividend,\n-- and 1e7 executions instead of just 1e4.\n--\n\n--\n-- Division when the divisor is 8 digits should be unchanged:\n--\nEXPLAIN ANALYZE\nSELECT count(numeric_div_volatile(\n 1,\n repeat('3',8)::numeric\n)) FROM generate_series(1,1e7);\n-- Execution Time: 1827.567 ms (HEAD)\n-- Execution Time: 1828.029 ms (div_var_int64.patch)\n\n--\n-- Division when the divisor is 9 digits should be faster:\n--\nEXPLAIN ANALYZE\nSELECT count(numeric_div_volatile(\n 1,\n repeat('3',9)::numeric\n)) FROM generate_series(1,1e7);\n-- Execution Time: 2314.851 ms (HEAD)\n-- Execution Time: 1886.170 ms (div_var_int64.patch)\n\n--\n-- Division when the divisor is 16 digits should also be faster:\n--\nEXPLAIN ANALYZE\nSELECT count(numeric_div_volatile(\n 1,\n repeat('3',16)::numeric\n)) FROM generate_series(1,1e7);\n-- Execution Time: 2244.009 ms (HEAD)\n-- Execution Time: 1968.148 ms (div_var_int64.patch)\n\n--\n-- Division when the divisor is 17 digits should be unchanged:\n--\nEXPLAIN ANALYZE\nSELECT count(numeric_div_volatile(\n 1,\n repeat('3',17)::numeric\n)) FROM generate_series(1,1e7);\n-- Execution Time: 2334.896 ms (HEAD)\n-- Execution Time: 2338.141 ms (div_var_int64.patch)\n\nThe graph below shows the effect on execution time for numeric_div(),\nand also looks at numeric_mod() since it's a heavy user of numeric_div().\n\nThe graph was produced by generating 100 random numeric integer values\nfor each combination of number of dividend/divisor digits between 1 to 20.\n\nIn total, that's 20*20*100*2=80000 test values.\n\nAs expected, the ceiling for the fast short division is lifted from 8 to 16 divisor digits,\nand speedups for modulus is noticed in the 
same region.\n\nThe graph was produced using results from pg-timeit [1] and R for the plotting.\n\n\n[1] https://github.com/joelonsql/pg-timeit\n\n/Joel", "msg_date": "Sun, 22 Jan 2023 09:41:01 -0400", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "[PATCH] Use 128-bit math to accelerate numeric division,\n when 8 < divisor\n digits <= 16" }, { "msg_contents": "On Sun, 22 Jan 2023 at 13:42, Joel Jacobson <joel@compiler.org> wrote:\n>\n> Hi,\n>\n> On platforms where we support 128bit integers, we could accelerate division\n> when the number of digits in the divisor is larger than 8 and less than or\n> equal to 16 digits, i.e. when the divisor that fits in a 64-bit integer but would\n> not fit in a 32-bit integer.\n>\n\nSeems like a reasonable idea, with some pretty decent gains.\n\nNote, however, that for a divisor having fewer than 5 or 6 digits,\nit's now significantly slower because it's forced to go through\ndiv_var_int64() instead of div_var_int() for all small divisors. So\nthe var2ndigits <= 2 case needs to come first.\n\nThe implementation of div_var_int64() should be in an #ifdef HAVE_INT128 block.\n\nIn div_var_int64(), s/ULONG_MAX/PG_UINT64_MAX/\n\nRegards,\nDean\n\n\n", "msg_date": "Sun, 22 Jan 2023 15:06:50 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Use 128-bit math to accelerate numeric division, when 8 <\n divisor digits <= 16" }, { "msg_contents": "On Sun, Jan 22, 2023, at 11:06, Dean Rasheed wrote:\n> Seems like a reasonable idea, with some pretty decent gains.\n>\n> Note, however, that for a divisor having fewer than 5 or 6 digits,\n> it's now significantly slower because it's forced to go through\n> div_var_int64() instead of div_var_int() for all small divisors. 
So\n> the var2ndigits <= 2 case needs to come first.\n\nCan you give a measurable example of when the patch\nthe way it's written is significantly slower for a divisor having\nfewer than 5 or 6 digits, on some platform?\n\nI can't detect any difference at all at my MacBook Pro M1 Max for this example:\nEXPLAIN ANALYZE SELECT count(numeric_div_volatile(1,3333)) FROM generate_series(1,1e8);\n\nI did write the code like you suggest first, but changed it,\nsince I realised the extra \"else if\" needed could be eliminated,\nand thought div_var_int64() wouldn't be slower than div_var_int() since\nI thought 64-bit instructions in general are as fast as 32-bit instructions,\non 64-bit platforms.\n\nI'm not suggesting your claim is incorrect, I'm just trying to understand\nand verify it experimentally.\n\n> The implementation of div_var_int64() should be in an #ifdef HAVE_INT128 block.\n>\n> In div_var_int64(), s/ULONG_MAX/PG_UINT64_MAX/\n\nOK, thanks, I'll fix, but I'll await your feedback first on the above.\n\n/Joel\n\n\n", "msg_date": "Sun, 22 Jan 2023 11:41:32 -0400", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Use 128-bit math to accelerate numeric division,\n when 8 < divisor\n digits <= 16" }, { "msg_contents": "On Sun, 22 Jan 2023 at 15:41, Joel Jacobson <joel@compiler.org> wrote:\n>\n> On Sun, Jan 22, 2023, at 11:06, Dean Rasheed wrote:\n> > Seems like a reasonable idea, with some pretty decent gains.\n> >\n> > Note, however, that for a divisor having fewer than 5 or 6 digits,\n> > it's now significantly slower because it's forced to go through\n> > div_var_int64() instead of div_var_int() for all small divisors. 
So\n> > the var2ndigits <= 2 case needs to come first.\n>\n> Can you give a measurable example of when the patch\n> the way it's written is significantly slower for a divisor having\n> fewer than 5 or 6 digits, on some platform?\n>\n\nI just modified the previous test you posted:\n\n\\timing on\nSELECT count(numeric_div_volatile(1e131071,123456)) FROM generate_series(1,1e4);\n\nTime: 2048.060 ms (00:02.048) -- HEAD\nTime: 2422.720 ms (00:02.423) -- With patch\n\n> I did write the code like you suggest first, but changed it,\n> since I realised the extra \"else if\" needed could be eliminated,\n> and thought div_var_int64() wouldn't be slower than div_var_int() since\n> I thought 64-bit instructions in general are as fast as 32-bit instructions,\n> on 64-bit platforms.\n>\n\nApparently it can make a difference. Probably something to do with\nhaving less data to move around. I remember noticing that when I wrote\ndiv_var_int(), which is why I split it into 2 branches in that way.\n\nRegards,\nDean\n\n\n", "msg_date": "Sun, 22 Jan 2023 17:25:58 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Use 128-bit math to accelerate numeric division, when 8 <\n divisor digits <= 16" }, { "msg_contents": "On Sun, Jan 22, 2023, at 14:25, Dean Rasheed wrote:\n> I just modified the previous test you posted:\n>\n> \\timing on\n> SELECT count(numeric_div_volatile(1e131071,123456)) FROM generate_series(1,1e4);\n>\n> Time: 2048.060 ms (00:02.048) -- HEAD\n> Time: 2422.720 ms (00:02.423) -- With patch\n>\n...\n>\n> Apparently it can make a difference. Probably something to do with\n> having less data to move around. I remember noticing that when I wrote\n> div_var_int(), which is why I split it into 2 branches in that way.\n\nMany thanks for feedback. Nice catch! 
New patch attached.\n\nInteresting, I'm not able to reproduce this on my MacBook Pro M1 Max:\n\nSELECT version;\nPostgreSQL 16devel on aarch64-apple-darwin22.2.0, compiled by Apple clang version 14.0.0 (clang-1400.0.29.202), 64-bit\n\nSELECT count(numeric_div_volatile(1e131071,123456)) FROM generate_series(1,1e4);\nTime: 1569.730 ms (00:01.570) - HEAD\nTime: 1569.918 ms (00:01.570) -- div_var_int64.patch\nTime: 1569.038 ms (00:01.569) -- div_var_int64-2.patch\n\nJust curious, what platform are you on?\n\n/Joel", "msg_date": "Sun, 22 Jan 2023 19:48:37 -0300", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Use 128-bit math to accelerate numeric division,\n when 8 < divisor\n digits <= 16" }, { "msg_contents": "On Sun, Jan 22, 2023 at 10:42 PM Joel Jacobson <joel@compiler.org> wrote:\n\n> I did write the code like you suggest first, but changed it,\n> since I realised the extra \"else if\" needed could be eliminated,\n> and thought div_var_int64() wouldn't be slower than div_var_int() since\n> I thought 64-bit instructions in general are as fast as 32-bit\ninstructions,\n> on 64-bit platforms.\n\nAccording to Agner's instruction tables [1], integer division on Skylake\n(for example) has a latency of 26 cycles for 32-bit operands, and 42-95\ncycles for 64-bit.\n\n[1] https://www.agner.org/optimize/instruction_tables.pdf\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Sun, Jan 22, 2023 at 10:42 PM Joel Jacobson <joel@compiler.org> wrote:> I did write the code like you suggest first, but changed it,> since I realised the extra \"else if\" needed could be eliminated,> and thought div_var_int64() wouldn't be slower than div_var_int() since> I thought 64-bit instructions in general are as fast as 32-bit instructions,> on 64-bit platforms.According to Agner's instruction tables [1], integer division on Skylake (for example) has a latency of 26 cycles for 32-bit operands, and 42-95 cycles for 64-bit.[1] 
https://www.agner.org/optimize/instruction_tables.pdf--John NaylorEDB: http://www.enterprisedb.com", "msg_date": "Mon, 23 Jan 2023 12:06:40 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Use 128-bit math to accelerate numeric division, when 8 <\n divisor digits <= 16" }, { "msg_contents": "On Mon, 23 Jan 2023 at 05:06, John Naylor <john.naylor@enterprisedb.com> wrote:\n>\n> According to Agner's instruction tables [1], integer division on Skylake (for example) has a latency of 26 cycles for 32-bit operands, and 42-95 cycles for 64-bit.\n>\n> [1] https://www.agner.org/optimize/instruction_tables.pdf\n>\n\nThanks, that's a very useful reference.\n\n(And I do indeed have one of those CPUs, which explains what I was seeing.)\n\nRegards,\nDean\n\n\n", "msg_date": "Mon, 23 Jan 2023 12:02:52 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Use 128-bit math to accelerate numeric division, when 8 <\n divisor digits <= 16" }, { "msg_contents": "On Sun, 22 Jan 2023 at 22:49, Joel Jacobson <joel@compiler.org> wrote:\n>\n> Many thanks for feedback. Nice catch! New patch attached.\n>\n\nCool, that resolves the performance issues I was seeing for smaller\ndivisors (which also had a noticeable impact on the numeric_big\nregression test).\n\nAfter some more testing, the gains look good to me, and I wasn't able\nto find any cases where it made things slower, so I've gone ahead and\npushed it.\n\nRegards,\nDean\n\n\n", "msg_date": "Mon, 23 Jan 2023 12:04:58 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Use 128-bit math to accelerate numeric division, when 8 <\n divisor digits <= 16" } ]
[ { "msg_contents": "pg_stat_progress_analyze was added in v13 (a166d408e).\n\nFor tables with inheritance children, do_analyze_rel() and\nacquire_sample_rows() are called twice. The first time through,\npgstat_progress_start_command() has memset() the progress array to zero.\n\nBut the 2nd time, ANALYZE_BLOCKS_DONE is already set from the previous\ncall, and BLOCKS_TOTAL can be set to some lower value (and in any case a\nvalue unrelated to the pre-existing value of BLOCKS_DONE). So the\nprogress report briefly shows a bogus combination of values and, with\nthese assertions, fails regression tests in master and v13, unless\nBLOCKS_DONE is first zeroed.\n\n| Core was generated by `postgres: pryzbyj regression [local] VACUUM '.\n| ...\n| #5 0x0000559a1c9fbbcc in ExceptionalCondition (conditionName=conditionName@entry=0x559a1cb68068 \"a[PROGRESS_ANALYZE_BLOCKS_DONE] <= a[PROGRESS_ANALYZE_BLOCKS_TOTAL]\", \n| ...\n| #16 0x0000563165cc7cfe in exec_simple_query (query_string=query_string@entry=0x563167cad0c8 \"VACUUM ANALYZE stxdinh, stxdinh1, stxdinh2;\") at ../src/backend/tcop/postgres.c:1237\n| ...\n| (gdb) p MyBEEntry->st_progress_param[1]\n| $1 = 5\n| (gdb) p MyBEEntry->st_progress_param[2]\n| $2 = 9\n\nBTW, I found this bug as well as the COPY progress bug I reported [0]\nwhile testing the CREATE INDEX progress bug reported by Ilya. 
It seems\nlike the progress infrastructure should have some checks added.\n\n[0] https://www.postgresql.org/message-id/flat/20230119054703.GB13860@telsasoft.com\n\ndiff --git a/src/backend/commands/analyze.c b/src/backend/commands/analyze.c\nindex c86e690980e..96710b84558 100644\n--- a/src/backend/commands/analyze.c\n+++ b/src/backend/commands/analyze.c\n@@ -1145,6 +1145,12 @@ acquire_sample_rows(Relation onerel, int elevel,\n \tTableScanDesc scan;\n \tBlockNumber nblocks;\n \tBlockNumber blksdone = 0;\n+\tint64\t\tprogress_vals[2] = {0};\n+\tint const\tprogress_inds[2] = {\n+\t\tPROGRESS_ANALYZE_BLOCKS_DONE,\n+\t\tPROGRESS_ANALYZE_BLOCKS_TOTAL\n+\t};\n+\n #ifdef USE_PREFETCH\n \tint\t\t\tprefetch_maximum = 0;\t/* blocks to prefetch if enabled */\n \tBlockSamplerData prefetch_bs;\n@@ -1169,8 +1175,8 @@ acquire_sample_rows(Relation onerel, int elevel,\n #endif\n \n \t/* Report sampling block numbers */\n-\tpgstat_progress_update_param(PROGRESS_ANALYZE_BLOCKS_TOTAL,\n-\t\t\t\t\t\t\t\t nblocks);\n+\tprogress_vals[1] = nblocks;\n+\tpgstat_progress_update_multi_param(2, progress_inds, progress_vals);\n \n \t/* Prepare for sampling rows */\n \treservoir_init_selection_state(&rstate, targrows);\ndiff --git a/src/backend/utils/activity/backend_progress.c b/src/backend/utils/activity/backend_progress.c\nindex d96af812b19..05593fb13cb 100644\n--- a/src/backend/utils/activity/backend_progress.c\n+++ b/src/backend/utils/activity/backend_progress.c\n@@ -10,6 +10,7 @@\n */\n #include \"postgres.h\"\n \n+#include \"commands/progress.h\"\n #include \"port/atomics.h\"\t\t/* for memory barriers */\n #include \"utils/backend_progress.h\"\n #include \"utils/backend_status.h\"\n@@ -37,6 +38,83 @@ pgstat_progress_start_command(ProgressCommandType cmdtype, Oid relid)\n \tPGSTAT_END_WRITE_ACTIVITY(beentry);\n }\n \n+/*\n+ * Check for consistency of progress data (current < total).\n+ *\n+ * Check during pgstat_progress_updates_*() rather than only from\n+ * pgstat_progress_end_command() 
to catch issues with uninitialized/stale data\n+ * from previous progress commands.\n+ *\n+ * If a command fails due to interrupt or error, the values may be less than\n+ * the expected final value.\n+ */\n+static void\n+pgstat_progress_asserts(void)\n+{\n+\tvolatile PgBackendStatus *beentry = MyBEEntry;\n+\tvolatile int64\t\t\t *a = beentry->st_progress_param;\n+\n+\tswitch (beentry->st_progress_command)\n+\t{\n+\tcase PROGRESS_COMMAND_VACUUM:\n+\t\tAssert(a[PROGRESS_VACUUM_HEAP_BLKS_SCANNED] <=\n+\t\t\t\ta[PROGRESS_VACUUM_TOTAL_HEAP_BLKS]);\n+\t\tAssert(a[PROGRESS_VACUUM_HEAP_BLKS_VACUUMED] <=\n+\t\t\t\ta[PROGRESS_VACUUM_TOTAL_HEAP_BLKS]);\n+\t\tAssert(a[PROGRESS_VACUUM_NUM_DEAD_TUPLES] <=\n+\t\t\t\ta[PROGRESS_VACUUM_MAX_DEAD_TUPLES]);\n+\t\tbreak;\n+\n+\tcase PROGRESS_COMMAND_ANALYZE:\n+\t\tAssert(a[PROGRESS_ANALYZE_BLOCKS_DONE] <=\n+\t\t\t\ta[PROGRESS_ANALYZE_BLOCKS_TOTAL]);\n+\t\tAssert(a[PROGRESS_ANALYZE_EXT_STATS_COMPUTED] <=\n+\t\t\t\ta[PROGRESS_ANALYZE_EXT_STATS_TOTAL]);\n+\t\tAssert(a[PROGRESS_ANALYZE_CHILD_TABLES_DONE] <=\n+\t\t\t\ta[PROGRESS_ANALYZE_CHILD_TABLES_TOTAL]);\n+\t\tbreak;\n+\n+\tcase PROGRESS_COMMAND_CLUSTER:\n+\t\tAssert(a[PROGRESS_CLUSTER_HEAP_BLKS_SCANNED] <=\n+\t\t\t\ta[PROGRESS_CLUSTER_TOTAL_HEAP_BLKS]);\n+\t\t/* fall through because CLUSTER rebuilds indexes */\n+\tcase PROGRESS_COMMAND_CREATE_INDEX:\n+\t\tAssert(a[PROGRESS_CREATEIDX_TUPLES_DONE] <=\n+\t\t\t\ta[PROGRESS_CREATEIDX_TUPLES_TOTAL]);\n+\t\tAssert(a[PROGRESS_CREATEIDX_PARTITIONS_DONE] <=\n+\t\t\t\ta[PROGRESS_CREATEIDX_PARTITIONS_TOTAL]);\n+\t\tbreak;\n+\n+\tcase PROGRESS_COMMAND_BASEBACKUP:\n+\t\t/* progress reporting is optional for these */\n+\t\tif (a[PROGRESS_BASEBACKUP_BACKUP_TOTAL] >= 0)\n+\t\t{\n+\t\t\tAssert(a[PROGRESS_BASEBACKUP_BACKUP_STREAMED] <=\n+\t\t\t\t\ta[PROGRESS_BASEBACKUP_BACKUP_TOTAL]);\n+\t\t\tAssert(a[PROGRESS_BASEBACKUP_TBLSPC_STREAMED] <=\n+\t\t\t\t\ta[PROGRESS_BASEBACKUP_TBLSPC_TOTAL]);\n+\t\t}\n+\t\tbreak;\n+\n+#if 0\n+\tcase 
PROGRESS_COMMAND_COPY:\n+// This currently fails file_fdw tests, since pgstat_prorgress evidently fails\n+// to support simultaneous copy commands, as happens during JOIN.\n+\t\t/* bytes progress is not available in all cases */\n+\t\tif (a[PROGRESS_COPY_BYTES_TOTAL] > 0)\n+\t\t\t// Assert(a[PROGRESS_COPY_BYTES_PROCESSED] <= a[PROGRESS_COPY_BYTES_TOTAL]);\n+\t\t\tif (a[PROGRESS_COPY_BYTES_PROCESSED] > a[PROGRESS_COPY_BYTES_TOTAL])\n+\t\t\t\telog(WARNING, \"PROGRESS_COPY_BYTES_PROCESSED %ld %ld\",\n+\t\t\t\t\t\t\ta[PROGRESS_COPY_BYTES_PROCESSED],\n+\t\t\t\t\t\t\ta[PROGRESS_COPY_BYTES_TOTAL]);\n+#endif\n+\t\tbreak;\n+\n+\tcase PROGRESS_COMMAND_INVALID:\n+\t\tbreak; /* Do nothing */\n+\t}\n+}\n+\n /*-----------\n * pgstat_progress_update_param() -\n *\n@@ -56,6 +134,8 @@ pgstat_progress_update_param(int index, int64 val)\n \tPGSTAT_BEGIN_WRITE_ACTIVITY(beentry);\n \tbeentry->st_progress_param[index] = val;\n \tPGSTAT_END_WRITE_ACTIVITY(beentry);\n+\n+\tpgstat_progress_asserts();\n }\n \n /*-----------\n@@ -85,6 +165,8 @@ pgstat_progress_update_multi_param(int nparam, const int *index,\n \t}\n \n \tPGSTAT_END_WRITE_ACTIVITY(beentry);\n+\n+\tpgstat_progress_asserts();\n }\n \n /*-----------\n\n\n", "msg_date": "Sun, 22 Jan 2023 10:23:45 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "bug: ANALYZE progress report with inheritance tables" }, { "msg_contents": "> On 22 Jan 2023, at 17:23, Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> diff --git a/src/backend/commands/analyze.c b/src/backend/commands/analyze.c\n> index c86e690980e..96710b84558 100644\n> ...\n\nThis CF entry fails to build in the CFBot since the patch isn't attached to the\nemail, and the CFBot can't extract inline patches from the mail body. 
Can you\nplease submit this as an attached (and rebased) patch to make sure we get\nautomated testing on it?\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 1 Aug 2023 23:03:03 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: bug: ANALYZE progress report with inheritance tables" }, { "msg_contents": "On 22/01/2023 18:23, Justin Pryzby wrote:\n> pg_stat_progress_analyze was added in v13 (a166d408e).\n> \n> For tables with inheritance children, do_analyze_rel() and\n> acquire_sample_rows() are called twice. The first time through,\n> pgstat_progress_start_command() has memset() the progress array to zero.\n> \n> But the 2nd time, ANALYZE_BLOCKS_DONE is already set from the previous\n> call, and BLOCKS_TOTAL can be set to some lower value (and in any case a\n> value unrelated to the pre-existing value of BLOCKS_DONE). So the\n> progress report briefly shows a bogus combination of values and, with\n> these assertions, fails regression tests in master and v13, unless\n> BLOCKS_DONE is first zeroed.\n\nGood catch!\n\nI think the counts need to be reset even earlier, in \nacquire_inherited_sample_rows(), at the same time that we update \nPROGRESS_ANALYZE_CURRENT_CHILD_TABLE_RELID. See attached patch. \nOtherwise, there's a brief moment where we have already updated the \nchild table ID, but the PROGRESS_ANALYZE_BLOCKS_TOTAL \nPROGRESS_ANALYZE_BLOCKS_DONE still show the counts from the previous \nchild table. And if it's a foreign table, the FDW's sampling function \nmight not update the progress report at all, in which case the old \nvalues will be displayed until the table is fully processed.\n\nI appreciate the assertions you added, that made it easy to reproduce \nthe problem. I'm inclined to not commit that though. 
It seems like a \nmodularity violation for the code in backend_progress.c to have such \nintimate knowledge of what the different counters mean.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)", "msg_date": "Thu, 28 Sep 2023 19:06:21 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: bug: ANALYZE progress report with inheritance tables" }, { "msg_contents": "On 28/09/2023 19:06, Heikki Linnakangas wrote:\n> On 22/01/2023 18:23, Justin Pryzby wrote:\n>> pg_stat_progress_analyze was added in v13 (a166d408e).\n>>\n>> For tables with inheritance children, do_analyze_rel() and\n>> acquire_sample_rows() are called twice. The first time through,\n>> pgstat_progress_start_command() has memset() the progress array to zero.\n>>\n>> But the 2nd time, ANALYZE_BLOCKS_DONE is already set from the previous\n>> call, and BLOCKS_TOTAL can be set to some lower value (and in any case a\n>> value unrelated to the pre-existing value of BLOCKS_DONE). So the\n>> progress report briefly shows a bogus combination of values and, with\n>> these assertions, fails regression tests in master and v13, unless\n>> BLOCKS_DONE is first zeroed.\n> \n> Good catch!\n> \n> I think the counts need to be reset even earlier, in\n> acquire_inherited_sample_rows(), at the same time that we update\n> PROGRESS_ANALYZE_CURRENT_CHILD_TABLE_RELID. See attached patch.\n> Otherwise, there's a brief moment where we have already updated the\n> child table ID, but the PROGRESS_ANALYZE_BLOCKS_TOTAL\n> PROGRESS_ANALYZE_BLOCKS_DONE still show the counts from the previous\n> child table. And if it's a foreign table, the FDW's sampling function\n> might not update the progress report at all, in which case the old\n> values will be displayed until the table is fully processed.\n\nCommitted and backported. 
Thank you!\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Sat, 30 Sep 2023 17:17:41 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: bug: ANALYZE progress report with inheritance tables" } ]
[ { "msg_contents": "On my machine, the src/test/subscription/t/002_types.pl test\nusually takes right about 1.5 seconds:\n\n$ time make check PROVE_FLAGS=--timer PROVE_TESTS=t/002_types.pl \n...\n[14:22:12] t/002_types.pl .. ok 1550 ms ( 0.00 usr 0.00 sys + 0.70 cusr 0.25 csys = 0.95 CPU)\n[14:22:13]\n\nI noticed however that sometimes (at least one try in ten, for me)\nit takes about 2.5 seconds:\n\n[14:22:16] t/002_types.pl .. ok 2591 ms ( 0.00 usr 0.00 sys + 0.69 cusr 0.28 csys = 0.97 CPU)\n[14:22:18]\n\nand I've even seen 3.5 seconds. I dug into this and eventually\nidentified the cause: it's a deadlock between a subscription's apply\nworker and a tablesync worker that it's spawned. Sometimes the apply\nworker calls wait_for_relation_state_change (to wait for the tablesync\nworker to finish) while it's holding a lock on pg_replication_origin.\nIf that's the case, then when the tablesync worker reaches\nprocess_syncing_tables_for_sync it is able to perform\nUpdateSubscriptionRelState and reach the transaction commit below\nthat; but when it tries to do replorigin_drop_by_name a little further\ndown, it blocks on acquiring ExclusiveLock on pg_replication_origin.\nSo we have an undetected deadlock. We escape that because\nwait_for_relation_state_change has a 1-second timeout, after which\nit rechecks GetSubscriptionRelState and is able to see the committed\nrelation state change; so it continues, and eventually releases its\ntransaction and the lock, permitting the tablesync worker to finish.\n\nI've not tracked down the exact circumstances in which the apply\nworker ends up holding a problematic lock, but it seems likely\nthat it corresponds to cases where its main loop has itself called\nreplorigin_drop_by_name, a bit further up, for some other concurrent\ntablesync operation. 
(In all the cases I've traced through, the apply\nworker is herding multiple tablesync workers when this happens.)\n\nI experimented with having the apply worker release its locks\nbefore waiting for the tablesync worker, as attached. This passes\ncheck-world and it seems to eliminate the test runtime instability,\nbut I wonder whether it's semantically correct. This whole business\nof taking table-wide ExclusiveLock on pg_replication_origin looks\nlike a horrid kluge that we should try to get rid of, not least\nbecause I don't see any clear documentation of what hazard it's\ntrying to prevent.\n\nAnother thing that has a bad smell about it is the fact that\nprocess_syncing_tables_for_sync uses two transactions in the first\nplace. There's a comment there claiming that it's for crash safety,\nbut I can't help suspecting it's really because this case becomes a\nhard deadlock without that mid-function commit.\n\nIt's not great in any case that the apply worker can move on in\nthe belief that the tablesync worker is done when in fact the latter\nstill has catalog state updates to make. And I wonder what we're\ndoing with having both of them calling replorigin_drop_by_name\n... shouldn't that responsibility belong to just one of them?\n\nSo I think this whole area deserves a hard look, and I'm not at\nall sure that what's attached is a good solution.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 22 Jan 2023 14:59:34 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Deadlock between logrep apply worker and tablesync worker" }, { "msg_contents": "On Mon, Jan 23, 2023 at 1:29 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> On my machine, the src/test/subscription/t/002_types.pl test\n> usually takes right about 1.5 seconds:\n>\n> $ time make check PROVE_FLAGS=--timer PROVE_TESTS=t/002_types.pl\n> ...\n> [14:22:12] t/002_types.pl .. 
ok 1550 ms ( 0.00 usr 0.00 sys + 0.70 cusr 0.25 csys = 0.95 CPU)\n> [14:22:13]\n>\n> I noticed however that sometimes (at least one try in ten, for me)\n> it takes about 2.5 seconds:\n>\n> [14:22:16] t/002_types.pl .. ok 2591 ms ( 0.00 usr 0.00 sys + 0.69 cusr 0.28 csys = 0.97 CPU)\n> [14:22:18]\n>\n> and I've even seen 3.5 seconds. I dug into this and eventually\n> identified the cause: it's a deadlock between a subscription's apply\n> worker and a tablesync worker that it's spawned. Sometimes the apply\n> worker calls wait_for_relation_state_change (to wait for the tablesync\n> worker to finish) while it's holding a lock on pg_replication_origin.\n> If that's the case, then when the tablesync worker reaches\n> process_syncing_tables_for_sync it is able to perform\n> UpdateSubscriptionRelState and reach the transaction commit below\n> that; but when it tries to do replorigin_drop_by_name a little further\n> down, it blocks on acquiring ExclusiveLock on pg_replication_origin.\n> So we have an undetected deadlock. We escape that because\n> wait_for_relation_state_change has a 1-second timeout, after which\n> it rechecks GetSubscriptionRelState and is able to see the committed\n> relation state change; so it continues, and eventually releases its\n> transaction and the lock, permitting the tablesync worker to finish.\n>\n> I've not tracked down the exact circumstances in which the apply\n> worker ends up holding a problematic lock, but it seems likely\n> that it corresponds to cases where its main loop has itself called\n> replorigin_drop_by_name, a bit further up, for some other concurrent\n> tablesync operation. 
(In all the cases I've traced through, the apply\n> worker is herding multiple tablesync workers when this happens.)\n>\n> I experimented with having the apply worker release its locks\n> before waiting for the tablesync worker, as attached.\n>\n\nI don't see any problem with your proposed change but I was wondering\nif it would be better to commit the transaction and release locks\nimmediately after performing the replication origin drop? By doing\nthat, we will minimize the amount of time the transaction holds the\nlock.\n\n> This passes\n> check-world and it seems to eliminate the test runtime instability,\n> but I wonder whether it's semantically correct. This whole business\n> of taking table-wide ExclusiveLock on pg_replication_origin looks\n> like a horrid kluge that we should try to get rid of, not least\n> because I don't see any clear documentation of what hazard it's\n> trying to prevent.\n>\n\nIIRC, this is done to prevent concurrent drops of origin drop say by\nexposed API pg_replication_origin_drop(). See the discussion in [1]\nrelated to it. If we want we can optimize it so that we can acquire\nthe lock on the specific origin as mentioned in comments\nreplorigin_drop_by_name() but it was not clear that this operation\nwould be frequent enough.\n\n[1] - https://www.postgresql.org/message-id/CAHut%2BPuW8DWV5fskkMWWMqzt-x7RPcNQOtJQBp6SdwyRghCk7A%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 23 Jan 2023 10:51:43 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Deadlock between logrep apply worker and tablesync worker" }, { "msg_contents": "On Mon, Jan 23, 2023 at 1:29 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Another thing that has a bad smell about it is the fact that\n> process_syncing_tables_for_sync uses two transactions in the first\n> place. 
There's a comment there claiming that it's for crash safety,\n> but I can't help suspecting it's really because this case becomes a\n> hard deadlock without that mid-function commit.\n>\n> It's not great in any case that the apply worker can move on in\n> the belief that the tablesync worker is done when in fact the latter\n> still has catalog state updates to make. And I wonder what we're\n> doing with having both of them calling replorigin_drop_by_name\n> ... shouldn't that responsibility belong to just one of them?\n>\n\nOriginally, it was being dropped at one place only (via tablesync\nworker) but we found a race condition as mentioned in the comments in\nprocess_syncing_tables_for_sync() before the start of the second\ntransaction which leads to this change. See the report and discussion\nabout that race condition in the email [1].\n\n[1] - https://www.postgresql.org/message-id/CAD21AoAw0Oofi4kiDpJBOwpYyBBBkJj=sLUOn4Gd2GjUAKG-fw@mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 23 Jan 2023 11:38:51 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Deadlock between logrep apply worker and tablesync worker" }, { "msg_contents": "On Mon, 23 Jan 2023 at 10:52, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> IIRC, this is done to prevent concurrent drops of origin drop say by\n> exposed API pg_replication_origin_drop(). See the discussion in [1]\n> related to it. If we want we can optimize it so that we can acquire\n> the lock on the specific origin as mentioned in comments\n> replorigin_drop_by_name() but it was not clear that this operation\n> would be frequent enough.\n\nHere is an attached patch to lock the replication origin record using\nLockSharedObject instead of locking pg_replication_origin relation in\nExclusiveLock mode. 
Now tablesync worker will wait only if the\ntablesync worker is trying to drop the same replication origin which\nhas already been dropped by the apply worker, the other tablesync\nworkers will be able to successfully drop the replication origin\nwithout any wait.\n\nRegards,\nVignesh", "msg_date": "Fri, 27 Jan 2023 15:45:04 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Deadlock between logrep apply worker and tablesync worker" }, { "msg_contents": "On Fri, Jan 27, 2023 at 3:45 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Mon, 23 Jan 2023 at 10:52, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > IIRC, this is done to prevent concurrent drops of origin drop say by\n> > exposed API pg_replication_origin_drop(). See the discussion in [1]\n> > related to it. If we want we can optimize it so that we can acquire\n> > the lock on the specific origin as mentioned in comments\n> > replorigin_drop_by_name() but it was not clear that this operation\n> > would be frequent enough.\n>\n> Here is an attached patch to lock the replication origin record using\n> LockSharedObject instead of locking pg_replication_origin relation in\n> ExclusiveLock mode. Now tablesync worker will wait only if the\n> tablesync worker is trying to drop the same replication origin which\n> has already been dropped by the apply worker, the other tablesync\n> workers will be able to successfully drop the replication origin\n> without any wait.\n>\n\nThere is a code in the function replorigin_drop_guts() that uses the\nfunctionality introduced by replorigin_exists(). 
Can we reuse this\nfunction for the same?\n\nAlso, it would be good if you can share the numbers for different runs\nof \"src/test/subscription/t/002_types.pl\" before and after the patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 27 Jan 2023 17:45:57 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Deadlock between logrep apply worker and tablesync worker" }, { "msg_contents": "On Friday, January 27, 2023 8:16 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> \r\n> On Fri, Jan 27, 2023 at 3:45 PM vignesh C <vignesh21@gmail.com> wrote:\r\n> >\r\n> > On Mon, 23 Jan 2023 at 10:52, Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > >\r\n> > > IIRC, this is done to prevent concurrent drops of origin drop say by\r\n> > > exposed API pg_replication_origin_drop(). See the discussion in [1]\r\n> > > related to it. If we want we can optimize it so that we can acquire\r\n> > > the lock on the specific origin as mentioned in comments\r\n> > > replorigin_drop_by_name() but it was not clear that this operation\r\n> > > would be frequent enough.\r\n> >\r\n> > Here is an attached patch to lock the replication origin record using\r\n> > LockSharedObject instead of locking pg_replication_origin relation in\r\n> > ExclusiveLock mode. Now tablesync worker will wait only if the\r\n> > tablesync worker is trying to drop the same replication origin which\r\n> > has already been dropped by the apply worker, the other tablesync\r\n> > workers will be able to successfully drop the replication origin\r\n> > without any wait.\r\n> >\r\n> \r\n> There is a code in the function replorigin_drop_guts() that uses the\r\n> functionality introduced by replorigin_exists(). 
Can we reuse this function for\r\n> the same?\r\n\r\nMaybe we can use SearchSysCacheExists1 to check the existence instead of\r\nadding a new function.\r\n\r\nOne comment about the patch.\r\n\r\n@@ -430,23 +445,21 @@ replorigin_drop_by_name(const char *name, bool missing_ok, bool nowait)\r\n...\r\n+\t/* Drop the replication origin if it has not been dropped already */\r\n+\tif (replorigin_exists(roident))\r\n \t\treplorigin_drop_guts(rel, roident, nowait);\r\n\r\nIf developer pass missing_ok as false, should we report an ERROR here\r\ninstead of silently return ?\r\n\r\nBest Regards,\r\nHou zj\r\n", "msg_date": "Sat, 28 Jan 2023 04:06:26 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Deadlock between logrep apply worker and tablesync worker" }, { "msg_contents": "On Sat, Jan 28, 2023 at 9:36 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Friday, January 27, 2023 8:16 PM Amit Kapila <amit.kapila16@gmail.com>\n> >\n> > On Fri, Jan 27, 2023 at 3:45 PM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > On Mon, 23 Jan 2023 at 10:52, Amit Kapila <amit.kapila16@gmail.com>\n> > wrote:\n> > > >\n> > > > IIRC, this is done to prevent concurrent drops of origin drop say by\n> > > > exposed API pg_replication_origin_drop(). See the discussion in [1]\n> > > > related to it. If we want we can optimize it so that we can acquire\n> > > > the lock on the specific origin as mentioned in comments\n> > > > replorigin_drop_by_name() but it was not clear that this operation\n> > > > would be frequent enough.\n> > >\n> > > Here is an attached patch to lock the replication origin record using\n> > > LockSharedObject instead of locking pg_replication_origin relation in\n> > > ExclusiveLock mode. 
Now tablesync worker will wait only if the\n> > > tablesync worker is trying to drop the same replication origin which\n> > > has already been dropped by the apply worker, the other tablesync\n> > > workers will be able to successfully drop the replication origin\n> > > without any wait.\n> > >\n> >\n> > There is a code in the function replorigin_drop_guts() that uses the\n> > functionality introduced by replorigin_exists(). Can we reuse this function for\n> > the same?\n>\n> Maybe we can use SearchSysCacheExists1 to check the existence instead of\n> adding a new function.\n>\n\nYeah, I think that would be better.\n\n> One comment about the patch.\n>\n> @@ -430,23 +445,21 @@ replorigin_drop_by_name(const char *name, bool missing_ok, bool nowait)\n> ...\n> + /* Drop the replication origin if it has not been dropped already */\n> + if (replorigin_exists(roident))\n> replorigin_drop_guts(rel, roident, nowait);\n>\n> If developer pass missing_ok as false, should we report an ERROR here\n> instead of silently return ?\n>\n\nOne thing that looks a bit odd is that we will anyway have a similar\ncheck in replorigin_drop_guts() which is a static function and called\nfrom only one place, so, will it be required to check at both places?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 28 Jan 2023 11:26:08 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Deadlock between logrep apply worker and tablesync worker" }, { "msg_contents": "On Sat, 28 Jan 2023 at 11:26, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sat, Jan 28, 2023 at 9:36 AM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n> > On Friday, January 27, 2023 8:16 PM Amit Kapila <amit.kapila16@gmail.com>\n> > >\n> > > On Fri, Jan 27, 2023 at 3:45 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > >\n> > > > On Mon, 23 Jan 2023 at 10:52, Amit Kapila <amit.kapila16@gmail.com>\n> > > wrote:\n> > > > >\n> > > > > IIRC, this is done to 
prevent concurrent drops of origin drop say by\n> > > > > exposed API pg_replication_origin_drop(). See the discussion in [1]\n> > > > > related to it. If we want we can optimize it so that we can acquire\n> > > > > the lock on the specific origin as mentioned in comments\n> > > > > replorigin_drop_by_name() but it was not clear that this operation\n> > > > > would be frequent enough.\n> > > >\n> > > > Here is an attached patch to lock the replication origin record using\n> > > > LockSharedObject instead of locking pg_replication_origin relation in\n> > > > ExclusiveLock mode. Now tablesync worker will wait only if the\n> > > > tablesync worker is trying to drop the same replication origin which\n> > > > has already been dropped by the apply worker, the other tablesync\n> > > > workers will be able to successfully drop the replication origin\n> > > > without any wait.\n> > > >\n> > >\n> > > There is a code in the function replorigin_drop_guts() that uses the\n> > > functionality introduced by replorigin_exists(). 
Can we reuse this function for\n> > > the same?\n> >\n> > Maybe we can use SearchSysCacheExists1 to check the existence instead of\n> > adding a new function.\n> >\n>\n> Yeah, I think that would be better.\n>\n> > One comment about the patch.\n> >\n> > @@ -430,23 +445,21 @@ replorigin_drop_by_name(const char *name, bool missing_ok, bool nowait)\n> > ...\n> > + /* Drop the replication origin if it has not been dropped already */\n> > + if (replorigin_exists(roident))\n> > replorigin_drop_guts(rel, roident, nowait);\n> >\n> > If developer pass missing_ok as false, should we report an ERROR here\n> > instead of silently return ?\n> >\n>\n> One thing that looks a bit odd is that we will anyway have a similar\n> check in replorigin_drop_guts() which is a static function and called\n> from only one place, so, will it be required to check at both places?\n\nThere is a possibility that the initial check to verify that the\nreplication origin exists in replorigin_drop_by_name was successful,\nbut later one of the two processes (table sync worker or apply worker)\nmight have dropped the replication origin, so it is better to check\nagain before calling replorigin_drop_guts. Ideally the tuple should be\nvalid in replorigin_drop_guts, but can we keep the check as it is to\nmaintain the consistency before calling CatalogTupleDelete?\n\nRegards,\nVignesh\n\n\n", "msg_date": "Mon, 30 Jan 2023 09:20:10 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Deadlock between logrep apply worker and tablesync worker" }, { "msg_contents": "On Fri, 27 Jan 2023 at 17:46, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jan 27, 2023 at 3:45 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Mon, 23 Jan 2023 at 10:52, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > IIRC, this is done to prevent concurrent drops of origin drop say by\n> > > exposed API pg_replication_origin_drop(). See the discussion in [1]\n> > > related to it. 
If we want we can optimize it so that we can acquire\n> > > the lock on the specific origin as mentioned in comments\n> > > replorigin_drop_by_name() but it was not clear that this operation\n> > > would be frequent enough.\n> >\n> > Here is an attached patch to lock the replication origin record using\n> > LockSharedObject instead of locking pg_replication_origin relation in\n> > ExclusiveLock mode. Now tablesync worker will wait only if the\n> > tablesync worker is trying to drop the same replication origin which\n> > has already been dropped by the apply worker, the other tablesync\n> > workers will be able to successfully drop the replication origin\n> > without any wait.\n> >\n>\n> There is a code in the function replorigin_drop_guts() that uses the\n> functionality introduced by replorigin_exists(). Can we reuse this\n> function for the same?\n>\n> Also, it would be good if you can share the numbers for different runs\n> of \"src/test/subscription/t/002_types.pl\" before and after the patch.\n\nBy using only Tom Lane's fix, I noticed that the execution time\nvaries between 3.6 and 4.4 seconds.\nBy using only the \"Changing to LockSharedObject\" fix, I noticed that\nthe execution time varies between 3.8 and 4.6 seconds.\nBy using the combined fix (Tom Lane's fix + changing to\nLockSharedObject), I noticed that the execution time varies\nbetween 3.5 and 3.8 seconds.\nI felt both changes will be required, as the combination also handles\nthe scenario when both the apply worker and the table sync worker try\nto drop the same replication origin.\n\nThe execution results for the same:\nWith only Tom Lane's fix:\n[12:25:32] t/002_types.pl .. ok 3604 ms ( 0.00 usr 0.00 sys +\n2.26 cusr 0.37 csys = 2.63 CPU)\n[12:25:48] t/002_types.pl .. ok 3788 ms ( 0.00 usr 0.00 sys +\n2.24 cusr 0.39 csys = 2.63 CPU)\n[12:26:01] t/002_types.pl .. ok 3783 ms ( 0.00 usr 0.00 sys +\n2.42 cusr 0.37 csys = 2.79 CPU)\n[12:26:14] t/002_types.pl .. 
ok 3845 ms ( 0.00 usr 0.00 sys +\n2.38 cusr 0.44 csys = 2.82 CPU)\n[12:26:29] t/002_types.pl .. ok 3923 ms ( 0.00 usr 0.00 sys +\n2.54 cusr 0.39 csys = 2.93 CPU)\n[12:26:42] t/002_types.pl .. ok 4416 ms ( 0.00 usr 0.00 sys +\n2.73 cusr 0.48 csys = 3.21 CPU)\n[12:26:55] t/002_types.pl .. ok 4310 ms ( 0.00 usr 0.00 sys +\n2.62 cusr 0.39 csys = 3.01 CPU)\n[12:27:09] t/002_types.pl .. ok 4168 ms ( 0.00 usr 0.00 sys +\n2.67 cusr 0.46 csys = 3.13 CPU)\n[12:27:21] t/002_types.pl .. ok 4167 ms ( 0.00 usr 0.00 sys +\n2.46 cusr 0.53 csys = 2.99 CPU)\n[12:27:34] t/002_types.pl .. ok 4144 ms ( 0.00 usr 0.00 sys +\n2.59 cusr 0.41 csys = 3.00 CPU)\n[12:27:46] t/002_types.pl .. ok 3982 ms ( 0.00 usr 0.00 sys +\n2.52 cusr 0.41 csys = 2.93 CPU)\n[12:28:03] t/002_types.pl .. ok 4190 ms ( 0.01 usr 0.00 sys +\n2.67 cusr 0.46 csys = 3.14 CPU)\n\nWith only \"Changing to LockSharedObject\" fix:\n[12:33:02] t/002_types.pl .. ok 3815 ms ( 0.00 usr 0.00 sys +\n2.30 cusr 0.38 csys = 2.68 CPU)\n[12:33:16] t/002_types.pl .. ok 4295 ms ( 0.00 usr 0.00 sys +\n2.66 cusr 0.42 csys = 3.08 CPU)\n[12:33:31] t/002_types.pl .. ok 4270 ms ( 0.00 usr 0.00 sys +\n2.72 cusr 0.44 csys = 3.16 CPU)\n[12:33:44] t/002_types.pl .. ok 4460 ms ( 0.00 usr 0.00 sys +\n2.78 cusr 0.45 csys = 3.23 CPU)\n[12:33:58] t/002_types.pl .. ok 4340 ms ( 0.01 usr 0.00 sys +\n2.67 cusr 0.45 csys = 3.13 CPU)\n[12:34:11] t/002_types.pl .. ok 4142 ms ( 0.00 usr 0.00 sys +\n2.58 cusr 0.42 csys = 3.00 CPU)\n[12:34:24] t/002_types.pl .. ok 4459 ms ( 0.00 usr 0.00 sys +\n2.76 cusr 0.49 csys = 3.25 CPU)\n[12:34:38] t/002_types.pl .. ok 4427 ms ( 0.00 usr 0.00 sys +\n2.68 cusr 0.48 csys = 3.16 CPU)\n[12:35:10] t/002_types.pl .. ok 4642 ms ( 0.00 usr 0.00 sys +\n2.84 cusr 0.55 csys = 3.39 CPU)\n[12:35:22] t/002_types.pl .. ok 4047 ms ( 0.01 usr 0.00 sys +\n2.49 cusr 0.46 csys = 2.96 CPU)\n[12:35:32] t/002_types.pl .. ok 4505 ms ( 0.01 usr 0.00 sys +\n2.90 cusr 0.45 csys = 3.36 CPU)\n[12:36:03] t/002_types.pl .. 
ok 4088 ms ( 0.00 usr 0.00 sys +\n2.51 cusr 0.42 csys = 2.93 CPU)\n\n002_types with combination of Tom Lane's and \"Changing to LockSharedObject\" fix:\n[10:22:04] t/002_types.pl .. ok 3730 ms ( 0.00 usr 0.00 sys +\n2.30 cusr 0.41 csys = 2.71 CPU)\n[10:23:40] t/002_types.pl .. ok 3666 ms ( 0.00 usr 0.00 sys +\n2.16 cusr 0.42 csys = 2.58 CPU)\n[10:23:31] t/002_types.pl .. ok 3665 ms ( 0.00 usr 0.00 sys +\n2.31 cusr 0.40 csys = 2.71 CPU)\n[10:23:23] t/002_types.pl .. ok 3500 ms ( 0.00 usr 0.00 sys +\n2.20 cusr 0.36 csys = 2.56 CPU)\n[10:23:14] t/002_types.pl .. ok 3704 ms ( 0.00 usr 0.00 sys +\n2.36 cusr 0.35 csys = 2.71 CPU)\n[10:23:05] t/002_types.pl .. ok 3594 ms ( 0.00 usr 0.00 sys +\n2.32 cusr 0.31 csys = 2.63 CPU)\n[10:24:10] t/002_types.pl .. ok 3702 ms ( 0.00 usr 0.00 sys +\n2.27 cusr 0.42 csys = 2.69 CPU)\n[10:24:22] t/002_types.pl .. ok 3741 ms ( 0.00 usr 0.00 sys +\n2.39 cusr 0.36 csys = 2.75 CPU)\n[10:24:38] t/002_types.pl .. ok 3676 ms ( 0.00 usr 0.00 sys +\n2.28 cusr 0.43 csys = 2.71 CPU)\n[10:24:50] t/002_types.pl .. ok 3843 ms ( 0.00 usr 0.00 sys +\n2.36 cusr 0.43 csys = 2.79 CPU)\n[10:25:03] t/002_types.pl .. ok 3710 ms ( 0.00 usr 0.00 sys +\n2.30 cusr 0.36 csys = 2.66 CPU)\n[10:25:12] t/002_types.pl .. ok 3695 ms ( 0.00 usr 0.00 sys +\n2.34 cusr 0.35 csys = 2.69 CPU)\n\n 002_types on HEAD:\n[10:31:05] t/002_types.pl .. ok 5687 ms ( 0.00 usr 0.00 sys +\n2.35 cusr 0.45 csys = 2.80 CPU)\n[10:31:31] t/002_types.pl .. ok 6815 ms ( 0.00 usr 0.00 sys +\n2.61 cusr 0.43 csys = 3.04 CPU)\n[10:31:47] t/002_types.pl .. ok 5561 ms ( 0.00 usr 0.00 sys +\n2.24 cusr 0.47 csys = 2.71 CPU)\n[10:32:05] t/002_types.pl .. ok 4542 ms ( 0.00 usr 0.00 sys +\n2.27 cusr 0.39 csys = 2.66 CPU)\n[10:32:20] t/002_types.pl .. ok 3663 ms ( 0.00 usr 0.00 sys +\n2.30 cusr 0.38 csys = 2.68 CPU)\n[10:32:33] t/002_types.pl .. ok 3627 ms ( 0.00 usr 0.00 sys +\n2.27 cusr 0.32 csys = 2.59 CPU)\n[10:32:45] t/002_types.pl .. 
ok 3808 ms ( 0.00 usr 0.00 sys +\n2.41 cusr 0.39 csys = 2.80 CPU)\n[10:32:59] t/002_types.pl .. ok 4536 ms ( 0.00 usr 0.00 sys +\n2.24 cusr 0.38 csys = 2.62 CPU)\n[10:33:13] t/002_types.pl .. ok 3638 ms ( 0.00 usr 0.00 sys +\n2.25 cusr 0.41 csys = 2.66 CPU)\n[10:33:35] t/002_types.pl .. ok 4796 ms ( 0.00 usr 0.00 sys +\n2.38 cusr 0.38 csys = 2.76 CPU)\n[10:33:51] t/002_types.pl .. ok 4695 ms ( 0.00 usr 0.00 sys +\n2.40 cusr 0.37 csys = 2.77 CPU)\n[10:34:06] t/002_types.pl .. ok 5738 ms ( 0.00 usr 0.00 sys +\n2.44 cusr 0.43 csys = 2.87 CPU)\n\nRegards,\nVignesh\n\n\n", "msg_date": "Mon, 30 Jan 2023 09:46:45 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Deadlock between logrep apply worker and tablesync worker" }, { "msg_contents": "On Mon, Jan 30, 2023 at 9:20 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Sat, 28 Jan 2023 at 11:26, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > One thing that looks a bit odd is that we will anyway have a similar\n> > check in replorigin_drop_guts() which is a static function and called\n> > from only one place, so, will it be required to check at both places?\n>\n> There is a possibility that the initial check to verify if replication\n> origin exists in replorigin_drop_by_name was successful but later one\n> of either table sync worker or apply worker process might have dropped\n> the replication origin,\n>\n\nWon't locking on the particular origin prevent concurrent drops? 
IIUC,\nthe drop happens after the patch acquires the lock on the origin.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 30 Jan 2023 12:01:43 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Deadlock between logrep apply worker and tablesync worker" }, { "msg_contents": "On Monday, January 30, 2023 2:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Mon, Jan 30, 2023 at 9:20 AM vignesh C <vignesh21@gmail.com> wrote:\r\n> >\r\n> > On Sat, 28 Jan 2023 at 11:26, Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> > >\r\n> > > One thing that looks a bit odd is that we will anyway have a similar\r\n> > > check in replorigin_drop_guts() which is a static function and\r\n> > > called from only one place, so, will it be required to check at both places?\r\n> >\r\n> > There is a possibility that the initial check to verify if replication\r\n> > origin exists in replorigin_drop_by_name was successful but later one\r\n> > of either table sync worker or apply worker process might have dropped\r\n> > the replication origin,\r\n> >\r\n> \r\n> Won't locking on the particular origin prevent concurrent drops? IIUC, the\r\n> drop happens after the patch acquires the lock on the origin.\r\n\r\nYes, I think the existence check in replorigin_drop_guts is unnecessary as we\r\nalready lock the origin before that. I think the check in replorigin_drop_guts\r\nis a custom check after calling SearchSysCache1 to get the tuple, but the error\r\nshould not happen as no concurrent drop can be performed.\r\n\r\nTo make it simpler, one idea is to move the code that getting the tuple from\r\nsystem cache to the replorigin_drop_by_name(). After locking the origin, we\r\ncan try to get the tuple and do the existence check, and we can reuse\r\nthis tuple to perform origin delete. In this approach we only need to check\r\norigin existence once after locking. 
BTW, if we do this, then we'd better rename the\r\nreplorigin_drop_guts() to something like replorigin_state_clear() as the function\r\nonly clear the in-memory information after that.\r\n\r\nThe code could be like:\r\n\r\n-------\r\nreplorigin_drop_by_name(const char *name, bool missing_ok, bool nowait)\r\n...\r\n\t/*\r\n\t * Lock the origin to prevent concurrent drops. We keep the lock until the\r\n\t * end of transaction.\r\n\t */\r\n\tLockSharedObject(ReplicationOriginRelationId, roident, 0,\r\n\t\t\t\t\t AccessExclusiveLock);\r\n\r\n\ttuple = SearchSysCache1(REPLORIGIDENT, ObjectIdGetDatum(roident));\r\n\tif (!HeapTupleIsValid(tuple))\r\n\t{\r\n\t\tif (!missing_ok)\r\n\t\t\telog(ERROR, \"cache lookup failed for replication origin with ID %d\",\r\n\t\t\t\t roident);\r\n\t\t\r\n\t\treturn;\r\n\t}\r\n\r\n\treplorigin_state_clear(rel, roident, nowait);\r\n\r\n\t/*\r\n\t * Now, we can delete the catalog entry.\r\n\t */\r\n\tCatalogTupleDelete(rel, &tuple->t_self);\r\n\tReleaseSysCache(tuple);\r\n\r\n\tCommandCounterIncrement();\r\n...\r\n-------\r\n\r\nBest Regards,\r\nHou zj\r\n\r\n", "msg_date": "Mon, 30 Jan 2023 07:30:04 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Deadlock between logrep apply worker and tablesync worker" }, { "msg_contents": "On Mon, 30 Jan 2023 at 13:00, houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Monday, January 30, 2023 2:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Jan 30, 2023 at 9:20 AM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > On Sat, 28 Jan 2023 at 11:26, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > One thing that looks a bit odd is that we will anyway have a similar\n> > > > check in replorigin_drop_guts() which is a static function and\n> > > > called from only one place, so, will it be required to check at both places?\n> > >\n> > > There is a possibility that the initial check to verify 
if replication\n> > > origin exists in replorigin_drop_by_name was successful but later one\n> > > of either table sync worker or apply worker process might have dropped\n> > > the replication origin,\n> > >\n> >\n> > Won't locking on the particular origin prevent concurrent drops? IIUC, the\n> > drop happens after the patch acquires the lock on the origin.\n>\n> Yes, I think the existence check in replorigin_drop_guts is unnecessary as we\n> already lock the origin before that. I think the check in replorigin_drop_guts\n> is a custom check after calling SearchSysCache1 to get the tuple, but the error\n> should not happen as no concurrent drop can be performed.\n\nThis scenario is possible while creating a subscription: the apply\nworker will try to drop the replication origin if the state is\nSUBREL_STATE_SYNCDONE. The table sync worker will set the state to\nSUBREL_STATE_SYNCDONE and update the relation state before calling\nreplorigin_drop_by_name. Since the transaction is committed by the\ntable sync worker, the state is visible to the apply worker, so the\napply worker will try to drop the replication origin in parallel in\nthis case.\nThere is a race condition here: one of the two processes (table sync\nworker or apply worker) will acquire the lock and drop the replication\norigin, while the other process will get the lock only after that\nprocess drops the origin and commits the transaction. 
Now the other process will try\nto drop the replication origin once it acquires the lock and get the\nerror(from replorigin_drop_guts): cache lookup failed for replication\norigin with ID.\nConcurrent drop is possible in this case.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Mon, 30 Jan 2023 14:42:43 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Deadlock between logrep apply worker and tablesync worker" }, { "msg_contents": "On Mon, 30 Jan 2023 at 13:00, houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Monday, January 30, 2023 2:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Jan 30, 2023 at 9:20 AM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > On Sat, 28 Jan 2023 at 11:26, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > One thing that looks a bit odd is that we will anyway have a similar\n> > > > check in replorigin_drop_guts() which is a static function and\n> > > > called from only one place, so, will it be required to check at both places?\n> > >\n> > > There is a possibility that the initial check to verify if replication\n> > > origin exists in replorigin_drop_by_name was successful but later one\n> > > of either table sync worker or apply worker process might have dropped\n> > > the replication origin,\n> > >\n> >\n> > Won't locking on the particular origin prevent concurrent drops? IIUC, the\n> > drop happens after the patch acquires the lock on the origin.\n>\n> Yes, I think the existence check in replorigin_drop_guts is unnecessary as we\n> already lock the origin before that. I think the check in replorigin_drop_guts\n> is a custom check after calling SearchSysCache1 to get the tuple, but the error\n> should not happen as no concurrent drop can be performed.\n>\n> To make it simpler, one idea is to move the code that getting the tuple from\n> system cache to the replorigin_drop_by_name(). 
After locking the origin, we\n> can try to get the tuple and do the existence check, and we can reuse\n> this tuple to perform origin delete. In this approach we only need to check\n> origin existence once after locking. BTW, if we do this, then we'd better rename the\n> replorigin_drop_guts() to something like replorigin_state_clear() as the function\n> only clear the in-memory information after that.\n>\n> The code could be like:\n>\n> -------\n> replorigin_drop_by_name(const char *name, bool missing_ok, bool nowait)\n> ...\n> /*\n> * Lock the origin to prevent concurrent drops. We keep the lock until the\n> * end of transaction.\n> */\n> LockSharedObject(ReplicationOriginRelationId, roident, 0,\n> AccessExclusiveLock);\n>\n> tuple = SearchSysCache1(REPLORIGIDENT, ObjectIdGetDatum(roident));\n> if (!HeapTupleIsValid(tuple))\n> {\n> if (!missing_ok)\n> elog(ERROR, \"cache lookup failed for replication origin with ID %d\",\n> roident);\n>\n> return;\n> }\n>\n> replorigin_state_clear(rel, roident, nowait);\n>\n> /*\n> * Now, we can delete the catalog entry.\n> */\n> CatalogTupleDelete(rel, &tuple->t_self);\n> ReleaseSysCache(tuple);\n>\n> CommandCounterIncrement();\n> ...\n\n+1 for this change as it removes the redundant check which is not\nrequired. 
I will post an updated version for this.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Mon, 30 Jan 2023 16:02:52 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Deadlock between logrep apply worker and tablesync worker" }, { "msg_contents": "On Mon, 30 Jan 2023 at 13:00, houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Monday, January 30, 2023 2:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Jan 30, 2023 at 9:20 AM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > On Sat, 28 Jan 2023 at 11:26, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > One thing that looks a bit odd is that we will anyway have a similar\n> > > > check in replorigin_drop_guts() which is a static function and\n> > > > called from only one place, so, will it be required to check at both places?\n> > >\n> > > There is a possibility that the initial check to verify if replication\n> > > origin exists in replorigin_drop_by_name was successful but later one\n> > > of either table sync worker or apply worker process might have dropped\n> > > the replication origin,\n> > >\n> >\n> > Won't locking on the particular origin prevent concurrent drops? IIUC, the\n> > drop happens after the patch acquires the lock on the origin.\n>\n> Yes, I think the existence check in replorigin_drop_guts is unnecessary as we\n> already lock the origin before that. I think the check in replorigin_drop_guts\n> is a custom check after calling SearchSysCache1 to get the tuple, but the error\n> should not happen as no concurrent drop can be performed.\n>\n> To make it simpler, one idea is to move the code that getting the tuple from\n> system cache to the replorigin_drop_by_name(). After locking the origin, we\n> can try to get the tuple and do the existence check, and we can reuse\n> this tuple to perform origin delete. In this approach we only need to check\n> origin existence once after locking. 
BTW, if we do this, then we'd better rename the\n> replorigin_drop_guts() to something like replorigin_state_clear() as the function\n> only clear the in-memory information after that.\n>\n> The code could be like:\n>\n> -------\n> replorigin_drop_by_name(const char *name, bool missing_ok, bool nowait)\n> ...\n> /*\n> * Lock the origin to prevent concurrent drops. We keep the lock until the\n> * end of transaction.\n> */\n> LockSharedObject(ReplicationOriginRelationId, roident, 0,\n> AccessExclusiveLock);\n>\n> tuple = SearchSysCache1(REPLORIGIDENT, ObjectIdGetDatum(roident));\n> if (!HeapTupleIsValid(tuple))\n> {\n> if (!missing_ok)\n> elog(ERROR, \"cache lookup failed for replication origin with ID %d\",\n> roident);\n>\n> return;\n> }\n>\n> replorigin_state_clear(rel, roident, nowait);\n>\n> /*\n> * Now, we can delete the catalog entry.\n> */\n> CatalogTupleDelete(rel, &tuple->t_self);\n> ReleaseSysCache(tuple);\n>\n> CommandCounterIncrement();\n\nThe attached updated patch has the changes to handle the same.\n\nRegards,\nVignesh", "msg_date": "Mon, 30 Jan 2023 17:30:50 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Deadlock between logrep apply worker and tablesync worker" }, { "msg_contents": "On Mon, 30 Jan 2023 at 17:30, vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Mon, 30 Jan 2023 at 13:00, houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n> > On Monday, January 30, 2023 2:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Mon, Jan 30, 2023 at 9:20 AM vignesh C <vignesh21@gmail.com> wrote:\n> > > >\n> > > > On Sat, 28 Jan 2023 at 11:26, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > > One thing that looks a bit odd is that we will anyway have a similar\n> > > > > check in replorigin_drop_guts() which is a static function and\n> > > > > called from only one place, so, will it be required to check at both places?\n> > > >\n> > > > There is a possibility that 
the initial check to verify if replication\n> > > > origin exists in replorigin_drop_by_name was successful but later one\n> > > > of either table sync worker or apply worker process might have dropped\n> > > > the replication origin,\n> > > >\n> > >\n> > > Won't locking on the particular origin prevent concurrent drops? IIUC, the\n> > > drop happens after the patch acquires the lock on the origin.\n> >\n> > Yes, I think the existence check in replorigin_drop_guts is unnecessary as we\n> > already lock the origin before that. I think the check in replorigin_drop_guts\n> > is a custom check after calling SearchSysCache1 to get the tuple, but the error\n> > should not happen as no concurrent drop can be performed.\n> >\n> > To make it simpler, one idea is to move the code that getting the tuple from\n> > system cache to the replorigin_drop_by_name(). After locking the origin, we\n> > can try to get the tuple and do the existence check, and we can reuse\n> > this tuple to perform origin delete. In this approach we only need to check\n> > origin existence once after locking. BTW, if we do this, then we'd better rename the\n> > replorigin_drop_guts() to something like replorigin_state_clear() as the function\n> > only clear the in-memory information after that.\n> >\n> > The code could be like:\n> >\n> > -------\n> > replorigin_drop_by_name(const char *name, bool missing_ok, bool nowait)\n> > ...\n> > /*\n> > * Lock the origin to prevent concurrent drops. 
We keep the lock until the\n> > * end of transaction.\n> > */\n> > LockSharedObject(ReplicationOriginRelationId, roident, 0,\n> > AccessExclusiveLock);\n> >\n> > tuple = SearchSysCache1(REPLORIGIDENT, ObjectIdGetDatum(roident));\n> > if (!HeapTupleIsValid(tuple))\n> > {\n> > if (!missing_ok)\n> > elog(ERROR, \"cache lookup failed for replication origin with ID %d\",\n> > roident);\n> >\n> > return;\n> > }\n> >\n> > replorigin_state_clear(rel, roident, nowait);\n> >\n> > /*\n> > * Now, we can delete the catalog entry.\n> > */\n> > CatalogTupleDelete(rel, &tuple->t_self);\n> > ReleaseSysCache(tuple);\n> >\n> > CommandCounterIncrement();\n>\n> The attached updated patch has the changes to handle the same.\n\nI had not merged one of the local changes that was present, please\nfind the updated patch including that change. Sorry for missing that\nchange.\n\nRegards,\nVignesh", "msg_date": "Mon, 30 Jan 2023 22:36:32 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Deadlock between logrep apply worker and tablesync worker" }, { "msg_contents": "On Tuesday, January 31, 2023 1:07 AM vignesh C <vignesh21@gmail.com> wrote:\r\n> On Mon, 30 Jan 2023 at 17:30, vignesh C <vignesh21@gmail.com> wrote:\r\n> >\r\n> > On Mon, 30 Jan 2023 at 13:00, houzj.fnst@fujitsu.com\r\n> > <houzj.fnst@fujitsu.com> wrote:\r\n> > >\r\n> > > On Monday, January 30, 2023 2:32 PM Amit Kapila\r\n> <amit.kapila16@gmail.com> wrote:\r\n> > > >\r\n> > > > On Mon, Jan 30, 2023 at 9:20 AM vignesh C <vignesh21@gmail.com>\r\n> wrote:\r\n> > > > >\r\n> > > > > On Sat, 28 Jan 2023 at 11:26, Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > > > > >\r\n> > > > > > One thing that looks a bit odd is that we will anyway have a\r\n> > > > > > similar check in replorigin_drop_guts() which is a static\r\n> > > > > > function and called from only one place, so, will it be required to\r\n> check at both places?\r\n> > > > >\r\n> > > > > There is a possibility that the 
initial check to verify if\r\n> > > > > replication origin exists in replorigin_drop_by_name was\r\n> > > > > successful but later one of either table sync worker or apply\r\n> > > > > worker process might have dropped the replication origin,\r\n> > > > >\r\n> > > >\r\n> > > > Won't locking on the particular origin prevent concurrent drops?\r\n> > > > IIUC, the drop happens after the patch acquires the lock on the origin.\r\n> > >\r\n> > > Yes, I think the existence check in replorigin_drop_guts is\r\n> > > unnecessary as we already lock the origin before that. I think the\r\n> > > check in replorigin_drop_guts is a custom check after calling\r\n> > > SearchSysCache1 to get the tuple, but the error should not happen as no\r\n> concurrent drop can be performed.\r\n> > >\r\n> > > To make it simpler, one idea is to move the code that getting the\r\n> > > tuple from system cache to the replorigin_drop_by_name(). After\r\n> > > locking the origin, we can try to get the tuple and do the existence\r\n> > > check, and we can reuse this tuple to perform origin delete. In this\r\n> > > approach we only need to check origin existence once after locking.\r\n> > > BTW, if we do this, then we'd better rename the\r\n> > > replorigin_drop_guts() to something like replorigin_state_clear() as\r\n> > > the function only clear the in-memory information after that.\r\n> > >\r\n> >\r\n> > The attached updated patch has the changes to handle the same.\r\n> \r\n> I had not merged one of the local changes that was present, please find the\r\n> updated patch including that change. Sorry for missing that change.\r\n> \r\n\r\nI also tried to test the time of \"src/test/subscription/t/002_types.pl\"\r\nbefore and after the patch(change the lock level) and Tom's patch(split\r\ntransaction) like what Vignesh has shared on -hackers.\r\n\r\nI run about 100 times for each case. 
Tom's and the lock level patch\r\nbehave similarly on my machines[1].\r\n\r\nHEAD: 3426 ~ 6425 ms\r\nHEAD + Tom: 3404 ~ 3462 ms\r\nHEAD + Vignesh: 3419 ~ 3474 ms\r\nHEAD + Tom + Vignesh: 3408 ~ 3454 ms\r\n\r\nEven apart from the testing time reduction, reducing the lock level and\r\nlocking the specific object can also help reduce lock contention, which\r\nusers (of the exposed function), the table sync worker and the apply\r\nworker can all benefit from. So, I think pushing the patch to change the\r\nlock level makes sense.\r\n\r\nAnd the patch looks good to me.\r\n\r\nWhile on it, after pushing the patch, I think there is another case that\r\nmight also be worth improving: the table sync and apply worker try to\r\ndrop the same origin, which might cause some delay. This is another case\r\n(different from the deadlock), so I feel we can try to improve this in\r\nanother patch.\r\n\r\n[1] CentOS 8.2, 128G RAM, 40 processors Intel(R) Xeon(R) Silver 4210 CPU @ 2.20GHz\r\n\r\nBest Regards,\r\nHou zj\r\n", "msg_date": "Thu, 2 Feb 2023 06:35:38 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Deadlock between logrep apply worker and tablesync worker" }, { "msg_contents": "On Thu, Feb 2, 2023 at 12:05 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Tuesday, January 31, 2023 1:07 AM vignesh C <vignesh21@gmail.com> wrote:\n> > On Mon, 30 Jan 2023 at 17:30, vignesh C <vignesh21@gmail.com> wrote:\n> >\n>\n> I also tried to test the time of \"src/test/subscription/t/002_types.pl\"\n> before and after the patch(change the lock level) and Tom's patch(split\n> transaction) like what Vignesh has shared on -hackers.\n>\n> I run about 100 times for each case. 
Tom's and the lock level patch\n> behave similarly on my machines[1].\n>\n> HEAD: 3426 ~ 6425 ms\n> HEAD + Tom: 3404 ~ 3462 ms\n> HEAD + Vignesh: 3419 ~ 3474 ms\n> HEAD + Tom + Vignesh: 3408 ~ 3454 ms\n>\n> Even apart from the testing time reduction, reducing the lock level and lock\n> the specific object can also help improve the lock contention which user(that\n> use the exposed function) , table sync worker and apply worker can also benefit\n> from it. So, I think pushing the patch to change the lock level makes sense.\n>\n> And the patch looks good to me.\n>\n\nThanks for the tests. I also see a reduction in test time variability\nwith Vignesh's patch. I think we can release the locks in case the\norigin is concurrently dropped as in the attached patch. I am planning\nto commit this patch tomorrow unless there are more comments or\nobjections.\n\n> While on it, after pushing the patch, I think there is another case might also\n> worth to be improved, that is the table sync and apply worker try to drop the\n> same origin which might cause some delay. This is another case(different from\n> the deadlock), so I feel we can try to improve this in another patch.\n>\n\nRight, I think that case could be addressed by Tom's patch to some\nextent but I am thinking we should also try to analyze if we can\ncompletely avoid the need to remove origins from both processes. One\nidea could be to introduce another relstate something like\nPRE_SYNCDONE and set it in a separate transaction before we set the\nstate as SYNCDONE and remove the slot and origin in tablesync worker.\nNow, if the tablesync worker errors out due to some reason during the\nsecond transaction, it can remove the slot and origin after restart by\nchecking the state. However, it would add another relstate which may\nnot be the best way to address this problem. 
Anyway, that can be\naccomplished as a separate patch.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Thu, 2 Feb 2023 16:51:14 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Deadlock between logrep apply worker and tablesync worker" }, { "msg_contents": "On Thu, Feb 2, 2023 at 4:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> Thanks for the tests. I also see a reduction in test time variability\n> with Vignesh's patch. I think we can release the locks in case the\n> origin is concurrently dropped as in the attached patch. I am planning\n> to commit this patch tomorrow unless there are more comments or\n> objections.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 3 Feb 2023 10:16:44 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Deadlock between logrep apply worker and tablesync worker" }, { "msg_contents": "On Thursday, February 2, 2023 7:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Thu, Feb 2, 2023 at 12:05 PM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > On Tuesday, January 31, 2023 1:07 AM vignesh C <vignesh21@gmail.com>\r\n> wrote:\r\n> > > On Mon, 30 Jan 2023 at 17:30, vignesh C <vignesh21@gmail.com> wrote:\r\n> > >\r\n> >\r\n> > I also tried to test the time of \"src/test/subscription/t/002_types.pl\"\r\n> > before and after the patch(change the lock level) and Tom's\r\n> > patch(split\r\n> > transaction) like what Vignesh has shared on -hackers.\r\n> >\r\n> > I run about 100 times for each case. 
Tom's and the lock level patch\r\n> > behave similarly on my machines[1].\r\n> >\r\n> > HEAD: 3426 ~ 6425 ms\r\n> > HEAD + Tom: 3404 ~ 3462 ms\r\n> > HEAD + Vignesh: 3419 ~ 3474 ms\r\n> > HEAD + Tom + Vignesh: 3408 ~ 3454 ms\r\n> >\r\n> > Even apart from the testing time reduction, reducing the lock level\r\n> > and lock the specific object can also help improve the lock contention\r\n> > which user(that use the exposed function) , table sync worker and\r\n> > apply worker can also benefit from it. So, I think pushing the patch to change\r\n> the lock level makes sense.\r\n> >\r\n> > And the patch looks good to me.\r\n> >\r\n> \r\n> Thanks for the tests. I also see a reduction in test time variability with Vignesh's\r\n> patch. I think we can release the locks in case the origin is concurrently\r\n> dropped as in the attached patch. I am planning to commit this patch\r\n> tomorrow unless there are more comments or objections.\r\n> \r\n> > While on it, after pushing the patch, I think there is another case\r\n> > might also worth to be improved, that is the table sync and apply\r\n> > worker try to drop the same origin which might cause some delay. This\r\n> > is another case(different from the deadlock), so I feel we can try to improve\r\n> this in another patch.\r\n> >\r\n> \r\n> Right, I think that case could be addressed by Tom's patch to some extent but\r\n> I am thinking we should also try to analyze if we can completely avoid the need\r\n> to remove origins from both processes. One idea could be to introduce\r\n> another relstate something like PRE_SYNCDONE and set it in a separate\r\n> transaction before we set the state as SYNCDONE and remove the slot and\r\n> origin in tablesync worker.\r\n> Now, if the tablesync worker errors out due to some reason during the second\r\n> transaction, it can remove the slot and origin after restart by checking the state.\r\n> However, it would add another relstate which may not be the best way to\r\n> address this problem. 
Anyway, that can be accomplished as a separate patch.\r\n\r\nHere is an attempt to achieve the same.\r\nBasically, the patch removes the code that drop the origin in apply worker. And\r\nadd a new state PRE_SYNCDONE after synchronization finished in front of apply\r\n(sublsn set), but before dropping the origin and other final cleanups. The\r\ntablesync will restart and redo the cleanup if it failed after reaching the new\r\nstate. Besides, since the changes can already be applied on the table in\r\nPRE_SYNCDONE state, so I also modified the check in\r\nshould_apply_changes_for_rel(). And some other conditions for the origin drop\r\nin subscription commands are were adjusted in this patch.\r\n\r\nBest Regards,\r\nHou zj", "msg_date": "Fri, 3 Feb 2023 07:57:57 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Deadlock between logrep apply worker and tablesync worker" }, { "msg_contents": "On Fri, Feb 3, 2023 at 6:58 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n...\n> > Right, I think that case could be addressed by Tom's patch to some extent but\n> > I am thinking we should also try to analyze if we can completely avoid the need\n> > to remove origins from both processes. One idea could be to introduce\n> > another relstate something like PRE_SYNCDONE and set it in a separate\n> > transaction before we set the state as SYNCDONE and remove the slot and\n> > origin in tablesync worker.\n> > Now, if the tablesync worker errors out due to some reason during the second\n> > transaction, it can remove the slot and origin after restart by checking the state.\n> > However, it would add another relstate which may not be the best way to\n> > address this problem. Anyway, that can be accomplished as a separate patch.\n>\n> Here is an attempt to achieve the same.\n> Basically, the patch removes the code that drop the origin in apply worker. 
And\n> add a new state PRE_SYNCDONE after synchronization finished in front of apply\n> (sublsn set), but before dropping the origin and other final cleanups. The\n> tablesync will restart and redo the cleanup if it failed after reaching the new\n> state. Besides, since the changes can already be applied on the table in\n> PRE_SYNCDONE state, so I also modified the check in\n> should_apply_changes_for_rel(). And some other conditions for the origin drop\n> in subscription commands are were adjusted in this patch.\n>\n\nHere are some review comments for the 0001 patch\n\n======\nGeneral Comment\n\n0.\nThe idea of using the extra relstate for clean-up seems OK, but the\nimplementation of the new state in this patch appears misordered and\nmisnamed to me.\n\nThe state name should indicate what it is doing (PRE_SYNCDONE is\nmeaningless). The patch describes in several places that this state\nmeans \"synchronized, but not yet cleaned up\" therefore IMO it means\nthe SYNCDONE state should be *before* this new state. And since this\nnew state is for \"cleanup\" then let's call it something like that.\n\nTo summarize, I don’t think the meaning of SYNCDONE should be touched.\nSYNCDONE means the synchronization is done, same as before. And your\nnew \"cleanup\" state belongs directly *after* that. IMO it should be\nlike this:\n\n1. STATE_INIT\n2. STATE_DATASYNC\n3. STATE_FINISHEDCOPY\n4. STATE_SYNCDONE\n5. STATE_CLEANUP <-- new relstate\n6. STATE_READY\n\nOf course, this is going to impact almost every aspect of the patch,\nbut I think everything will be basically the same as you have it now\n-- only all the state names and comments need to be adjusted according\nto the above.\n\n======\nCommit Message\n\n1.\nPreviously, we allowed the apply worker to drop the origin to avoid the case\nthat the tablesync worker fails to the origin(due to crash). 
In this case we\ndon't restart the tablesync worker, and the apply worker can clean the origin.\n\n~\n\nThere seem to be some words missing in this paragraph.\n\nSUGGESTION\nPreviously, we allowed the apply worker to drop the origin as a way to\nrecover from the scenario where the tablesync worker failed to drop it\n(due to crash).\n\n~~~\n\n2.\nTo improve this, we introduce a new relstate SUBREL_STATE_PRE_SYNCDONE which\nwill be set after synchronization finished in front of apply (sublsn set), but\nbefore dropping the origin and other final cleanups. The apply worker will\nrestart tablesync worker if the relstate is SUBREL_STATE_PRE_SYNCDONE. This\nway, even if the tablesync worker error out in the transaction that tries to\ndrop the origin, the apply worker will restart the tablesync worker to redo the\ncleanup(for origin and other stuff) and then directly exit.\n\n~\n\n2a.\nThis is going to be impacted by my \"General Comment\". Notice how you\ndescribe again \"will be set after synchronization finished\". 
This is again evidence that\nthe new CLEANUP state should directly follow the SYNCDONE state.\n\n2b.\n\"error out\" --> \"encounters an error\"\n\n2c.\n\"cleanup(for origin\" --> space before the \"(\"\n\n======\ndoc/src/sgml/catalogs.sgml\n\n3.\n@@ -8071,7 +8071,8 @@ SCRAM-SHA-256$<replaceable>&lt;iteration\ncount&gt;</replaceable>:<replaceable>&l\n <literal>i</literal> = initialize,\n <literal>d</literal> = data is being copied,\n <literal>f</literal> = finished table copy,\n- <literal>s</literal> = synchronized,\n+ <literal>p</literal> = synchronized but not yet cleaned up,\n+ <literal>s</literal> = synchronization done,\n <literal>r</literal> = ready (normal replication)\n </para></entry>\n </row>\n@@ -8082,8 +8083,8 @@ SCRAM-SHA-256$<replaceable>&lt;iteration\ncount&gt;</replaceable>:<replaceable>&l\n </para>\n <para>\n Remote LSN of the state change used for synchronization coordination\n- when in <literal>s</literal> or <literal>r</literal> states,\n- otherwise null\n+ when in <literal>p</literal>, <literal>s</literal> or\n+ <literal>r</literal> states, otherwise null\n </para></entry>\n </row>\n </tbody>\n\nThis state order and choice of the letter are impacted by my \"General Comment\".\n\nIMO it should be more like this:\n\nState code: i = initialize, d = data is being copied, f = finished\ntable copy, s = synchronization done, c = clean-up done, r = ready\n(normal replication)\n\n======\nsrc/backend/commands/subscriptioncmds.c\n\n4. AlterSubscription_refresh\n\nSome adjustments are needed according to my \"General Comment\".\n\n~~~\n\n5. 
DropSubscription\n\nSome adjustments are needed according to my \"General Comment\".\n\n======\nsrc/backend/replication/logical/tablesync.c\n\n6.\n\n+ * Update the state of the table to SUBREL_STATE_SYNCDONE and cleanup the\n+ * tablesync slot and drop the tablesync's origin tracking.\n+ */\n+static void\n+finish_synchronization(bool restart_after_crash)\n\n6a.\nSuggest calling this function something like 'cleanup_after_synchronization'\n\n~\n\n6b.\nSome adjustments to states and comments are needed according to my\n\"General Comment\".\n\n~~~\n\n7. process_syncing_tables_for_sync\n\n- MyLogicalRepWorker->relstate = SUBREL_STATE_SYNCDONE;\n+ MyLogicalRepWorker->relstate = SUBREL_STATE_PRE_SYNCDONE;\n MyLogicalRepWorker->relstate_lsn = current_lsn;\n\nThis should just be setting SUBREL_STATE_SYNCDONE how it previously did.\n\nOther states/comments in this function to change according to my\n\"General Comments\".\n\n~\n\n8.\nif (rstate->state == SUBREL_STATE_SYNCDONE)\n{\n/*\n* Apply has caught up to the position where the table sync has\n* finished. Mark the table as ready so that the apply will just\n* continue to replicate it normally.\n*/\n\nThat should now be checking for SUBREL_STATE_CLEANUPDONE according to\nme \"General Comment\"\n\n~~~\n\n9. process_syncing_tables_for_apply\n\nSome adjustments to states and comments are needed according to my\n\"General Comment\".\n\n~~~\n\n10. LogicalRepSyncTableStart\n\nSome adjustments to states and comments are needed according to my\n\"General Comment\".\n\n======\nsrc/backend/replication/logical/worker.c\n\n11. 
should_apply_changes_for_rel\n\nSome adjustments to states according to my \"General Comment\".\n\n======\nsrc/include/catalog/pg_subscription_rel.h\n\n12.\n@@ -62,8 +62,10 @@\nDECLARE_UNIQUE_INDEX_PKEY(pg_subscription_rel_srrelid_srsubid_index,\n6117, Subsc\n * NULL) */\n #define SUBREL_STATE_FINISHEDCOPY 'f' /* tablesync copy phase is completed\n * (sublsn NULL) */\n-#define SUBREL_STATE_SYNCDONE 's' /* synchronization finished in front of\n- * apply (sublsn set) */\n+#define SUBREL_STATE_PRE_SYNCDONE 'p' /* synchronization finished in front of\n+ * apply (sublsn set), but the final\n+ * cleanup has not yet been performed */\n+#define SUBREL_STATE_SYNCDONE 's' /* synchronization complete */\n #define SUBREL_STATE_READY 'r' /* ready (sublsn set) */\n\nSome adjustments to states and comments are needed according to my\n\"General Comment\".\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 7 Feb 2023 15:12:00 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Deadlock between logrep apply worker and tablesync worker" }, { "msg_contents": "On Tuesday, February 7, 2023 12:12 PM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> On Fri, Feb 3, 2023 at 6:58 PM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com>\r\n> wrote:\r\n> >\r\n> ...\r\n> > > Right, I think that case could be addressed by Tom's patch to some\r\n> > > extent but I am thinking we should also try to analyze if we can\r\n> > > completely avoid the need to remove origins from both processes. 
One\r\n> > > idea could be to introduce another relstate something like\r\n> > > PRE_SYNCDONE and set it in a separate transaction before we set the\r\n> > > state as SYNCDONE and remove the slot and origin in tablesync worker.\r\n> > > Now, if the tablesync worker errors out due to some reason during\r\n> > > the second transaction, it can remove the slot and origin after restart by\r\n> checking the state.\r\n> > > However, it would add another relstate which may not be the best way\r\n> > > to address this problem. Anyway, that can be accomplished as a separate\r\n> patch.\r\n> >\r\n> > Here is an attempt to achieve the same.\r\n> > Basically, the patch removes the code that drop the origin in apply\r\n> > worker. And add a new state PRE_SYNCDONE after synchronization\r\n> > finished in front of apply (sublsn set), but before dropping the\r\n> > origin and other final cleanups. The tablesync will restart and redo\r\n> > the cleanup if it failed after reaching the new state. Besides, since\r\n> > the changes can already be applied on the table in PRE_SYNCDONE state,\r\n> > so I also modified the check in should_apply_changes_for_rel(). And\r\n> > some other conditions for the origin drop in subscription commands are\r\n> were adjusted in this patch.\r\n> >\r\n> \r\n> Here are some review comments for the 0001 patch\r\n> \r\n> ======\r\n> General Comment\r\n> \r\n> 0.\r\n> The idea of using the extra relstate for clean-up seems OK, but the\r\n> implementation of the new state in this patch appears misordered and\r\n> misnamed to me.\r\n> \r\n> The state name should indicate what it is doing (PRE_SYNCDONE is\r\n> meaningless). The patch describes in several places that this state means\r\n> \"synchronized, but not yet cleaned up\" therefore IMO it means the SYNCDONE\r\n> state should be *before* this new state. 
And since this new state is for\r\n> \"cleanup\" then let's call it something like that.\r\n> \r\n> To summarize, I don’t think the meaning of SYNCDONE should be touched.\r\n> SYNCDONE means the synchronization is done, same as before. And your new\r\n> \"cleanup\" state belongs directly *after* that. IMO it should be like this:\r\n> \r\n> 1. STATE_INIT\r\n> 2. STATE_DATASYNC\r\n> 3. STATE_FINISHEDCOPY\r\n> 4. STATE_SYNCDONE\r\n> 5. STATE_CLEANUP <-- new relstate\r\n> 6. STATE_READY\r\n> \r\n> Of course, this is going to impact almost every aspect of the patch, but I think\r\n> everything will be basically the same as you have it now\r\n> -- only all the state names and comments need to be adjusted according to the\r\n> above.\r\n\r\nAlthough I agree the CLEANUP is easier to understand, but I am a bit concerned\r\nthat the changes would be a bit invasive.\r\n\r\nIf we add a CLEANUP state at the end as suggested, it will change the meaning\r\nof the existing SYNCDONE state, before the change it means both data sync and\r\ncleanup have been done, but after the change it only mean the data sync is\r\nover. This also means all the current C codes that considered the SYNCDONE as\r\nthe final state of table sync will need to be changed. Moreover, it's common\r\nfor user to query the relation state from pg_subscription_rel to identify if\r\nthe table sync of a table is finished(e.g. 
check relstate IN ('r', 's')), but\r\nif we add a new state(CLEANUP) as the final state, then all these SQLs would\r\nneed to be changed as they need to check like relstate IN ('r', 'x'(new cleanup\r\nstate)).\r\n\r\nBest Regards,\r\nHou zj\r\n", "msg_date": "Tue, 7 Feb 2023 07:46:05 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Deadlock between logrep apply worker and tablesync worker" }, { "msg_contents": "On Tue, Feb 7, 2023 at 6:46 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Tuesday, February 7, 2023 12:12 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> > On Fri, Feb 3, 2023 at 6:58 PM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com>\n> > wrote:\n> > >\n> > ...\n> > > > Right, I think that case could be addressed by Tom's patch to some\n> > > > extent but I am thinking we should also try to analyze if we can\n> > > > completely avoid the need to remove origins from both processes. One\n> > > > idea could be to introduce another relstate something like\n> > > > PRE_SYNCDONE and set it in a separate transaction before we set the\n> > > > state as SYNCDONE and remove the slot and origin in tablesync worker.\n> > > > Now, if the tablesync worker errors out due to some reason during\n> > > > the second transaction, it can remove the slot and origin after restart by\n> > checking the state.\n> > > > However, it would add another relstate which may not be the best way\n> > > > to address this problem. Anyway, that can be accomplished as a separate\n> > patch.\n> > >\n> > > Here is an attempt to achieve the same.\n> > > Basically, the patch removes the code that drop the origin in apply\n> > > worker. And add a new state PRE_SYNCDONE after synchronization\n> > > finished in front of apply (sublsn set), but before dropping the\n> > > origin and other final cleanups. The tablesync will restart and redo\n> > > the cleanup if it failed after reaching the new state. 
Besides, since\n> > > the changes can already be applied on the table in PRE_SYNCDONE state,\n> > > so I also modified the check in should_apply_changes_for_rel(). And\n> > > some other conditions for the origin drop in subscription commands are\n> > were adjusted in this patch.\n> > >\n> >\n> > Here are some review comments for the 0001 patch\n> >\n> > ======\n> > General Comment\n> >\n> > 0.\n> > The idea of using the extra relstate for clean-up seems OK, but the\n> > implementation of the new state in this patch appears misordered and\n> > misnamed to me.\n> >\n> > The state name should indicate what it is doing (PRE_SYNCDONE is\n> > meaningless). The patch describes in several places that this state means\n> > \"synchronized, but not yet cleaned up\" therefore IMO it means the SYNCDONE\n> > state should be *before* this new state. And since this new state is for\n> > \"cleanup\" then let's call it something like that.\n> >\n> > To summarize, I don’t think the meaning of SYNCDONE should be touched.\n> > SYNCDONE means the synchronization is done, same as before. And your new\n> > \"cleanup\" state belongs directly *after* that. IMO it should be like this:\n> >\n> > 1. STATE_INIT\n> > 2. STATE_DATASYNC\n> > 3. STATE_FINISHEDCOPY\n> > 4. STATE_SYNCDONE\n> > 5. STATE_CLEANUP <-- new relstate\n> > 6. STATE_READY\n> >\n> > Of course, this is going to impact almost every aspect of the patch, but I think\n> > everything will be basically the same as you have it now\n> > -- only all the state names and comments need to be adjusted according to the\n> > above.\n>\n> Although I agree the CLEANUP is easier to understand, but I am a bit concerned\n> that the changes would be a bit invasive.\n>\n> If we add a CLEANUP state at the end as suggested, it will change the meaning\n> of the existing SYNCDONE state, before the change it means both data sync and\n> cleanup have been done, but after the change it only mean the data sync is\n> over. 
This also means all the current C codes that considered the SYNCDONE as\n> the final state of table sync will need to be changed. Moreover, it's common\n> for user to query the relation state from pg_subscription_rel to identify if\n> the table sync of a table is finished(e.g. check relstate IN ('r', 's')), but\n> if we add a new state(CLEANUP) as the final state, then all these SQLs would\n> need to be changed as they need to check like relstate IN ('r', 'x'(new cleanup\n> state)).\n>\n\nIIUC, you are saying that we still want to keep the SYNCDONE state as\nthe last state before READY mainly because otherwise there is too much\nimpact on user/test SQL that is currently checking those ('s','r')\nstates.\n\nOTOH, in the current 001 patch you had the SUBREL_STATE_PRE_SYNCDONE\nmeaning \"synchronized but not yet cleaned up\" (that's verbatim from\nyour PGDOCS). And there is C code where you are checking\nSUBREL_STATE_PRE_SYNCDONE and essentially giving the state before the\nSYNCDONE an equal status to the SYNCDONE (e.g.\nshould_apply_changes_for_rel seemed to be doing this).\n\nIt seems to be trying to have an each-way bet...\n\n~~~\n\nBut I think there may be an easy way out of this problem:\n\nCurrent HEAD\n1. STATE_INIT 'i'\n2. STATE_DATASYNC 'd'\n3. STATE_FINISHEDCOPY 'f'\n4. STATE_SYNCDONE 's'\n5. STATE_READY 'r'\n\nThe patch 0001\n1. STATE_INIT 'i'\n2. STATE_DATASYNC 'd'\n3. STATE_FINISHEDCOPY 'f'\n4. STATE_PRESYNCDONE 'p' <-- new relstate\n5. STATE_SYNCDONE 's'\n6. STATE_READY 'r'\n\nMy previous suggestion (which you acknowledge is easier to understand,\nbut might cause hassles for existing SQL)\n1. STATE_INIT 'i'\n2. STATE_DATASYNC 'd'\n3. STATE_FINISHEDCOPY 'f'\n4. STATE_SYNCDONE 's'\n5. STATE_CLEANUP 'x' <-- new relstate\n6. STATE_READY 'r'\n\nSUGGESTED (hack to solve everything?)\n1. STATE_INIT 'i'\n2. STATE_DATASYNC 'd'\n3. STATE_FINISHEDCOPY 'f'\n4. STATE_SYNCDONE_PRE_CLEANUP 'x' <-- change the char code for this\nexisting relstate (was SYNCDONE 's')\n5. 
STATE_SYNCDONE_WITH_CLEANUP 's' <-- new relstate using 's'\n6. STATE_READY 'r'\n\nBy commandeering the 's' flag for the new CLEANUP state it means no\nexisting user code or test code needs to change - IIUC everything will\nwork the same as before.\n\n~\n\nHmmm -- In hindsight, perhaps I have gone around in a big circle here\nand the solution I am describing here is almost exactly the same as\nyour patch 0001 only with better names for the relstates.\n\nThoughts?\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 8 Feb 2023 20:11:42 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Deadlock between logrep apply worker and tablesync worker" }, { "msg_contents": "On Fri, Feb 3, 2023 at 6:58 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Thursday, February 2, 2023 7:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Feb 2, 2023 at 12:05 PM houzj.fnst@fujitsu.com\n> > <houzj.fnst@fujitsu.com> wrote:\n> > >\n> > > On Tuesday, January 31, 2023 1:07 AM vignesh C <vignesh21@gmail.com>\n> > wrote:\n> > > > On Mon, 30 Jan 2023 at 17:30, vignesh C <vignesh21@gmail.com> wrote:\n> > > >\n> > >\n> > > I also tried to test the time of \"src/test/subscription/t/002_types.pl\"\n> > > before and after the patch(change the lock level) and Tom's\n> > > patch(split\n> > > transaction) like what Vignesh has shared on -hackers.\n> > >\n> > > I run about 100 times for each case. 
Tom's and the lock level patch\n> > > behave similarly on my machines[1].\n> > >\n> > > HEAD: 3426 ~ 6425 ms\n> > > HEAD + Tom: 3404 ~ 3462 ms\n> > > HEAD + Vignesh: 3419 ~ 3474 ms\n> > > HEAD + Tom + Vignesh: 3408 ~ 3454 ms\n> > >\n> > > Even apart from the testing time reduction, reducing the lock level\n> > > and lock the specific object can also help improve the lock contention\n> > > which user(that use the exposed function) , table sync worker and\n> > > apply worker can also benefit from it. So, I think pushing the patch to change\n> > the lock level makes sense.\n> > >\n> > > And the patch looks good to me.\n> > >\n> >\n> > Thanks for the tests. I also see a reduction in test time variability with Vignesh's\n> > patch. I think we can release the locks in case the origin is concurrently\n> > dropped as in the attached patch. I am planning to commit this patch\n> > tomorrow unless there are more comments or objections.\n> >\n> > > While on it, after pushing the patch, I think there is another case\n> > > might also worth to be improved, that is the table sync and apply\n> > > worker try to drop the same origin which might cause some delay. This\n> > > is another case(different from the deadlock), so I feel we can try to improve\n> > this in another patch.\n> > >\n> >\n> > Right, I think that case could be addressed by Tom's patch to some extent but\n> > I am thinking we should also try to analyze if we can completely avoid the need\n> > to remove origins from both processes. One idea could be to introduce\n> > another relstate something like PRE_SYNCDONE and set it in a separate\n> > transaction before we set the state as SYNCDONE and remove the slot and\n> > origin in tablesync worker.\n> > Now, if the tablesync worker errors out due to some reason during the second\n> > transaction, it can remove the slot and origin after restart by checking the state.\n> > However, it would add another relstate which may not be the best way to\n> > address this problem. 
Anyway, that can be accomplished as a separate patch.\n>\n> Here is an attempt to achieve the same.\n> Basically, the patch removes the code that drop the origin in apply worker. And\n> add a new state PRE_SYNCDONE after synchronization finished in front of apply\n> (sublsn set), but before dropping the origin and other final cleanups. The\n> tablesync will restart and redo the cleanup if it failed after reaching the new\n> state. Besides, since the changes can already be applied on the table in\n> PRE_SYNCDONE state, so I also modified the check in\n> should_apply_changes_for_rel(). And some other conditions for the origin drop\n> in subscription commands are were adjusted in this patch.\n>\n\n\nBTW, the tablesync.c has a large file header comment which describes\nall about the relstates including some examples. So this patch needs\nto include modifications to that comment.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n", "msg_date": "Thu, 9 Feb 2023 06:08:27 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Deadlock between logrep apply worker and tablesync worker" } ]
[ { "msg_contents": "Hello,\n\nSee the attached for a simple comment fix -- the referenced\ngenerate_useful_gather_paths call isn't in grouping_planner it's in\napply_scanjoin_target_to_paths.\n\nThanks,\nJames Coleman", "msg_date": "Mon, 23 Jan 2023 08:31:04 -0500", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Fix incorrect comment reference" }, { "msg_contents": "On Mon, Jan 23, 2023 at 8:31 AM James Coleman <jtc331@gmail.com> wrote:\n> See the attached for a simple comment fix -- the referenced\n> generate_useful_gather_paths call isn't in grouping_planner it's in\n> apply_scanjoin_target_to_paths.\n\nThe intended reading of the comment is not clear. Is it telling you to\nlook at grouping_planner because that's where we\ngenerate_useful_gather_paths, or is it telling you to look there to\nsee how we get the final target list together? If it's the former,\nthen your fix is correct. If the latter, it's fine as it is.\n\nThe real answer is probably that some years ago both things happened\nin that function. We've moved on from there, but I'm still not sure\nwhat the most useful phrasing of the comment is.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 23 Jan 2023 13:26:31 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix incorrect comment reference" }, { "msg_contents": "On Mon, Jan 23, 2023 at 1:26 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Jan 23, 2023 at 8:31 AM James Coleman <jtc331@gmail.com> wrote:\n> > See the attached for a simple comment fix -- the referenced\n> > generate_useful_gather_paths call isn't in grouping_planner it's in\n> > apply_scanjoin_target_to_paths.\n>\n> The intended reading of the comment is not clear. Is it telling you to\n> look at grouping_planner because that's where we\n> generate_useful_gather_paths, or is it telling you to look there to\n> see how we get the final target list together? 
If it's the former,\n> then your fix is correct. If the latter, it's fine as it is.\n>\n> The real answer is probably that some years ago both things happened\n> in that function. We've moved on from there, but I'm still not sure\n> what the most useful phrasing of the comment is.\n\nYeah, almost certainly, and the comments just didn't keep up.\n\nWould you prefer something that notes both that the broader concern is\nhappening via the grouping_planner() stage but still points to the\nproper callsite (so that people don't go looking for that confused)?\n\nJames Coleman\n\n\n", "msg_date": "Mon, 23 Jan 2023 15:19:14 -0500", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix incorrect comment reference" }, { "msg_contents": "On Mon, Jan 23, 2023 at 3:19 PM James Coleman <jtc331@gmail.com> wrote:\n> On Mon, Jan 23, 2023 at 1:26 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > On Mon, Jan 23, 2023 at 8:31 AM James Coleman <jtc331@gmail.com> wrote:\n> > > See the attached for a simple comment fix -- the referenced\n> > > generate_useful_gather_paths call isn't in grouping_planner it's in\n> > > apply_scanjoin_target_to_paths.\n> >\n> > The intended reading of the comment is not clear. Is it telling you to\n> > look at grouping_planner because that's where we\n> > generate_useful_gather_paths, or is it telling you to look there to\n> > see how we get the final target list together? If it's the former,\n> > then your fix is correct. If the latter, it's fine as it is.\n> >\n> > The real answer is probably that some years ago both things happened\n> > in that function. 
We've moved on from there, but I'm still not sure\n> > what the most useful phrasing of the comment is.\n>\n> Yeah, almost certainly, and the comments just didn't keep up.\n>\n> Would you prefer something that notes both that the broader concern is\n> happening via the grouping_planner() stage but still points to the\n> proper callsite (so that people don't go looking for that confused)?\n\nI don't really have a strong view on what the best thing to do is. I\nwas just pointing out that the comment might not be quite so obviously\nwrong as you were supposing.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 23 Jan 2023 15:41:45 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix incorrect comment reference" }, { "msg_contents": "On Mon, Jan 23, 2023 at 3:41 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Jan 23, 2023 at 3:19 PM James Coleman <jtc331@gmail.com> wrote:\n> > On Mon, Jan 23, 2023 at 1:26 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > On Mon, Jan 23, 2023 at 8:31 AM James Coleman <jtc331@gmail.com> wrote:\n> > > > See the attached for a simple comment fix -- the referenced\n> > > > generate_useful_gather_paths call isn't in grouping_planner it's in\n> > > > apply_scanjoin_target_to_paths.\n> > >\n> > > The intended reading of the comment is not clear. Is it telling you to\n> > > look at grouping_planner because that's where we\n> > > generate_useful_gather_paths, or is it telling you to look there to\n> > > see how we get the final target list together? If it's the former,\n> > > then your fix is correct. If the latter, it's fine as it is.\n> > >\n> > > The real answer is probably that some years ago both things happened\n> > > in that function. 
We've moved on from there, but I'm still not sure\n> > > what the most useful phrasing of the comment is.\n> >\n> > Yeah, almost certainly, and the comments just didn't keep up.\n> >\n> > Would you prefer something that notes both that the broader concern is\n> > happening via the grouping_planner() stage but still points to the\n> > proper callsite (so that people don't go looking for that confused)?\n>\n> I don't really have a strong view on what the best thing to do is. I\n> was just pointing out that the comment might not be quite so obviously\n> wrong as you were supposing.\n\n\"Wrong\" is certainly too strong; my apologies.\n\nI'm really just hoping to improve it for future readers to save them\nsome confusion I had initially reading it.\n\nJames Coleman\n\n\n", "msg_date": "Mon, 23 Jan 2023 16:07:03 -0500", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix incorrect comment reference" }, { "msg_contents": "On Mon, Jan 23, 2023 at 4:07 PM James Coleman <jtc331@gmail.com> wrote:\n>\n> On Mon, Jan 23, 2023 at 3:41 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Mon, Jan 23, 2023 at 3:19 PM James Coleman <jtc331@gmail.com> wrote:\n> > > On Mon, Jan 23, 2023 at 1:26 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > > On Mon, Jan 23, 2023 at 8:31 AM James Coleman <jtc331@gmail.com> wrote:\n> > > > > See the attached for a simple comment fix -- the referenced\n> > > > > generate_useful_gather_paths call isn't in grouping_planner it's in\n> > > > > apply_scanjoin_target_to_paths.\n> > > >\n> > > > The intended reading of the comment is not clear. Is it telling you to\n> > > > look at grouping_planner because that's where we\n> > > > generate_useful_gather_paths, or is it telling you to look there to\n> > > > see how we get the final target list together? If it's the former,\n> > > > then your fix is correct. 
If the latter, it's fine as it is.\n> > > >\n> > > > The real answer is probably that some years ago both things happened\n> > > > in that function. We've moved on from there, but I'm still not sure\n> > > > what the most useful phrasing of the comment is.\n> > >\n> > > Yeah, almost certainly, and the comments just didn't keep up.\n> > >\n> > > Would you prefer something that notes both that the broader concern is\n> > > happening via the grouping_planner() stage but still points to the\n> > > proper callsite (so that people don't go looking for that confused)?\n> >\n> > I don't really have a strong view on what the best thing to do is. I\n> > was just pointing out that the comment might not be quite so obviously\n> > wrong as you were supposing.\n>\n> \"Wrong\" is certainly too strong; my apologies.\n>\n> I'm really just hoping to improve it for future readers to save them\n> some confusion I had initially reading it.\n\nUpdated patch attached.\n\nThanks,\nJames Coleman", "msg_date": "Mon, 23 Jan 2023 18:42:45 -0500", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix incorrect comment reference" }, { "msg_contents": "On Mon, Jan 23, 2023 at 06:42:45PM -0500, James Coleman wrote:\n> On Mon, Jan 23, 2023 at 4:07 PM James Coleman <jtc331@gmail.com> wrote:\n> >\n> > On Mon, Jan 23, 2023 at 3:41 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > >\n> > > On Mon, Jan 23, 2023 at 3:19 PM James Coleman <jtc331@gmail.com> wrote:\n> > > > On Mon, Jan 23, 2023 at 1:26 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > > > On Mon, Jan 23, 2023 at 8:31 AM James Coleman <jtc331@gmail.com> wrote:\n> > > > > > See the attached for a simple comment fix -- the referenced\n> > > > > > generate_useful_gather_paths call isn't in grouping_planner it's in\n> > > > > > apply_scanjoin_target_to_paths.\n> > > > >\n> > > > > The intended reading of the comment is not clear. 
Is it telling you to\n> > > > > look at grouping_planner because that's where we\n> > > > > generate_useful_gather_paths, or is it telling you to look there to\n> > > > > see how we get the final target list together? If it's the former,\n> > > > > then your fix is correct. If the latter, it's fine as it is.\n> > > > >\n> > > > > The real answer is probably that some years ago both things happened\n> > > > > in that function. We've moved on from there, but I'm still not sure\n> > > > > what the most useful phrasing of the comment is.\n> > > >\n> > > > Yeah, almost certainly, and the comments just didn't keep up.\n> > > >\n> > > > Would you prefer something that notes both that the broader concern is\n> > > > happening via the grouping_planner() stage but still points to the\n> > > > proper callsite (so that people don't go looking for that confused)?\n> > >\n> > > I don't really have a strong view on what the best thing to do is. I\n> > > was just pointing out that the comment might not be quite so obviously\n> > > wrong as you were supposing.\n> >\n> > \"Wrong\" is certainly too strong; my apologies.\n> >\n> > I'm really just hoping to improve it for future readers to save them\n> > some confusion I had initially reading it.\n> \n> Updated patch attached.\n\nPatch applied.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Fri, 29 Sep 2023 14:26:19 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Fix incorrect comment reference" }, { "msg_contents": "On Fri, Sep 29, 2023 at 2:26 PM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Mon, Jan 23, 2023 at 06:42:45PM -0500, James Coleman wrote:\n> > On Mon, Jan 23, 2023 at 4:07 PM James Coleman <jtc331@gmail.com> wrote:\n> > >\n> > > On Mon, Jan 23, 2023 at 3:41 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > >\n> > > > On Mon, Jan 23, 2023 at 3:19 PM James 
Coleman <jtc331@gmail.com> wrote:\n> > > > > On Mon, Jan 23, 2023 at 1:26 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > > > > On Mon, Jan 23, 2023 at 8:31 AM James Coleman <jtc331@gmail.com> wrote:\n> > > > > > > See the attached for a simple comment fix -- the referenced\n> > > > > > > generate_useful_gather_paths call isn't in grouping_planner it's in\n> > > > > > > apply_scanjoin_target_to_paths.\n> > > > > >\n> > > > > > The intended reading of the comment is not clear. Is it telling you to\n> > > > > > look at grouping_planner because that's where we\n> > > > > > generate_useful_gather_paths, or is it telling you to look there to\n> > > > > > see how we get the final target list together? If it's the former,\n> > > > > > then your fix is correct. If the latter, it's fine as it is.\n> > > > > >\n> > > > > > The real answer is probably that some years ago both things happened\n> > > > > > in that function. We've moved on from there, but I'm still not sure\n> > > > > > what the most useful phrasing of the comment is.\n> > > > >\n> > > > > Yeah, almost certainly, and the comments just didn't keep up.\n> > > > >\n> > > > > Would you prefer something that notes both that the broader concern is\n> > > > > happening via the grouping_planner() stage but still points to the\n> > > > > proper callsite (so that people don't go looking for that confused)?\n> > > >\n> > > > I don't really have a strong view on what the best thing to do is. 
I\n> > > > was just pointing out that the comment might not be quite so obviously\n> > > > wrong as you were supposing.\n> > >\n> > > \"Wrong\" is certainly too strong; my apologies.\n> > >\n> > > I'm really just hoping to improve it for future readers to save them\n> > > some confusion I had initially reading it.\n> >\n> > Updated patch attached.\n>\n> Patch applied.\n\nThanks!\n\nRegards,\nJames Coleman\n\n\n", "msg_date": "Fri, 29 Sep 2023 14:38:39 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix incorrect comment reference" } ]
[ { "msg_contents": "Hi!\n\nOne of our customers stumble onto a significant performance degradation\nwhile running multiple OLAP-like queries on a replica.\nAfter some investigation, it became clear that the problem is in accessing\nold_snapshot_threshold parameter.\n\nAccessing old_snapshot_threshold parameter is guarded by mutex_threshold.\nThis is not a problem on primary\nserver, since we rarely call GetOldSnapshotThresholdTimestamp:\n\n5028 void\n5029 TestForOldSnapshot_impl(Snapshot snapshot, Relation relation)\n5030 {\n5031 ····if (RelationAllowsEarlyPruning(relation)\n5032 ········&& (snapshot)->whenTaken < GetOldSnapshotThresholdTimestamp())\n5033 ········ereport(ERROR,\n5034 ················(errcode(ERRCODE_SNAPSHOT_TOO_OLD),\n5035 ················ errmsg(\"snapshot too old\")));\n\nBut in case of a replica, we have to call GetOldSnapshotThresholdTimestamp\nmuch often. So, this become a\nbottleneck. The customer solve this issue by setting old_snapshot_threshold\nto 0. But, I think, we can\ndo something about it.\n\nSome more investigation:\n\n-- On primary --\n$ ./bin/psql postgres -c \"create database benchmark\"\nCREATE DATABASE\n$ ./bin/pgbench -i -Uorlov -s300 benchmark\ndropping old tables...\nNOTICE: table \"pgbench_accounts\" does not exist, skipping\n...\ncreating tables...\ngenerating data (client-side)...\n30000000 of 30000000 tuples (100%) done (elapsed 142.37 s, remaining 0.00 s)\nvacuuming...\ncreating primary keys...\ndone in 177.67 s (drop tables 0.00 s, create tables 0.01 s, client-side\ngenerate 144.45 s, vacuum 0.59 s, primary keys 32.61 s).\n\n-- On secondary --\n$ touch 1.sql\n$ vim 1.sql\n$ cat 1.sql\n\\set bid random(1, 300)\nBEGIN;\nSELECT sum(aid) FROM pgbench_accounts where bid = :bid GROUP BY bid;\nEND;\n$ ./bin/pgbench -f 1.sql -p5433 -Uorlov -j10 -c100 -T720 -P1 -n benchmark\npgbench (16devel)\nprogress: 1.0 s, 0.0 tps, lat 0.000 ms stddev 0.000, 0 failed\n...\nprogress: 20.0 s, 0.0 tps, lat 0.000 ms stddev 0.000, 0 
failed\n\n$ perf record -F 99 -a -g --call-graph=dwarf sleep 5\n$ perf script --header --fields comm,pid,tid,time,event,ip,sym,dso > file\n$ grep s_lock file | wc -l\n\n3486\n\n\nMy proposal is to use atomic for threshold_timestamp and threshold_xid. PFA\n0001 patch.\nWith patch 0001 we got:\n\n$ grep s_lock file2 | wc -l\n8\n\n\nMaybe, we shall go farther and remove mutex_threshold here? This will lead\nto inconsistency of\nthreshold_timestamp and threshold_xid, but is this really a problem?\n\nThoughts?\n\n-- \nBest regards,\nMaxim Orlov.", "msg_date": "Mon, 23 Jan 2023 17:40:15 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": true, "msg_subject": "old_snapshot_threshold bottleneck on replica" }, { "msg_contents": "Hi, Maxim!\n\nOn Mon, 23 Jan 2023 at 18:40, Maxim Orlov <orlovmg@gmail.com> wrote:\n>\n> Hi!\n>\n> One of our customers stumble onto a significant performance degradation while running multiple OLAP-like queries on a replica.\n> After some investigation, it became clear that the problem is in accessing old_snapshot_threshold parameter.\n>\n> Accessing old_snapshot_threshold parameter is guarded by mutex_threshold. This is not a problem on primary\n> server, since we rarely call GetOldSnapshotThresholdTimestamp:\n>\n> 5028 void\n> 5029 TestForOldSnapshot_impl(Snapshot snapshot, Relation relation)\n> 5030 {\n> 5031 ····if (RelationAllowsEarlyPruning(relation)\n> 5032 ········&& (snapshot)->whenTaken < GetOldSnapshotThresholdTimestamp())\n> 5033 ········ereport(ERROR,\n> 5034 ················(errcode(ERRCODE_SNAPSHOT_TOO_OLD),\n> 5035 ················ errmsg(\"snapshot too old\")));\n>\n> But in case of a replica, we have to call GetOldSnapshotThresholdTimestamp much often. So, this become a\n> bottleneck. The customer solve this issue by setting old_snapshot_threshold to 0. 
But, I think, we can\n> do something about it.\n>\n> Some more investigation:\n>\n> -- On primary --\n> $ ./bin/psql postgres -c \"create database benchmark\"\n> CREATE DATABASE\n> $ ./bin/pgbench -i -Uorlov -s300 benchmark\n> dropping old tables...\n> NOTICE: table \"pgbench_accounts\" does not exist, skipping\n> ...\n> creating tables...\n> generating data (client-side)...\n> 30000000 of 30000000 tuples (100%) done (elapsed 142.37 s, remaining 0.00 s)\n> vacuuming...\n> creating primary keys...\n> done in 177.67 s (drop tables 0.00 s, create tables 0.01 s, client-side generate 144.45 s, vacuum 0.59 s, primary keys 32.61 s).\n>\n> -- On secondary --\n> $ touch 1.sql\n> $ vim 1.sql\n> $ cat 1.sql\n> \\set bid random(1, 300)\n> BEGIN;\n> SELECT sum(aid) FROM pgbench_accounts where bid = :bid GROUP BY bid;\n> END;\n> $ ./bin/pgbench -f 1.sql -p5433 -Uorlov -j10 -c100 -T720 -P1 -n benchmark\n> pgbench (16devel)\n> progress: 1.0 s, 0.0 tps, lat 0.000 ms stddev 0.000, 0 failed\n> ...\n> progress: 20.0 s, 0.0 tps, lat 0.000 ms stddev 0.000, 0 failed\n>\n> $ perf record -F 99 -a -g --call-graph=dwarf sleep 5\n> $ perf script --header --fields comm,pid,tid,time,event,ip,sym,dso > file\n> $ grep s_lock file | wc -l\n>\n> 3486\n>\n>\n> My proposal is to use atomic for threshold_timestamp and threshold_xid. PFA 0001 patch.\n> With patch 0001 we got:\n>\n> $ grep s_lock file2 | wc -l\n> 8\n>\n>\n> Maybe, we shall go farther and remove mutex_threshold here? This will lead to inconsistency of\n> threshold_timestamp and threshold_xid, but is this really a problem?\n>\n> Thoughts?\n\nI think optimizing locking and switching to atomics wherever it\nimproves performance is a good direction. If performance improvement\ncould be demonstrated in a more direct way it would be a good argument\nto commit the improvement. 
Personally I like TPS plots like in [1].\n\n[1] https://www.postgresql.org/message-id/CALT9ZEHSX1Hpz5xjDA62yHAHtpinkA6hg8Zt-odyxqppmKbQFA%40mail.gmail.com\n\nKind regards,\nPavel Borisov,\nSupabase\n\n\n", "msg_date": "Tue, 24 Jan 2023 14:35:21 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: old_snapshot_threshold bottleneck on replica" }, { "msg_contents": "On Mon, Jan 23, 2023 at 9:40 AM Maxim Orlov <orlovmg@gmail.com> wrote:\n> One of our customers stumble onto a significant performance degradation while running multiple OLAP-like queries on a replica.\n> After some investigation, it became clear that the problem is in accessing old_snapshot_threshold parameter.\n\nIt has been suggested that we remove that feature entirely.\n\n> My proposal is to use atomic for threshold_timestamp and threshold_xid. PFA 0001 patch.\n\nThis patch changes threshold_timestamp and threshold_xid to use\natomics, but it does not remove mutex_threshold which, according to a\nquick glance at the comments, protects exactly those two fields. 
So,\neither:\n\n(1) that mutex also protects something else and the existing comment\nis wrong, or\n\n(2) the mutex should have been removed but the patch neglected to do so, or\n\n(3) the mutex is still needed for some reason, in which case either\n(3a) the patch isn't actually safe or (3b) the patch needs comments to\nexplain what the new synchronization model is.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 24 Jan 2023 10:46:28 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: old_snapshot_threshold bottleneck on replica" }, { "msg_contents": "On Tue, 24 Jan 2023 at 18:46, Robert Haas <robertmhaas@gmail.com> wrote:\n\n>\n> (1) that mutex also protects something else and the existing comment\n> is wrong, or\n>\n> (2) the mutex should have been removed but the patch neglected to do so, or\n>\n> (3) the mutex is still needed for some reason, in which case either\n> (3a) the patch isn't actually safe or (3b) the patch needs comments to\n> explain what the new synchronization model is.\n>\n> Yes, you're absolutely right. And my first intention was to remove this\nmutex completely.\nBut in TransactionIdLimitedForOldSnapshots these variable is using\nconjointly. So, I'm not\nsure, is it completely safe to remove mutex. Actually, removing mutex and\nswitch to atomics\nwas my first choice. I've run all the tests and no problems were found.\nBut, at that time I choose\nto be more conservative. Anyway, here is the new variant.\n\n-- \nBest regards,\nMaxim Orlov.", "msg_date": "Wed, 25 Jan 2023 11:51:53 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: old_snapshot_threshold bottleneck on replica" }, { "msg_contents": "On Wed, Jan 25, 2023 at 3:52 AM Maxim Orlov <orlovmg@gmail.com> wrote:\n> But in TransactionIdLimitedForOldSnapshots these variable is using conjointly. 
So, I'm not\n> sure, is it completely safe to remove mutex.\n\nWell, that's something we - and ideally you, as the patch author -\nneed to analyze and figure out. We can't just take a shot and hope for\nthe best.\n\n> Actually, removing mutex and switch to atomics\n> was my first choice. I've run all the tests and no problems were found\n\nUnfortunately, that kind of testing is not very likely to find a\nsubtle synchronization problem. That's why a theoretical analysis is\nso important.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 25 Jan 2023 08:52:28 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: old_snapshot_threshold bottleneck on replica" }, { "msg_contents": "On Wed, 25 Jan 2023 at 16:52, Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Wed, Jan 25, 2023 at 3:52 AM Maxim Orlov <orlovmg@gmail.com> wrote:\n>\n> Well, that's something we - and ideally you, as the patch author -\n> need to analyze and figure out. We can't just take a shot and hope for\n> the best.\n>\n\nI thank you for your advices. I've dived deeper into the problem and I\nthink v2 patch is wrong.\nAccessing threshold_timestamp and threshold_xid in\nTransactionIdLimitedForOldSnapshots\nwithout lock would lead to an improper xlimit calculation.\n\nSo, my choice would be (3b). My goal is to optimize access to the\nthreshold_timestamp to avoid\nmultiple spinlock acquisition on read. In the same time, simultaneous\naccess to these variable\n(threshold_timestamp and threshold_xid) should be protected with spinlock.\n\nI remove atomic for threshold_xid and add comments on mutex_threshold. PFA,\nv3. 
I\n\n-- \nBest regards,\nMaxim Orlov.", "msg_date": "Fri, 27 Jan 2023 17:30:11 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: old_snapshot_threshold bottleneck on replica" }, { "msg_contents": "On Fri, Jan 27, 2023 at 9:30 AM Maxim Orlov <orlovmg@gmail.com> wrote:\n> I thank you for your advices. I've dived deeper into the problem and I think v2 patch is wrong.\n\nCool!\n\n> Accessing threshold_timestamp and threshold_xid in TransactionIdLimitedForOldSnapshots\n> without lock would lead to an improper xlimit calculation.\n\nThat would be a bummer.\n\n> So, my choice would be (3b). My goal is to optimize access to the threshold_timestamp to avoid\n> multiple spinlock acquisition on read. In the same time, simultaneous access to these variable\n> (threshold_timestamp and threshold_xid) should be protected with spinlock.\n>\n> I remove atomic for threshold_xid and add comments on mutex_threshold. PFA, v3. I\n\nInteresting, but it's still not entirely clear to me from reading the\ncomments why we should think that this is safe.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 27 Jan 2023 10:17:51 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: old_snapshot_threshold bottleneck on replica" }, { "msg_contents": "On Fri, 27 Jan 2023 at 18:18, Robert Haas <robertmhaas@gmail.com> wrote:\n\n>\n> Interesting, but it's still not entirely clear to me from reading the\n> comments why we should think that this is safe.\n>\n\nIn overall, I think this is safe, because we do not change algorithm here.\nMore specific, threshold_timestamp have only used in a few cases:\n1). To get the latest value by calling GetOldSnapshotThresholdTimestamp.\nThis will work, since we only change the sync type here from the spinlock\nto an atomic.\n2). In TransactionIdLimitedForOldSnapshots, but here no changes in the\nbehaviour will be done. 
Sync model will be the save as before the patch.\n3). In SnapshotTooOldMagicForTest, which is a stub to make\nold_snapshot_threshold tests appear \"working\". But no coherence with the\nthreshold_xid here.\n\nSo, we have a two use cases for the threshold_timestamp:\na). When the threshold_timestamp is used in conjunction with the\nthreshold_xid. We must use spinlock to sync.\nb). When the threshold_timestamp is used without conjunction with the\nthreshold_xid. In this case, we use atomic values.\n\n-- \nBest regards,\nMaxim Orlov.", "msg_date": "Mon, 30 Jan 2023 13:18:55 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: old_snapshot_threshold bottleneck on replica" }, { "msg_contents": "Hi,\n\nOn 2023-01-24 10:46:28 -0500, Robert Haas wrote:\n> On Mon, Jan 23, 2023 at 9:40 AM Maxim Orlov <orlovmg@gmail.com> wrote:\n> > One of our customers stumble onto a significant performance degradation while running multiple OLAP-like queries on a replica.\n> > After some investigation, it became clear that the problem is in accessing old_snapshot_threshold parameter.\n>\n> It has been suggested that we remove that feature entirely.\n\nIndeed. There's a lot of things wrong with it. We have reproducers for\ncreating wrong query results. Nobody has shown interest in fixing the\nproblems, for several years by now. It costs users that *do not* use the\nfeature performance (*).\n\nI think we're doing our users a disservice by claiming to have this feature.\n\nI don't think a lot of the existing code would survive if we were to create a\nnewer version, more maintainable / reliable, version of the feature.\n\nGreetings,\n\nAndres Freund\n\n(*) E.g. TestForOldSnapshot() is called in a good number of places, and emits\n    quite a bit of code. 
It's not executed, but the emitted code is large\n enough to lead to worse code being generated.\n\n\n", "msg_date": "Mon, 13 Feb 2023 12:45:07 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: old_snapshot_threshold bottleneck on replica" }, { "msg_contents": "On Tue, Feb 14, 2023 at 9:45 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2023-01-24 10:46:28 -0500, Robert Haas wrote:\n> > On Mon, Jan 23, 2023 at 9:40 AM Maxim Orlov <orlovmg@gmail.com> wrote:\n> > > One of our customers stumble onto a significant performance degradation while running multiple OLAP-like queries on a replica.\n> > > After some investigation, it became clear that the problem is in accessing old_snapshot_threshold parameter.\n> >\n> > It has been suggested that we remove that feature entirely.\n>\n> Indeed. There's a lot of things wrong with it. We have reproducers for\n> creating wrong query results. Nobody has shown interest in fixing the\n> problems, for several years by now. It costs users that *do not* use the\n> feature performance (*).\n>\n> I think we're doing our users a disservice by claiming to have this feature.\n>\n> I don't think a lot of the existing code would survive if we were to create a\n> newer version, more maintainable / reliable, version of the feature.\n\nI raised this at the recent developer meeting and the assembled\nhackers agreed. Does anyone think we *shouldn't* drop the feature? I\nvolunteered to write a removal patch for v17, so here's a first run\nthrough to find all the traces of this feature. In this first go I\nremoved everything I could think of, but we might want to keep some\nvestiges. I guess we might want to keep the registered error\nclass/code? Should we invent a place where we keep stuff like #define\nTestForOldSnapshot(...) expanding to nothing for some amount of time,\nfor extensions? I dunno, I bet extensions doing stuff that\nsophisticated already have a bunch of version tests anyway. 
I suppose\nkeeping the GUC wouldn't really be helpful (if you're using it, you\nprobably want to know that it isn't available anymore and think about\nthe implications for your application).", "msg_date": "Wed, 14 Jun 2023 15:56:46 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: old_snapshot_threshold bottleneck on replica" }, { "msg_contents": "On Wed, Jun 14, 2023 at 3:56 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Tue, Feb 14, 2023 at 9:45 AM Andres Freund <andres@anarazel.de> wrote:\n> > Indeed. There's a lot of things wrong with it. We have reproducers for\n> > creating wrong query results. Nobody has shown interest in fixing the\n> > problems, for several years by now. It costs users that *do not* use the\n> > feature performance (*).\n> >\n> > I think we're doing our users a disservice by claiming to have this feature.\n> >\n> > I don't think a lot of the existing code would survive if we were to create a\n> > newer version, more maintainable / reliable, version of the feature.\n>\n> I raised this at the recent developer meeting and the assembled\n> hackers agreed. Does anyone think we *shouldn't* drop the feature? I\n> volunteered to write a removal patch for v17, so here's a first run\n> through to find all the traces of this feature. In this first go I\n> removed everything I could think of, but we might want to keep some\n> vestiges. I guess we might want to keep the registered error\n> class/code? Should we invent a place where we keep stuff like #define\n> TestForOldSnapshot(...) expanding to nothing for some amount of time,\n> for extensions? I dunno, I bet extensions doing stuff that\n> sophisticated already have a bunch of version tests anyway. 
I suppose\n> keeping the GUC wouldn't really be helpful (if you're using it, you\n> probably want to know that it isn't available anymore and think about\n> the implications for your application).\n\nDone.\n\nI hope we get \"snapshot too old\" back one day.\n\n\n", "msg_date": "Tue, 5 Sep 2023 19:58:10 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: old_snapshot_threshold bottleneck on replica" }, { "msg_contents": "On Tue, Sep 5, 2023 at 12:58 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> I hope we get \"snapshot too old\" back one day.\n\nThanks for working on this. Though I wonder why you didn't do\nsomething closer to a straight revert of the feature. Why is nbtree\nstill passing around snapshots needlessly?\n\nAlso, why are there still many comments referencing the feature?\nThere's the one above should_attempt_truncation(), for example.\nAnother appears above init_toast_snapshot(). Are these just\noversights, or was it deliberate? You said something about retaining\nvestiges.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 7 Sep 2023 18:53:04 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: old_snapshot_threshold bottleneck on replica" }, { "msg_contents": "On Fri, Sep 8, 2023 at 1:53 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Tue, Sep 5, 2023 at 12:58 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > I hope we get \"snapshot too old\" back one day.\n>\n> Thanks for working on this. Though I wonder why you didn't do\n> something closer to a straight revert of the feature. Why is nbtree\n> still passing around snapshots needlessly?\n>\n> Also, why are there still many comments referencing the feature?\n> There's the one above should_attempt_truncation(), for example.\n> Another appears above init_toast_snapshot(). Are these just\n> oversights, or was it deliberate? You said something about retaining\n> vestiges.\n\nOh. Not intentional. 
Looking now...\n\n\n", "msg_date": "Fri, 8 Sep 2023 14:00:57 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: old_snapshot_threshold bottleneck on replica" }, { "msg_contents": "On Fri, Sep 8, 2023 at 2:00 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Fri, Sep 8, 2023 at 1:53 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > Thanks for working on this. Though I wonder why you didn't do\n> > something closer to a straight revert of the feature. Why is nbtree\n> > still passing around snapshots needlessly?\n\nThe code moved around quite a few times over several commits and quite\na lot since then, which is why I didn't go for straight revert, but\nclearly the manual approach risked missing things. I think the\nattached removes all unused 'snapshot' arguments from AM-internal\nfunctions. Checked by compiling with clang's -Wunused-parameters, and\nthen searching for 'snapshot', and excluding the expected cases.\n\n> > Also, why are there still many comments referencing the feature?\n> > There's the one above should_attempt_truncation(), for example.\n> > Another appears above init_toast_snapshot(). Are these just\n> > oversights, or was it deliberate? You said something about retaining\n> > vestiges.\n\nStray comments removed.", "msg_date": "Fri, 8 Sep 2023 15:48:43 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: old_snapshot_threshold bottleneck on replica" }, { "msg_contents": "On Thu, Sep 7, 2023 at 8:49 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> The code moved around quite a few times over several commits and quite\n> a lot since then, which is why I didn't go for straight revert, but\n> clearly the manual approach risked missing things.\n\nIt's not a big deal, obviously.\n\n> I think the\n> attached removes all unused 'snapshot' arguments from AM-internal\n> functions. 
Checked by compiling with clang's -Wunused-parameters, and\n> then searching for 'snapshot', and excluding the expected cases.\n\nThis looks right to me.\n\nThanks\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 7 Sep 2023 21:22:23 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: old_snapshot_threshold bottleneck on replica" }, { "msg_contents": "On Fri, Sep 8, 2023 at 4:22 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> This looks right to me.\n\nThanks, pushed.\n\n\n", "msg_date": "Fri, 8 Sep 2023 17:21:44 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: old_snapshot_threshold bottleneck on replica" } ]
[ { "msg_contents": "Hi\n\nLast time I wrote new tests for session variables.\n\nOne is\n\ncreate variable :\"DBNAME\".public.var as int;\n\nOn platform with enabled WRITE_READ_PARSE_PLAN_TREES I got warning\n\n\"WARNING: outfuncs/readfuncs failed to produce an equal rewritten parse\ntree\"\n\nAfter some investigation, I found a problem in the RangeVar node.\n\nThe field \"catalogname\" is setted to NULL in _readRangeVar, but it is\ncompared in _equalRangeVar function.\n\nI thought so it is problem in my patch, but it looks like generic issue:\n\ncreate table postgres.public.foo(a int);\nWARNING: outfuncs/readfuncs failed to produce an equal rewritten parse tree\nCREATE TABLE\n\nIs it a known issue?\n\nRegards\n\nPavel\n\nHiLast time I wrote new tests for session variables.One iscreate variable :\"DBNAME\".public.var as int;On platform with enabled WRITE_READ_PARSE_PLAN_TREES I got warning\"WARNING:  outfuncs/readfuncs failed to produce an equal rewritten parse tree\"After some investigation, I found a problem in the RangeVar node.The field \"catalogname\" is setted to NULL in _readRangeVar, but it is compared in _equalRangeVar function. 
I thought so it is problem in my patch, but it looks like generic issue:create table postgres.public.foo(a int);WARNING:  outfuncs/readfuncs failed to produce an equal rewritten parse treeCREATE TABLEIs it a known issue?RegardsPavel", "msg_date": "Mon, 23 Jan 2023 16:53:41 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "WARNING: outfuncs/readfuncs failed to produce an equal rewritten\n parse tree" }, { "msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> After some investigation, I found a problem in the RangeVar node.\n\n> The field \"catalogname\" is setted to NULL in _readRangeVar, but it is\n> compared in _equalRangeVar function.\n\n> I thought so it is problem in my patch, but it looks like generic issue:\n\n> create table postgres.public.foo(a int);\n> WARNING: outfuncs/readfuncs failed to produce an equal rewritten parse tree\n> CREATE TABLE\n\nHeh. Probably we should just drop that special treatment of the\ncatalogname field --- that was always premature optimization,\ngiven that (I think) we don't ever store RangeVar in the catalogs.\n\nThe alternative would be to also lobotomize comparisons of RangeVars\nby marking the field equal_ignore, but what's the point?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 23 Jan 2023 11:31:44 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: WARNING: outfuncs/readfuncs failed to produce an equal rewritten\n parse tree" }, { "msg_contents": "po 23. 1. 
2023 v 17:31 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > After some investigation, I found a problem in the RangeVar node.\n>\n> > The field \"catalogname\" is setted to NULL in _readRangeVar, but it is\n> > compared in _equalRangeVar function.\n>\n> > I thought so it is problem in my patch, but it looks like generic issue:\n>\n> > create table postgres.public.foo(a int);\n> > WARNING: outfuncs/readfuncs failed to produce an equal rewritten parse\n> tree\n> > CREATE TABLE\n>\n> Heh. Probably we should just drop that special treatment of the\n> catalogname field --- that was always premature optimization,\n> given that (I think) we don't ever store RangeVar in the catalogs.\n>\n\n+1\n\nRegards\n\nPavel\n\n\n> The alternative would be to also lobotomize comparisons of RangeVars\n> by marking the field equal_ignore, but what's the point?\n>\n> regards, tom lane\n>", "msg_date": "Mon, 23 Jan 2023 17:46:24 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: WARNING: outfuncs/readfuncs failed to produce an equal rewritten\n parse tree" } ]
[ { "msg_contents": "On 2023-Jan-23, Tom Lane wrote:\n\n> 1. [...] So now I think that we should\n> stick to the convention that it's on the user to install\n> pg_bsd_indent somewhere in their PATH; all we'll be doing with\n> this change is eliminating the step of fetching pg_bsd_indent's\n> source files from somewhere else.\n\n+1\n\n> 2. Given #1, it'll be prudent to continue having pgindent\n> double-check that pg_bsd_indent reports a specific version\n> number. We could imagine starting to use the main Postgres\n> version number for that, but I'm inclined to continue with\n> its existing numbering series.\n\n+1\n\n> 3. If we do nothing special, the first mass reindentation is\n> going to reformat the pg_bsd_indent sources per PG style,\n> which is ... er ... not the way they look now. Do we want\n> to accept that outcome, or take steps to prevent pgindent\n> from processing pg_bsd_indent? I have a feeling that manual\n> cleanup would be necessary if we let such reindentation\n> happen, but I haven't experimented.\n\nHmm, initially it must just be easier to have an exception so that\npg_bsd_indent itself isn't indented.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n#error \"Operator lives in the wrong universe\"\n (\"Use of cookies in real-time system development\", M. Gleixner, M. Mc Guire)\n\n\n", "msg_date": "Mon, 23 Jan 2023 18:38:20 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: run pgindent on a regular basis / scripted manner" }, { "msg_contents": "Hi!\n\nI've ran pdindent on the whole Postgres and it'd changed\nan awful lot of source files. Won't it create a lot of merge conflicts?\n\nOn Mon, Jan 23, 2023 at 8:48 PM Alvaro Herrera <alvherre@alvh.no-ip.org>\nwrote:\n\n> On 2023-Jan-23, Tom Lane wrote:\n>\n> > 1. [...] 
So now I think that we should\n> > stick to the convention that it's on the user to install\n> > pg_bsd_indent somewhere in their PATH; all we'll be doing with\n> > this change is eliminating the step of fetching pg_bsd_indent's\n> > source files from somewhere else.\n>\n> +1\n>\n> > 2. Given #1, it'll be prudent to continue having pgindent\n> > double-check that pg_bsd_indent reports a specific version\n> > number. We could imagine starting to use the main Postgres\n> > version number for that, but I'm inclined to continue with\n> > its existing numbering series.\n>\n> +1\n>\n> > 3. If we do nothing special, the first mass reindentation is\n> > going to reformat the pg_bsd_indent sources per PG style,\n> > which is ... er ... not the way they look now. Do we want\n> > to accept that outcome, or take steps to prevent pgindent\n> > from processing pg_bsd_indent? I have a feeling that manual\n> > cleanup would be necessary if we let such reindentation\n> > happen, but I haven't experimented.\n>\n> Hmm, initially it must just be easier to have an exception so that\n> pg_bsd_indent itself isn't indented.\n>\n> --\n> Álvaro Herrera 48°01'N 7°57'E —\n> https://www.EnterpriseDB.com/\n> #error <https://www.EnterpriseDB.com/#error> \"Operator lives in the wrong\n> universe\"\n> (\"Use of cookies in real-time system development\", M. Gleixner, M. Mc\n> Guire)\n>\n>\n>\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/", "msg_date": "Mon, 23 Jan 2023 21:54:15 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: run pgindent on a regular basis / scripted manner" }, { "msg_contents": "Nikita Malakhov <hukutoc@gmail.com> writes:\n> I've ran pdindent on the whole Postgres and it'd changed\n> an awful lot of source files. Won't it create a lot of merge conflicts?\n\nWell, yeah, you've rediscovered the fact that a lot of commits are sloppy\nabout this, and it's been awhile since our last tree-wide pgindent.\n\nOur normal process has been to do a cleanup pgindent run annually or so,\nusually after the end of the last commitfest of a cycle when there's\nplenty of time for people to deal with the ensuing merge conflicts.\n\nIf we could get to \"commits are pgindent clean to begin with\", we\ncould avoid the merge problems from one-big-reindent. 
I'd still\nbe inclined to do an annual run as a backup, but hopefully it would\nfind few problems.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 23 Jan 2023 15:09:10 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: run pgindent on a regular basis / scripted manner" } ]
[ { "msg_contents": "Hey,\n\nGRANT role_name [, ...] TO role_specification [, ...]\n    [ WITH { ADMIN | INHERIT | SET } { OPTION | TRUE | FALSE } ]\n    [ GRANTED BY role_specification ]\n\nIt would be really nice to complete this new feature of INHERIT/SET\nFALSE/TRUE with a multi-specification capability.\n\nGRANT role_name [, ...] TO role_specification [, ...]\n    [ WITH { ADMIN | INHERIT | SET } { OPTION | TRUE | FALSE } ] [, ...]\n    [ GRANTED BY role_specification ]\n\ni.e., multiple WITH clauses\n\nGRANT admin1, admin2 TO usr1, usr2\nWITH ADMIN OPTION,\nWITH SET FALSE,\nWITH INHERIT TRUE\nGRANTED BY createroleuser;\n\nPersonally, I'm fine with any given GRANT command of this form having only\na single GRANTED BY specification.\n\nDavid J.", "msg_date": "Mon, 23 Jan 2023 13:09:36 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": true, "msg_subject": "v16 GRANT role TO role needs a multi-option setting capability" }, { "msg_contents": "On 23.01.2023 23:09, David G. Johnston wrote:\n> GRANT role_name [, ...] 
TO role_specification [, ...]\n>     [ WITH { ADMIN | INHERIT | SET } { OPTION | TRUE | FALSE } ]\n>     [ GRANTED BY role_specification ]\n>\n> It would be really nice to complete this new feature of INHERIT/SET \n> FALSE/TRUE with a multi-specification capability.\n\nIf I understand properly, the multi-specification capability is \nsupported in the form:\n\nGRANT admin1, admin2 TO usr1, usr2\nWITH ADMIN OPTION, SET FALSE, INHERIT TRUE;\n\nBut this doesn't seem to be reflected correctly in the documentation.\nIf I'm not mistaken, the current spec should be like this:\n\nGRANT role_name [, ...] TO role_specification [, ...]\n     [ WITH [ { ADMIN | INHERIT | SET } { OPTION | TRUE | FALSE } ] [, \n...] ]\n     [ GRANTED BY role_specification ]\n\nBy the way, there is suggestion to add role's membership options to the \n\\du+ command.[1]\n\n[1]https://www.postgresql.org/message-id/flat/b9be2d0e-a9bc-0a30-492f-a4f68e4f7740@postgrespro.ru\n\n-- \nPavel Luzanov", "msg_date": "Tue, 24 Jan 2023 00:07:58 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: v16 GRANT role TO role needs a multi-option setting capability" } ]
[ { "msg_contents": "9d9c02ccd [1] added infrastructure in the query planner and executor\nso that the executor would know to stop processing a monotonic\nWindowFunc when its value went beyond what some qual in the outer\nquery could possibly match in future evaluations due to the\nWindowFunc's monotonic nature.\n\nIn that commit, support was added so that the optimisation would work\nfor row_number(), rank(), dense_rank() and forms of count(*). On\nfurther inspection, it looks like the same can be done for ntile(),\npercent_rank() and cume_dist(). These WindowFuncs are always\nmonotonically increasing.\n\nI've attached a trivial patch to add the required support request type\nto the existing prosupport functions for these window functions.\n\nDavid\n\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=9d9c02ccd", "msg_date": "Tue, 24 Jan 2023 11:01:08 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Monotonic WindowFunc support for ntile(),\n percent_rank() and cume_dist()" }, { "msg_contents": "On Tue, Jan 24, 2023 at 11:01:08AM +1300, David Rowley wrote:\n> 9d9c02ccd [1] added infrastructure in the query planner and executor\n> so that the executor would know to stop processing a monotonic\n> WindowFunc when its value went beyond what some qual in the outer\n> query could possibly match in future evaluations due to the\n> WindowFunc's monotonic nature.\n> \n> In that commit, support was added so that the optimisation would work\n> for row_number(), rank(), dense_rank() and forms of count(*). On\n> further inspection, it looks like the same can be done for ntile(),\n> percent_rank() and cume_dist(). 
These WindowFuncs are always\n> monotonically increasing.\n> \n\nSilly question, but was there any reason these were omitted in the first\ncommit?\n\n> diff --git a/src/backend/utils/adt/windowfuncs.c b/src/backend/utils/adt/windowfuncs.c\n> index af13b8e53d..b87a624fb2 100644\n> --- a/src/backend/utils/adt/windowfuncs.c\n> +++ b/src/backend/utils/adt/windowfuncs.c\n> @@ -288,6 +288,15 @@ window_percent_rank_support(PG_FUNCTION_ARGS)\n> {\n> \tNode\t *rawreq = (Node *) PG_GETARG_POINTER(0);\n> \n> +\tif (IsA(rawreq, SupportRequestWFuncMonotonic))\n> +\t{\n> +\t\tSupportRequestWFuncMonotonic *req = (SupportRequestWFuncMonotonic *) rawreq;\n> +\n> +\t\t/* percent_rank() is monotonically increasing */\n> +\t\treq->monotonic = MONOTONICFUNC_INCREASING;\n> +\t\tPG_RETURN_POINTER(req);\n> +\t}\n> +\n> \tif (IsA(rawreq, SupportRequestOptimizeWindowClause))\n> \t{\n> \t\tSupportRequestOptimizeWindowClause *req = (SupportRequestOptimizeWindowClause *) rawreq;\n> @@ -362,6 +371,15 @@ window_cume_dist_support(PG_FUNCTION_ARGS)\n> {\n> \tNode\t *rawreq = (Node *) PG_GETARG_POINTER(0);\n> \n> +\tif (IsA(rawreq, SupportRequestWFuncMonotonic))\n> +\t{\n> +\t\tSupportRequestWFuncMonotonic *req = (SupportRequestWFuncMonotonic *) rawreq;\n> +\n> +\t\t/* cume_dist() is monotonically increasing */\n> +\t\treq->monotonic = MONOTONICFUNC_INCREASING;\n> +\t\tPG_RETURN_POINTER(req);\n> +\t}\n> +\n> \tif (IsA(rawreq, SupportRequestOptimizeWindowClause))\n> \t{\n> \t\tSupportRequestOptimizeWindowClause *req = (SupportRequestOptimizeWindowClause *) rawreq;\n> @@ -465,6 +483,18 @@ window_ntile_support(PG_FUNCTION_ARGS)\n> {\n> \tNode\t *rawreq = (Node *) PG_GETARG_POINTER(0);\n> \n> +\tif (IsA(rawreq, SupportRequestWFuncMonotonic))\n> +\t{\n> +\t\tSupportRequestWFuncMonotonic *req = (SupportRequestWFuncMonotonic *) rawreq;\n> +\n> +\t\t/*\n> +\t\t * ntile() is monotonically increasing as the number of buckets cannot\n> +\t\t * change after the first call\n> +\t\t */\n> +\t\treq->monotonic = 
MONOTONICFUNC_INCREASING;\n> +\t\tPG_RETURN_POINTER(req);\n> +\t}\n> +\n> \tif (IsA(rawreq, SupportRequestOptimizeWindowClause))\n> \t{\n> \t\tSupportRequestOptimizeWindowClause *req = (SupportRequestOptimizeWindowClause *) rawreq;\n\nSince all three cases are exactly the same code, maybe you could\nmacro-ize it and add a single comment?\n\n- Melanie\n\n\n", "msg_date": "Mon, 23 Jan 2023 19:26:16 -0500", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Monotonic WindowFunc support for ntile(), percent_rank() and\n cume_dist()" }, { "msg_contents": "Thanks for having a look at this.\n\nOn Tue, 24 Jan 2023 at 13:26, Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> Silly question, but was there any reason these were omitted in the first\n> commit?\n\nGood question, it's just that I didn't think of it at the time and\nnobody else did or if they did, they didn't mention it.\n\n> Since all three cases are exactly the same code, maybe you could\n> macro-ize it and add a single comment?\n\nHmm, I kinda like that it's being spelt out explicitly. To me, it\nseems clean and easy to read. I know we could have fewer lines of code\nwith something else, but for me, being able to quickly see what the\nproperties of the WindowFunc are without having to look at some other\nfunction is more important than saving some space in windowfuncs.c\n\nI'd likely feel differently if the code in question didn't all fit on\nmy screen at once, but it does and I can see at a quick glance that\nthe function is unconditionally monotonically increasing. 
Functions\nsuch as COUNT(*) are conditionally monotonically\nincreasing/decreasing, depending on the frame options.\n\nIf you feel strongly about that, then feel free to show me what you\nhave in mind in more detail so I can think harder about it.\n\nDavid\n\n\n", "msg_date": "Tue, 24 Jan 2023 14:00:33 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Monotonic WindowFunc support for ntile(),\n percent_rank() and cume_dist()" }, { "msg_contents": "On Tue, Jan 24, 2023 at 02:00:33PM +1300, David Rowley wrote:\n> Thanks for having a look at this.\n> \n> On Tue, 24 Jan 2023 at 13:26, Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> \n> > Since all three cases are exactly the same code, maybe you could\n> > macro-ize it and add a single comment?\n> \n> Hmm, I kinda like that it's being spelt out explicitly. To me, it\n> seems clean and easy to read. I know we could have fewer lines of code\n> with something else, but for me, being able to quickly see what the\n> properties of the WindowFunc are without having to look at some other\n> function is more important than saving some space in windowfuncs.c\n> \n> I'd likely feel differently if the code in question didn't all fit on\n> my screen at once, but it does and I can see at a quick glance that\n> the function is unconditionally monotonically increasing. Functions\n> such as COUNT(*) are conditionally monotonically\n> increasing/decreasing, depending on the frame options.\n> \n> If you feel strongly about that, then feel free to show me what you\n> have in mind in more detail so I can think harder about it.\n\nNah, I don't feel strongly. 
I think it was because looking at the patch\nin isolation, the repetition stands out but in the context of the rest\nof the code it doesn't.\n\n- Melanie\n\n\n", "msg_date": "Tue, 24 Jan 2023 08:38:58 -0500", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Monotonic WindowFunc support for ntile(), percent_rank() and\n cume_dist()" }, { "msg_contents": "On Wed, 25 Jan 2023 at 02:39, Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> On Tue, Jan 24, 2023 at 02:00:33PM +1300, David Rowley wrote:\n> > If you feel strongly about that, then feel free to show me what you\n> > have in mind in more detail so I can think harder about it.\n>\n> Nah, I don't feel strongly. I think it was because looking at the patch\n> in isolation, the repetition stands out but in the context of the rest\n> of the code it doesn't.\n\nI played around with this patch again and the performance gains for\nthe best case are not quite as good as we got for row_number(), rank()\nand dense_rank() in the original commit. The reasons for this is that\nntile() and co all require getting a count of the total rows in the\npartition so it can calculate the result. ntile() needs to know how\nlarge the tiles are, for example. That obviously requires pulling all\ntuples into the tuplestore.\n\nDespite this, the performance with the patch is still much better than\nwithout. Here's master:\n\nexplain (analyze, timing off, costs off) select * from (select\na,ntile(10) over (order by a) nt from a) where nt < 1;\n\n Subquery Scan on unnamed_subquery (actual rows=0 loops=1)\n Filter: (unnamed_subquery.nt < 1)\n Rows Removed by Filter: 1000000\n -> WindowAgg (actual rows=1000000 loops=1)\n -> Index Only Scan using a_a_idx on a (actual rows=1000000 loops=1)\n Heap Fetches: 0\n Planning Time: 0.073 ms\n Execution Time: 254.118 ms\n(8 rows)\n\nand with the patch:\n\n WindowAgg (actual rows=0 loops=1)\n Run Condition: (ntile(10) OVER (?) 
< 1)\n -> Index Only Scan using a_a_idx on a (actual rows=1000000 loops=1)\n Heap Fetches: 0\n Planning Time: 0.072 ms\n Execution Time: 140.023 ms\n(6 rows)\n\nHere's with row_number() for reference:\n\nexplain (analyze, timing off, costs off) select * from (select\na,row_number() over (order by a) rn from a) where rn < 1;\n\n WindowAgg (actual rows=0 loops=1)\n Run Condition: (row_number() OVER (?) < 1)\n -> Index Only Scan using a_a_idx on a (actual rows=1 loops=1)\n Heap Fetches: 0\n Planning Time: 0.089 ms\n Execution Time: 0.054 ms\n(6 rows)\n\nyou can clearly see the difference with the number of rows pulled out\nof the index only scan.\n\nThis is just a 1 million row table with a single INT column and an\nindex on that column.\n\nAnyway, all seems like clear wins and low controversy so I've now pushed it.\n\nDavid\n\n\n", "msg_date": "Fri, 27 Jan 2023 16:19:06 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Monotonic WindowFunc support for ntile(),\n percent_rank() and cume_dist()" } ]
[ { "msg_contents": "On 13.01.23 11:01, Dean Rasheed wrote:\n> So I'm feeling quite good about the end result -- I set out hoping not\n> to make performance noticeably worse, but ended up making it\n> significantly better.\nHi Dean, thanks for your work.\n\nBut since PG_RETURN_NULL, is a simple return,\nnow the \"value\" var is not leaked?\n\nIf not, sorry for the noise.\n\nregards,\nRanier Vilela", "msg_date": "Mon, 23 Jan 2023 21:46:49 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Non-decimal integer literals" }, { "msg_contents": "On Tue, 24 Jan 2023 at 00:47, Ranier Vilela <ranier.vf@gmail.com> wrote:\n>\n> On 13.01.23 11:01, Dean Rasheed wrote:\n> > So I'm feeling quite good about the end result -- I set out hoping not\n> > to make performance noticeably worse, but ended up making it\n> > significantly better.\n> Hi Dean, thanks for your work.\n>\n> But since PG_RETURN_NULL, is a simple return,\n> now the \"value\" var is not leaked?\n>\n\nThat originates from a prior commit:\n\nccff2d20ed Convert a few datatype input functions to use \"soft\" error reporting.\n\nand see also a bunch of follow-on commits for other input functions.\n\nIt will only return NULL if the input is invalid and escontext is\nnon-NULL. You only identified a fraction of the cases where that would\nhappen. If we really cared about not leaking memory for invalid\ninputs, we'd have to look at every code path using ereturn()\n(including lower-level functions, and not just in numeric.c). 
I think\nthat would be a waste of time, and counterproductive -- trying to\nimmediately free memory for all possible invalid inputs would likely\ncomplicate a lot of code, and slow down parsing of valid inputs.\nBetter to leave it until the owning memory context is freed.\n\nRegards,\nDean\n\n\n", "msg_date": "Tue, 24 Jan 2023 10:24:09 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-decimal integer literals" }, { "msg_contents": "Em ter., 24 de jan. de 2023 às 07:24, Dean Rasheed <dean.a.rasheed@gmail.com>\nescreveu:\n\n> On Tue, 24 Jan 2023 at 00:47, Ranier Vilela <ranier.vf@gmail.com> wrote:\n> >\n> > On 13.01.23 11:01, Dean Rasheed wrote:\n> > > So I'm feeling quite good about the end result -- I set out hoping not\n> > > to make performance noticeably worse, but ended up making it\n> > > significantly better.\n> > Hi Dean, thanks for your work.\n> >\n> > But since PG_RETURN_NULL, is a simple return,\n> > now the \"value\" var is not leaked?\n> >\n>\n> That originates from a prior commit:\n>\n> ccff2d20ed Convert a few datatype input functions to use \"soft\" error\n> reporting.\n>\n> and see also a bunch of follow-on commits for other input functions.\n>\n> It will only return NULL if the input is invalid and escontext is\n> non-NULL. You only identified a fraction of the cases where that would\n> happen. If we really cared about not leaking memory for invalid\n> inputs, we'd have to look at every code path using ereturn()\n> (including lower-level functions, and not just in numeric.c). I think\n> that would be a waste of time, and counterproductive -- trying to\n> immediately free memory for all possible invalid inputs would likely\n> complicate a lot of code, and slow down parsing of valid inputs.\n> Better to leave it until the owning memory context is freed.\n>\nThank you for the explanation.\n\nregards,\nRanier Vilela", "msg_date": "Tue, 24 Jan 2023 07:34:19 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Non-decimal integer literals" } ]
[ { "msg_contents": "Hi,\n\ncfbot, the buildfarm and locally I have seen 100_bugs.pl fail\noccasionally. Just rarely enough that I never got around to looking into it\nfor real.\n\nJust now there was another failure on master:\nhttps://cirrus-ci.com/task/5279589287591936\n\n[01:00:49.441] ok 1 - index predicates do not cause crash\n[01:00:49.441] ok 2 - update to temporary table without replica identity with FOR ALL TABLES publication\n[01:00:49.441] ok 3 - update to unlogged table without replica identity with FOR ALL TABLES publication\n[01:00:49.441] # test failed\n[01:00:49.441] --- stderr ---\n[01:00:49.441] # poll_query_until timed out executing this query:\n[01:00:49.441] # SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r', 's');\n[01:00:49.441] # expecting this output:\n[01:00:49.441] # t\n[01:00:49.441] # last actual query output:\n[01:00:49.441] # f\n[01:00:49.441] # with stderr:\n[01:00:49.441] # Tests were run but no plan was declared and done_testing() was not seen.\n[01:00:49.441] # Looks like your test exited with 29 just after 3.\n[01:00:49.441] \n[01:00:49.441] (test program exited with status code 29)\n\nthe regress log:\nhttps://api.cirrus-ci.com/v1/artifact/task/5279589287591936/testrun/build-32/testrun/subscription/100_bugs/log/regress_log_100_bugs\nand twoway's log:\nhttps://api.cirrus-ci.com/v1/artifact/task/5279589287591936/testrun/build-32/testrun/subscription/100_bugs/log/100_bugs_twoways.log\n\n\n\nWe see t2 added to the publication:\n2023-01-24 00:57:30.099 UTC [73654][client backend] [100_bugs.pl][7/5:0] LOG: statement: ALTER PUBLICATION testpub ADD TABLE t2\n\nAnd that *then* \"t\" was synchronized:\n2023-01-24 00:57:30.102 UTC [73640][logical replication worker] LOG: logical replication table synchronization worker for subscription \"testsub\", table \"t\" has finished\n\nand then that the refresh was issued:\n2023-01-24 00:57:30.128 UTC [73657][client backend] [100_bugs.pl][5/10:0] LOG: statement: ALTER 
SUBSCRIPTION testsub REFRESH PUBLICATION\n\nAnd we see a walsender starting and the query to get the new tables being executed:\n2023-01-24 00:57:30.139 UTC [73660][walsender] [testsub][6/8:0] LOG: statement: SELECT DISTINCT t.schemaname, t.tablename \n\t, t.attnames\n\tFROM pg_catalog.pg_publication_tables t\n\t WHERE t.pubname IN ('testpub')\n\n\nAnd that's it, the rest of the time is just polling.\n\n\nPerhaps wait_for_subscription_sync() should dump the set of rel states to make\nsomething like this more debuggable?\n\n\nThe fact that the synchronization for t finished just before the refresh makes\nme wonder if a wakeup or a cache invalidation got lost?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 23 Jan 2023 19:23:27 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Test failures of 100_bugs.pl" }, { "msg_contents": "On Tue, Jan 24, 2023 at 8:53 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> cfbot, the buildfarm and locally I have seen 100_bugs.pl fail\n> occasionally. 
Just rarely enough that I never got around to looking into it\n> for real.\n>\n...\n>\n> We see t2 added to the publication:\n> 2023-01-24 00:57:30.099 UTC [73654][client backend] [100_bugs.pl][7/5:0] LOG: statement: ALTER PUBLICATION testpub ADD TABLE t2\n>\n> And that *then* \"t\" was synchronized:\n> 2023-01-24 00:57:30.102 UTC [73640][logical replication worker] LOG: logical replication table synchronization worker for subscription \"testsub\", table \"t\" has finished\n>\n> and then that the refresh was issued:\n> 2023-01-24 00:57:30.128 UTC [73657][client backend] [100_bugs.pl][5/10:0] LOG: statement: ALTER SUBSCRIPTION testsub REFRESH PUBLICATION\n>\n> And we see a walsender starting and the query to get the new tables being executed:\n> 2023-01-24 00:57:30.139 UTC [73660][walsender] [testsub][6/8:0] LOG: statement: SELECT DISTINCT t.schemaname, t.tablename\n> , t.attnames\n> FROM pg_catalog.pg_publication_tables t\n> WHERE t.pubname IN ('testpub')\n>\n>\n> And that's it, the rest of the time is just polling.\n>\n>\n> Perhaps wait_for_subscription_sync() should dump the set of rel states to make\n> something like this more debuggable?\n>\n>\n> The fact that the synchronization for t finished just before the refresh makes\n> me wonder if a wakeup or a cache invalidation got lost?\n>\n\n From the LOGs, the only thing one could draw is lost invalidation\nbecause the nap time of the apply worker is 1s, so it should process\ninvalidation during the time we are polling. 
Also, the rel should be\nadded to pg_subscription_rel because the test is still polling for\nrels to be in 'ready' or 'done' state.\n\nI think we can do three things to debug (a) as you suggest dump the\nrel state in wait_for_subscription_sync; (b) add some DEBUG log in\ninvalidate_syncing_table_states() to ensure that invalidation has been\nprocessed; (c) print rel states and relids from table_states_not_ready\nin process_syncing_tables_for_apply() to see if t2 has ever appeared\nin that list.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 24 Jan 2023 17:10:41 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Test failures of 100_bugs.pl" }, { "msg_contents": "On Tue, Jan 24, 2023 7:41 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Tue, Jan 24, 2023 at 8:53 AM Andres Freund <andres@anarazel.de> wrote:\r\n> >\r\n> > cfbot, the buildfarm and locally I have seen 100_bugs.pl fail\r\n> > occasionally. Just rarely enough that I never got around to looking into it\r\n> > for real.\r\n> >\r\n> ...\r\n> >\r\n> > We see t2 added to the publication:\r\n> > 2023-01-24 00:57:30.099 UTC [73654][client backend] [100_bugs.pl][7/5:0]\r\n> LOG: statement: ALTER PUBLICATION testpub ADD TABLE t2\r\n> >\r\n> > And that *then* \"t\" was synchronized:\r\n> > 2023-01-24 00:57:30.102 UTC [73640][logical replication worker] LOG: logical\r\n> replication table synchronization worker for subscription \"testsub\", table \"t\" has\r\n> finished\r\n> >\r\n> > and then that the refresh was issued:\r\n> > 2023-01-24 00:57:30.128 UTC [73657][client backend] [100_bugs.pl][5/10:0]\r\n> LOG: statement: ALTER SUBSCRIPTION testsub REFRESH PUBLICATION\r\n> >\r\n> > And we see a walsender starting and the query to get the new tables being\r\n> executed:\r\n> > 2023-01-24 00:57:30.139 UTC [73660][walsender] [testsub][6/8:0] LOG:\r\n> statement: SELECT DISTINCT t.schemaname, t.tablename\r\n> > , t.attnames\r\n> > FROM 
pg_catalog.pg_publication_tables t\r\n> > WHERE t.pubname IN ('testpub')\r\n> >\r\n> >\r\n> > And that's it, the rest of the time is just polling.\r\n> >\r\n> >\r\n> > Perhaps wait_for_subscription_sync() should dump the set of rel states to\r\n> make\r\n> > something like this more debuggable?\r\n> >\r\n> >\r\n> > The fact that the synchronization for t finished just before the refresh makes\r\n> > me wonder if a wakeup or a cache invalidation got lost?\r\n> >\r\n> \r\n> From the LOGs, the only thing one could draw is lost invalidation\r\n> because the nap time of the apply worker is 1s, so it should process\r\n> invalidation during the time we are polling. Also, the rel should be\r\n> added to pg_subscription_rel because the test is still polling for\r\n> rels to be in 'ready' or 'done' state.\r\n> \r\n> I think we can do three things to debug (a) as you suggest dump the\r\n> rel state in wait_for_subscription_sync; (b) add some DEBUG log in\r\n> invalidate_syncing_table_states() to ensure that invalidation has been\r\n> processed; (c) print rel states and relids from table_states_not_ready\r\n> in process_syncing_tables_for_apply() to see if t2 has ever appeared\r\n> in that list.\r\n> \r\n\r\nThere was a similar buildfarm failure on francolin recently[1]. I think the\r\nproblem is that after REFRESH PUBLICATION, the table sync worker for the new\r\ntable test_mismatching_types was not started. So, the test timed out while\r\nwaiting for an ERROR message that should have been reported by the table sync\r\nworker.\r\n\r\n--\r\nregress_log_014_binary:\r\ntimed out waiting for match: (?^:ERROR: ( [A-Z0-9]+:)? 
incorrect binary data format) at /home/bf/bf-build/francolin/HEAD/pgsql/src/test/subscription/t/014_binary.pl line 269.\r\n\r\n014_binary_subscriber.log:\r\n2023-04-16 18:18:38.455 UTC [3079482] 014_binary.pl LOG: statement: ALTER SUBSCRIPTION tsub REFRESH PUBLICATION;\r\n2023-04-16 18:21:39.219 UTC [3079474] ERROR: could not receive data from WAL stream: server closed the connection unexpectedly\r\n\t\tThis probably means the server terminated abnormally\r\n\t\tbefore or while processing the request.\r\n--\r\n\r\nI wrote a patch to dump rel state in wait_for_subscription_sync() as suggested.\r\nPlease see the attached patch.\r\nI will try to add some debug logs in code later.\r\n\r\n[1]https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=francolin&dt=2023-04-16%2018%3A17%3A09\r\n\r\nRegards,\r\nShi Yu", "msg_date": "Fri, 21 Apr 2023 05:47:57 +0000", "msg_from": "\"Yu Shi (Fujitsu)\" <shiy.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Test failures of 100_bugs.pl" }, { "msg_contents": "On Fri, Apr 21, 2023 1:48 PM Yu Shi (Fujitsu) <shiy.fnst@fujitsu.com> wrote:\r\n> \r\n> I wrote a patch to dump rel state in wait_for_subscription_sync() as suggested.\r\n> Please see the attached patch.\r\n> I will try to add some debug logs in code later.\r\n> \r\n\r\nPlease see the attached v2 patch.\r\n\r\nI added some debug logs when invalidating syncing table states and updating\r\ntable_states_not_ready list. 
I also adjusted the message level in the tests\r\nwhich failed before.\r\n\r\nRegards,\r\nShi Yu", "msg_date": "Mon, 24 Apr 2023 09:50:22 +0000", "msg_from": "\"Yu Shi (Fujitsu)\" <shiy.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Test failures of 100_bugs.pl" }, { "msg_contents": "On Monday, April 24, 2023 5:50 PM Yu Shi (Fujitsu) <shiy.fnst@fujitsu.com> wrote:\r\n> \r\n> On Fri, Apr 21, 2023 1:48 PM Yu Shi (Fujitsu) <shiy.fnst@fujitsu.com> wrote:\r\n> >\r\n> > I wrote a patch to dump rel state in wait_for_subscription_sync() as\r\n> suggested.\r\n> > Please see the attached patch.\r\n> > I will try to add some debug logs in code later.\r\n> >\r\n> \r\n> Please see the attached v2 patch.\r\n> \r\n> I added some debug logs when invalidating syncing table states and updating\r\n> table_states_not_ready list. I also adjusted the message level in the tests which\r\n> failed before.\r\n\r\nJust a reference.\r\n\r\nI think similar issue has been analyzed in other thread[1] and the reason seems\r\nclear that the table state cache invalidation got lost due to a race condition.\r\nThe fix is also being discussed there.\r\n\r\n[1] https://www.postgresql.org/message-id/flat/711a6afe-edb7-1211-cc27-1bef8239eec7%40gmail.com\r\n\r\nBest Regards,\r\nHou zj\r\n\r\n", "msg_date": "Mon, 11 Mar 2024 12:34:08 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Test failures of 100_bugs.pl" } ]
[ { "msg_contents": "We throw an error if the expression in a CREATE INDEX statement is not IMMUTABLE.\nBut while the documentation notes that expressions in CHECK constraints are expected\nto be immutable, we don't enforce that. Why don't we call something like\nCheckMutability inside cookConstraint? Sure, that wouldn't catch all abuse,\nbut it would be better than nothing.\n\nThere is of course the worry of breaking upgrade for unsafe constraints, but is\nthere any other reason not to enforce immutability?\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Tue, 24 Jan 2023 05:54:08 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": true, "msg_subject": "Mutable CHECK constraints?" }, { "msg_contents": "Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> We throw an error if the expression in a CREATE INDEX statement is not IMMUTABLE.\n> But while the documentation notes that expressions in CHECK constraints are expected\n> to be immutable, we don't enforce that. Why don't we call something like\n> CheckMutability inside cookConstraint? Sure, that wouldn't catch all abuse,\n> but it would be better than nothing.\n\n> There is of course the worry of breaking upgrade for unsafe constraints, but is\n> there any other reason not to enforce immutability?\n\nYeah, that's exactly it, it's a historical exemption for compatibility\nreasons. There are discussions about this in the archives, if memory\nserves ... but I'm too tired to go digging.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 24 Jan 2023 01:38:09 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Mutable CHECK constraints?" 
}, { "msg_contents": "On Tue, 2023-01-24 at 01:38 -0500, Tom Lane wrote:\n> Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> > We throw an error if the expression in a CREATE INDEX statement is not IMMUTABLE.\n> > But while the documentation notes that expressions in CHECK constraints are not\n> > to be immutable, we don't enforce that.  Why don't we call something like\n> > CheckMutability inside cookConstraint?  Sure, that wouldn't catch all abuse,\n> > but it would be better than nothing.\n> \n> > There is of course the worry of breaking upgrade for unsafe constraints, but is\n> > there any other reason not to enforce immutability?\n> \n> Yeah, that's exactly it, it's a historical exemption for compatibility\n> reasons.  There are discussions about this in the archives, if memory\n> serves ... but I'm too tired to go digging.\n\nThanks for the answer. A search turned up\nhttps://postgr.es/m/AANLkTikwFfvavEX9nDwcRD4_xJb_VAitMeP1IH4wpGIt%40mail.gmail.com\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Wed, 25 Jan 2023 00:24:00 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Mutable CHECK constraints?" } ]
[ { "msg_contents": "I only recently realised that to_hex() converts its input to unsigned\nbefore converting it to hex (something that's not mentioned in the\ndocs):\n\n to_hex(-1) -> ffffffff\n\nI think that's something that some users might find surprising,\nespecially if they were expecting to be able to use it to output\nvalues that could be read back in, now that we support non-decimal\ninteger input.\n\nSo I think the docs should be a little more explicit about this. I\nthink the following should suffice:\n\n---\n to_hex ( integer ) -> text\n to_hex ( bigint ) -> text\n\n Converts the number to its equivalent two's complement hexadecimal\n representation.\n\n to_hex(1234) -> 4d2\n to_hex(-1234) -> fffffb2e\n---\n\ninstead of the existing example with 2147483647, which doesn't add much.\n\nI also think it might be useful for it to gain a couple of boolean options:\n\n1). An option to output a signed value (defaulting to false, to\npreserve the current two's complement output).\n\n2). An option to output the base prefix \"0x\" (which comes after the\nsign, making it inconvenient for the user to add themselves).\n\nI've also been idly wondering about whether we should have a numeric\nvariant (for integers only, since non-integers won't necessarily\nterminate in hex, aren't accepted as inputs, and don't seem\nparticularly useful anyway). 
I don't think two's complement output\nmakes much sense for numeric, so perhaps it should only have the\nprefix option, and always output signed hex strings.\n\nThoughts?\n\nRegards,\nDean\n\n\n", "msg_date": "Tue, 24 Jan 2023 13:10:03 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": true, "msg_subject": "to_hex() for negative inputs" }, { "msg_contents": "Hi Dean,\n\n> I only recently realised that to_hex() converts its input to unsigned\n> before converting it to hex (something that's not mentioned in the\n> docs):\n\nTechnically the documentation is accurate [1]:\n\n\"\"\"\nConverts the number to its equivalent hexadecimal representation.\n\"\"\"\n\nBut I agree that adding an example with negative numbers will not\nhurt. Would you like to submit a patch?\n\n> I also think it might be useful for it to gain a couple of boolean options:\n\nAdding extra arguments for something the user can implement\n(him/her)self doesn't seem to be a great idea. With this approach we\nmay end up with hundreds of arguments one day.\n\n> 1). An option to output a signed value (defaulting to false, to\n> preserve the current two's complement output).\n\nThis in particular can be done like this:\n\n```\n=# select case when sign(x) > 0 then '' else '-' end || to_hex(abs(x))\nfrom ( values (123), (-123) ) as s(x);\n ?column?\n----------\n 7b\n -7b\n(2 rows)\n```\n\n> 2). 
An option to output the base prefix \"0x\" (which comes after the\n> sign, making it inconvenient for the user to add themselves).\n\nDitto:\n\n```\n=# select '0x' || to_hex(x) from ( values (123), (-123) ) as s(x);\n ?column?\n------------\n 0x7b\n 0xffffff85\n```\n\n[1]: https://www.postgresql.org/docs/current/functions-string.html\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Tue, 24 Jan 2023 16:42:56 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: to_hex() for negative inputs" }, { "msg_contents": "On Tue, 24 Jan 2023 at 13:43, Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n>\n> Adding extra arguments for something the user can implement\n> (him/her)self doesn't seem to be a great idea. With this approach we\n> may end up with hundreds of arguments one day.\n>\n\nI don't see how a couple of extra arguments will expand to hundreds.\n\n> =# select case when sign(x) > 0 then '' else '-' end || to_hex(abs(x))\n> from ( values (123), (-123) ) as s(x);\n>\n\nWhich is broken for INT_MIN:\n\nselect case when sign(x) > 0 then '' else '-' end || to_hex(abs(x))\nfrom ( values (-2147483648) ) as s(x);\n\nERROR: integer out of range\n\nPart of the reason for implementing this in core would be to save\nusers from such easy-to-overlook bugs.\n\nRegards,\nDean\n\n\n", "msg_date": "Tue, 24 Jan 2023 15:10:30 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": true, "msg_subject": "Re: to_hex() for negative inputs" }, { "msg_contents": "Hi Dean,\n\n> I don't see how a couple of extra arguments will expand to hundreds.\n\nMaybe I was exaggerating, but the point is that adding extra flags for\nevery possible scenario is a disadvantageous approach in general.\nThere is no need to increase the code base, the amount of test cases\nand the amount of documentation every time someone has an idea \"in\nrare cases I also may want to do A or B, let's add a flag for 
this\".\n\n> Which is broken for INT_MIN:\n>\n> select case when sign(x) > 0 then '' else '-' end || to_hex(abs(x))\n> from ( values (-2147483648) ) as s(x);\n>\n> ERROR: integer out of range\n\nI'm afraid the behavior of something like to_hex(X, with_sign => true)\nis going to be exactly the same. There is no safe and consistent way\nto calculate abs(INT_MIN).\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Wed, 25 Jan 2023 12:02:11 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: to_hex() for negative inputs" }, { "msg_contents": "On Wed, 25 Jan 2023 at 09:02, Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n>\n> > I don't see how a couple of extra arguments will expand to hundreds.\n>\n> Maybe I was exaggerating, but the point is that adding extra flags for\n> every possible scenario is a disadvantageous approach in general.\n> There is no need to increase the code base, the amount of test cases\n> and the amount of documentation every time someone has an idea \"in\n> rare cases I also may want to do A or B, let's add a flag for this\".\n>\n\nOK, but the point was that we've just added support for accepting hex\ninputs in a particular format, so I think it would be useful if\nto_hex() could produce outputs compatible with that.\n\n> > Which is broken for INT_MIN:\n> >\n> > select case when sign(x) > 0 then '' else '-' end || to_hex(abs(x))\n> > from ( values (-2147483648) ) as s(x);\n> >\n> > ERROR: integer out of range\n>\n> I'm afraid the behavior of something like to_hex(X, with_sign => true)\n> is going to be exactly the same. There is no safe and consistent way\n> to calculate abs(INT_MIN).\n>\n\nOf course there is. 
This is easy to code in C using unsigned ints,\nwithout resorting to abs() (yes, I'm aware that abs() is undefined for\nINT_MIN).\n\nRegards,\nDean\n\n\n", "msg_date": "Wed, 25 Jan 2023 10:15:51 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": true, "msg_subject": "Re: to_hex() for negative inputs" }, { "msg_contents": "Hi Dean,\n\n> Of course there is. This is easy to code in C using unsigned ints,\n> without resorting to abs() (yes, I'm aware that abs() is undefined for\n> INT_MIN).\n\nSo in your opinion what is the expected result of to_hex(INT_MIN,\nwith_sign => true)?\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Wed, 25 Jan 2023 13:57:43 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: to_hex() for negative inputs" }, { "msg_contents": "On Wed, 25 Jan 2023 at 10:57, Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n>\n> > Of course there is. This is easy to code in C using unsigned ints,\n> > without resorting to abs() (yes, I'm aware that abs() is undefined for\n> > INT_MIN).\n>\n> So in your opinion what is the expected result of to_hex(INT_MIN,\n> with_sign => true)?\n>\n\n\"-80000000\" or \"-0x80000000\", depending on whether the prefix is\nrequested. 
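As a rough sketch (a hypothetical helper, not patch code; the name signed_to_hex and its signature are invented here for illustration), the unsigned-int approach mentioned upthread sidesteps the abs(INT_MIN) overflow by negating in unsigned arithmetic, where wraparound is well-defined:

```c
#include <assert.h>
#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/*
 * Format a signed 32-bit value as signed hexadecimal.  Negating in
 * unsigned arithmetic is well-defined even for INT32_MIN, where abs()
 * would overflow.
 */
void
signed_to_hex(int32_t value, bool with_prefix, char *buf, size_t len)
{
	uint32_t	uval = (uint32_t) value;

	if (value < 0)
		uval = 0 - uval;	/* unsigned negate: wraps safely for INT32_MIN */

	snprintf(buf, len, "%s%s%" PRIx32,
			 value < 0 ? "-" : "",
			 with_prefix ? "0x" : "",
			 uval);
}
```

For INT32_MIN this produces "-80000000" without the prefix, or "-0x80000000" with it.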
The latter is legal input for an integer, which is the\npoint (to allow round-tripping):\n\nSELECT int '-0x80000000';\n int4\n-------------\n -2147483648\n(1 row)\n\nSELECT pg_typeof(-0x80000000);\n pg_typeof\n-----------\n integer\n(1 row)\n\nRegards,\nDean\n\n\n", "msg_date": "Wed, 25 Jan 2023 11:21:11 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": true, "msg_subject": "Re: to_hex() for negative inputs" }, { "msg_contents": "Hi Dean,\n\n> > So in your opinion what is the expected result of to_hex(INT_MIN,\n> > with_sign => true)?\n> >\n>\n> \"-80000000\" or \"-0x80000000\", depending on whether the prefix is\n> requested.\n\nWhether this is the right result is very debatable. 0x80000000 is a\nbinary representation of -2147483648:\n\n```\n(gdb) ptype cur_timeout\ntype = int\n(gdb) p cur_timeout = 0x80000000\n$1 = -2147483648\n(gdb) p/x cur_timeout\n$2 = 0x80000000\n```\n\nSo what you propose to return is -(-2147483648). For some users this\nmay be a wanted result, for some it may be not. Personally I would\nprefer to get an ERROR in this case. And this is exactly how you end\nup with even more flags.\n\nI believe it would be better to let the user write the exact query\ndepending on what he/she wants.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Wed, 25 Jan 2023 22:55:48 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: to_hex() for negative inputs" }, { "msg_contents": "On 24.01.23 14:10, Dean Rasheed wrote:\n> I also think it might be useful for it to gain a couple of boolean options:\n> \n> 1). An option to output a signed value (defaulting to false, to\n> preserve the current two's complement output).\n\nI find the existing behavior so strange, I would rather give up and \ninvent a whole new function family with correct behavior, which could \nthen also support octal and binary, and perhaps any base. 
This could be \nsomething generally named, like \"convert\" or \"format\".\n\n> 2). An option to output the base prefix \"0x\" (which comes after the\n> sign, making it inconvenient for the user to add themselves).\n\nThis could also be handled with a \"format\"-like function.\n\n\n\n", "msg_date": "Wed, 25 Jan 2023 22:43:30 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: to_hex() for negative inputs" }, { "msg_contents": "On Wed, 25 Jan 2023 at 21:43, Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 24.01.23 14:10, Dean Rasheed wrote:\n> > I also think it might be useful for it to gain a couple of boolean options:\n> >\n> > 1). An option to output a signed value (defaulting to false, to\n> > preserve the current two's complement output).\n>\n> I find the existing behavior so strange, I would rather give up and\n> invent a whole new function family with correct behavior, which could\n> then also support octal and binary, and perhaps any base. This could be\n> something generally named, like \"convert\" or \"format\".\n>\n> > 2). An option to output the base prefix \"0x\" (which comes after the\n> > sign, making it inconvenient for the user to add themselves).\n>\n> This could also be handled with a \"format\"-like function.\n>\n\nYeah, making a break from the existing to_hex() functions might not be\na bad idea.\n\nMy only concern with something general like convert() or format() is\nthat it'll end up being hard to use, with lots of different formatting\noptions, like to_char(). In fact Oracle's to_char() has an 'X'\nformatting option to output hexadecimal, though it's pretty limited\n(e.g., it doesn't support negative inputs). 
MySQL has conv() that'll\nconvert between any two bases, but it does bizarre things for negative\ninputs or inputs larger than 2^64.\n\nTBH, I'm not that interested in bases other than hex (I mean who\nreally uses octal anymore?), and I don't particularly care about\nformats other than the format we accept as input. For that, just\nadding a numeric_to_hex() function would suffice. That works fine for\nint4 and int8 inputs too, and doesn't preclude adding a more general\nbase-conversion / formatting function later, if there's demand.\n\nRegards,\nDean\n\n\n", "msg_date": "Thu, 26 Jan 2023 10:27:58 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": true, "msg_subject": "Re: to_hex() for negative inputs" } ]
[ { "msg_contents": "Hi,\n\nA recent commit of mine [1] broke compilation of plpython on AIX [2]. But my\ncommit turns out to only be very tangentially related - it only causes a\nfailure because it references clock_gettime() in an inline function instead of\na macro and, as it turns out, plpython currently breaks references to\nclock_gettime() and lots of other things.\n\n From [2]\n> There's nice bit in plpython.h:\n>\n> /*\n> * Include order should be: postgres.h, other postgres headers, plpython.h,\n> * other plpython headers. (In practice, other plpython headers will also\n> * include this file, so that they can compile standalone.)\n> */\n> #ifndef POSTGRES_H\n> #error postgres.h must be included before plpython.h\n> #endif\n>\n> /*\n> * Undefine some things that get (re)defined in the Python headers. They aren't\n> * used by the PL/Python code, and all PostgreSQL headers should be included\n> * earlier, so this should be pretty safe.\n> */\n> #undef _POSIX_C_SOURCE\n> #undef _XOPEN_SOURCE\n>\n>\n> the relevant stuff in time.h is indeed guarded by\n> #if _XOPEN_SOURCE>=500\n>\n>\n> I don't think the plpython actually code follows the rule about including all\n> postgres headers earlier.\n>\n> plpy_typeio.h:\n>\n> #include \"access/htup.h\"\n> #include \"fmgr.h\"\n> #include \"plpython.h\"\n> #include \"utils/typcache.h\"\n> [...]\n>\n> The include order aspect was perhaps feasible when there just was plpython.c,\n> but with the split into many different C files and many headers, it seems hard\n> to maintain. There's a lot of violations afaics.\n> \n> The undefines were added in a11cf433413, the split in 147c2482542.\n\n\nThe background for the undefines is that _POSIX_C_SOURCE needs to be defined\nthe same for the whole compilation, not change in the middle, and Python.h\ndefines it. To protect \"our\" parts a11cf433413 instituted the rule that all\npostgres headers have to be included first. 
But then that promptly got broken\nin 147c2482542.\n\nBut apparently the breakage in 147c2482542 was partial enough that we didn't\nrun into obvious trouble so far (although I wonder if some of the linkage\nissues we had in the past with plpython could be related).\n\n\nI don't see a good solution here. I don't think the include order can\ntrivially be repaired, as long as plpy_*.h headers include postgres headers,\nbecause there's no way to order two plpy_*.h includes in a .c file and have\nall postgres headers come first.\n\n\nThe most minimal fix I can see is to institute the rule that no plpy_*.h\nheader can include a postgres header other than plpython.h.\n\n\nA completely different approach would be for our build to acquire the value\nof _POSIX_C_SOURCE and _XOPEN_SOURCE from Python.h and define them when compiling\nplpython .c files. That has some dangers of incompatibilities with the rest of\nthe build though. But it'd allow us to get rid of an obviously hard to\nenforce rule.\n\nOr we could see what breaks if we just don't care about _POSIX_C_SOURCE -\nevidently we haven't succeeded in making this work for a long time.\n\n\nSome other semi-related things:\n\nAn old comment:\n/* Also hide away errcode, since we load Python.h before postgres.h */\n#define errcode __msvc_errcode\n\nbut we don't include Python.h before postgres.h...\n\nWe try to force linking to a non-debug python:\n#if defined(_MSC_VER) && defined(_DEBUG)\n/* Python uses #pragma to bring in a non-default libpython on VC++ if\n * _DEBUG is defined */\n#undef _DEBUG\n\nWhich seems ill-advised? That's from d8f75d41315 in 2006.\n\n\npython scribbling over our macros:\n/*\n * Sometimes python carefully scribbles on our *printf macros.\n * So we undefine them here and redefine them after it's done its dirty deed.\n\nI didn't find code in recent-ish python to do that. 
Perhaps we should try to\nget away with not doing that?\n\n\nGreetings,\n\nAndres Freund\n\n[1] https://postgr.es/m/E1pJ6NT-004jOH-DF@gemulon.postgresql.org\n[2] https://postgr.es/m/20230121190303.7xjiwdg3gvb62lu3@awork3.anarazel.de\n\n\n", "msg_date": "Tue, 24 Jan 2023 08:58:14 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "plpython vs _POSIX_C_SOURCE" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> The background for the undefines is that _POSIX_C_SOURCE needs to be defined\n> the same for the whole compilation, not change in the middle, and Python.h\n> defines it. To protect \"our\" parts a11cf433413 instituted the rule that all\n> postgres headers have to be included first. But then that promptly got broken\n> in 147c2482542.\n\n> But apparently the breakage in 147c2482542 was partial enough that we didn't\n> run into obvious trouble so far (although I wonder if some of the linkage\n> issues we had in the past with plpython could be related).\n\nI found the discussion thread that led up to a11cf433413:\n\nhttps://www.postgresql.org/message-id/flat/4DB3B546.9080508%40dunslane.net\n\nWhat we originally set out to fix, AFAICS, was compiler warnings about\n_POSIX_C_SOURCE getting redefined with a different value. I think that'd\nonly happen if pyconfig.h had originally been constructed on a machine\nwhere _POSIX_C_SOURCE was different from what prevails in a Postgres\nbuild. On my RHEL8 box, I see that /usr/include/python3.6m/pyconfig-64.h\nunconditionally does\n\n#define _POSIX_C_SOURCE 200809L\n\nwhile /usr/include/features.h can set a few different values, but the\none that would always prevail for us is\n\n#ifdef _GNU_SOURCE\n...\n# undef _POSIX_C_SOURCE\n# define _POSIX_C_SOURCE\t200809L\n\nSo I wouldn't see this warning, and I venture that you'd never see\nit on any other Linux/glibc platform either. 
The 2011 thread started\nwith concerns about Windows, where it's a lot easier to believe that\nthere might be mismatched build environments. But maybe nobody's\nactually set up a Windows box with that particular problem since 2011.\n\nWhether inconsistency in _POSIX_C_SOURCE could lead to worse problems\nthan a compiler warning isn't entirely clear to me, but it certainly\nseems possible.\n\nAnyway, I'm still of the opinion that what a11cf433413 tried to do\nwas the best available fix, and we need to do whatever we have to do\nto plpython's headers to reinstate that coding rule.\n\n> The most minimal fix I can see is to institute the rule that no plpy_*.h\n> header can include a postgres header other than plpython.h.\n\nDoesn't sound *too* awful.\n\n> Or we could see what breaks if we just don't care about _POSIX_C_SOURCE -\n> evidently we haven't succeeded in making this work for a long time.\n\nWell, hoverfly is broken right now ...\n\n> * Sometimes python carefully scribbles on our *printf macros.\n> * So we undefine them here and redefine them after it's done its dirty deed.\n\n> I didn't find code in recent-ish python to do that. Perhaps we should try to\n> get away with not doing that?\n\nThat would be nice. This old code was certainly mostly concerned with\npython 2, maybe python 3 no longer does that? (Unfortunately, the\n_POSIX_C_SOURCE business is clearly still there in current python.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 24 Jan 2023 12:55:15 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: plpython vs _POSIX_C_SOURCE" }, { "msg_contents": "Hi,\n\nOn 2023-01-24 12:55:15 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > The background for the undefines is that _POSIX_C_SOURCE needs to be defined\n> > the same for the whole compilation, not change in the middle, and Python.h\n> > defines it. 
To protect \"our\" parts a11cf433413 instituted the rule that all\n> > postgres headers have to be included first. But then that promptly got broken\n> > in 147c2482542.\n> \n> > But apparently the breakage in 147c2482542 was partial enough that we didn't\n> > run into obvious trouble so far (although I wonder if some of the linkage\n> > issues we had in the past with plpython could be related).\n> \n> I found the discussion thread that led up to a11cf433413:\n> \n> https://www.postgresql.org/message-id/flat/4DB3B546.9080508%40dunslane.net\n> \n> What we originally set out to fix, AFAICS, was compiler warnings about\n> _POSIX_C_SOURCE getting redefined with a different value. I think that'd\n> only happen if pyconfig.h had originally been constructed on a machine\n> where _POSIX_C_SOURCE was different from what prevails in a Postgres\n> build.\n\nPython's _POSIX_C_SOURCE value is set to a specific value in their configure\nscript:\n\nif test $define_xopen_source = yes\nthen\n # X/Open 7, incorporating POSIX.1-2008\n AC_DEFINE(_XOPEN_SOURCE, 700,\n Define to the level of X/Open that your system supports)\n\n # On Tru64 Unix 4.0F, defining _XOPEN_SOURCE also requires\n # definition of _XOPEN_SOURCE_EXTENDED and _POSIX_C_SOURCE, or else\n # several APIs are not declared. Since this is also needed in some\n # cases for HP-UX, we define it globally.\n AC_DEFINE(_XOPEN_SOURCE_EXTENDED, 1,\n Define to activate Unix95-and-earlier features)\n \n AC_DEFINE(_POSIX_C_SOURCE, 200809L, Define to activate features from IEEE Stds 1003.1-2008)\nfi\n\nSo the concrete values don't depend on the environment (but whether they are\nset does, sunos, hpux as well as a bunch of obsolete OS versions don't). 
But I\nsomehow doubt we'll see a different _POSIX_C_SOURCE value coming up.\n\n\n> So I wouldn't see this warning, and I venture that you'd never see\n> it on any other Linux/glibc platform either.\n\nYea, it works just fine on linux without it, I tried that already.\n\n\nIn fact, after removing the _POSIX_C_SOURCE stuff, the build produces no additional warnings (*)\non freebsd, netbsd, linux (centos 7, fedora rawhide, debian bullseye, sid),\nmacOS, openbsd, windows with msvc and mingw.\n\nThose I can test automatedly with the extended set of CI OSs:\nhttps://cirrus-ci.com/build/4853456020701184\n\nSome of the OSs are still running tests, but I doubt there's a runtime issue.\n\n\nSolaris and AIX are the ones missing.\n\nI guess I'll test them manually. It seems promising not to need this stuff\nanymore?\n\n\n(*) I see one existing plpython related warning on netbsd 9:\n[18:49:12.710] ld: warning: libintl.so.1, needed by /usr/pkg/lib/libpython3.9.so, may conflict with libintl.so.8\n\nBut that's not related to this change. I assume it's an issue with one side\nusing libintl from /usr/lib and the other from /usr/pkg/lib.\n\n\n> The 2011 thread started with concerns about Windows, where it's a lot easier\n> to believe that there might be mismatched build environments. But maybe\n> nobody's actually set up a Windows box with that particular problem since\n> 2011.\n\nThe native python doesn't have the issue, the windows pyconfig.h doesn't\ndefine _POSIX_SOURCE et al. 
It's possible that there's a problem with older\nmingw versions though - but I don't really care about old mingw versions, tbh.\n\n\n> Whether inconsistency in _POSIX_C_SOURCE could lead to worse problems\n> than a compiler warning isn't entirely clear to me, but it certainly\n> seems possible.\n\nI'm sure it can lead to compiler errors, there's IIRC some struct members only\ndefined for certain values.\n\n\n> Anyway, I'm still of the opinion that what a11cf433413 tried to do\n> was the best available fix, and we need to do whatever we have to do\n> to plpython's headers to reinstate that coding rule.\n\nYou think it's not a viable path to just remove the _POSIX_C_SOURCE,\n_XOPEN_SOURCE undefines?\n\n\n> > The most minimal fix I can see is to institute the rule that no plpy_*.h\n> > header can include a postgres header other than plpython.h.\n> \n> Doesn't sound *too* awful.\n\nIt's not too bad to make the change, I'm less hopeful about it staying\nfixed. I can't think of a reasonable way to make violations of the rule error\nout.\n\n\n> > Or we could see what breaks if we just don't care about _POSIX_C_SOURCE -\n> > evidently we haven't succeeded in making this work for a long time.\n> \n> Well, hoverfly is broken right now ...\n\nWhat I mean is that we haven't handled the _POSIX_C_SOURCE stuff correctly for\na long time and the only problem that became apparent is hoverfly's issue, and\nthat's a problem of undefining _POSIX_C_SOURCE, rather than it being\nredefined.\n\n\n\n> > * Sometimes python carefully scribbles on our *printf macros.\n> > * So we undefine them here and redefine them after it's done its dirty deed.\n> \n> > I didn't find code in recent-ish python to do that. Perhaps we should try to\n> > get away with not doing that?\n> \n> That would be nice. This old code was certainly mostly concerned with\n> python 2, maybe python 3 no longer does that?\n\nThere's currently no non-comment references to *printf in their headers. 
The\nonly past reference was removed in:\n\ncommit e822e37946f27c09953bb5733acf3b07c2db690f\nAuthor: Victor Stinner <vstinner@python.org>\nDate: 2020-06-15 21:59:47 +0200\n\n bpo-36020: Remove snprintf macro in pyerrors.h (GH-20889)\n...\n\nwith the following relevant hunk:\n\n@@ -307,21 +309,6 @@ PyAPI_FUNC(int) PyUnicodeTranslateError_SetReason(\n const char *reason /* UTF-8 encoded string */\n );\n \n-/* These APIs aren't really part of the error implementation, but\n- often needed to format error messages; the native C lib APIs are\n- not available on all platforms, which is why we provide emulations\n- for those platforms in Python/mysnprintf.c,\n- WARNING: The return value of snprintf varies across platforms; do\n- not rely on any particular behavior; eventually the C99 defn may\n- be reliable.\n-*/\n-#if defined(MS_WIN32) && !defined(HAVE_SNPRINTF)\n-# define HAVE_SNPRINTF\n-# define snprintf _snprintf\n-# define vsnprintf _vsnprintf\n-#endif\n-\n-#include <stdarg.h>\n PyAPI_FUNC(int) PyOS_snprintf(char *str, size_t size, const char *format, ...)\n Py_GCC_ATTRIBUTE((format(printf, 3, 4)));\n PyAPI_FUNC(int) PyOS_vsnprintf(char *str, size_t size, const char *format, va_list va)\n\n\nWhich suggests an easier fix would be to just to do\n\n/*\n * Python versions <= 3.8 otherwise define a replacement, causing\n * macro redefinition warnings.\n */\n#define HAVE_SNPRINTF\n\nAnd have that be enough for all python versions?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 24 Jan 2023 11:07:21 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: plpython vs _POSIX_C_SOURCE" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Python's _POSIX_C_SOURCE value is set to a specific value in their configure\n> script:\n\n> if test $define_xopen_source = yes\n> then\n> ...\n> AC_DEFINE(_POSIX_C_SOURCE, 200809L, Define to activate features from IEEE Stds 1003.1-2008)\n> fi\n\nHm. 
I looked into Python 3.2 (the oldest release we still support)\nand it has similar code but\n\n AC_DEFINE(_POSIX_C_SOURCE, 200112L, Define to activate features from IEEE Stds 1003.1-2001)\n\nSo yeah it's fixed (or else not defined) for any particular Python\nrelease, but could vary across releases.\n\n> Solaris and AIX are the ones missing.\n> I guess I'll test them manually. It seems promising not to need this stuff\n> anymore?\n\nGiven that hoverfly is AIX, I'm betting there's an issue there.\n\n>> Anyway, I'm still of the opinion that what a11cf433413 tried to do\n>> was the best available fix, and we need to do whatever we have to do\n>> to plpython's headers to reinstate that coding rule.\n\n> You think it's not a viable path to just remove the _POSIX_C_SOURCE,\n> _XOPEN_SOURCE undefines?\n\nI think at the least that will result in warnings on some platforms,\nand at the worst in actual build problems. Maybe there are no more\nof the latter a dozen years after the fact, but ...\n\n>> That would be nice. This old code was certainly mostly concerned with\n>> python 2, maybe python 3 no longer does that?\n\n> There's currently no non-comment references to *printf in their headers. The\n> only past reference was removed in:\n> commit e822e37946f27c09953bb5733acf3b07c2db690f\n> Author: Victor Stinner <vstinner@python.org>\n> Date: 2020-06-15 21:59:47 +0200\n> bpo-36020: Remove snprintf macro in pyerrors.h (GH-20889)\n\nOh, interesting.\n\n> Which suggests an easier fix would be to just to do\n\n> /*\n> * Python versions <= 3.8 otherwise define a replacement, causing\n> * macro redefinition warnings.\n> */\n> #define HAVE_SNPRINTF\n\n> And have that be enough for all python versions?\n\nNice idea. 
We did not have that option while we were using HAVE_SNPRINTF\nourselves, but now that we are not I concur that this should work.\n(I confirmed that their code looks the same in Python 3.2.)\nNote that you'd better make it\n\n#define HAVE_SNPRINTF 1\n\nor you risk macro-redefinition warnings.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 24 Jan 2023 16:16:06 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: plpython vs _POSIX_C_SOURCE" }, { "msg_contents": "Hi,\n\nOn 2023-01-24 16:16:06 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Python's _POSIX_C_SOURCE value is set to a specific value in their configure\n> > script:\n>\n> > if test $define_xopen_source = yes\n> > then\n> > ...\n> > AC_DEFINE(_POSIX_C_SOURCE, 200809L, Define to activate features from IEEE Stds 1003.1-2008)\n> > fi\n>\n> Hm. I looked into Python 3.2 (the oldest release we still support)\n> and it has similar code but\n>\n> AC_DEFINE(_POSIX_C_SOURCE, 200112L, Define to activate features from IEEE Stds 1003.1-2001)\n>\n> So yeah it's fixed (or else not defined) for any particular Python\n> release, but could vary across releases.\n\nLooks like it changed in 3.3:\n\n$ git grep -E 'AC_DEFINE.*_POSIX_C_SOURCE' v3.2 v3.3.0\nv3.2:configure.in: AC_DEFINE(_POSIX_C_SOURCE, 200112L, Define to activate features from IEEE Stds 1003.1-2001)\nv3.3.0:configure.ac: AC_DEFINE(_POSIX_C_SOURCE, 200809L, Define to activate features from IEEE Stds 1003.1-2008)\n\nI'm not sure we need to care a lot about a build with python 3.3 triggering a\nbunch of warnings.\n\nPersonally I'd just bump the python requirements to well above it - the last\n3.2 release was Oct. 
12, 2014.\n\nOfficial EOL date:\nVer\tLast Release\tEOL Date\n3.2\t2014-10-11\t2016-02-20\n3.3\t2017-09-19\t2017-09-29\n From 3.4 on there's just an official last release:\n3.4\t2019-03-18\n3.5\t2020-09-05\n3.6\t2021-09-04\n3.7\t2023-06-??\n\n\n\n> > Solaris and AIX are the ones missing.\n> > I guess I'll test them manually. It seems promising not to need this stuff\n> > anymore?\n>\n> Given that hoverfly is AIX, I'm betting there's an issue there.\n\nDoesn't look that way.\n\nI found plenty problems on AIX, but all independent of _POSIX_C_SOURCE.\n\n\nBoth autoconf and meson builds seem to need externally specified\n-D_LARGE_FILES=1 to build successfully when using plpython, otherwise we end\nup with conflicting signatures with lseek. I see that Noah has that in his\nbuildfarm config. ISTM that we should just move that into our build specs.\n\n\n\n\nTo get 64 bit autoconf to link plpython3.so correctly, I needed to add to\nmanually add -lpthread:\nld: 0711-317 ERROR: Undefined symbol: .pthread_init\n...\n\nI suspect Noah might not hit this, because one of the dependencies he has\nenabled already adds it to the backend LDFLAGS.\n\n\nAlso for autoconf, I needed to link\n$prefix/lib/python3.11/config-3.11/libpython3.11.a\nto\n$prefix/lib/libpython3.11.a\nThat might be a python version difference or be related to building python\nwith --enable-shared - but I see saw other problems without --enable-shared.\n\n\n\nI ran out of energy to test on aix with xlc, I spent way more time on this\nthan I have already. I'll pick it up later.\n\n\n\nI also tested 64bit solaris. 
No relevant warnings (lots of other warnings\nthough), tests pass, with both acc and gcc.\n\n\n\n> >> Anyway, I'm still of the opinion that what a11cf433413 tried to do\n> >> was the best available fix, and we need to do whatever we have to do\n> >> to plpython's headers to reinstate that coding rule.\n>\n> > You think it's not a viable path to just remove the _POSIX_C_SOURCE,\n> > _XOPEN_SOURCE undefines?\n>\n> I think at the least that will result in warnings on some platforms,\n> and at the worst in actual build problems. Maybe there are no more\n> of the latter a dozen years after the fact, but ...\n\nI think it might be ok. I tested nearly all OSs that we support, with the\nexception of DragonFlyBSD and Illumos, which both are very similar to tested\noperating systems.\n\n\n> Nice idea. We did not have that option while we were using HAVE_SNPRINTF\n> ourselves, but now that we are not I concur that this should work.\n\nCool.\n\n\n> (I confirmed that their code looks the same in Python 3.2.)\n> Note that you'd better make it\n>\n> #define HAVE_SNPRINTF 1\n>\n> or you risk macro-redefinition warnings.\n\nGood point.\n\nI guess I'll push that part first, given that we have agreement how it should\nlook like.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 24 Jan 2023 17:48:56 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: plpython vs _POSIX_C_SOURCE" }, { "msg_contents": "Hi,\n\nOn 2023-01-24 17:48:56 -0800, Andres Freund wrote:\n> Also for autoconf, I needed to link\n> $prefix/lib/python3.11/config-3.11/libpython3.11.a\n> to\n> $prefix/lib/libpython3.11.a\n> That might be a python version difference or be related to building python\n> with --enable-shared - but I see saw other problems without --enable-shared.\n\nThat actually doesn't quite work right. One needs to either link to the file\nby name (i.e. just $prefix/lib/libpython3.11.so instead of -lpython3.11), or\ncreate a wrapper .a \"manually\". 
I.e.\n\nar crs $prefix/lib/libpython3.11.a $prefix/lib/libpython3.11.so\n\nI tried quite a few things and got confused between the attempts.\n\n\n> I ran out of energy to test on aix with xlc, I spent way more time on this\n> than I have already. I'll pick it up later.\n\nBuilding in the background now.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 24 Jan 2023 18:32:46 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: plpython vs _POSIX_C_SOURCE" }, { "msg_contents": "Hi,\n\nOn 2023-01-24 18:32:46 -0800, Andres Freund wrote:\n> > I ran out of energy to test on aix with xlc, I spent way more time on this\n> > than I have already. I'll pick it up later.\n> \n> Building in the background now.\n\nAlso passes.\n\n\nThus I think getting rid of the #undefines is the best plan going\nforward. Fewer complicated rules to follow => fewer rule violations.\n\n\nPatches attached.\n\n\nGreetings,\n\nAndres Freund", "msg_date": "Tue, 24 Jan 2023 20:28:35 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: plpython vs _POSIX_C_SOURCE" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Patches attached.\n\n+1 for 0001. I'm still nervous about 0002. However, maybe the\ncases that we had trouble with are legacy issues that nobody cares\nabout anymore in 2023. We can always look for another answer if\nwe get complaints, I guess.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 24 Jan 2023 23:37:44 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: plpython vs _POSIX_C_SOURCE" }, { "msg_contents": "Hi,\n\nOn 2023-01-24 23:37:44 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Patches attached.\n> \n> +1 for 0001.\n\nCool, will push tomorrow.\n\n\n> I'm still nervous about 0002. However, maybe the cases that we had trouble\n> with are legacy issues that nobody cares about anymore in 2023. 
We can\n> always look for another answer if we get complaints, I guess.\n\nYea, it's a patch that should be easily revertable, if it comes to that. I'll\nadd a note to the commit message about potentially needing to do that if\nthere's not easily addressed fallout.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 25 Jan 2023 00:52:23 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: plpython vs _POSIX_C_SOURCE" }, { "msg_contents": "On Tue, Jan 24, 2023 at 11:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Patches attached.\n>\n> +1 for 0001. I'm still nervous about 0002. However, maybe the\n> cases that we had trouble with are legacy issues that nobody cares\n> about anymore in 2023. We can always look for another answer if\n> we get complaints, I guess.\n\nIt feels like things are changing so fast these days that whatever was\nhappening 12 years ago is not likely to be relevant. Compilers change\nenough to cause warnings and even errors in just a few years. A decade\nis long enough for an entire platform to become irrelevant.\n\nPlus, the cost of experimentation here seems very low. Sure, something\nmight break, but if it does, we can just change it back, or change it\nagain. That's not really a big deal. The thing that would be a big\ndeal, maybe, is if we released and only found out afterward that this\ncaused some subtle and horrible problem for which we had no\nback-patchable fix, but that seems pretty unlikely.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 25 Jan 2023 08:31:23 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: plpython vs _POSIX_C_SOURCE" }, { "msg_contents": "Hi,\n\nPushed the patches. 
So far no fallout, and hoverfly recovered.\n\nI just checked a few of the more odd animals (Illumos, Solaris, old OpenBSD,\nAIX) that already ran without finding new warnings.\n\nThere's a few more animals to run before I'll fully relax though.\n\n\nOn 2023-01-25 08:31:23 -0500, Robert Haas wrote:\n> Plus, the cost of experimentation here seems very low. Sure, something\n> might break, but if it does, we can just change it back, or change it\n> again. That's not really a big deal. The thing that would be a big\n> deal, maybe, is if we released and only found out afterward that this\n> caused some subtle and horrible problem for which we had no\n> back-patchable fix, but that seems pretty unlikely.\n\nAgreed.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 25 Jan 2023 13:26:49 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: plpython vs _POSIX_C_SOURCE" } ]
[ { "msg_contents": "Hi,\n\nHaving a query string, I am trying to use the postgres parser to find which\nrelations the query accesses. This is what I currently have:\n\nconst char *query_string=\"select * from dummytable;\";\nList *parsetree_list=pg_parse_query(query_string);\nListCell *parsetree_item;\n\nforeach(parsetree_item,parsetree_list){\n RawStmt *parsetree=lfirst_node(RawStmt,parsetree_item);\n Query *query=parse_analyze(parsetree,query_string,NULL,0,NULL);\n}\n\nHowever, when I inspect the variable \"query\", it is not populated\ncorrectly. For example, commandType is set to CMD_DELETE while I have\npassed a SELECT query.\n- What am I doing wrong?\n- Once I get the query correctly, how can I get the list of relations it\ngets access to?\n- Or any other ways to get the list of relations from raw query string\nthrough postgres calls?\n\nThank you!", "msg_date": "Tue, 24 Jan 2023 15:34:59 -0800", "msg_from": "Amin <amin.fallahi@gmail.com>", "msg_from_op": true, "msg_subject": "Getting relations accessed by a query using the raw query string" } ]
[ { "msg_contents": "Hi,\n\nDavid Rowley and I were discussing how to test the\nNoMovementScanDirection case for heapgettup() and heapgettup_pagemode()\nin [1] (since there is not currently coverage). We are actually\nwondering if it is dead code (in core).\n\nThis is a link to the code in question on github in [2] (side note: is\nthere a more evergreen way to do this that doesn't involve pasting a\nhundred lines of code into this email? You need quite a few lines of\ncontext for it to be clear what code I am talking about.)\n\nstandard_ExecutorRun() doesn't run ExecutePlan() if scan direction is no\nmovement.\n\n if (!ScanDirectionIsNoMovement(direction))\n {\n ...\n ExecutePlan(estate,\n queryDesc->planstate,\n }\n\nAnd other users of heapgettup() through table_scan_getnextslot() and the\nlike all seem to pass ForwardScanDirection as the direction.\n\nA skilled code archaeologist brought our attention to adbfab119b308a\nwhich appears to remove the only users in the core codebase calling\nheapgettup() and heapgettup_pagemode() with NoMovementScanDirection (and\nthose users were not themselves used).\n\nPerhaps we can remove support for NoMovementScanDirection with\nheapgettup()? Unless someone knows of a good use case for a table AM to\ndo this?\n\n- Melanie\n\n[1] https://www.postgresql.org/message-id/CAAKRu_ZyiXwWS1WXSZneoy%2BsjoH_%2BF5UhO-1uFhyi-u0d6z%2BfA%40mail.gmail.com\n[2] https://github.com/postgres/postgres/blob/master/src/backend/access/heap/heapam.c#L656\n\n\n", "msg_date": "Tue, 24 Jan 2023 19:55:23 -0500", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": true, "msg_subject": "heapgettup() with NoMovementScanDirection unused in core?" }, { "msg_contents": "On Wed, 25 Jan 2023 at 13:55, Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> David Rowley and I were discussing how to test the\n> NoMovementScanDirection case for heapgettup() and heapgettup_pagemode()\n> in [1] (since there is not currently coverage). 
We are actually\n> wondering if it is dead code (in core).\n\nYeah, so I see nothing in core that can cause heapgettup() or\nheapgettup_pagemode() to be called with NoMovementScanDirection. I\nimagine one possible way to hit it might be in an extension where\nsomeone's written their own ExecutorRun_hook that does not have the\nsame NoMovementScanDirection check that standard_ExecutorRun() has.\n\nSo far my thoughts are that we should just rip it out and see if\nanyone complains. If they complain loudly enough, then it's easy\nenough to put it back without any compatibility issues. However, if\nit's for the ExecutorRun_hook reason, then they'll likely be better to\nadd the same NoMovementScanDirection as we have in\nstandard_ExecutorRun().\n\nI'm just not keen on refactoring the code without the means to test\nthat the new code actually works.\n\nDoes anyone know of any reason why we shouldn't ditch the nomovement\ncode in heapgettup/heapgettup_pagemode?\n\nMaybe we'd also want to Assert that the direction is either forwards\nor backwards in table_scan_getnextslot and\ntable_scan_getnextslot_tidrange. (I see heap_getnextslot_tidrange()\ndoes not have any handling for NoMovementScanDirection, so if this is\nnot dead code, that probably needs to be fixed)\n\nDavid\n\n\n", "msg_date": "Wed, 25 Jan 2023 22:03:51 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: heapgettup() with NoMovementScanDirection unused in core?" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> Does anyone know of any reason why we shouldn't ditch the nomovement\n> code in heapgettup/heapgettup_pagemode?\n\nAFAICS, the remaining actual use-case for NoMovementScanDirection\nis that defined by ExecutorRun:\n\n * If direction is NoMovementScanDirection then nothing is done\n * except to start up/shut down the destination. 
Otherwise,\n * we retrieve up to 'count' tuples in the specified direction.\n *\n * Note: count = 0 is interpreted as no portal limit, i.e., run to\n * completion.\n\nWe must have the NoMovementScanDirection option because count = 0\ndoes not mean \"do nothing\", and I noted at least two call sites\nthat require it.\n\nThe heapgettup definition is thus not only unreachable, but confusingly\ninconsistent with this meaning.\n\nI wonder if we couldn't also get rid of this confusingly-inconsistent\nalternative usage in the planner:\n\n * 'indexscandir' is one of:\n * ForwardScanDirection: forward scan of an ordered index\n * BackwardScanDirection: backward scan of an ordered index\n * NoMovementScanDirection: scan of an unordered index, or don't care\n * (The executor doesn't care whether it gets ForwardScanDirection or\n * NoMovementScanDirection for an indexscan, but the planner wants to\n * distinguish ordered from unordered indexes for building pathkeys.)\n\nWhile that comment's claim is plausible, I think it's been wrong for\nyears. AFAICS indxpath.c makes pathkeys before it ever does this:\n\n index_is_ordered ?\n ForwardScanDirection :\n NoMovementScanDirection,\n\nand nothing depends on that later either. So I think we could\nsimplify this to something like \"indexscandir is either\nForwardScanDirection or BackwardScanDirection. (Unordered\nindex types need not support BackwardScanDirection.)\"\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 25 Jan 2023 10:02:28 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: heapgettup() with NoMovementScanDirection unused in core?" }, { "msg_contents": "Hi,\n\nOn 2023-01-25 10:02:28 -0500, Tom Lane wrote:\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > Does anyone know of any reason why we shouldn't ditch the nomovement\n> > code in heapgettup/heapgettup_pagemode?\n\n+1\n\nBecause I dug it up yesterday. There used to be callers of heap* with\nNoMovement. 
But they were unused themselves:\nhttp://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=adbfab119b308a7e0e6b1305de9be222cfd5c85b\n\n\n> * If direction is NoMovementScanDirection then nothing is done\n> * except to start up/shut down the destination. Otherwise,\n> * we retrieve up to 'count' tuples in the specified direction.\n> *\n> * Note: count = 0 is interpreted as no portal limit, i.e., run to\n> * completion.\n> \n> We must have the NoMovementScanDirection option because count = 0\n> does not mean \"do nothing\", and I noted at least two call sites\n> that require it.\n\nI wonder if we'd be better off removing NoMovementScanDirection, and using\ncount == (uint64)-1 for what NoMovementScanDirection is currently used for as\nan ExecutorRun parameter. Seems less confusing to me - right now we have two\nparameters with non-obvious meanings and interactions.\n\n\n> I wonder if we couldn't also get rid of this confusingly-inconsistent\n> alternative usage in the planner:\n> \n> * 'indexscandir' is one of:\n> * ForwardScanDirection: forward scan of an ordered index\n> * BackwardScanDirection: backward scan of an ordered index\n> * NoMovementScanDirection: scan of an unordered index, or don't care\n> * (The executor doesn't care whether it gets ForwardScanDirection or\n> * NoMovementScanDirection for an indexscan, but the planner wants to\n> * distinguish ordered from unordered indexes for building pathkeys.)\n\n+1\n\nCertainly seems confusing to me.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 25 Jan 2023 10:27:52 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: heapgettup() with NoMovementScanDirection unused in core?" 
}, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2023-01-25 10:02:28 -0500, Tom Lane wrote:\n>> We must have the NoMovementScanDirection option because count = 0\n>> does not mean \"do nothing\", and I noted at least two call sites\n>> that require it.\n\n> I wonder if we'd be better off removing NoMovementScanDirection, and using\n> count == (uint64)-1 for what NoMovementScanDirection is currently used for as\n> an ExecutorRun parameter. Seems less confusing to me - right now we have two\n> parameters with non-obbvious meanings and interactions.\n\nI'm down on that because it seems just about certain to break extensions\nthat call the executor, and it isn't adding enough clarity (IMHO) to\njustify that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 25 Jan 2023 16:31:37 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: heapgettup() with NoMovementScanDirection unused in core?" }, { "msg_contents": "Hi,\n\nI have written the patch to remove the unreachable code in\nheapgettup_pagemode]().\n\nOn Wed, Jan 25, 2023 at 10:02 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I wonder if we couldn't also get rid of this confusingly-inconsistent\n> alternative usage in the planner:\n>\n> * 'indexscandir' is one of:\n> * ForwardScanDirection: forward scan of an ordered index\n> * BackwardScanDirection: backward scan of an ordered index\n> * NoMovementScanDirection: scan of an unordered index, or don't care\n> * (The executor doesn't care whether it gets ForwardScanDirection or\n> * NoMovementScanDirection for an indexscan, but the planner wants to\n> * distinguish ordered from unordered indexes for building pathkeys.)\n>\n> While that comment's claim is plausible, I think it's been wrong for\n> years. AFAICS indxpath.c makes pathkeys before it ever does this:\n>\n> index_is_ordered ?\n> ForwardScanDirection :\n> NoMovementScanDirection,\n>\n> and nothing depends on that later either. 
So I think we could\n> simplify this to something like \"indexscandir is either\n> ForwardScanDirection or BackwardScanDirection. (Unordered\n> index types need not support BackwardScanDirection.)\"\n>\n\nI also did what I *think* Tom is suggesting here -- make index scan's\nscan direction always forward or backward...\n\nMaybe the set should be two patches...dunno.\n\n- Melanie", "msg_date": "Fri, 27 Jan 2023 16:40:01 -0500", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": true, "msg_subject": "Re: heapgettup() with NoMovementScanDirection unused in core?" }, { "msg_contents": "Melanie Plageman <melanieplageman@gmail.com> writes:\n> I have written the patch to remove the unreachable code in\n> heapgettup_pagemode]().\n\nA few thoughts ...\n\n1. Do we really need quite so many Asserts? I'd kind of lean\nto just having one, at some higher level of the executor.\n\n2. I'm not sure if we want to do this:\n\n-\tdirection = estate->es_direction;\n-\t/* flip direction if this is an overall backward scan */\n-\tif (ScanDirectionIsBackward(((IndexScan *) node->ss.ps.plan)->indexorderdir))\n-\t{\n-\t\tif (ScanDirectionIsForward(direction))\n-\t\t\tdirection = BackwardScanDirection;\n-\t\telse if (ScanDirectionIsBackward(direction))\n-\t\t\tdirection = ForwardScanDirection;\n-\t}\n+\tdirection = estate->es_direction * ((IndexScan *) node->ss.ps.plan)->indexorderdir;\n\nAFAIR, there is noplace today that depends on the exact encoding\nof ForwardScanDirection and BackwardScanDirection, and I'm not\nsure that we want to introduce such a dependency. If we do it\nat least deserves a comment here, and you probably ought to adjust\nthe wishy-washy comment in sdir.h as well. Taking out the existing\ncomment explaining what this code is doing is not an improvement\neither.\n\n3. You didn't update the header comment for heapgettup, nor the\none in pathnodes.h for IndexPath.indexscandir.\n\n4. 
I don't think the proposed test case is worth the cycles.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 27 Jan 2023 17:05:16 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: heapgettup() with NoMovementScanDirection unused in core?" }, { "msg_contents": "On Fri, Jan 27, 2023 at 05:05:16PM -0500, Tom Lane wrote:\n> Melanie Plageman <melanieplageman@gmail.com> writes:\n> > I have written the patch to remove the unreachable code in\n> > heapgettup_pagemode]().\n> \n> A few thoughts ...\n> \n> 1. Do we really need quite so many Asserts? I'd kind of lean\n> to just having one, at some higher level of the executor.\n\nYes, perhaps I was a bit overzealous putting them in functions called\nfor every tuple.\n\nI'm not sure where in the executor would make the most sense.\nExecInitSeqScan() comes to mind, but I'm not sure that covers all of the\ndesired cases.\n\n> \n> 2. I'm not sure if we want to do this:\n> \n> -\tdirection = estate->es_direction;\n> -\t/* flip direction if this is an overall backward scan */\n> -\tif (ScanDirectionIsBackward(((IndexScan *) node->ss.ps.plan)->indexorderdir))\n> -\t{\n> -\t\tif (ScanDirectionIsForward(direction))\n> -\t\t\tdirection = BackwardScanDirection;\n> -\t\telse if (ScanDirectionIsBackward(direction))\n> -\t\t\tdirection = ForwardScanDirection;\n> -\t}\n> +\tdirection = estate->es_direction * ((IndexScan *) node->ss.ps.plan)->indexorderdir;\n> \n> AFAIR, there is noplace today that depends on the exact encoding\n> of ForwardScanDirection and BackwardScanDirection, and I'm not\n> sure that we want to introduce such a dependency. If we do it\n> at least deserves a comment here, and you probably ought to adjust\n> the wishy-washy comment in sdir.h as well. Taking out the existing\n> comment explaining what this code is doing is not an improvement\n> either.\n\nI think you mean the enum value when you say encoding? 
I actually\nstarted using the ScanDirection value in the refactor of heapgettup()\nand heapgettup_pagemode() which I proposed here [1]. Why wouldn't we\nwant to introduce such a dependency?\n\n> \n> 3. You didn't update the header comment for heapgettup, nor the\n> one in pathnodes.h for IndexPath.indexscandir.\n\nOops -- thanks for catching those!\n\n> \n> 4. I don't think the proposed test case is worth the cycles.\n\nJust the one I wrote or any test case?\n\n- Melanie\n\n[1] https://www.postgresql.org/message-id/flat/CAAKRu_ZJg_N7zHtWP%2BJoSY_hrce4%2BGKioL137Y2c2En-kuXQ7g%40mail.gmail.com#8a106c6625bc069cf439230cd9fa1000\n\n\n", "msg_date": "Fri, 27 Jan 2023 17:35:53 -0500", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": true, "msg_subject": "Re: heapgettup() with NoMovementScanDirection unused in core?" }, { "msg_contents": "Melanie Plageman <melanieplageman@gmail.com> writes:\n> On Fri, Jan 27, 2023 at 05:05:16PM -0500, Tom Lane wrote:\n>> AFAIR, there is noplace today that depends on the exact encoding\n>> of ForwardScanDirection and BackwardScanDirection, and I'm not\n>> sure that we want to introduce such a dependency.\n\n> I think you mean the enum value when you say encoding? I actually\n> started using the ScanDirection value in the refactor of heapgettup()\n> and heapgettup_pagemode() which I proposed here [1]. 
Why wouldn't we\n> want to introduce such a dependency?\n\nIt's just that in general, depending on the numeric values of an\nenum isn't a great coding practice.\n\nAfter thinking about it for awhile, I'd be happier if we added\nsomething like this to sdir.h, and then used it rather than\ndirectly depending on multiplication:\n\n/*\n * Determine the net effect of two direction specifications.\n * This relies on having ForwardScanDirection = +1, BackwardScanDirection = -1,\n * and will probably not do what you want if applied to any other values.\n */\n#define CombineScanDirections(a, b) ((a) * (b))\n\nThe main thing this'd buy us is being able to grep for uses of the\ntrick. If it's written as just multiplication, good luck being\nable to find what's depending on that, should you ever need to.\n\n>> 4. I don't think the proposed test case is worth the cycles.\n\n> Just the one I wrote or any test case?\n\nI think that all this code is quite well-tested already, so I'm\nnot sure what's the point of adding another test for it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 27 Jan 2023 18:15:50 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: heapgettup() with NoMovementScanDirection unused in core?" }, { "msg_contents": "On Sat, 28 Jan 2023 at 12:15, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> /*\n> * Determine the net effect of two direction specifications.\n> * This relies on having ForwardScanDirection = +1, BackwardScanDirection = -1,\n> * and will probably not do what you want if applied to any other values.\n> */\n> #define CombineScanDirections(a, b) ((a) * (b))\n>\n> The main thing this'd buy us is being able to grep for uses of the\n> trick. 
If it's written as just multiplication, good luck being\n> able to find what's depending on that, should you ever need to.\n\nYeah, I think the multiplication macro is a good way of doing it.\nHaving the definition of it close to the ScanDirection enum's\ndefinition is likely a very good idea so that anyone adjusting the\nenum values is more likely to notice that it'll cause an issue. A\nsmall note on the enum declaration about the -1, +1 values being\nexploited in various places might be a good idea too. I see v6-0006 in\n[1] further exploits this, so that's further reason to document that.\n\nMy personal preference would have been to call it\nScanDirectionCombine, so the naming is more aligned to the 4 other\nmacro names that start with ScanDirection in sdir.h, but I'm not going\nto fuss over it.\n\nDavid\n\n[1] https://postgr.es/m/CAAKRu_ZyiXwWS1WXSZneoy+sjoH_+F5UhO-1uFhyi-u0d6z+fA@mail.gmail.com\n\n\nDavid\n\n\n", "msg_date": "Sat, 28 Jan 2023 12:28:14 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: heapgettup() with NoMovementScanDirection unused in core?" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> My personal preference would have been to call it\n> ScanDirectionCombine, so the naming is more aligned to the 4 other\n> macro names that start with ScanDirection in sdir.h, but I'm not going\n> to fuss over it.\n\nNo objection to that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 27 Jan 2023 18:30:35 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: heapgettup() with NoMovementScanDirection unused in core?" 
}, { "msg_contents": "v2 attached\n\nOn Fri, Jan 27, 2023 at 6:28 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Sat, 28 Jan 2023 at 12:15, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > /*\n> > * Determine the net effect of two direction specifications.\n> > * This relies on having ForwardScanDirection = +1, BackwardScanDirection = -1,\n> > * and will probably not do what you want if applied to any other values.\n> > */\n> > #define CombineScanDirections(a, b) ((a) * (b))\n> >\n> > The main thing this'd buy us is being able to grep for uses of the\n> > trick. If it's written as just multiplication, good luck being\n> > able to find what's depending on that, should you ever need to.\n>\n> Yeah, I think the multiplication macro is a good way of doing it.\n> Having the definition of it close to the ScanDirection enum's\n> definition is likely a very good idea so that anyone adjusting the\n> enum values is more likely to notice that it'll cause an issue. A\n> small note on the enum declaration about the -1, +1 values being\n> exploited in various places might be a good idea too. I see v6-0006 in\n> [1] further exploits this, so that's further reason to document that.\n>\n> My personal preference would have been to call it\n> ScanDirectionCombine, so the naming is more aligned to the 4 other\n> macro names that start with ScanDirection in sdir.h, but I'm not going\n> to fuss over it.\n\nI've gone with this macro name.\nI've also updated comments Tom mentioned and removed the test.\n\nAs for the asserts, I was at a bit of a loss as to where to put an\nassert which will make it clear that heapgettup() and\nheapgettup_pagemode() do not handle NoMovementScanDirection but was\nat a higher level of the executor. Do we not have to accommodate the\ndirection changing from tuple to tuple? 
If we don't expect the plan node\ndirection to change during execution, then why recalculate\nestate->es_direction for each invocation of Index/SeqNext()?\n\nAs such, in this version I've put the asserts in heapgettup() and\nheapgettup_pagemode().\n\nI also realized that it doesn't really make sense to assert about the\nindex scan direction in ExecInitIndexOnlyScan() and ExecInitIndexScan()\n-- so I've moved the assertion to planner when we make the index plan\nfrom the path. I'm not sure if it is needed.\n\n- Melanie", "msg_date": "Mon, 30 Jan 2023 15:57:34 -0500", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": true, "msg_subject": "Re: heapgettup() with NoMovementScanDirection unused in core?" }, { "msg_contents": "On Tue, 31 Jan 2023 at 09:57, Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> As for the asserts, I was at a bit of a loss as to where to put an\n> assert which will make it clear that heapgettup() and\n> heapgettup_pagemode() do not handle NoMovementScanDirection but was\n> at a higher level of the executor.\n\nMy thoughts were that we might want to put them\ntable_scan_getnextslot() and table_scan_getnextslot_tidrange(). My\nrationale for that was that it makes it more clear to table AM devs\nthat they don't need to handle NoMovementScanDirection.\n\n> Do we not have to accommodate the\n> direction changing from tuple to tuple? If we don't expect the plan node\n> direction to change during execution, then why recalculate\n> estate->es_direction for each invocation of Index/SeqNext()?\n\nYeah, this needs to be handled. FETCH can fetch forwards or backwards\nfrom a cursor. 
The code you have looks fine to me.\n\n> As such, in this version I've put the asserts in heapgettup() and\n> heapgettup_pagemode().\n>\n> I also realized that it doesn't really make sense to assert about the\n> index scan direction in ExecInitIndexOnlyScan() and ExecInitIndexScan()\n> -- so I've moved the assertion to planner when we make the index plan\n> from the path. I'm not sure if it is needed.\n\nThat's probably slightly better.\n\nThe only thing I really have on this is my thoughts on the Asserts\ngoing in tableam.h plus the following comment:\n\n/*\n * These enum values were originally int8 values. Using -1, 0, and 1 as their\n * values conveniently mirrors their semantic value when used during execution.\n */\n\nI don't really see any reason to keep the historical note here. I\nthink something like the following might be better:\n\n/*\n * Defines the direction for scanning a table or an index. Scans are never\n * invoked using NoMovementScanDirectionScans. For convenience, we use the\n * values -1 and 1 for backward and forward scans. This allows us to perform\n * a few mathematical tricks such as what is done in ScanDirectionCombine.\n */\n\nAlso, a nitpick around the inconsistency with the Asserts. In\nmake_indexscan() and make_indexonlyscan() you're checking you're\ngetting a forward and backward value, but in heapgettup() and\nheapgettup_pagemode() you're checking you don't get\nNoMovementScanDirection. I think the != NoMovementScanDirection is\nfine for both cases.\n\nBoth can be easily fixed, so no need to submit another patch as far as\nI'm concerned.\n\nI'll leave this til tomorrow in case Tom wants to have another look too.\n\nDavid\n\n\n", "msg_date": "Tue, 31 Jan 2023 23:46:05 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: heapgettup() with NoMovementScanDirection unused in core?" 
}, { "msg_contents": "On Tue, Jan 31, 2023 at 11:46:05PM +1300, David Rowley wrote:\n> On Tue, 31 Jan 2023 at 09:57, Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> > As for the asserts, I was at a bit of a loss as to where to put an\n> > assert which will make it clear that heapgettup() and\n> > heapgettup_pagemode() do not handle NoMovementScanDirection but was\n> > at a higher level of the executor.\n> \n> My thoughts were that we might want to put them\n> table_scan_getnextslot() and table_scan_getnextslot_tidrange(). My\n> rationale for that was that it makes it more clear to table AM devs\n> that they don't need to handle NoMovementScanDirection.\n\nI previously had the asserts here, but I thought perhaps we shouldn't\nrestrict table AMs from using NoMovementScanDirection in whatever way\nthey'd like. We care about protecting heapgettup() and\nheapgettup_pagemode(). We could put a comment in the table AM API about\nNoMovementScanDirection not necessarily making sense for a next() type\nfunction and informing table AMs that they need not support it.\n\n> \n> > Do we not have to accommodate the\n> > direction changing from tuple to tuple? If we don't expect the plan node\n> > direction to change during execution, then why recalculate\n> > estate->es_direction for each invocation of Index/SeqNext()?\n> \n> Yeah, this needs to be handled. FETCH can fetch forwards or backwards\n> from a cursor. The code you have looks fine to me.\n> \n> > As such, in this version I've put the asserts in heapgettup() and\n> > heapgettup_pagemode().\n> >\n> > I also realized that it doesn't really make sense to assert about the\n> > index scan direction in ExecInitIndexOnlyScan() and ExecInitIndexScan()\n> > -- so I've moved the assertion to planner when we make the index plan\n> > from the path. 
I'm not sure if it is needed.\n> \n> That's probably slightly better.\n> \n> The only thing I really have on this is my thoughts on the Asserts\n> going in tableam.h plus the following comment:\n> \n> /*\n> * These enum values were originally int8 values. Using -1, 0, and 1 as their\n> * values conveniently mirrors their semantic value when used during execution.\n> */\n> \n> I don't really see any reason to keep the historical note here. I\n> think something like the following might be better:\n> \n> /*\n> * Defines the direction for scanning a table or an index. Scans are never\n> * invoked using NoMovementScanDirectionScans. For convenience, we use the\n> * values -1 and 1 for backward and forward scans. This allows us to perform\n> * a few mathematical tricks such as what is done in ScanDirectionCombine.\n> */\n\nThis comment looks good to me.\n\n> Also, a nitpick around the inconsistency with the Asserts. In\n> make_indexscan() and make_indexonlyscan() you're checking you're\n> getting a forward and backward value, but in heapgettup() and\n> heapgettup_pagemode() you're checking you don't get\n> NoMovementScanDirection. I think the != NoMovementScanDirection is\n> fine for both cases.\n\nYes, I thought about it being weird that they are different. Perhaps we\nshould check in both places that it is forward or backward. 
In\nheapgettup[_pagemode()] there is if/else -- so if the assert is only for\nNoMovementScanDirection and a new scan direction is added, it would fall\nthrough to the else.\n\nIn planner, it is not that we are not \"handling\" NoMovementScanDirection\n(like in heapgettup) but rather that we are only passing Forward and\nBackward scan directions when creating the path nodes, so the Assert\nwould be mainly to remind the developer that if they are creating a plan\nwith a different scan direction that they should be intentional about\nit.\n\nSo, I would favor having both asserts check that the direction is one of\nforward or backward.\n \n> Both can be easily fixed, so no need to submit another patch as far as\n> I'm concerned.\n\nI realized I forgot a commit message in the second version. Patch v1 has\none.\n\n- Melanie\n\n\n", "msg_date": "Tue, 31 Jan 2023 09:02:24 -0500", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": true, "msg_subject": "Re: heapgettup() with NoMovementScanDirection unused in core?" }, { "msg_contents": "On Wed, 1 Feb 2023 at 03:02, Melanie Plageman <melanieplageman@gmail.com> wrote:\n>\n> On Tue, Jan 31, 2023 at 11:46:05PM +1300, David Rowley wrote:\n> > My thoughts were that we might want to put them\n> > table_scan_getnextslot() and table_scan_getnextslot_tidrange(). My\n> > rationale for that was that it makes it more clear to table AM devs\n> > that they don't need to handle NoMovementScanDirection.\n>\n> I previously had the asserts here, but I thought perhaps we shouldn't\n> restrict table AMs from using NoMovementScanDirection in whatever way\n> they'd like. We care about protecting heapgettup() and\n> heapgettup_pagemode(). 
We could put a comment in the table AM API about\n> NoMovementScanDirection not necessarily making sense for a next() type\n> function and informing table AMs that they need not support it.\n\nhmm, but the recent discovery is that we'll never call ExecutePlan()\nwith NoMovementScanDirection, so what exactly is going to call\ntable_scan_getnextslot() and table_scan_getnextslot_tidrange() with\nNoMovementScanDirection?\n\n> So, I would favor having both asserts check that the direction is one of\n> forward or backward.\n\nThat sounds fine to me.\n\nDavid\n\n\n", "msg_date": "Wed, 1 Feb 2023 09:28:22 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: heapgettup() with NoMovementScanDirection unused in core?" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Wed, 1 Feb 2023 at 03:02, Melanie Plageman <melanieplageman@gmail.com> wrote:\n>> I previously had the asserts here, but I thought perhaps we shouldn't\n>> restrict table AMs from using NoMovementScanDirection in whatever way\n>> they'd like. We care about protecting heapgettup() and\n>> heapgettup_pagemode(). We could put a comment in the table AM API about\n>> NoMovementScanDirection not necessarily making sense for a next() type\n>> function and informing table AMs that they need not support it.\n\n> hmm, but the recent discovery is that we'll never call ExecutePlan()\n> with NoMovementScanDirection, so what exactly is going to call\n> table_scan_getnextslot() and table_scan_getnextslot_tidrange() with\n> NoMovementScanDirection?\n\nYeah. This is not an AM-local API.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 31 Jan 2023 15:36:20 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: heapgettup() with NoMovementScanDirection unused in core?" 
}, { "msg_contents": "On Wed, 1 Feb 2023 at 03:02, Melanie Plageman <melanieplageman@gmail.com> wrote:\n>\n> On Tue, Jan 31, 2023 at 11:46:05PM +1300, David Rowley wrote:\n> > Both can be easily fixed, so no need to submit another patch as far as\n> > I'm concerned.\n>\n> I realized I forgot a commit message in the second version. Patch v1 has\n> one.\n\nI made a couple of other adjustments to the Asserts and comments and\npushed the result.\n\nOn further looking, I felt the Assert was better off done in\ncreate_indexscan_plan() rather than make_index[only]scan(). I also put\nthe asserts in tableam.h and removed the heapam.c ones. The rest was\njust comment adjustments.\n\nDavid\n\n\n", "msg_date": "Wed, 1 Feb 2023 10:56:31 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: heapgettup() with NoMovementScanDirection unused in core?" } ]
[ { "msg_contents": "Attached is a quick-and-dirty attempt to add MSVC support for the\nrightmost/leftmost-one-pos functions.\n\n0001 adds asserts to the existing coding.\n0002 adds MSVC support. Tests pass on CI, but it's of course possible that\nthere is some bug that prevents hitting the fastpath. I've mostly used\nthe platform specific types, so some further cleanup might be needed.\n0003 tries one way to reduce the duplication that arose in 0002. Maybe\nthere is a better way.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 25 Jan 2023 08:42:44 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "bitscan forward/reverse on Windows" }, { "msg_contents": "I wrote:\n> Attached is a quick-and-dirty attempt to add MSVC support for the\nrightmost/leftmost-one-pos functions.\n>\n> 0001 adds asserts to the existing coding.\n> 0002 adds MSVC support. Tests pass on CI, but it's of course possible\nthat there is some bug that prevents hitting the fastpath. I've mostly used\nthe platform specific types, so some further cleanup might be needed.\n\nI've cleaned these up and verified on godbolt.org that they work as\nintended and still pass CI. I kept the Windows types as does other Winows\ncode in the tree, but used bool instead of unsigned char because it's used\nlike a boolean.\n\n0003 is separate because I'm not quite sure how detailed to comment the\n#ifdef maze. Could be left out.\n0004 simplifies AllocSetFreeIndex() in the course of supporting MSVC. The\noutput is identical to HEAD in non-assert builds using gcc.\n\n0002 through 0004 could be squashed together.\n\nThis plugs a hole in our platform-specific intrinsic support and is fairly\nstraightforward. 
Review welcome, but if there is none I intend to commit in\na week or two.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 8 Feb 2023 15:14:08 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: bitscan forward/reverse on Windows" }, { "msg_contents": "On Wed, Feb 8, 2023 at 3:14 PM John Naylor <john.naylor@enterprisedb.com>\nwrote:\n> > 0001 adds asserts to the existing coding.\n> > 0002 adds MSVC support. Tests pass on CI, but it's of course possible\nthat there is some bug that prevents hitting the fastpath. I've mostly used\nthe platform specific types, so some further cleanup might be needed.\n>\n> I've cleaned these up and verified on godbolt.org that they work as\nintended and still pass CI. I kept the Windows types as does other Winows\ncode in the tree, but used bool instead of unsigned char because it's used\nlike a boolean.\n>\n> 0003 is separate because I'm not quite sure how detailed to comment the\n#ifdef maze. Could be left out.\n> 0004 simplifies AllocSetFreeIndex() in the course of supporting MSVC. The\noutput is identical to HEAD in non-assert builds using gcc.\n>\n> 0002 through 0004 could be squashed together.\n>\n> This plugs a hole in our platform-specific intrinsic support and is\nfairly straightforward. Review welcome, but if there is none I intend to\ncommit in a week or two.\n\nI've committed 0001 separately, and squashed 0002 and 0004, deciding that\n0003 didn't really add to readability.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 20 Feb 2023 15:30:33 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: bitscan forward/reverse on Windows" } ]
[ { "msg_contents": "Hello,\n\nI have discovered a bug in one usage of enums. If a table with hash\npartitions uses an enum as a partitioning key, it can no longer be\nbacked up and restored correctly. This is because enums are represented\nsimply as oids, and the hash function for enums hashes that oid to\ndetermine partition distribution. Given the way oids are assigned, any\ndump+restore of a database with such a table may fail with the error\n\"ERROR: new row for relation \"TABLENAME\" violates partition constraint\".\n\nThis can be reproduced with the following steps:\n************************************************\ncreate database test;\n\\c test\ncreate type colors as enum ('red', 'green', 'blue', 'yellow');\ncreate table part (color colors) partition by hash(color);\n\ncreate table prt_0 partition of part for values \nwith (modulus 3, remainder 0);\n\ncreate table prt_1 partition of part for values \nwith (modulus 3, remainder 1);\n\ncreate table prt_2 partition of part for values \nwith (modulus 3, remainder 2);\ninsert into part values ('red');\n\n/usr/local/pgsql/bin/pg_dump -d test -f /tmp/dump.sql\n/usr/local/pgsql/bin/createdb test2\n/usr/local/pgsql/bin/psql test2 -f /tmp/dump.sql\n************************************************\n\nI have written a patch to fix this bug (attached), by instead having the\nhashenum functions look up the enumsortorder ID of the value being\nhashed. These are deterministic across databases, and so allow for\nstable dump and restore. This admittedly comes at the performance cost\nof doing a catalog lookup, but there is precedent for this in\ne.g. 
hashrange and hashtext.\n\nI look forward to your feedback on this, thank you!\n\nSincerely,\nAndrew J Repp (VMware)", "msg_date": "Tue, 24 Jan 2023 21:30:06 -0600", "msg_from": "Andrew <pgsqlhackers@andrewrepp.com>", "msg_from_op": true, "msg_subject": "Fix to enum hashing for dump and restore" }, { "msg_contents": "Andrew <pgsqlhackers@andrewrepp.com> writes:\n> I have discovered a bug in one usage of enums. If a table with hash\n> partitions uses an enum as a partitioning key, it can no longer be\n> backed up and restored correctly. This is because enums are represented\n> simply as oids, and the hash function for enums hashes that oid to\n> determine partition distribution. Given the way oids are assigned, any\n> dump+restore of a database with such a table may fail with the error\n> \"ERROR: new row for relation \"TABLENAME\" violates partition constraint\".\n\nUgh, that was not well thought out :-(. I suppose this isn't a problem\nfor pg_upgrade, which should preserve the enum value OIDs, but an\nordinary dump/restore will indeed hit this.\n\n> I have written a patch to fix this bug (attached), by instead having the\n> hashenum functions look up the enumsortorder ID of the value being\n> hashed. These are deterministic across databases, and so allow for\n> stable dump and restore.\n\nUnfortunately, I'm not sure those are as deterministic as all that.\nThey are floats, so there's a question of roundoff error, not to\nmention cross-platform variations in what a float looks like. (At the\nabsolute minimum, I think we'd have to change your patch to force\nconsistent byte ordering of the floats.) Actually though, roundoff\nerror wouldn't be a problem for the normal exact-integer values of\nenumsortorder. 
Where it could come into play is with the fractional\nvalues used after you insert a value into the existing sort order.\nAnd then the whole idea fails, because a dump/restore won't duplicate\nthose fractional values.\n\nAnother problem with this approach is that we can't get from here to there\nwithout a guaranteed dump/reload failure, since it's quite unlikely that\nthe partition assignment will be the same when based on enumsortorder\nas it was when based on OIDs. Worse, it also breaks the pg_upgrade case.\n\nI wonder if it'd work to make pg_dump force --load-via-partition-root\nmode when a hashed partition key includes an enum.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 24 Jan 2023 22:56:10 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fix to enum hashing for dump and restore" }, { "msg_contents": "Those are excellent points. \nWe will investigate adjusting pg_dump behavior,\nas this is primarily a dump+restore issue.\n\nThank you!\n\n-Andrew J Repp (VMware)\n\nOn Tue, Jan 24, 2023, at 9:56 PM, Tom Lane wrote:\n> Andrew <pgsqlhackers@andrewrepp.com> writes:\n> > I have discovered a bug in one usage of enums. If a table with hash\n> > partitions uses an enum as a partitioning key, it can no longer be\n> > backed up and restored correctly. This is because enums are represented\n> > simply as oids, and the hash function for enums hashes that oid to\n> > determine partition distribution. Given the way oids are assigned, any\n> > dump+restore of a database with such a table may fail with the error\n> > \"ERROR: new row for relation \"TABLENAME\" violates partition constraint\".\n> \n> Ugh, that was not well thought out :-(. 
I suppose this isn't a problem\n> for pg_upgrade, which should preserve the enum value OIDs, but an\n> ordinary dump/restore will indeed hit this.\n> \n> > I have written a patch to fix this bug (attached), by instead having the\n> > hashenum functions look up the enumsortorder ID of the value being\n> > hashed. These are deterministic across databases, and so allow for\n> > stable dump and restore.\n> \n> Unfortunately, I'm not sure those are as deterministic as all that.\n> They are floats, so there's a question of roundoff error, not to\n> mention cross-platform variations in what a float looks like. (At the\n> absolute minimum, I think we'd have to change your patch to force\n> consistent byte ordering of the floats.) Actually though, roundoff\n> error wouldn't be a problem for the normal exact-integer values of\n> enumsortorder. Where it could come into play is with the fractional\n> values used after you insert a value into the existing sort order.\n> And then the whole idea fails, because a dump/restore won't duplicate\n> those fractional values.\n> \n> Another problem with this approach is that we can't get from here to there\n> without a guaranteed dump/reload failure, since it's quite unlikely that\n> the partition assignment will be the same when based on enumsortorder\n> as it was when based on OIDs. Worse, it also breaks the pg_upgrade case.\n> \n> I wonder if it'd work to make pg_dump force --load-via-partition-root\n> mode when a hashed partition key includes an enum.\n> \n> regards, tom lane\n> \n", "msg_date": "Wed, 25 Jan 2023 07:23:11 -0600", "msg_from": "Andrew <pgsqlhackers@andrewrepp.com>", "msg_from_op": true, "msg_subject": "Re: Fix to enum hashing for dump and restore" } ]
[ { "msg_contents": "Rename contrib module basic_archive to basic_wal_module\n\nThis rename is in preparation for the introduction of recovery modules,\nwhere basic_wal_module will be used as a base template for the set of\ncallbacks introduced. The former name did not really reflect all that.\n\nAuthor: Nathan Bossart\nDiscussion: https://postgr.es/m/20221227192449.GA3672473@nathanxps13\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/0ad3c60caf5f77edfefaf8850fbba5ea4fe28640\n\nModified Files\n--------------\ncontrib/Makefile | 2 +-\ncontrib/basic_archive/basic_archive.conf | 4 ---\ncontrib/basic_archive/meson.build | 34 ----------------------\n.../{basic_archive => basic_wal_module}/.gitignore | 0\n.../{basic_archive => basic_wal_module}/Makefile | 14 ++++-----\n.../basic_wal_module.c} | 26 ++++++++---------\ncontrib/basic_wal_module/basic_wal_module.conf | 4 +++\n.../expected/basic_wal_module.out} | 0\ncontrib/basic_wal_module/meson.build | 34 ++++++++++++++++++++++\n.../sql/basic_wal_module.sql} | 0\ncontrib/meson.build | 2 +-\ndoc/src/sgml/appendix-obsolete-basic-archive.sgml | 25 ++++++++++++++++\ndoc/src/sgml/appendix-obsolete.sgml | 1 +\ndoc/src/sgml/archive-modules.sgml | 2 +-\n.../{basic-archive.sgml => basic-wal-module.sgml} | 30 +++++++++----------\ndoc/src/sgml/contrib.sgml | 2 +-\ndoc/src/sgml/filelist.sgml | 3 +-\n17 files changed, 105 insertions(+), 78 deletions(-)", "msg_date": "Wed, 25 Jan 2023 05:37:17 +0000", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "pgsql: Rename contrib module basic_archive to basic_wal_module" }, { "msg_contents": "On Wed, Jan 25, 2023 at 12:37 AM Michael Paquier <michael@paquier.xyz> wrote:\n> Rename contrib module basic_archive to basic_wal_module\n\nFWIW, I find this new name much less clear than the old one.\n\nIf we want to provide a basic_archive module and a basic_recovery\nmodule, that seems fine. 
Why merge them?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 25 Jan 2023 12:49:45 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Rename contrib module basic_archive to basic_wal_module" }, { "msg_contents": "On Wed, Jan 25, 2023 at 12:49:45PM -0500, Robert Haas wrote:\n> On Wed, Jan 25, 2023 at 12:37 AM Michael Paquier <michael@paquier.xyz> wrote:\n>> Rename contrib module basic_archive to basic_wal_module\n> \n> FWIW, I find this new name much less clear than the old one.\n> \n> If we want to provide a basic_archive module and a basic_recovery\n> module, that seems fine. Why merge them?\n\nI'll admit I've been stewing on whether \"WAL Modules\" is the right name.\nMy first instinct was to simply call it \"Archive and Recovery Modules,\"\nwhich is longer but (IMHO) clearer.\n\nI wanted to merge basic_archive and basic_recovery because there's a decent\nchunk of duplicated code. Perhaps that is okay, but I would rather just\nhave one test module. AFAICT the biggest reason to split it is because we\ncan't determine a good name. 
Maybe we could leave the name as\n\"basic_archive\" since it deals with creating and recovering archive files.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 25 Jan 2023 10:17:50 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Rename contrib module basic_archive to basic_wal_module" }, { "msg_contents": "On Wed, Jan 25, 2023 at 1:17 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> On Wed, Jan 25, 2023 at 12:49:45PM -0500, Robert Haas wrote:\n> > On Wed, Jan 25, 2023 at 12:37 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >> Rename contrib module basic_archive to basic_wal_module\n> >\n> > FWIW, I find this new name much less clear than the old one.\n> >\n> > If we want to provide a basic_archive module and a basic_recovery\n> > module, that seems fine. Why merge them?\n>\n> I'll admit I've been stewing on whether \"WAL Modules\" is the right name.\n> My first instinct was to simply call it \"Archive and Recovery Modules,\"\n> which is longer but (IMHO) clearer.\n>\n> I wanted to merge basic_archive and basic_recovery because there's a decent\n> chunk of duplicated code. Perhaps that is okay, but I would rather just\n> have one test module. AFAICT the biggest reason to split it is because we\n> can't determine a good name. Maybe we could leave the name as\n> \"basic_archive\" since it deals with creating and recovering archive files.\n\nYeah, maybe. I'm not sure what the best thing to do is, but if I see a\nmodule called basic_archive or basic_restore, I know what it's about,\nwhereas basic_wal_module seems a lot less specific. It sounds like it\ncould be generating or streaming it just as easily as it could be\narchiving it. 
It would be nice to have a name that is less prone to\nthat kind of unclarity.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 25 Jan 2023 14:05:39 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Rename contrib module basic_archive to basic_wal_module" }, { "msg_contents": "On Wed, Jan 25, 2023 at 02:05:39PM -0500, Robert Haas wrote:\n> On Wed, Jan 25, 2023 at 1:17 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> I wanted to merge basic_archive and basic_recovery because there's a decent\n>> chunk of duplicated code. Perhaps that is okay, but I would rather just\n>> have one test module. AFAICT the biggest reason to split it is because we\n>> can't determine a good name. Maybe we could leave the name as\n>> \"basic_archive\" since it deals with creating and recovering archive files.\n> \n> Yeah, maybe. I'm not sure what the best thing to do is, but if I see a\n> module called basic_archive or basic_restore, I know what it's about,\n> whereas basic_wal_module seems a lot less specific. It sounds like it\n> could be generating or streaming it just as easily as it could be\n> archiving it. It would be nice to have a name that is less prone to\n> that kind of unclarity.\n\nGood point. It seems like the most straightforward approach is just to\nhave separate modules. 
Unless Michael objects, I'll go that route.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 25 Jan 2023 13:34:51 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Rename contrib module basic_archive to basic_wal_module" }, { "msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> I wanted to merge basic_archive and basic_recovery because there's a decent\n> chunk of duplicated code.\n\nWould said code likely be duplicated into non-test uses of this feature?\nIf so, maybe you ought to factor it out into a common location. I agree\nwith Robert's point that basic_wal_module is a pretty content-free name.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 25 Jan 2023 16:50:22 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Rename contrib module basic_archive to basic_wal_module" }, { "msg_contents": "Hi,\n\nOn 2023-01-25 14:05:39 -0500, Robert Haas wrote:\n> > I wanted to merge basic_archive and basic_recovery because there's a decent\n> > chunk of duplicated code. Perhaps that is okay, but I would rather just\n> > have one test module. AFAICT the biggest reason to split it is because we\n> > can't determine a good name. Maybe we could leave the name as\n> > \"basic_archive\" since it deals with creating and recovering archive files.\n> \n> Yeah, maybe. I'm not sure what the best thing to do is, but if I see a\n> module called basic_archive or basic_restore, I know what it's about,\n> whereas basic_wal_module seems a lot less specific. It sounds like it\n> could be generating or streaming it just as easily as it could be\n> archiving it. It would be nice to have a name that is less prone to\n> that kind of unclarity.\n\nI think it'd be just fine to keep the name as basic_archive and use it for\nboth archiving and restoring. 
Restoring from an archive still deals with\narchiving.\n\nI agree that basic_wal_module isn't a good name.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 25 Jan 2023 13:58:01 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pgsql: Rename contrib module basic_archive to basic_wal_module" }, { "msg_contents": "On Wed, Jan 25, 2023 at 04:50:22PM -0500, Tom Lane wrote:\n> Nathan Bossart <nathandbossart@gmail.com> writes:\n>> I wanted to merge basic_archive and basic_recovery because there's a decent\n>> chunk of duplicated code.\n> \n> Would said code likely be duplicated into non-test uses of this feature?\n> If so, maybe you ought to factor it out into a common location. I agree\n> with Robert's point that basic_wal_module is a pretty content-free name.\n\nI doubt it. The duplicated parts are things like _PG_init(), the check\nhook for the GUC, and all the rest of the usual boilerplate stuff for\nextensions (e.g., Makefile, meson.build). This module is small enough that\nthis probably makes up the majority of the code.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 25 Jan 2023 14:37:04 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Rename contrib module basic_archive to basic_wal_module" }, { "msg_contents": "On Wed, Jan 25, 2023 at 01:58:01PM -0800, Andres Freund wrote:\n> I think it'd be just fine to keep the name as basic_archive and use it for\n> both archiving and restoring. Restoring from an archive still deals with\n> archiving.\n\nThis is my preference. If Michael and Robert are okay with it, I think\nthis is what we should do. 
Else, I'll create separate basic_archive and\nbasic_restore modules.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 25 Jan 2023 14:41:18 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Rename contrib module basic_archive to basic_wal_module" }, { "msg_contents": "On Wed, Jan 25, 2023 at 02:41:18PM -0800, Nathan Bossart wrote:\n> This is my preference. If Michael and Robert are okay with it, I think\n> this is what we should do. Else, I'll create separate basic_archive and\n> basic_restore modules.\n\nGrouping both things into the same module has the advantage to ease\nthe configuration, at least in the example, where the archive\ndirectory GUC can be used for both paths.\n\nRegarding the renaming, I pushed for that. There's little love for it\nas far as I can see, so will revert accordingly. Keeping\nbasic_archive as name for the module while it includes recovery is\nfine by me.\n--\nMichael", "msg_date": "Thu, 26 Jan 2023 07:53:25 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: pgsql: Rename contrib module basic_archive to basic_wal_module" } ]
[ { "msg_contents": "This works in PG 15:\n\n CREATE ROLE service CREATEROLE;\n CREATE ROLE service1 WITH LOGIN IN ROLE service;\n SET SESSION AUTHORIZATION service;\n CREATE ROLE service2 WITH LOGIN IN ROLE service;\n\nbut generates an error in git master:\n\n CREATE ROLE service CREATEROLE;\n CREATE ROLE service1 WITH LOGIN IN ROLE service;\n SET SESSION AUTHORIZATION service;\n CREATE ROLE service2 WITH LOGIN IN ROLE service;\n--> ERROR: must have admin option on role \"service\"\n\nIf I make 'service' a superuser, it works:\n\n CREATE ROLE service SUPERUSER;\n CREATE ROLE service1 WITH LOGIN IN ROLE service;\n SET SESSION AUTHORIZATION service;\n CREATE ROLE service2 WITH LOGIN IN ROLE service;\n\nIt is probably related to this discussion and change:\n\n https://www.postgresql.org/message-id/flat/CA+TgmobGds7oefDjZUY+k_J7p1sS=pTq3sZ060qdb=oKei1Dkw@mail.gmail.com\n\nI am not sure if the behavior is wrong, the error message is wrong, or\nit is working as expected.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\nEmbrace your flaws. They make you human, rather than perfect,\nwhich you will never be.\n\n\n", "msg_date": "Wed, 25 Jan 2023 08:29:32 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "CREATE ROLE bug?" 
}, { "msg_contents": "On Wed, Jan 25, 2023 at 8:29 AM Bruce Momjian <bruce@momjian.us> wrote:\n> This works in PG 15:\n>\n> CREATE ROLE service CREATEROLE;\n> CREATE ROLE service1 WITH LOGIN IN ROLE service;\n> SET SESSION AUTHORIZATION service;\n> CREATE ROLE service2 WITH LOGIN IN ROLE service;\n>\n> but generates an error in git master:\n>\n> CREATE ROLE service CREATEROLE;\n> CREATE ROLE service1 WITH LOGIN IN ROLE service;\n> SET SESSION AUTHORIZATION service;\n> CREATE ROLE service2 WITH LOGIN IN ROLE service;\n> --> ERROR: must have admin option on role \"service\"\n>\n> If I make 'service' a superuser, it works:\n>\n> CREATE ROLE service SUPERUSER;\n> CREATE ROLE service1 WITH LOGIN IN ROLE service;\n> SET SESSION AUTHORIZATION service;\n> CREATE ROLE service2 WITH LOGIN IN ROLE service;\n>\n> It is probably related to this discussion and change:\n>\n> https://www.postgresql.org/message-id/flat/CA+TgmobGds7oefDjZUY+k_J7p1sS=pTq3sZ060qdb=oKei1Dkw@mail.gmail.com\n>\n> I am not sure if the behavior is wrong, the error message is wrong, or\n> it is working as expected.\n\nIt is indeed related to that discussion and change. In existing\nreleased branches, a CREATEROLE user can make any role a member of any\nother role even if they have no rights at all with respect to that\nrole. This means that a CREATEROLE user can create a new user in the\npg_execute_server_programs group even though they have no access to\nit. That allows any CREATEROLE user to take over the OS account, and\nthus also superuser. In master, the rules have been tightened up.\nCREATEROLE no longer exempts you from the usual permission checks\nabout adding a user to a group. This means that a CREATEROLE user now\nneeds the same permissions to add a user to a group as any other user\nwould need, i.e. ADMIN OPTION on the group.\n\nIn your example, the \"service\" user has CREATEROLE and is therefore\nentitled to create new roles. 
However, \"service\" can only add those\nnew roles to groups for which \"service\" possesses ADMIN OPTION. And\n\"service\" does not have ADMIN OPTION on itself, because no role ever\npossesses ADMIN OPTION on itself.\n\nI wrote a blog about this yesterday, which may or may not be of help:\n\nhttp://rhaas.blogspot.com/2023/01/surviving-without-superuser-coming-to.html\n\nI think that the new behavior will surprise some people, as it has\nsurprised you, and it will take some time to get used to. However, I\nalso think that the changes are absolutely essential. We've been\nshipping major releases for years and just pretending that it's OK\nthat having CREATEROLE lets you take over the OS account and the\nsuperuser account. That's a security hole -- and not a subtle one. I\ncould have easily exploited it as a teenager. My goals in doing this\nproject were to (1) fix the security holes, (2) otherwise change as\nlittle about the behavior of CREATEROLE as possible, and (3) make\nCREATEROLE do something useful. We could have accomplished the first\ngoal by just removing CREATEROLE or making it not do anything at all,\nbut meeting the second and third goals at the same time require\nletting CREATEROLE continue to work but putting just enough\nrestrictions on its power to keep it from being used as a\nprivilege-escalation attack. I hope that what's been committed\naccomplishes that goal.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 25 Jan 2023 08:47:14 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: CREATE ROLE bug?" }, { "msg_contents": "On Wed, Jan 25, 2023 at 08:47:14AM -0500, Robert Haas wrote:\n> > I am not sure if the behavior is wrong, the error message is wrong, or\n> > it is working as expected.\n> \n> It is indeed related to that discussion and change. 
In existing\n> released branches, a CREATEROLE user can make any role a member of any\n> other role even if they have no rights at all with respect to that\n> role. This means that a CREATEROLE user can create a new user in the\n> pg_execute_server_programs group even though they have no access to\n> it. That allows any CREATEROLE user to take over the OS account, and\n> thus also superuser. In master, the rules have been tightened up.\n> CREATEROLE no longer exempts you from the usual permission checks\n> about adding a user to a group. This means that a CREATEROLE user now\n> needs the same permissions to add a user to a group as any other user\n> would need, i.e. ADMIN OPTION on the group.\n> \n> In your example, the \"service\" user has CREATEROLE and is therefore\n> entitled to create new roles. However, \"service\" can only add those\n> new roles to groups for which \"service\" possesses ADMIN OPTION. And\n> \"service\" does not have ADMIN OPTION on itself, because no role ever\n> possesses ADMIN OPTION on itself.\n\nSo, how would someone with CREATEROLE permission add people to their own\nrole, without superuser permission? Are we adding any security by\npreventing this?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\nEmbrace your flaws. They make you human, rather than perfect,\nwhich you will never be.\n\n\n", "msg_date": "Wed, 25 Jan 2023 09:35:31 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: CREATE ROLE bug?" }, { "msg_contents": "On Wed, Jan 25, 2023 at 7:35 AM Bruce Momjian <bruce@momjian.us> wrote:\n\n>\n> So, how would someone with CREATEROLE permission add people to their own\n> role, without superuser permission? Are we adding any security by\n> preventing this?\n>\n>\nAs an encouraged design choice you wouldn't. 
You'd create a new group and\nadd both yourself and the new role to it - then grant it the desired\npermissions.\n\nA CREATEROLE role should probably be a user (LOGIN) role and user roles\nshould not have members.\n\nDavid J.", "msg_date": "Wed, 25 Jan 2023 07:38:51 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: CREATE ROLE bug?" }, { "msg_contents": "On Wed, Jan 25, 2023 at 07:38:51AM -0700, David G. Johnston wrote:\n> On Wed, Jan 25, 2023 at 7:35 AM Bruce Momjian <bruce@momjian.us> wrote:\n> \n> \n> So, how would someone with CREATEROLE permission add people to their own\n> role, without superuser permission?  Are we adding any security by\n> preventing this?\n> \n> \n> \n> As an encouraged design choice you wouldn't.  You'd create a new group and add\n> both yourself and the new role to it - then grant it the desired permissions.\n> \n> A CREATEROLE role should probably be a user (LOGIN) role and user roles should\n> not have members.\n\nMakes sense. I was actually using that pattern, but in running some\ntest scripts that didn't revert back to the superuser, I saw the errors\nand was confused.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\nEmbrace your flaws. 
They make you human, rather than perfect,\nwhich you will never be.\n\n\n", "msg_date": "Wed, 25 Jan 2023 10:40:50 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: CREATE ROLE bug?" }, { "msg_contents": "On Wed, Jan 25, 2023 at 9:35 AM Bruce Momjian <bruce@momjian.us> wrote:\n> So, how would someone with CREATEROLE permission add people to their own\n> role, without superuser permission? Are we adding any security by\n> preventing this?\n\nThey can't, because a role can't ever have ADMIN OPTION on itself, and\nyou need ADMIN OPTION on a role to confer membership in that role.\n\nThe security argument here is complicated, but I think it basically\nboils down to wanting to distinguish between accessing the permissions\nof a role and administering the role, or in other words, being a\nmember of a role is supposed to be different than having ADMIN OPTION\non it. Probably for that reason, we've never allowed making a role a\nmember of itself. In any release, this fails:\n\nrhaas=# grant bruce to bruce with admin option;\nERROR: role \"bruce\" is a member of role \"bruce\"\n\nIf that worked, then role \"bruce\" would be able to grant membership in\nrole \"bruce\" to anyone -- but all those people wouldn't just get\nmembership, they'd get ADMIN OPTION, too, because *bruce* has ADMIN\nOPTION on himself and anyone to whom he grants access to his role will\ntherefore get admin option too. Someone might argue that this is OK,\nbut our precedent is otherwise. It used to be the case that the users\nimplicitly enjoyed ADMIN OPTION on their own roles and thus could do\nthe sort of thing that you were proposing. That led to CVE-2014-0060\nand commit fea164a72a7bfd50d77ba5fb418d357f8f2bb7d0. That CVE is, as I\nunderstand it, all about maintaining the distinction between\nmembership and ADMIN OPTION. 
In other words, we've made an intentional\ndecision to not let ordinary users do the sort of thing you tried to\ndo here.\n\nSo the only reason your example ever worked is because the \"service\"\nrole had CREATEROLE, and thus, in earlier releases, got to bypass all\nthe permissions checks. But it turns out that letting CREATEROLE\nbypass all the permissions checks is *also* a big security problem, so\nthat is now restricted as well.\n\nI don't want to take the position that we couldn't find some way to\nallow ordinary users to do this. I think that the desire to maintain\nthe distinction between membership and ADMIN OPTION makes sense as a\ngeneral rule, but if somebody wants to abolish it in a particular case\nby making strange grants, would that really be that bad? I'm not\ntotally convinced that it would be. It probably depends somewhat on\nhow much you want to try to keep people from accidentally giving away\nmore privileges than they intended, and also on whether you think that\nthis is a useful thing for someone to be able to do in the first\nplace. However, it's the way we've designed the system and we've even\nrequested CVEs when we accidentally did something inconsistent with\nthat general principle. Similarly, I don't want to take the position\nthat the restrictions I put on CREATEROLE are the *only* way that we\ncould have plugged the security holes that it has had for years now. I\nthink they are pretty sensible and pretty consistent with the overall\nsystem design, but somebody else might have been able to come up with\nsome other set of restrictions that allowed this case to work.\n\nI think David is right to raise the question of how useful it would be\nto allow this case. In general, I think that if role A creates role B,\nit is more sensible to grant B's permissions to A than to grant A's\npermissions to B. 
The former is useful because it permits the more\npowerful user to act on behalf of the less powerful user, just as the\nsuperuser is able to administer the whole system by being able to act\non behalf of any other user. But the latter makes you wonder why you\nare even bothering to have two users, because you end up with A and B\nhaving exactly identical privileges. That's a bit of a strange thing\nto want, but if you do happen to want it, you can get it with this new\nsystem: again, as David says, you should just create one role to use\nas a group, and then grant membership in that group to multiple roles\nthat are used as users.\n\nBut it does seem pretty important to keep talking about these things,\nbecause there's definitely no guarantee whatsoever that all of the\ncommits I've made to master in this area are without problems. If we\nfind important cases that can't be supported given the new\nrestrictions on CREATEROLE, or even important cases that never worked\nbut we wish they did, well then we should think about what to change.\nI'm not too concerned about this particular case not working, but the\nnext one might be different.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 25 Jan 2023 12:21:14 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: CREATE ROLE bug?" }, { "msg_contents": "On Wed, Jan 25, 2023 at 12:21:14PM -0500, Robert Haas wrote:\n> But it does seem pretty important to keep talking about these things,\n> because there's definitely no guarantee whatsoever that all of the\n> commits I've made to master in this area are without problems. 
If we\n> find important cases that can't be supported given the new\n> restrictions on CREATEROLE, or even important cases that never worked\n> but we wish they did, well then we should think about what to change.\n> I'm not too concerned about this particular case not working, but the\n> next one might be different.\n\nAgreed, thanks for the details.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\nEmbrace your flaws. They make you human, rather than perfect,\nwhich you will never be.\n\n\n", "msg_date": "Wed, 25 Jan 2023 12:34:00 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: CREATE ROLE bug?" } ]
[ { "msg_contents": "Hi,\n\nattached is proposal idea by Tomas (in CC) for protecting and\nprioritizing OLTP latency on syncrep over other heavy WAL hitting\nsessions. This is the result of internal testing and research related\nto the syncrep behavior with Tomas, Alvaro and me. The main objective\nof this work-in-progress/crude patch idea (with GUC default disabled)\nis to prevent build-up of lag against the synchronous standby. It\nallows DBA to maintain stable latency for many backends when in\nparallel there is some WAL-heavy activity happening (full table\nrewrite, VACUUM, MV build, archiving, etc.). In other words it allows\nslow down of any backend activity. Any feedback on such a feature is\nwelcome, including better GUC name proposals ;) and conditions in\nwhich such feature should be disabled even if it would be enabled\nglobally (right now only anti- wraparound VACUUM comes to mind, it's\nnot in the patch).\n\nDemo; Given:\n- two DBs in syncrep configuration and artificial RTT 10ms latency\nbetween them (introduced via tc qdisc netem)\n- insertBIG.sql = \"insert into bandwidthhog select repeat('c', 1000)\nfrom generate_series(1, 500000);\" (50MB of WAL data)\n- pgbench (8c) and 1x INSERT session\n\nThere are clearly visible drops of pgbench (OLTP) latency when the WAL\nsocket is saturated:\n\nwith 16devel/master and synchronous_commit_flush_wal_after=0\n(disabled, default/baseline):\npostgres@host1:~$ pgbench -n -R 50 -c 8 -T 15 -P 1\npgbench (16devel)\nprogress: 1.0 s, 59.0 tps, lat 18.840 ms stddev 11.251, 0 failed, lag 0.059 ms\nprogress: 2.0 s, 48.0 tps, lat 14.332 ms stddev 4.272, 0 failed, lag 0.063 ms\nprogress: 3.0 s, 56.0 tps, lat 15.383 ms stddev 6.270, 0 failed, lag 0.061 ms\nprogress: 4.0 s, 51.0 tps, lat 15.104 ms stddev 5.850, 0 failed, lag 0.061 ms\nprogress: 5.0 s, 47.0 tps, lat 15.184 ms stddev 5.472, 0 failed, lag 0.063 ms\nprogress: 6.0 s, 23.0 tps, lat 88.495 ms stddev 141.845, 0 failed, lag 0.064 ms\nprogress: 7.0 s, 1.0 tps, lat 
999.053 ms stddev 0.000, 0 failed, lag 0.077 ms\nprogress: 8.0 s, 0.0 tps, lat 0.000 ms stddev 0.000, 0 failed, lag 0.000 ms\nprogress: 9.0 s, 1.0 tps, lat 2748.142 ms stddev NaN, 0 failed, lag 0.072 ms\nprogress: 10.0 s, 68.1 tps, lat 3368.267 ms stddev 282.842, 0 failed,\nlag 2911.857 ms\nprogress: 11.0 s, 97.0 tps, lat 2560.750 ms stddev 216.844, 0 failed,\nlag 2478.261 ms\nprogress: 12.0 s, 96.0 tps, lat 1463.754 ms stddev 376.276, 0 failed,\nlag 1383.873 ms\nprogress: 13.0 s, 94.0 tps, lat 616.243 ms stddev 230.673, 0 failed,\nlag 527.241 ms\nprogress: 14.0 s, 59.0 tps, lat 48.265 ms stddev 72.533, 0 failed, lag 15.181 ms\nprogress: 15.0 s, 39.0 tps, lat 14.237 ms stddev 6.073, 0 failed, lag 0.063 ms\ntransaction type: <builtin: TPC-B (sort of)>\n[..]\nlatency average = 931.383 ms\nlatency stddev = 1188.530 ms\nrate limit schedule lag: avg 840.170 (max 3605.569) ms\n\nsession2 output:\n postgres=# \\i insertBIG.sql\n Timing is on.\n INSERT 0 500000\n Time: 4119.485 ms (00:04.119)\n\nThis new GUC makes it possible for the OLTP traffic to be less\naffected (latency-wise) when the heavy bulk traffic hits. 
With\nsynchronous_commit_flush_wal_after=1024 (kB) it's way better, but\nlatency rises up to 45ms:\npostgres@host1:~$ pgbench -n -R 50 -c 8 -T 15 -P 1\npgbench (16devel)\nprogress: 1.0 s, 52.0 tps, lat 17.300 ms stddev 10.178, 0 failed, lag 0.061 ms\nprogress: 2.0 s, 51.0 tps, lat 19.490 ms stddev 12.626, 0 failed, lag 0.061 ms\nprogress: 3.0 s, 48.0 tps, lat 14.839 ms stddev 5.429, 0 failed, lag 0.061 ms\nprogress: 4.0 s, 53.0 tps, lat 24.635 ms stddev 13.449, 0 failed, lag 0.062 ms\nprogress: 5.0 s, 48.0 tps, lat 17.999 ms stddev 9.291, 0 failed, lag 0.062 ms\nprogress: 6.0 s, 57.0 tps, lat 21.513 ms stddev 17.011, 0 failed, lag 0.058 ms\nprogress: 7.0 s, 50.0 tps, lat 28.071 ms stddev 21.622, 0 failed, lag 0.061 ms\nprogress: 8.0 s, 45.0 tps, lat 27.244 ms stddev 11.975, 0 failed, lag 0.064 ms\nprogress: 9.0 s, 57.0 tps, lat 35.988 ms stddev 25.752, 0 failed, lag 0.057 ms\nprogress: 10.0 s, 56.0 tps, lat 45.478 ms stddev 39.831, 0 failed, lag 0.534 ms\nprogress: 11.0 s, 62.0 tps, lat 45.146 ms stddev 32.881, 0 failed, lag 0.058 ms\nprogress: 12.0 s, 51.0 tps, lat 24.250 ms stddev 12.405, 0 failed, lag 0.063 ms\nprogress: 13.0 s, 57.0 tps, lat 18.554 ms stddev 8.677, 0 failed, lag 0.060 ms\nprogress: 14.0 s, 44.0 tps, lat 15.923 ms stddev 6.958, 0 failed, lag 0.065 ms\nprogress: 15.0 s, 54.0 tps, lat 19.773 ms stddev 10.024, 0 failed, lag 0.063 ms\ntransaction type: <builtin: TPC-B (sort of)>\n[..]\nlatency average = 25.575 ms\nlatency stddev = 21.540 ms\n\nsession2 output:\n postgres=# set synchronous_commit_flush_wal_after = 1024;\n SET\n postgres=# \\i insertBIG.sql\n INSERT 0 500000\n Time: 8889.318 ms (00:08.889)\n\n\nWith 16devel/master and synchronous_commit_flush_wal_after=256 (kB)\nall is smooth:\npostgres@host1:~$ pgbench -n -R 50 -c 8 -T 15 -P 1\npgbench (16devel)\nprogress: 1.0 s, 49.0 tps, lat 14.345 ms stddev 4.700, 0 failed, lag 0.062 ms\nprogress: 2.0 s, 45.0 tps, lat 14.812 ms stddev 5.816, 0 failed, lag 0.064 ms\nprogress: 3.0 s, 49.0 tps, lat 
13.145 ms stddev 4.320, 0 failed, lag 0.063 ms\nprogress: 4.0 s, 44.0 tps, lat 14.429 ms stddev 4.715, 0 failed, lag 0.063 ms\nprogress: 5.0 s, 49.0 tps, lat 18.111 ms stddev 8.536, 0 failed, lag 0.062 ms\nprogress: 6.0 s, 58.0 tps, lat 17.929 ms stddev 8.198, 0 failed, lag 0.060 ms\nprogress: 7.0 s, 65.0 tps, lat 20.186 ms stddev 12.973, 0 failed, lag 0.059 ms\nprogress: 8.0 s, 47.0 tps, lat 16.174 ms stddev 6.508, 0 failed, lag 0.061 ms\nprogress: 9.0 s, 45.0 tps, lat 14.485 ms stddev 4.736, 0 failed, lag 0.061 ms\nprogress: 10.0 s, 53.0 tps, lat 16.879 ms stddev 8.783, 0 failed, lag 0.061 ms\nprogress: 11.0 s, 42.0 tps, lat 13.711 ms stddev 4.464, 0 failed, lag 0.062 ms\nprogress: 12.0 s, 49.0 tps, lat 13.252 ms stddev 4.082, 0 failed, lag 0.062 ms\nprogress: 13.0 s, 48.0 tps, lat 14.179 ms stddev 6.238, 0 failed, lag 0.058 ms\nprogress: 14.0 s, 43.0 tps, lat 12.210 ms stddev 2.993, 0 failed, lag 0.060 ms\nprogress: 15.0 s, 34.0 tps, lat 14.811 ms stddev 6.544, 0 failed, lag 0.062 ms\ntransaction type: <builtin: TPC-B (sort of)>\n[..]\nlatency average = 15.454 ms\nlatency stddev = 7.354 ms\n\nsession2 output (notice the INSERT took much longer but did NOT affect\nthe pgbench's stddev at all):\n postgres=# set synchronous_commit_flush_wal_after = 256;\n SET\n postgres=# \\i insertBIG.sql\n Timing is on.\n [..]\n INSERT 0 500000\n Time: 22737.729 ms (00:22.738)\n\nWithout this feature (or with synchronous_commit_flush_wal_after=0)\nthe TCP's SendQ on socket walsender-->walreceiver is growing and as\nsuch any next sendto() by OLTP backends/walwriter ends being queued\ntoo much causing stalls of activity.\n\n-Jakub Wartak.", "msg_date": "Wed, 25 Jan 2023 14:32:51 +0100", "msg_from": "Jakub Wartak <jakub.wartak@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Syncrep and improving latency due to WAL throttling" }, { "msg_contents": "Hi,\n\nOn 2023-01-25 14:32:51 +0100, Jakub Wartak wrote:\n> In other words it allows slow down of any backend activity. 
Any feedback on\n> such a feature is welcome, including better GUC name proposals ;) and\n> conditions in which such feature should be disabled even if it would be\n> enabled globally (right now only anti- wraparound VACUUM comes to mind, it's\n> not in the patch).\n\nSuch a feature could be useful - but I don't think the current place of\nthrottling has any hope of working reliably:\n\n> @@ -1021,6 +1025,21 @@ XLogInsertRecord(XLogRecData *rdata,\n> \t\tpgWalUsage.wal_bytes += rechdr->xl_tot_len;\n> \t\tpgWalUsage.wal_records++;\n> \t\tpgWalUsage.wal_fpi += num_fpi;\n> +\n> +\t\tbackendWalInserted += rechdr->xl_tot_len;\n> +\n> +\t\tif ((synchronous_commit == SYNCHRONOUS_COMMIT_REMOTE_APPLY || synchronous_commit == SYNCHRONOUS_COMMIT_REMOTE_WRITE) &&\n> +\t\t\tsynchronous_commit_flush_wal_after > 0 &&\n> +\t\t\tbackendWalInserted > synchronous_commit_flush_wal_after * 1024L)\n> +\t\t{\n> +\t\t\telog(DEBUG3, \"throttling WAL down on this session (backendWalInserted=%d)\", backendWalInserted);\n> +\t\t\tXLogFlush(EndPos);\n> +\t\t\t/* XXX: refactor SyncRepWaitForLSN() to have different waitevent than default WAIT_EVENT_SYNC_REP */\n> +\t\t\t/* maybe new WAIT_EVENT_SYNC_REP_BIG or something like that */\n> +\t\t\tSyncRepWaitForLSN(EndPos, false);\n> +\t\t\telog(DEBUG3, \"throttling WAL down on this session - end\");\n> +\t\t\tbackendWalInserted = 0;\n> +\t\t}\n> \t}\n\nYou're blocking in the middle of an XLOG insertion. We will commonly hold\nimportant buffer lwlocks, it'll often be in a critical section (no cancelling\n/ terminating the session!). This'd entail loads of undetectable deadlocks\n(i.e. hard hangs). And even leaving that aside, doing an unnecessary\nXLogFlush() with important locks held will seriously increase contention.\n\n\nMy best idea for how to implement this in a somewhat safe way would be for\nXLogInsertRecord() to set a flag indicating that we should throttle, and set\nInterruptPending = true. 
Then the next CHECK_FOR_INTERRUPTS that's allowed to\nproceed (i.e. we'll not be in a critical / interrupts off section) can\nactually perform the delay. That should fix the hard deadlock danger and\nremove most of the increase in lock contention.\n\n\nI don't think doing an XLogFlush() of a record that we just wrote is a good\nidea - that'll sometimes significantly increase the number of fdatasyncs/sec\nwe're doing. To make matters worse, this will often cause partially filled WAL\npages to be flushed out - rewriting a WAL page multiple times can\nsignificantly increase overhead on the storage level. At the very least this'd\nhave to flush only up to the last fully filled page.\n\n\nJust counting the number of bytes inserted by a backend will make the overhead\neven worse, as the flush will be triggered even for OLTP sessions doing tiny\ntransactions, even though they don't contribute to the problem you're trying\nto address. How about counting how many bytes of WAL a backend has inserted\nsince the last time that backend did an XLogFlush()?\n\nA bulk writer won't do a lot of XLogFlush()es, so the time/bytes since the\nlast XLogFlush() will increase quickly, triggering a flush at the next\nopportunity. But short OLTP transactions will do XLogFlush()es at least at\nevery commit.\n\n\nI also suspect the overhead will be more manageable if you were to force a\nflush only up to a point further back than the last fully filled page. We\ndon't want to end up flushing WAL for every page, but if you just have a\nbackend-local accounting mechanism, I think that's inevitably what you'd end\nup with when you have a large number of sessions. 
But if you'd limit the\nflushing to be done to synchronous_commit_flush_wal_after / 2 boundaries, and\nonly ever to a prior boundary, the amount of unnecessary WAL flushes would be\nproportional to synchronous_commit_flush_wal_after.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 25 Jan 2023 11:05:35 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Syncrep and improving latency due to WAL throttling" }, { "msg_contents": "On Thu, Jan 26, 2023 at 12:35 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2023-01-25 14:32:51 +0100, Jakub Wartak wrote:\n> > In other words it allows slow down of any backend activity. Any feedback on\n> > such a feature is welcome, including better GUC name proposals ;) and\n> > conditions in which such feature should be disabled even if it would be\n> > enabled globally (right now only anti- wraparound VACUUM comes to mind, it's\n> > not in the patch).\n>\n> Such a feature could be useful - but I don't think the current place of\n> throttling has any hope of working reliably:\n\n+1 for the feature as it keeps replication lag in check and provides a\nway for defining RPO (recovery point objective).\n\n> You're blocking in the middle of an XLOG insertion. We will commonly hold\n> important buffer lwlocks, it'll often be in a critical section (no cancelling\n> / terminating the session!). This'd entail loads of undetectable deadlocks\n> (i.e. hard hangs). And even leaving that aside, doing an unnecessary\n> XLogFlush() with important locks held will seriously increase contention.\n>\n> My best idea for how to implement this in a somewhat safe way would be for\n> XLogInsertRecord() to set a flag indicating that we should throttle, and set\n> InterruptPending = true. Then the next CHECK_FOR_INTERRUPTS that's allowed to\n> proceed (i.e. we'll not be in a critical / interrupts off section) can\n> actually perform the delay. 
That should fix the hard deadlock danger and\n> remove most of the increase in lock contention.\n\nWe've discussed this feature quite extensively in a recent thread -\nhttps://www.postgresql.org/message-id/CAHg%2BQDcO_zhgBCMn5SosvhuuCoJ1vKmLjnVuqUEOd4S73B1urw%40mail.gmail.com.\nFolks were agreeing to Andres' suggestion there -\nhttps://www.postgresql.org/message-id/20220105174643.lozdd3radxv4tlmx%40alap3.anarazel.de.\nHowever, that thread got lost from my radar. It's good that someone\nelse started working on it and I'm happy to help with this feature.\n\n> I don't think doing an XLogFlush() of a record that we just wrote is a good\n> idea - that'll sometimes significantly increase the number of fdatasyncs/sec\n> we're doing. To make matters worse, this will often cause partially filled WAL\n> pages to be flushed out - rewriting a WAL page multiple times can\n> significantly increase overhead on the storage level. At the very least this'd\n> have to flush only up to the last fully filled page.\n>\n> Just counting the number of bytes inserted by a backend will make the overhead\n> even worse, as the flush will be triggered even for OLTP sessions doing tiny\n> transactions, even though they don't contribute to the problem you're trying\n> to address.\n\nRight.\n\n> How about counting how many bytes of WAL a backend has inserted\n> since the last time that backend did an XLogFlush()?\n\nSeems reasonable.\n\n> A bulk writer won't do a lot of XLogFlush()es, so the time/bytes since the\n> last XLogFlush() will increase quickly, triggering a flush at the next\n> opportunity. But short OLTP transactions will do XLogFlush()es at least at\n> every commit.\n\nRight.\n\n> I also suspect the overhead will be more manageable if you were to force a\n> flush only up to a point further back than the last fully filled page. 
We\n> don't want to end up flushing WAL for every page, but if you just have a\n> backend-local accounting mechanism, I think that's inevitably what you'd end\n> up with when you have a large number of sessions. But if you'd limit the\n> flushing to be done to synchronous_commit_flush_wal_after / 2 boundaries, and\n> only ever to a prior boundary, the amount of unnecessary WAL flushes would be\n> proportional to synchronous_commit_flush_wal_after.\n\nWhen the WAL records are large, the backend inserting\nthem would throttle frequently, generating more flushes, more waits\nfor the sync replication ack, and more contention on WALWriteLock. Not sure\nif this is good unless the impact is measured.\n\nA few more thoughts:\n\n1. This feature keeps replication lag in check at the cost of\nthrottling write traffic. I'd be curious to know the improvement in\nreplication lag vs. the drop in transaction throughput, say pgbench with a\ncustom insert script and one or more async/sync standbys.\n\n2. So, heavy WAL throttling can turn into a lot of writes and fsyncs.\nEventually, each backend gets a chance to throttle WAL if it ever\ngenerates WAL, irrespective of whether there's a replication lag or\nnot. How about we let backends throttle themselves not just based on\nwal_distance_from_last_flush but also depending on the replication lag\nat the moment, say, only if replication lag crosses\nwal_throttle_replication_lag_threshold bytes?\n\n3. Should the backends wait indefinitely for the sync rep ack once they\nhave crossed wal_throttle_threshold? Well, I don't think so; a backend must be\ngiven a chance to do its work instead of favouring other backends by\nthrottling itself.\n\n4. I'd prefer adding a TAP test for this feature to check if the WAL\nthrottle is picked up by a backend.\n\n5. Can we also extend this WAL throttling feature to logical\nreplication to keep replication lag in check there as well?\n\n6. Backends can ignore throttling for WAL records marked as unimportant, no?\n\n7. 
I think we need to not let backends throttle too frequently even\nthough they have crossed wal_throttle_threshold bytes. The best way is\nto rely on replication lag, after all the goal of this feature is to\nkeep replication lag under check - say, throttle only when\nwal_distance > wal_throttle_threshold && replication_lag >\nwal_throttle_replication_lag_threshold.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 26 Jan 2023 13:33:27 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Syncrep and improving latency due to WAL throttling" }, { "msg_contents": "\n\nOn 1/25/23 20:05, Andres Freund wrote:\n> Hi,\n> \n> On 2023-01-25 14:32:51 +0100, Jakub Wartak wrote:\n>> In other words it allows slow down of any backend activity. Any feedback on\n>> such a feature is welcome, including better GUC name proposals ;) and\n>> conditions in which such feature should be disabled even if it would be\n>> enabled globally (right now only anti- wraparound VACUUM comes to mind, it's\n>> not in the patch).\n> \n> Such a feature could be useful - but I don't think the current place of\n> throttling has any hope of working reliably:\n> \n>> @@ -1021,6 +1025,21 @@ XLogInsertRecord(XLogRecData *rdata,\n>> \t\tpgWalUsage.wal_bytes += rechdr->xl_tot_len;\n>> \t\tpgWalUsage.wal_records++;\n>> \t\tpgWalUsage.wal_fpi += num_fpi;\n>> +\n>> +\t\tbackendWalInserted += rechdr->xl_tot_len;\n>> +\n>> +\t\tif ((synchronous_commit == SYNCHRONOUS_COMMIT_REMOTE_APPLY || synchronous_commit == SYNCHRONOUS_COMMIT_REMOTE_WRITE) &&\n>> +\t\t\tsynchronous_commit_flush_wal_after > 0 &&\n>> +\t\t\tbackendWalInserted > synchronous_commit_flush_wal_after * 1024L)\n>> +\t\t{\n>> +\t\t\telog(DEBUG3, \"throttling WAL down on this session (backendWalInserted=%d)\", backendWalInserted);\n>> +\t\t\tXLogFlush(EndPos);\n>> +\t\t\t/* XXX: 
refactor SyncRepWaitForLSN() to have different waitevent than default WAIT_EVENT_SYNC_REP */\n>> +\t\t\t/* maybe new WAIT_EVENT_SYNC_REP_BIG or something like that */\n>> +\t\t\tSyncRepWaitForLSN(EndPos, false);\n>> +\t\t\telog(DEBUG3, \"throttling WAL down on this session - end\");\n>> +\t\t\tbackendWalInserted = 0;\n>> +\t\t}\n>> \t}\n> \n> You're blocking in the middle of an XLOG insertion. We will commonly hold\n> important buffer lwlocks, it'll often be in a critical section (no cancelling\n> / terminating the session!). This'd entail loads of undetectable deadlocks\n> (i.e. hard hangs). And even leaving that aside, doing an unnecessary\n> XLogFlush() with important locks held will seriously increase contention.\n> \n\nYeah, I agree the sleep would have to happen elsewhere.\n\nIt's not clear to me how could it cause deadlocks, as we're not waiting\nfor a lock/resource locked by someone else, but it's certainly an issue\nfor uninterruptible hangs.\n\n> \n> My best idea for how to implement this in a somewhat safe way would be for\n> XLogInsertRecord() to set a flag indicating that we should throttle, and set\n> InterruptPending = true. Then the next CHECK_FOR_INTERRUPTS that's allowed to\n> proceed (i.e. we'll not be in a critical / interrupts off section) can\n> actually perform the delay. That should fix the hard deadlock danger and\n> remove most of the increase in lock contention.\n> \n\nThe solution I've imagined is something like autovacuum throttling - do\nsome accounting of how much \"WAL bandwidth\" each process consumed, and\nthen do the delay/sleep in a suitable place.\n\n> \n> I don't think doing an XLogFlush() of a record that we just wrote is a good\n> idea - that'll sometimes significantly increase the number of fdatasyncs/sec\n> we're doing. To make matters worse, this will often cause partially filled WAL\n> pages to be flushed out - rewriting a WAL page multiple times can\n> significantly increase overhead on the storage level. 
At the very least this'd\n> have to flush only up to the last fully filled page.\n> \n\nI don't see why that would significantly increase the number of flushes.\nThe goal is to do this only every ~1MB of WAL or so, and chances are\nthere were many (perhaps hundreds or more) commits in between. At least\nin a workload with a fair number of OLTP transactions (which is kinda\nthe point of all this).\n\nAnd the \"small\" OLTP transactions don't really do any extra fsyncs,\nbecause the accounting resets at commit (or places that flush WAL).\n\nSame for the flushes of partially flushed pages - if there are enough\nsmall OLTP transactions, we're already having this issue. I don't see\nwhy this would make it measurably worse. But if needed, we can simply\nback off to the last page boundary, so that we only flush the complete\npage. That would work too - the goal is not to flush everything, but to\nreduce how much of the lag is due to the current process (i.e. to wait\nfor the sync replica to catch up).\n\n> \n> Just counting the number of bytes inserted by a backend will make the overhead\n> even worse, as the flush will be triggered even for OLTP sessions doing tiny\n> transactions, even though they don't contribute to the problem you're trying\n> to address. How about counting how many bytes of WAL a backend has inserted\n> since the last time that backend did an XLogFlush()?\n> \n\nNo, we should reset the counter at commit, so small OLTP transactions\nshould not really trigger this. That's kinda the point, to only\ndelay \"large\" transactions producing a lot of WAL.\n\n> A bulk writer won't do a lot of XLogFlush()es, so the time/bytes since the\n> last XLogFlush() will increase quickly, triggering a flush at the next\n> opportunity. 
But short OLTP transactions will do XLogFlush()es at least at\n> every commit.\n> \n\nRight, that's kinda the approach the patch is trying to do.\n\n> \n> I also suspect the overhead will be more manageable if you were to force a\n> flush only up to a point further back than the last fully filled page. We\n> don't want to end up flushing WAL for every page, but if you just have a\n> backend-local accounting mechanism, I think that's inevitably what you'd end\n> up with when you have a large number of sessions. But if you'd limit the\n> flushing to be done to synchronous_commit_flush_wal_after / 2 boundaries, and\n> only ever to a prior boundary, the amount of unnecessary WAL flushes would be\n> proportional to synchronous_commit_flush_wal_after.\n> \n\nTrue, that's kinda what I suggested above as a solution for partially\nfilled WAL pages.\n\nI agree this is crude and we'd probably need some sort of \"balancing\"\nlogic that distributes WAL bandwidth between backends, and throttles\nbackends producing a lot of WAL.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 26 Jan 2023 12:08:16 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Syncrep and improving latency due to WAL throttling" }, { "msg_contents": "> On 1/25/23 20:05, Andres Freund wrote:\n> > Hi,\n> >\n> > Such a feature could be useful - but I don't think the current place of\n> > throttling has any hope of working reliably:\n[..]\n> > You're blocking in the middle of an XLOG insertion.\n[..]\n> Yeah, I agree the sleep would have to happen elsewhere.\n\nFixed.\n\n> > My best idea for how to implement this in a somewhat safe way would be for\n> > XLogInsertRecord() to set a flag indicating that we should throttle, and set\n> > InterruptPending = true. Then the next CHECK_FOR_INTERRUPTS that's allowed to\n> > proceed (i.e. 
we'll not be in a critical / interrupts off section) can\n> > actually perform the delay. That should fix the hard deadlock danger and\n> > remove most of the increase in lock contention.\n> >\n>\n> The solution I've imagined is something like autovacuum throttling - do\n> some accounting of how much \"WAL bandwidth\" each process consumed, and\n> then do the delay/sleep in a suitable place.\n>\n\nBy the time you replied I've already tried what Andres has recommended.\n\n[..]\n>> At the very least this'd\n> > have to flush only up to the last fully filled page.\n> >\n> Same for the flushes of partially flushed pages - if there's enough of\n> small OLTP transactions, we're already having this issue. I don't see\n> why would this make it measurably worse. But if needed, we can simply\n> backoff to the last page boundary, so that we only flush the complete\n> page. That would work too - the goal is not to flush everything, but to\n> reduce how much of the lag is due to the current process (i.e. to wait\n> for sync replica to catch up).\n\nI've introduced the rounding to the last written page (hopefully).\n\n> > Just counting the number of bytes inserted by a backend will make the overhead\n> > even worse, as the flush will be triggered even for OLTP sessions doing tiny\n> > transactions, even though they don't contribute to the problem you're trying\n> > to address. How about counting how many bytes of WAL a backend has inserted\n> > since the last time that backend did an XLogFlush()?\n> >\n>\n> No, we should reset the counter at commit, so small OLTP transactions\n> should not reach really trigger this. That's kinda the point, to only\n> delay \"large\" transactions producing a lot of WAL.\n\nFixed.\n\n> > I also suspect the overhead will be more manageable if you were to force a\n> > flush only up to a point further back than the last fully filled page. 
We\n> > don't want to end up flushing WAL for every page, but if you just have a\n> > backend-local accounting mechanism, I think that's inevitably what you'd end\n> > up with when you have a large number of sessions. But if you'd limit the\n> > flushing to be done to synchronous_commit_flush_wal_after / 2 boundaries, and\n> > only ever to a prior boundary, the amount of unnecessary WAL flushes would be\n> > proportional to synchronous_commit_flush_wal_after.\n> >\n>\n> True, that's kinda what I suggested above as a solution for partially\n> filled WAL pages.\n>\n> I agree this is crude and we'd probably need some sort of \"balancing\"\n> logic that distributes WAL bandwidth between backends, and throttles\n> backends producing a lot of WAL.\n\nOK - that's not included (yet?), as it would make this much more complex.\n\nIn summary: Attached is a slightly reworked version of this patch.\n1. Moved logic outside XLogInsertRecord() under ProcessInterrupts()\n2. Flushes up to the last page boundary, still uses SyncRepWaitForLSN()\n3. Removed GUC for now (always on->256kB); potentially to be restored\n4. 
Resets backendWal counter on commits\n\nIt's still crude, but first tests indicate that it behaves similarly\n(to the initial one with GUC = 256kB).\n\nAlso from the Bharath email, I've found another patch proposal by\nSimon [1] and I would like to avoid opening the Pandora's box again,\nbut to stress this: this feature is specifically aimed at solving OLTP\nlatency on *sync*rep (somewhat some code could be added/generalized\nlater on and this feature could be extended to async case, but this\nthen opens the question of managing the possible WAL throughput\nbudget/back throttling - this was also raised in first thread here [2]\nby Konstantin).\n\nIMHO it could implement various strategies under user-settable GUC\n\"wal_throttle_larger_transactions=(sync,256kB)\" or\n\"wal_throttle_larger_transactions=off\" , and someday later someone\ncould teach the code the async case (let's say under\n\"wal_throttle_larger_transactions=(asyncMaxRPO, maxAllowedLag=8MB,\n256kB)\"). Thoughts?\n\n[1] - https://www.postgresql.org/message-id/flat/CA%2BU5nMLfxBgHQ1VLSeBHYEMjHXz_OHSkuFdU6_1quiGM0RNKEg%40mail.gmail.com\n[2] - https://www.postgresql.org/message-id/71f3e6fb-2fca-a798-856a-f23c8ede2333%40garret.ru", "msg_date": "Thu, 26 Jan 2023 14:40:56 +0100", "msg_from": "Jakub Wartak <jakub.wartak@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Syncrep and improving latency due to WAL throttling" }, { "msg_contents": "Hi,\n\nOn 2023-01-26 12:08:16 +0100, Tomas Vondra wrote:\n> It's not clear to me how could it cause deadlocks, as we're not waiting\n> for a lock/resource locked by someone else, but it's certainly an issue\n> for uninterruptible hangs.\n\nMaybe not. But I wouldn't want to bet on it. It's a violation of all kinds of\nlock-ordering rules.\n\n\n> > My best idea for how to implement this in a somewhat safe way would be for\n> > XLogInsertRecord() to set a flag indicating that we should throttle, and set\n> > InterruptPending = true. 
Then the next CHECK_FOR_INTERRUPTS that's allowed to\n> > proceed (i.e. we'll not be in a critical / interrupts off section) can\n> > actually perform the delay. That should fix the hard deadlock danger and\n> > remove most of the increase in lock contention.\n> > \n> \n> The solution I've imagined is something like autovacuum throttling - do\n> some accounting of how much \"WAL bandwidth\" each process consumed, and\n> then do the delay/sleep in a suitable place.\n\nWhere would such a point be, except for ProcessInterrupts()? Iteratively\nadding a separate set of \"wal_delay()\" points all over the executor,\ncommands/, ... seems like a bad plan.\n\n\n> > I don't think doing an XLogFlush() of a record that we just wrote is a good\n> > idea - that'll sometimes significantly increase the number of fdatasyncs/sec\n> > we're doing. To make matters worse, this will often cause partially filled WAL\n> > pages to be flushed out - rewriting a WAL page multiple times can\n> > significantly increase overhead on the storage level. At the very least this'd\n> > have to flush only up to the last fully filled page.\n> > \n> \n> I don't see why would that significantly increase the number of flushes.\n> The goal is to do this only every ~1MB of a WAL or so, and chances are\n> there were many (perhaps hundreds or more) commits in between. At least\n> in a workload with a fair number of OLTP transactions (which is kinda\n> the point of all this).\n\nBecause the accounting is done separately in each process. 
Even if you just\nadd a few additional flushes for each connection, in aggregate that'll be a\nlot.\n\n\n> And the \"small\" OLTP transactions don't really do any extra fsyncs,\n> because the accounting resets at commit (or places that flush WAL).\n> [...]\n> > Just counting the number of bytes inserted by a backend will make the overhead\n> > even worse, as the flush will be triggered even for OLTP sessions doing tiny\n> > transactions, even though they don't contribute to the problem you're trying\n> > to address. How about counting how many bytes of WAL a backend has inserted\n> > since the last time that backend did an XLogFlush()?\n> > \n> \n> No, we should reset the counter at commit, so small OLTP transactions\n> should not reach really trigger this. That's kinda the point, to only\n> delay \"large\" transactions producing a lot of WAL.\n\nI might have missed something, but I don't think the first version of patch\nresets the accounting at commit?\n\n\n> Same for the flushes of partially flushed pages - if there's enough of\n> small OLTP transactions, we're already having this issue. I don't see\n> why would this make it measurably worse.\n\nYes, we already have that problem, and it hurts. *Badly*. I don't see how v1\ncould *not* make it worse. Suddenly you get a bunch of additional XLogFlush()\ncalls to partial boundaries by every autovacuum, by every process doing a bit\nmore bulky work. Because you're flushing the LSN after a record you just\ninserted, you're commonly not going to be \"joining\" the work of an already\nin-process XLogFlush().\n\n\n> But if needed, we can simply backoff to the last page boundary, so that we\n> only flush the complete page. That would work too - the goal is not to flush\n> everything, but to reduce how much of the lag is due to the current process\n> (i.e. 
to wait for sync replica to catch up).\n\nYes, as I proposed...\n\n\n> > I also suspect the overhead will be more manageable if you were to force a\n> > flush only up to a point further back than the last fully filled page. We\n> > don't want to end up flushing WAL for every page, but if you just have a\n> > backend-local accounting mechanism, I think that's inevitably what you'd end\n> > up with when you have a large number of sessions. But if you'd limit the\n> > flushing to be done to synchronous_commit_flush_wal_after / 2 boundaries, and\n> > only ever to a prior boundary, the amount of unnecessary WAL flushes would be\n> > proportional to synchronous_commit_flush_wal_after.\n> > \n> \n> True, that's kinda what I suggested above as a solution for partially\n> filled WAL pages.\n\nI think flushing only up to a point further in the past than the last page\nboundary is somewhat important to prevent unnecessary flushes.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 26 Jan 2023 07:40:29 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Syncrep and improving latency due to WAL throttling" }, { "msg_contents": "Hi,\n\nOn 2023-01-26 14:40:56 +0100, Jakub Wartak wrote:\n> In summary: Attached is a slightly reworked version of this patch.\n> 1. Moved logic outside XLogInsertRecord() under ProcessInterrupts()\n> 2. Flushes up to the last page boundary, still uses SyncRepWaitForLSN()\n> 3. Removed GUC for now (always on->256kB); potentially to be restored\n\nHuh? Why did you remove the GUC? 
Clearly this isn't something we can default\nto on.\n\n\n> diff --git a/src/backend/access/transam/xact.c b/src/backend/access/transam/xact.c\n> index d85e313908..05d56d65f9 100644\n> --- a/src/backend/access/transam/xact.c\n> +++ b/src/backend/access/transam/xact.c\n> @@ -2395,6 +2395,7 @@ CommitTransaction(void)\n> \n> \tXactTopFullTransactionId = InvalidFullTransactionId;\n> \tnParallelCurrentXids = 0;\n> +\tbackendWalInserted = 0;\n> \n> \t/*\n> \t * done with commit processing, set current transaction state back to\n\nI don't like the resets around like this. Why not just move it into\nXLogFlush()?\n\n\n\n> +/*\n> + * Called from ProcessMessageInterrupts() to avoid waiting while being in critical section\n> + */ \n> +void HandleXLogDelayPending()\n> +{\n> +\t/* flush only up to the last fully filled page */\n> +\tXLogRecPtr \tLastFullyWrittenXLogPage = XactLastRecEnd - (XactLastRecEnd % XLOG_BLCKSZ);\n> +\tXLogDelayPending = false;\n\nHm - wonder if it'd be better to determine the LSN to wait for in\nXLogInsertRecord(). There'll be plenty cases where we'll insert multiple (but\nbounded number of) WAL records before processing interrupts. No need to flush\nmore aggressively than necessary...\n\n\n> +\t//HOLD_INTERRUPTS();\n> +\n> +\t/* XXX Debug for now */\n> +\telog(WARNING, \"throttling WAL down on this session (backendWalInserted=%d, LSN=%X/%X flushingTo=%X/%X)\", \n> +\t\tbackendWalInserted, \n> +\t\tLSN_FORMAT_ARGS(XactLastRecEnd),\n> +\t\tLSN_FORMAT_ARGS(LastFullyWrittenXLogPage));\n> +\n> +\t/* XXX: refactor SyncRepWaitForLSN() to have different waitevent than default WAIT_EVENT_SYNC_REP */\n> +\t/* maybe new WAIT_EVENT_SYNC_REP_BIG or something like that */\n> +\tXLogFlush(LastFullyWrittenXLogPage);\n> +\tSyncRepWaitForLSN(LastFullyWrittenXLogPage, false);\n\nSyncRepWaitForLSN() has this comment:\n * 'lsn' represents the LSN to wait for. 'commit' indicates whether this LSN\n * represents a commit record. 
If it doesn't, then we wait only for the WAL\n * to be flushed if synchronous_commit is set to the higher level of\n * remote_apply, because only commit records provide apply feedback.\n\n\n> +\telog(WARNING, \"throttling WAL down on this session - end\");\n> +\tbackendWalInserted = 0;\n> +\n> +\t//RESUME_INTERRUPTS();\n> +}\n\nI think we'd want a distinct wait event for this.\n\n\n\n> diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c\n> index 1b1d814254..8ed66b2eae 100644\n> --- a/src/backend/utils/init/globals.c\n> +++ b/src/backend/utils/init/globals.c\n> @@ -37,6 +37,7 @@ volatile sig_atomic_t IdleSessionTimeoutPending = false;\n> volatile sig_atomic_t ProcSignalBarrierPending = false;\n> volatile sig_atomic_t LogMemoryContextPending = false;\n> volatile sig_atomic_t IdleStatsUpdateTimeoutPending = false;\n> +volatile sig_atomic_t XLogDelayPending = false;\n> volatile uint32 InterruptHoldoffCount = 0;\n> volatile uint32 QueryCancelHoldoffCount = 0;\n> volatile uint32 CritSectionCount = 0;\n\nI don't think the new variable needs to be volatile, or even\nsig_atomic_t. We'll just manipulate it from outside signal handlers.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 26 Jan 2023 07:49:12 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Syncrep and improving latency due to WAL throttling" }, { "msg_contents": "Hi,\n\nOn 2023-01-26 13:33:27 +0530, Bharath Rupireddy wrote:\n> 6. Backends can ignore throttling for WAL records marked as unimportant, no?\n\nWhy would that be a good idea? Not that it matters today, but those records\nstill need to be flushed in case of a commit by another transaction.\n\n\n> 7. I think we need to not let backends throttle too frequently even\n> though they have crossed wal_throttle_threshold bytes. 
The best way is\n> to rely on replication lag, after all the goal of this feature is to\n> keep replication lag under check - say, throttle only when\n> wal_distance > wal_throttle_threshold && replication_lag >\n> wal_throttle_replication_lag_threshold.\n\nI think my idea of only forcing to flush/wait an LSN some distance in the past\nwould automatically achieve that?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 26 Jan 2023 07:51:24 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Syncrep and improving latency due to WAL throttling" }, { "msg_contents": "\n\nOn 1/26/23 16:40, Andres Freund wrote:\n> Hi,\n> \n> On 2023-01-26 12:08:16 +0100, Tomas Vondra wrote:\n>> It's not clear to me how could it cause deadlocks, as we're not waiting\n>> for a lock/resource locked by someone else, but it's certainly an issue\n>> for uninterruptible hangs.\n> \n> Maybe not. But I wouldn't want to bet on it. It's a violation of all kinds of\n> lock-ordering rules.\n> \n\nNot sure what lock ordering issues you have in mind, but I agree it's\nnot the right place for the sleep, no argument here.\n\n> \n>>> My best idea for how to implement this in a somewhat safe way would be for\n>>> XLogInsertRecord() to set a flag indicating that we should throttle, and set\n>>> InterruptPending = true. Then the next CHECK_FOR_INTERRUPTS that's allowed to\n>>> proceed (i.e. we'll not be in a critical / interrupts off section) can\n>>> actually perform the delay. That should fix the hard deadlock danger and\n>>> remove most of the increase in lock contention.\n>>>\n>>\n>> The solution I've imagined is something like autovacuum throttling - do\n>> some accounting of how much \"WAL bandwidth\" each process consumed, and\n>> then do the delay/sleep in a suitable place.\n> \n> Where would such a point be, except for ProcessInterrupts()? Iteratively\n> adding a separate set of \"wal_delay()\" points all over the executor,\n> commands/, ... 
seems like a bad plan.\n> \n\nI haven't thought about where to do the work, TBH. ProcessInterrupts()\nmay very well be a good place.\n\nI should have been clearer, but the main benefit of autovacuum-like\nthrottling is IMHO that it involves all processes and a global limit,\nwhile the current approach is per-backend.\n\n> \n>>> I don't think doing an XLogFlush() of a record that we just wrote is a good\n>>> idea - that'll sometimes significantly increase the number of fdatasyncs/sec\n>>> we're doing. To make matters worse, this will often cause partially filled WAL\n>>> pages to be flushed out - rewriting a WAL page multiple times can\n>>> significantly increase overhead on the storage level. At the very least this'd\n>>> have to flush only up to the last fully filled page.\n>>>\n>>\n>> I don't see why would that significantly increase the number of flushes.\n>> The goal is to do this only every ~1MB of a WAL or so, and chances are\n>> there were many (perhaps hundreds or more) commits in between. At least\n>> in a workload with a fair number of OLTP transactions (which is kinda\n>> the point of all this).\n> \n> Because the accounting is done separately in each process. Even if you just\n> add a few additional flushes for each connection, in aggregate that'll be a\n> lot.\n> \n\nHow come? Imagine the backend does flush only after generating e.g. 1MB\nof WAL. Small transactions won't do any additional flushes at all\n(because commit resets the accounting). Large transactions will do an\nextra flush every 1MB, so 16x per WAL segment. But in between there will\nbe many commits from the small transactions. 
If we backoff to the last\ncomplete page, that should eliminate even most of these flushes.\n\nSo where would the additional flushes come from?\n\n> \n>> And the \"small\" OLTP transactions don't really do any extra fsyncs,\n>> because the accounting resets at commit (or places that flush WAL).\n>> [...]\n>>> Just counting the number of bytes inserted by a backend will make the overhead\n>>> even worse, as the flush will be triggered even for OLTP sessions doing tiny\n>>> transactions, even though they don't contribute to the problem you're trying\n>>> to address. How about counting how many bytes of WAL a backend has inserted\n>>> since the last time that backend did an XLogFlush()?\n>>>\n>>\n>> No, we should reset the counter at commit, so small OLTP transactions\n>> should not reach really trigger this. That's kinda the point, to only\n>> delay \"large\" transactions producing a lot of WAL.\n> \n> I might have missed something, but I don't think the first version of patch\n> resets the accounting at commit?\n> \n\nYeah, it doesn't. It's mostly a minimal PoC patch, sufficient to test\nthe behavior / demonstrate the effect.\n\n> \n>> Same for the flushes of partially flushed pages - if there's enough of\n>> small OLTP transactions, we're already having this issue. I don't see\n>> why would this make it measurably worse.\n> \n> Yes, we already have that problem, and it hurts. *Badly*. I don't see how v1\n> could *not* make it worse. Suddenly you get a bunch of additional XLogFlush()\n> calls to partial boundaries by every autovacuum, by every process doing a bit\n> more bulky work. Because you're flushing the LSN after a record you just\n> inserted, you're commonly not going to be \"joining\" the work of an already\n> in-process XLogFlush().\n> \n\nRight. We do ~16 additional flushes per 16MB segment (or something like\nthat, depending on the GUC value). 
Considering we do thousands of commits\nper segment, each of which does a flush, I don't see how this would make\nit measurably worse.\n\n> \n>> But if needed, we can simply backoff to the last page boundary, so that we\n>> only flush the complete page. That would work too - the goal is not to flush\n>> everything, but to reduce how much of the lag is due to the current process\n>> (i.e. to wait for sync replica to catch up).\n> \n> Yes, as I proposed...\n> \n\nRight.\n\n> \n>>> I also suspect the overhead will be more manageable if you were to force a\n>>> flush only up to a point further back than the last fully filled page. We\n>>> don't want to end up flushing WAL for every page, but if you just have a\n>>> backend-local accounting mechanism, I think that's inevitably what you'd end\n>>> up with when you have a large number of sessions. But if you'd limit the\n>>> flushing to be done to synchronous_commit_flush_wal_after / 2 boundaries, and\n>>> only ever to a prior boundary, the amount of unnecessary WAL flushes would be\n>>> proportional to synchronous_commit_flush_wal_after.\n>>>\n>>\n>> True, that's kinda what I suggested above as a solution for partially\n>> filled WAL pages.\n> \n> I think flushing only up to a point further in the past than the last page\n> boundary is somewhat important to prevent unnecessary flushes.\n> \n\nNot sure I agree with that. 
Yes, we should not be doing flushes unless\nwe need to, but OTOH we should not delay sending WAL to standby too much\n- because that's what affects syncrep latency for small transactions.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 26 Jan 2023 19:53:20 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Syncrep and improving latency due to WAL throttling" }, { "msg_contents": "On Thu, Jan 26, 2023 at 9:21 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> > 7. I think we need to not let backends throttle too frequently even\n> > though they have crossed wal_throttle_threshold bytes. The best way is\n> > to rely on replication lag, after all the goal of this feature is to\n> > keep replication lag under check - say, throttle only when\n> > wal_distance > wal_throttle_threshold && replication_lag >\n> > wal_throttle_replication_lag_threshold.\n>\n> I think my idea of only forcing to flush/wait an LSN some distance in the past\n> would automatically achieve that?\n\nI'm sorry, I couldn't get your point, can you please explain it a bit more?\n\nLooking at the patch, the feature, in its current shape, focuses on\nimproving replication lag (by throttling WAL on the primary) only when\nsynchronous replication is enabled. Why is that? 
Why can't we design\nit for replication in general (async, sync, and logical replication)?\n\nKeeping replication lag under check enables one to provide a better\nRPO guarantee as discussed in the other thread\nhttps://www.postgresql.org/message-id/CAHg%2BQDcO_zhgBCMn5SosvhuuCoJ1vKmLjnVuqUEOd4S73B1urw%40mail.gmail.com.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 27 Jan 2023 12:48:43 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Syncrep and improving latency due to WAL throttling" }, { "msg_contents": "On 2023-Jan-27, Bharath Rupireddy wrote:\n\n> Looking at the patch, the feature, in its current shape, focuses on\n> improving replication lag (by throttling WAL on the primary) only when\n> synchronous replication is enabled. Why is that? Why can't we design\n> it for replication in general (async, sync, and logical replication)?\n> \n> Keeping replication lag under check enables one to provide a better\n> RPO guarantee\n\nHmm, but you can already do that by tuning walwriter, no?\n\n-- \nÁlvaro Herrera\n\n\n", "msg_date": "Fri, 27 Jan 2023 09:20:46 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Syncrep and improving latency due to WAL throttling" }, { "msg_contents": "On Fri, Jan 27, 2023 at 2:03 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2023-Jan-27, Bharath Rupireddy wrote:\n>\n> > Looking at the patch, the feature, in its current shape, focuses on\n> > improving replication lag (by throttling WAL on the primary) only when\n> > synchronous replication is enabled. Why is that? 
Why can't we design\n> > it for replication in general (async, sync, and logical replication)?\n> >\n> > Keeping replication lag under check enables one to provide a better\n> > RPO guarantee\n>\n> Hmm, but you can already do that by tuning walwriter, no?\n\nIIUC, one can increase wal_writer_delay or wal_writer_flush_after to\ncontrol the amount of WAL walwriter writes and flushes so that the\nwalsenders will get a bit slower, thus improving replication lag. If my\nunderstanding is correct, what about other backends doing WAL writes and\nflushes? How do we control that?\n\nI think tuning walwriter alone may not help to keep replication lag\nunder check. Even if it does, it requires manual intervention.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 27 Jan 2023 16:34:39 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Syncrep and improving latency due to WAL throttling" }, { "msg_contents": "Hi,\n\nv2 is attached.\n\nOn Thu, Jan 26, 2023 at 4:49 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Huh? Why did you remove the GUC?\n\nAfter reading previous threads, my optimism level of getting it ever\nin shape of being widely accepted degraded significantly (mainly due\nto the discussion of wider category of 'WAL I/O throttling' especially\nin async case, RPO targets in async case and potentially calculating\nglobal bandwidth). I've assumed that it is a working sketch, and as\nsuch not having GUC name right now (just for sync case) would still\nallow covering various other async cases in future without breaking\ncompatibility with potential name GUC changes (see my previous\n\"wal_throttle_larger_transactions=<strategies>\" proposal ).
Although\nI've restored the synchronous_commit_flush_wal_after GUC into the v2\npatch, sticking to such a name would rule out using the code to\nachieve async WAL throttling in the future.\n\n> Clearly this isn't something we can default to on..\n\nYes, I agree. It's never meant to be on by default.\n\n> I don't like the resets around like this. Why not just move it into\n> XLogFlush()?\n\nFixed.\n\n> Hm - wonder if it'd be better to determine the LSN to wait for in\n> XLogInsertRecord(). There'll be plenty cases where we'll insert multiple (but\n> bounded number of) WAL records before processing interrupts. No need to flush\n> more aggressively than necessary...\n\nFixed.\n\n> SyncRepWaitForLSN() has this comment:\n> * 'lsn' represents the LSN to wait for. 'commit' indicates whether this LSN\n> * represents a commit record. If it doesn't, then we wait only for the WAL\n> * to be flushed if synchronous_commit is set to the higher level of\n> * remote_apply, because only commit records provide apply feedback.\n\nHm, not sure if I understand: are you saying that we should (in the\nthrottled scenario) have some special feedback msgs or not --\nirrespective of the setting? To be honest, the throttling shouldn't\nwait for the full standby setting; it's just about the slowdown effect (so\nIMHO it would be fine even in the remote_write/remote_apply scenario if\nthe remote walreceiver just received the data, not necessarily wrote\nit to a file or waited to apply it). Just this waiting for a\nround-trip ack about LSN progress would be enough to slow down the\nwriter (?).
I've added some timing log into the draft and it shows\nmore or less constantly solid RTT even as it stands:\n\npsql:insertBIG.sql:2: WARNING: throttling WAL down on this session\n(backendWalInserted=262632, LSN=2/A6CB70B8 flushingTo=2/A6CB6000)\npsql:insertBIG.sql:2: WARNING: throttling WAL down on this session -\nend (10.500052 ms)\npsql:insertBIG.sql:2: WARNING: throttling WAL down on this session\n(backendWalInserted=262632, LSN=2/A6CF7C08 flushingTo=2/A6CF6000)\npsql:insertBIG.sql:2: WARNING: throttling WAL down on this session -\nend (10.655370 ms)\npsql:insertBIG.sql:2: WARNING: throttling WAL down on this session\n(backendWalInserted=262632, LSN=2/A6D385E0 flushingTo=2/A6D38000)\npsql:insertBIG.sql:2: WARNING: throttling WAL down on this session -\nend (10.627334 ms)\npsql:insertBIG.sql:2: WARNING: throttling WAL down on this session\n(backendWalInserted=262632, LSN=2/A6D78FA0 flushingTo=2/A6D78000)\n[..]\n\n > I think we'd want a distinct wait event for this.\n\nAdded and tested that it shows up.\n\n> > +volatile sig_atomic_t XLogDelayPending = false;\n> I don't think the new variable needs to be volatile, or even\n> sig_atomic_t. We'll just manipulate it from outside signal handlers.\n\nChanged to bool, previously I wanted it to \"fit\" the above ones.\n\n-J.", "msg_date": "Fri, 27 Jan 2023 12:06:49 +0100", "msg_from": "Jakub Wartak <jakub.wartak@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Syncrep and improving latency due to WAL throttling" }, { "msg_contents": "Hi Bharath,\n\nOn Fri, Jan 27, 2023 at 12:04 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Fri, Jan 27, 2023 at 2:03 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> > On 2023-Jan-27, Bharath Rupireddy wrote:\n> >\n> > > Looking at the patch, the feature, in its current shape, focuses on\n> > > improving replication lag (by throttling WAL on the primary) only when\n> > > synchronous replication is enabled. Why is that? 
Why can't we design\n> > > it for replication in general (async, sync, and logical replication)?\n> > >\n> > > Keeping replication lag under check enables one to provide a better\n> > > RPO guarantee\n\nSorry for not answering earlier; although the title of the thread goes\nfor the SyncRep-only I think everyone would be open to also cover the\nmore advanced async scenarios that Satyanarayana proposed in those\nearlier threads (as just did Simon much earlier). I was proposing\nwal_throttle_larger_transactions=<..> (for the lack of better name),\nhowever v2 of the patch from today right now contains again reference\nto syncrep (it could be changed of course). It's just the code that is\nmissing that could be also added on top of v2, so we could combine our\nefforts. It's just the competency and time that I lack on how to\nimplement such async-scenario code-paths (maybe Tomas V. has something\nin mind with his own words [1]) so also any feedback from senior\nhackers is more than welcome ...\n\n-J.\n\n[1] - \"The solution I've imagined is something like autovacuum\nthrottling - do some accounting of how much \"WAL bandwidth\" each\nprocess consumed, and then do the delay/sleep in a suitable place. \"\n\n\n", "msg_date": "Fri, 27 Jan 2023 12:28:59 +0100", "msg_from": "Jakub Wartak <jakub.wartak@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Syncrep and improving latency due to WAL throttling" }, { "msg_contents": "On 1/27/23 08:18, Bharath Rupireddy wrote:\n> On Thu, Jan 26, 2023 at 9:21 PM Andres Freund <andres@anarazel.de> wrote:\n>>\n>>> 7. I think we need to not let backends throttle too frequently even\n>>> though they have crossed wal_throttle_threshold bytes. 
The best way is\n>>> to rely on replication lag, after all the goal of this feature is to\n>>> keep replication lag under check - say, throttle only when\n>>> wal_distance > wal_throttle_threshold && replication_lag >\n>>> wal_throttle_replication_lag_threshold.\n>>\n>> I think my idea of only forcing to flush/wait an LSN some distance in the past\n>> would automatically achieve that?\n> \n> I'm sorry, I couldn't get your point, can you please explain it a bit more?\n> \n\nThe idea is that we would not flush the exact current LSN, because\nthat's likely somewhere in the page, and we always write the whole page\nwhich leads to write amplification.\n\nBut if we backed off a bit, and wrote e.g. to the last page boundary,\nthat wouldn't have this issue (either the page was already flushed -\nnoop, or we'd have to flush it anyway).\n\nWe could even back off a bit more, to increase the probability it was\nactually flushed / sent to standby. That would still work, because the\nwhole point is not to allow one process to generate too much unflushed\nWAL, forcing the other (small) xacts to wait at commit.\n\nImagine we have the limit set to 8MB, i.e. the backend flushes WAL after\ngenerating 8MB of WAL. If we flush to the exact current LSN, the other\nbackends will wait for ~4MB on average. If we back off to 1MB, the wait\naverage increases to ~4.5MB. (This is simplified, as it ignores WAL from\nthe small xacts. But those flush regularly, which limit the amount. It\nalso ignores there might be multiple large xacts.)\n\n> Looking at the patch, the feature, in its current shape, focuses on\n> improving replication lag (by throttling WAL on the primary) only when\n> synchronous replication is enabled. Why is that? Why can't we design\n> it for replication in general (async, sync, and logical replication)?\n> \n\nThis focuses on sync rep, because that's where the commit latency comes\nfrom. 
Async doesn't have that issue, because it doesn't wait for the\nstandby.\n\nIn particular, the trick is in penalizing the backends generating a lot\nof WAL, while leaving the small xacts alone.\n\n> Keeping replication lag under check enables one to provide a better\n> RPO guarantee as discussed in the other thread\n> https://www.postgresql.org/message-id/CAHg%2BQDcO_zhgBCMn5SosvhuuCoJ1vKmLjnVuqUEOd4S73B1urw%40mail.gmail.com.\n> \n\nIsn't that a bit over-complicated? RPO generally only cares about xacts\nthat committed (because that's what you want to not lose), so why not\nsimply introduce a \"sync mode\" that uses a bit older LSN when\nwaiting for the replica? Seems much simpler and similar to what we\nalready do.\n\nYeah, if someone generates a lot of WAL in an uncommitted transaction, all\nof that may be lost. But who cares (from the RPO point of view)?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 27 Jan 2023 21:45:16 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Syncrep and improving latency due to WAL throttling" }, { "msg_contents": "Hi,\n\nOn 2023-01-27 12:06:49 +0100, Jakub Wartak wrote:\n> On Thu, Jan 26, 2023 at 4:49 PM Andres Freund <andres@anarazel.de> wrote:\n> \n> > Huh? Why did you remove the GUC?\n> \n> After reading previous threads, my optimism level of getting it ever\n> in shape of being widely accepted degraded significantly (mainly due\n> to the discussion of wider category of 'WAL I/O throttling' especially\n> in async case, RPO targets in async case and potentially calculating\n> global bandwidth).\n\nI think it's quite reasonable to limit this to a smaller scope. Particularly\nbecause those other goals are pretty vague but ambitious.
IMO the\nproblem with a lot of the threads is precisely that they aimed at a level\nof generality that isn't achievable in one step.\n\n\n> I've assumed that it is a working sketch, and as such not having GUC name\n> right now (just for sync case) would still allow covering various other\n> async cases in future without breaking compatibility with potential name GUC\n> changes (see my previous \"wal_throttle_larger_transactions=<strategies>\"\n> proposal ).\n\nIt's harder to benchmark something like this without a GUC, so I think it's\nworth having, even if it's not the final name.\n\n\n> > SyncRepWaitForLSN() has this comment:\n> > * 'lsn' represents the LSN to wait for. 'commit' indicates whether this LSN\n> > * represents a commit record. If it doesn't, then we wait only for the WAL\n> > * to be flushed if synchronous_commit is set to the higher level of\n> > * remote_apply, because only commit records provide apply feedback.\n> \n> Hm, not sure if I understand: are you saying that we should (in the\n> throttled scenario) have some special feedback msgs or not --\n> irrespective of the setting? To be honest the throttling shouldn't\n> wait for the standby full setting, it's just about slowdown fact (so\n> IMHO it would be fine even in remote_write/remote_apply scenario if\n> the remote walreceiver just received the data, not necessarily write\n> it into file or wait for for applying it). Just this waiting for a\n> round-trip ack about LSN progress would be enough to slow down the\n> writer (?). I've added some timing log into the draft and it shows\n> more or less constantly solid RTT even as it stands:\n\nMy problem was that the function header for SyncRepWaitForLSN() seems to say\nthat we don't wait at all if commit=false and synchronous_commit <\nremote_apply. But I think that might just be bad formulation.\n\n [...] 'commit' indicates whether this LSN\n * represents a commit record.
If it doesn't, then we wait only for the WAL\n * to be flushed if synchronous_commit is set to the higher level of\n * remote_apply, because only commit records provide apply feedback.\n\nbecause the code does something entirely different afaics:\n\n\t/* Cap the level for anything other than commit to remote flush only. */\n\tif (commit)\n\t\tmode = SyncRepWaitMode;\n\telse\n\t\tmode = Min(SyncRepWaitMode, SYNC_REP_WAIT_FLUSH);\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 27 Jan 2023 13:19:27 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Syncrep and improving latency due to WAL throttling" }, { "msg_contents": "Hi,\n\nOn 2023-01-27 12:48:43 +0530, Bharath Rupireddy wrote:\n> Looking at the patch, the feature, in its current shape, focuses on\n> improving replication lag (by throttling WAL on the primary) only when\n> synchronous replication is enabled. Why is that? Why can't we design\n> it for replication in general (async, sync, and logical replication)?\n> \n> Keeping replication lag under check enables one to provide a better\n> RPO guarantee as discussed in the other thread\n> https://www.postgresql.org/message-id/CAHg%2BQDcO_zhgBCMn5SosvhuuCoJ1vKmLjnVuqUEOd4S73B1urw%40mail.gmail.com.\n\nI think something narrower and easier to achieve is a good thing. 
We've\nalready had loads of discussion for the more general problem, without a lot of\nactual progress.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 27 Jan 2023 13:20:39 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Syncrep and improving latency due to WAL throttling" }, { "msg_contents": "Hi,\n\nOn 2023-01-27 21:45:16 +0100, Tomas Vondra wrote:\n> On 1/27/23 08:18, Bharath Rupireddy wrote:\n> >> I think my idea of only forcing to flush/wait an LSN some distance in the past\n> >> would automatically achieve that?\n> > \n> > I'm sorry, I couldn't get your point, can you please explain it a bit more?\n> > \n> \n> The idea is that we would not flush the exact current LSN, because\n> that's likely somewhere in the page, and we always write the whole page\n> which leads to write amplification.\n> \n> But if we backed off a bit, and wrote e.g. to the last page boundary,\n> that wouldn't have this issue (either the page was already flushed -\n> noop, or we'd have to flush it anyway).\n\nYep.\n\n\n> We could even back off a bit more, to increase the probability it was\n> actually flushed / sent to standby.\n\nThat's not the sole goal, from my end: I'd like to avoid writing out +\nflushing the WAL in too small chunks. Imagine a few concurrent vacuums or\nCOPYs or such - if we're unlucky they'd each end up exceeding their \"private\"\nlimit close to each other, leading to a number of small writes of the\nWAL. 
Which could end up increasing local commit latency / iops.\n\nIf we instead decide to only ever flush up to something like\n last_page_boundary - 1/8 * throttle_pages * XLOG_BLCKSZ\n\nwe'd make sure that the throttling mechanism won't cause a lot of small\nwrites.\n\n\n> > Keeping replication lag under check enables one to provide a better\n> > RPO guarantee as discussed in the other thread\n> > https://www.postgresql.org/message-id/CAHg%2BQDcO_zhgBCMn5SosvhuuCoJ1vKmLjnVuqUEOd4S73B1urw%40mail.gmail.com.\n> > \n> \n> Isn't that a bit over-complicated? RPO generally only cares about xacts\n> that committed (because that's what you want to not lose), so why not to\n> simply introduce a \"sync mode\" that simply uses a bit older LSN when\n> waiting for the replica? Seems much simpler and similar to what we\n> already do.\n\nI don't think that really helps you that much. If there's e.g. a huge VACUUM /\nCOPY emitting loads of WAL you'll suddenly see commit latency of\nconcurrently committing transactions spike into oblivion.
Whereas a general\nWAL throttling mechanism would throttle the VACUUM, without impacting the\ncommit latency of normal transactions.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 27 Jan 2023 13:33:59 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Syncrep and improving latency due to WAL throttling" }, { "msg_contents": "\n\nOn 1/27/23 22:33, Andres Freund wrote:\n> Hi,\n> \n> On 2023-01-27 21:45:16 +0100, Tomas Vondra wrote:\n>> On 1/27/23 08:18, Bharath Rupireddy wrote:\n>>>> I think my idea of only forcing to flush/wait an LSN some distance in the past\n>>>> would automatically achieve that?\n>>>\n>>> I'm sorry, I couldn't get your point, can you please explain it a bit more?\n>>>\n>>\n>> The idea is that we would not flush the exact current LSN, because\n>> that's likely somewhere in the page, and we always write the whole page\n>> which leads to write amplification.\n>>\n>> But if we backed off a bit, and wrote e.g. to the last page boundary,\n>> that wouldn't have this issue (either the page was already flushed -\n>> noop, or we'd have to flush it anyway).\n> \n> Yep.\n> \n> \n>> We could even back off a bit more, to increase the probability it was\n>> actually flushed / sent to standby.\n> \n> That's not the sole goal, from my end: I'd like to avoid writing out +\n> flushing the WAL in too small chunks. Imagine a few concurrent vacuums or\n> COPYs or such - if we're unlucky they'd each end up exceeding their \"private\"\n> limit close to each other, leading to a number of small writes of the\n> WAL. Which could end up increasing local commit latency / iops.\n> \n> If we instead decide to only ever flush up to something like\n> last_page_boundary - 1/8 * throttle_pages * XLOG_BLCKSZ\n> \n> we'd make sure that the throttling mechanism won't cause a lot of small\n> writes.\n> \n\nI'm not saying we shouldn't do this, but I still don't see how this\ncould make a measurable difference. 
At least assuming a sensible value\nof the throttling limit (say, more than 256kB per backend), and OLTP\nworkload running concurrently. That means ~64 extra flushes/writes per\n16MB segment (at most). Yeah, a particular page might get unlucky and be\nflushed by multiple backends, but the average still holds. Meanwhile,\nthe OLTP transactions will generate (at least) an order of magnitude\nmore flushes.\n\n> \n>>> Keeping replication lag under check enables one to provide a better\n>>> RPO guarantee as discussed in the other thread\n>>> https://www.postgresql.org/message-id/CAHg%2BQDcO_zhgBCMn5SosvhuuCoJ1vKmLjnVuqUEOd4S73B1urw%40mail.gmail.com.\n>>>\n>>\n>> Isn't that a bit over-complicated? RPO generally only cares about xacts\n>> that committed (because that's what you want to not lose), so why not to\n>> simply introduce a \"sync mode\" that simply uses a bit older LSN when\n>> waiting for the replica? Seems much simpler and similar to what we\n>> already do.\n> \n> I don't think that really helps you that much. If there's e.g. a huge VACUUM /\n> COPY emitting loads of WAL you'll suddenly see commit latency of a\n> concurrently committing transactions spike into oblivion. Whereas a general\n> WAL throttling mechanism would throttle the VACUUM, without impacting the\n> commit latency of normal transactions.\n> \n\nTrue, but it solves the RPO goal which is what the other thread was about.\n\nIMHO it's useful to look at this as a resource scheduling problem:\nlimited WAL bandwidth consumed by backends, with the bandwidth\ndistributed using some sort of policy.\n\nThe patch discussed in this thread uses a fundamentally unfair policy,\nwith throttling applied only to backends that produce a lot of WAL, while\ntrying to leave the OLTP traffic as unaffected as possible.\n\nThe RPO thread seems to be aiming for a \"fair\" policy, providing the\nsame fraction of bandwidth to all processes.
This will affect all xacts\nthe same way (sleeps proportional to amount of WAL generated by the xact).\n\nPerhaps we want such alternative scheduling policies, but it'll probably\nrequire something like the autovacuum throttling.\n\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sat, 28 Jan 2023 01:36:17 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Syncrep and improving latency due to WAL throttling" }, { "msg_contents": "On 1/27/23 22:19, Andres Freund wrote:\n> Hi,\n> \n> On 2023-01-27 12:06:49 +0100, Jakub Wartak wrote:\n>> On Thu, Jan 26, 2023 at 4:49 PM Andres Freund <andres@anarazel.de> wrote:\n>>\n>>> Huh? Why did you remove the GUC?\n>>\n>> After reading previous threads, my optimism level of getting it ever\n>> in shape of being widely accepted degraded significantly (mainly due\n>> to the discussion of wider category of 'WAL I/O throttling' especially\n>> in async case, RPO targets in async case and potentially calculating\n>> global bandwidth).\n> \n> I think it's quite reasonable to limit this to a smaller scope. Particularly\n> because those other goals are pretty vague but ambitious goals. IMO the\n> problem with a lot of the threads is precisely that that they aimed at a level\n> of generallity that isn't achievable in one step.\n> \n\n+1 to that\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sat, 28 Jan 2023 01:37:49 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Syncrep and improving latency due to WAL throttling" }, { "msg_contents": "On Sat, Jan 28, 2023 at 6:06 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> >\n> > That's not the sole goal, from my end: I'd like to avoid writing out +\n> > flushing the WAL in too small chunks. 
Imagine a few concurrent vacuums or\n> > COPYs or such - if we're unlucky they'd each end up exceeding their \"private\"\n> > limit close to each other, leading to a number of small writes of the\n> > WAL. Which could end up increasing local commit latency / iops.\n> >\n> > If we instead decide to only ever flush up to something like\n> > last_page_boundary - 1/8 * throttle_pages * XLOG_BLCKSZ\n> >\n> > we'd make sure that the throttling mechanism won't cause a lot of small\n> > writes.\n> >\n>\n> I'm not saying we shouldn't do this, but I still don't see how this\n> could make a measurable difference. At least assuming a sensible value\n> of the throttling limit (say, more than 256kB per backend), and OLTP\n> workload running concurrently. That means ~64 extra flushes/writes per\n> 16MB segment (at most). Yeah, a particular page might get unlucky and be\n> flushed by multiple backends, but the average still holds. Meanwhile,\n> the OLTP transactions will generate (at least) an order of magnitude\n> more flushes.\n\nI think measuring the number of WAL flushes postgres generates with\nand without this feature is a good way to know this feature's\neffects on IOPS. Probably it's even better with variations in\nsynchronous_commit_flush_wal_after or the throttling limits.\n\nOn Sat, Jan 28, 2023 at 6:08 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 1/27/23 22:19, Andres Freund wrote:\n> > Hi,\n> >\n> > On 2023-01-27 12:06:49 +0100, Jakub Wartak wrote:\n> >> On Thu, Jan 26, 2023 at 4:49 PM Andres Freund <andres@anarazel.de> wrote:\n> >>\n> >>> Huh?
Why did you remove the GUC?\n> >>\n> >> After reading previous threads, my optimism level of getting it ever\n> >> in shape of being widely accepted degraded significantly (mainly due\n> >> to the discussion of wider category of 'WAL I/O throttling' especially\n> >> in async case, RPO targets in async case and potentially calculating\n> >> global bandwidth).\n> >\n> > I think it's quite reasonable to limit this to a smaller scope. Particularly\n> > because those other goals are pretty vague but ambitious goals. IMO the\n> > problem with a lot of the threads is precisely that that they aimed at a level\n> > of generallity that isn't achievable in one step.\n> >\n>\n> +1 to that\n\nOkay, I agree to limit the scope first to synchronous replication.\n\nAlthough v2 is a WIP patch, I have some comments:\n1. Coding style doesn't conform to the Postgres standard:\n+/*\n+ * Called from ProcessMessageInterrupts() to avoid waiting while\nbeing in critical section\n+ */\n80-char line limit\n\n+void HandleXLogDelayPending()\nA new line is missing between the function return type and the function name\n\n+ elog(WARNING, \"throttling WAL down on this session\n(backendWalInserted=%d, LSN=%X/%X flushingTo=%X/%X)\",\n+ backendWalInserted,\n+ LSN_FORMAT_ARGS(XactLastThrottledRecEnd),\n+ LSN_FORMAT_ARGS(LastFullyWrittenXLogPage));\nIndentation issue - space needed in the next lines after elog(WARNING,..\n\n+ elog(WARNING, \"throttling WAL down on this session - end (%f\nms)\", timediff);\n80-char line limit; timediff); should be on a new line.\n\n+ //RESUME_INTERRUPTS();\n+ //HOLD_INTERRUPTS();\nMulti-line comments are used elsewhere in the code.\n\nBetter to run pgindent on the patch.\n\n2. It'd be better to add a TAP test hitting the WAL throttling.\n\n3. We generally don't need timings to be calculated explicitly when we\nemit before and after log messages. If needed one can calculate the\nwait time from timestamps of the log messages.
If it still needs an\nexplicit time difference which I don't think is required, because we\nhave a special event and before/after log messages, I think it needs\nto be under track_wal_io_timing to keep things simple.\n\n4. XLogInsertRecord() can be a hot path for high-write workload, so\neffects of adding anything in it needs to be measured. So, it's better\nto run benchmarks with this feature enabled and disabled.\n\n5. Missing documentation of this feature and the GUC. I think we can\ndescribe extensively why we'd need a feature of this kind in the\ndocumentation for better adoption or usability.\n\n6. Shouldn't the max limit be MAX_KILOBYTES?\n+ &synchronous_commit_flush_wal_after,\n+ 0, 0, 1024L*1024L,\n\n7. Can this GUC name be something like\nsynchronous_commit_wal_throttle_threshold to better reflect what it\ndoes for a backend?\n+ {\"synchronous_commit_flush_wal_after\", PGC_USERSET,\nREPLICATION_SENDING,\n\n8. A typo - s/confiration/confirmation\n+ gettext_noop(\"Sets the maximum logged WAL in kbytes,\nafter which wait for sync commit confiration even without commit \"),\n\n9. This\n\"Sets the maximum logged WAL in kbytes, after which wait for sync\ncommit confiration even without commit \"\nbetter be something like below?\n\"Sets the maximum amount of WAL in kilobytes a backend generates after\nwhich it waits for synchronous commit confiration even without commit\n\"\n\n10. I think there's nothing wrong in adding some assertions in\nHandleXLogDelayPending():\nAssert(synchronous_commit_flush_wal_after > 0);\nAssert(backendWalInserted > synchronous_commit_flush_wal_after * 1024L);\nAssert(XactLastThrottledRecEnd is not InvalidXLogRecPtr);\n\n11. 
Specify the reason in the comments as to why we had to do the\nfollowing things:\nHere:\n+ /* flush only up to the last fully filled page */\n+ XLogRecPtr LastFullyWrittenXLogPage = XactLastThrottledRecEnd\n- (XactLastThrottledRecEnd % XLOG_BLCKSZ);\nTalk about how this avoids multiple-flushes for the same page.\n\nHere:\n+ * Called from ProcessMessageInterrupts() to avoid waiting while\nbeing in critical section\n+ */\n+void HandleXLogDelayPending()\nTalk about how waiting in a critical section, that is in\nXLogInsertRecord() causes locks to be held longer durations and other\neffects.\n\nHere:\n+ /* WAL throttling */\n+ backendWalInserted += rechdr->xl_tot_len;\nBe a bit more verbose about why we try to throttle WAL and why only\nfor sync replication, the advantages, effects etc.\n\n12. This better be a DEBUG1 or 2 message instead of WARNINGs to not\nclutter server logs.\n+ /* XXX Debug for now */\n+ elog(WARNING, \"throttling WAL down on this session\n(backendWalInserted=%d, LSN=%X/%X flushingTo=%X/%X)\",\n+ backendWalInserted,\n+ LSN_FORMAT_ARGS(XactLastThrottledRecEnd),\n+ LSN_FORMAT_ARGS(LastFullyWrittenXLogPage));\n\n+ elog(WARNING, \"throttling WAL down on this session - end (%f\nms)\", timediff);\n\n13. BTW, we don't need to hold interrupts while waiting for sync\nreplication ack as it may block query cancels or proc die pendings.\nYou can remove these, unless there's a strong reason.\n+ //HOLD_INTERRUPTS();\n+ //RESUME_INTERRUPTS();\n\n14. Add this wait event in the documentation.\n+ case WAIT_EVENT_SYNC_REP_THROTTLED:\n+ event_name = \"SyncRepThrottled\";\n+ break;\n\n15. Why can XLogDelayPending not be a volatile atomic variable? 
Is it\nbecause it's not something being set within a signal handler?\n extern PGDLLIMPORT volatile sig_atomic_t IdleStatsUpdateTimeoutPending;\n+extern PGDLLIMPORT bool XLogDelayPending;\n\n16.\n+ instr_time wal_throttle_time_start, wal_throttle_time_end;\n+ double timediff;\n+ XLogDelayPending = false;\nAn extra line needed after variable declaration and assignment.\n\n17. I think adding how many times a backend throttled WAL to\npg_stat_activity is a good metric.\n\n18. Can you avoid the need of new functions SyncRepWaitForLSNInternal\nand SyncRepWaitForLSNThrottled by relying on the global throttling\nstate to determine the correct waitevent in SyncRepWaitForLSN?\n\n19. I think measuring the number of WAL flushes with and without this\nfeature that the postgres generates is great to know this feature\neffects on IOPS. Probably it's even better with variations in\nsynchronous_commit_flush_wal_after or the throttling limits.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 30 Jan 2023 13:46:46 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Syncrep and improving latency due to WAL throttling" }, { "msg_contents": "On Mon, Jan 30, 2023 at 9:16 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n\nHi Bharath, thanks for reviewing.\n\n> I think measuring the number of WAL flushes with and without this\n> feature that the postgres generates is great to know this feature\n> effects on IOPS. Probably it's even better with variations in\n> synchronous_commit_flush_wal_after or the throttling limits.\n\nIt's the same as point 19, so I covered it there.\n\n> Although v2 is a WIP patch, I have some comments:\n> 1. Coding style doesn't confirm to the Postgres standard:\n\nFixed all those that you mentioned and I've removed elog() and code\nfor timing.
BTW: is there a way to pgindent only my changes (on git\ndiff?)\n\n> 2. It'd be better to add a TAP test hitting the WAL throttling.\n\nTODO, any hints on how that test should look are welcome\n(checking pg_stat_wal?) I could spot only one place for it --\nsrc/test/recovery/007_sync_rep.pl\n\n> 3. We generally don't need timings to be calculated explicitly when we\n> emit before and after log messages. If needed one can calculate the\n> wait time from timestamps of the log messages. If it still needs an\n> explicit time difference which I don't think is required, because we\n> have a special event and before/after log messages, I think it needs\n> to be under track_wal_io_timing to keep things simple.\n\nRemoved, it was just a debugging aid to demonstrate the effect in the\nwaiting session.\n\n> 4. XLogInsertRecord() can be a hot path for high-write workload, so\n> effects of adding anything in it needs to be measured. So, it's better\n> to run benchmarks with this feature enabled and disabled.\n\nWhen the GUC is off (0 / default), in my tests the impact is none (it's\njust a set of simple IFs), however if the feature is enabled then the\nINSERT is slowed down (I've repeated the initial test from the 1st post\nand the single-statement INSERT for 50MB of WAL gives the same 4-5s, and\n~23s +/- 1s when the feature is activated with an RTT of 10.1ms, but\nthat's intentional).\n\n> 5. Missing documentation of this feature and the GUC. I think we can\n> describe extensively why we'd need a feature of this kind in the\n> documentation for better adoption or usability.\n\nTODO, I'm planning on adding documentation when we're a little bit\ncloser to adding this to a CF.\n\n> 6. Shouldn't the max limit be MAX_KILOBYTES?\n> + &synchronous_commit_flush_wal_after,\n> + 0, 0, 1024L*1024L,\n\nFixed.\n\n> 7.
Can this GUC name be something like\n> synchronous_commit_wal_throttle_threshold to better reflect what it\n> does for a backend?\n> + {\"synchronous_commit_flush_wal_after\", PGC_USERSET,\n> REPLICATION_SENDING,\n\nDone.\n\n> 8. A typo - s/confiration/confirmation\n[..]\n> 9. This\n> \"Sets the maximum logged WAL in kbytes, after which wait for sync\n> commit confiration even without commit \"\n> better be something like below?\n> \"Sets the maximum amount of WAL in kilobytes a backend generates after\n> which it waits for synchronous commit confiration even without commit\n> \"\n\nFixed as you have suggested.\n\n> 10. I think there's nothing wrong in adding some assertions in\n> HandleXLogDelayPending():\n> Assert(synchronous_commit_flush_wal_after > 0);\n> Assert(backendWalInserted > synchronous_commit_flush_wal_after * 1024L);\n> Assert(XactLastThrottledRecEnd is not InvalidXLogRecPtr);\n\nSure, added.\n\n> 11. Specify the reason in the comments as to why we had to do the\n> following things:\n> Here:\n> + /* flush only up to the last fully filled page */\n> + XLogRecPtr LastFullyWrittenXLogPage = XactLastThrottledRecEnd\n> - (XactLastThrottledRecEnd % XLOG_BLCKSZ);\n> Talk about how this avoids multiple-flushes for the same page.\n>\n> Here:\n> + * Called from ProcessMessageInterrupts() to avoid waiting while\n> being in critical section\n> + */\n> +void HandleXLogDelayPending()\n> Talk about how waiting in a critical section, that is in\n> XLogInsertRecord() causes locks to be held longer durations and other\n> effects.\n\nAdded.\n\n> Here:\n> + /* WAL throttling */\n> + backendWalInserted += rechdr->xl_tot_len;\n> Be a bit more verbose about why we try to throttle WAL and why only\n> for sync replication, the advantages, effects etc.\n\nAdded.\n\n> 12. 
This better be a DEBUG1 or 2 message instead of WARNINGs to not\n> clutter server logs.\n> + /* XXX Debug for now */\n> + elog(WARNING, \"throttling WAL down on this session\n> (backendWalInserted=%d, LSN=%X/%X flushingTo=%X/%X)\",\n> + backendWalInserted,\n> + LSN_FORMAT_ARGS(XactLastThrottledRecEnd),\n> + LSN_FORMAT_ARGS(LastFullyWrittenXLogPage));\n>\n> + elog(WARNING, \"throttling WAL down on this session - end (%f\n> ms)\", timediff);\n\nOK, that's entirely removed.\n\n> 13. BTW, we don't need to hold interrupts while waiting for sync\n> replication ack as it may block query cancels or proc die pendings.\n> You can remove these, unless there's a strong reason.\n> + //HOLD_INTERRUPTS();\n> + //RESUME_INTERRUPTS();\n\nSure, removed. However, one problem I've discovered is that we were\nhitting Assert(InterruptHoldoffCount > 0) in SyncRepWaitForLSN, so\nI've fixed that too.\n\n> 14. Add this wait event in the documentation.\n> + case WAIT_EVENT_SYNC_REP_THROTTLED:\n> + event_name = \"SyncRepThrottled\";\n> + break;\n>\n> 15. Why can XLogDelayPending not be a volatile atomic variable? Is it\n> because it's not something being set within a signal handler?\n> extern PGDLLIMPORT volatile sig_atomic_t IdleStatsUpdateTimeoutPending;\n> +extern PGDLLIMPORT bool XLogDelayPending;\n\nAdded a comment explaining that.\n\n> 16.\n> + instr_time wal_throttle_time_start, wal_throttle_time_end;\n> + double timediff;\n> + XLogDelayPending = false;\n> An extra line needed after variable declaration and assignment.\n\nFixed.\n\n> 17. I think adding how many times a backend throttled WAL to\n> pg_stat_activity is a good metric.\n\nGood idea, added; catversion and pgstat format id were bumped. I've\nalso added it to the per-query EXPLAIN (WAL) so it logs something like\n\"WAL: records=500000 bytes=529500000 throttled=2016\" , however I would\nappreciate a better name proposal on how to name that.\n\n> 18. 
Can you avoid the need of new functions SyncRepWaitForLSNInternal\n> and SyncRepWaitForLSNThrottled by relying on the global throttling\n> state to determine the correct waitevent in SyncRepWaitForLSN?\n\nDone.\n\n> 19. I think measuring the number of WAL flushes with and without this\n> feature that the postgres generates is great to know this feature\n> effects on IOPS. Probably it's even better with variations in\n> synchronous_commit_flush_wal_after or the throttling limits.\n\ndefault =>\nINSERT 0 500000\nTime: 6996.340 ms (00:06.996)\n-[ RECORD 1 ]----+------------------------------\nwal_records | 500001\nwal_fpi | 0\nwal_bytes | 529500034\nwal_buffers_full | 44036\nwal_write | 44056\nwal_sync | 40\nwal_write_time | 317.991\nwal_sync_time | 2690.425\nwal_throttled | 0\nstats_reset | 2023-02-01 10:31:44.475651+01\nand 1 call to probe_postgres:XLogFlush\n\nset synchronous_commit_wal_throttle_threshold to '256kB'; =>\nINSERT 0 500000\nTime: 25476.155 ms (00:25.476)\n-[ RECORD 1 ]----+------------------------------\nwal_records | 500001\nwal_fpi | 0\nwal_bytes | 529500034\nwal_buffers_full | 0\nwal_write | 2062\nwal_sync | 2062\nwal_write_time | 180.177\nwal_sync_time | 1409.522\nwal_throttled | 2016\nstats_reset | 2023-02-01 10:32:01.955513+01\nand 2017 calls to probe_postgres:XLogFlush\n\nset synchronous_commit_wal_throttle_threshold to '1MB'; =>\nINSERT 0 500000\nTime: 10113.278 ms (00:10.113)\n-[ RECORD 1 ]----+------------------------------\nwal_records | 642864\nwal_fpi | 0\nwal_bytes | 537929315\nwal_buffers_full | 0\nwal_write | 560\nwal_sync | 559\nwal_write_time | 182.586\nwal_sync_time | 988.72\nwal_throttled | 504\nstats_reset | 2023-02-01 10:32:36.250678+01\n\nMaybe we should avoid calling fsyncs for WAL throttling? (by teaching\nHandleXLogDelayPending()->XLogFlush()->XLogWrite() to NOT to sync when\nwe are flushing just because of WAL thortting ?) 
Would that still be\nsafe?\n\n-J.", "msg_date": "Wed, 1 Feb 2023 11:04:14 +0100", "msg_from": "Jakub Wartak <jakub.wartak@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Syncrep and improving latency due to WAL throttling" }, { "msg_contents": "\n\nOn 2/1/23 11:04, Jakub Wartak wrote:\n> On Mon, Jan 30, 2023 at 9:16 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> \n> Hi Bharath, thanks for reviewing.\n> \n>> I think measuring the number of WAL flushes with and without this\n>> feature that the postgres generates is great to know this feature\n>> effects on IOPS. Probably it's even better with variations in\n>> synchronous_commit_flush_wal_after or the throttling limits.\n> \n> It's the same as point 19, so I covered it there.\n> \n>> Although v2 is a WIP patch, I have some comments:\n>> 1. Coding style doesn't confirm to the Postgres standard:\n> \n> Fixed all thoses that you mentioned and I've removed elog() and code\n> for timing. BTW: is there a way to pgindent only my changes (on git\n> diff?)\n> \n>> 2. It'd be better to add a TAP test hitting the WAL throttling.\n> \n> TODO, any hints on how that test should look like are welcome\n> (checking pg_stat_wal?) I've could spot only 1 place for it --\n> src/test/recovery/007_sync_rep.pl\n> \n>> 3. We generally don't need timings to be calculated explicitly when we\n>> emit before and after log messages. If needed one can calculate the\n>> wait time from timestamps of the log messages. If it still needs an\n>> explicit time difference which I don't think is required, because we\n>> have a special event and before/after log messages, I think it needs\n>> to be under track_wal_io_timing to keep things simple.\n> \n> Removed, it was just debugging aid to demonstrate the effect in the\n> session waiting.\n> \n>> 4. XLogInsertRecord() can be a hot path for high-write workload, so\n>> effects of adding anything in it needs to be measured. 
So, it's better\n>> to run benchmarks with this feature enabled and disabled.\n> \n> When the GUC is off(0 / default), in my tests the impact is none (it's\n> just set of simple IFs), however if the feature is enabled then the\n> INSERT is slowed down (I've repeated the initial test from 1st post\n> and single-statement INSERT for 50MB WAL result is the same 4-5s and\n> ~23s +/- 1s when feature is activated when the RTT is 10.1ms, but\n> that's intentional).\n> \n>> 5. Missing documentation of this feature and the GUC. I think we can\n>> describe extensively why we'd need a feature of this kind in the\n>> documentation for better adoption or usability.\n> \n> TODO, I'm planning on adding documentation when we'll be a little bit\n> closer to adding to CF.\n> \n>> 6. Shouldn't the max limit be MAX_KILOBYTES?\n>> + &synchronous_commit_flush_wal_after,\n>> + 0, 0, 1024L*1024L,\n> \n> Fixed.\n> \n>> 7. Can this GUC name be something like\n>> synchronous_commit_wal_throttle_threshold to better reflect what it\n>> does for a backend?\n>> + {\"synchronous_commit_flush_wal_after\", PGC_USERSET,\n>> REPLICATION_SENDING,\n> \n> Done.\n> \n>> 8. A typo - s/confiration/confirmation\n> [..]\n>> 9. This\n>> \"Sets the maximum logged WAL in kbytes, after which wait for sync\n>> commit confiration even without commit \"\n>> better be something like below?\n>> \"Sets the maximum amount of WAL in kilobytes a backend generates after\n>> which it waits for synchronous commit confiration even without commit\n>> \"\n> \n> Fixed as you have suggested.\n> \n>> 10. I think there's nothing wrong in adding some assertions in\n>> HandleXLogDelayPending():\n>> Assert(synchronous_commit_flush_wal_after > 0);\n>> Assert(backendWalInserted > synchronous_commit_flush_wal_after * 1024L);\n>> Assert(XactLastThrottledRecEnd is not InvalidXLogRecPtr);\n> \n> Sure, added.\n> \n>> 11. 
Specify the reason in the comments as to why we had to do the\n>> following things:\n>> Here:\n>> + /* flush only up to the last fully filled page */\n>> + XLogRecPtr LastFullyWrittenXLogPage = XactLastThrottledRecEnd\n>> - (XactLastThrottledRecEnd % XLOG_BLCKSZ);\n>> Talk about how this avoids multiple-flushes for the same page.\n>>\n>> Here:\n>> + * Called from ProcessMessageInterrupts() to avoid waiting while\n>> being in critical section\n>> + */\n>> +void HandleXLogDelayPending()\n>> Talk about how waiting in a critical section, that is in\n>> XLogInsertRecord() causes locks to be held longer durations and other\n>> effects.\n> \n> Added.\n> \n>> Here:\n>> + /* WAL throttling */\n>> + backendWalInserted += rechdr->xl_tot_len;\n>> Be a bit more verbose about why we try to throttle WAL and why only\n>> for sync replication, the advantages, effects etc.\n> \n> Added.\n> \n>> 12. This better be a DEBUG1 or 2 message instead of WARNINGs to not\n>> clutter server logs.\n>> + /* XXX Debug for now */\n>> + elog(WARNING, \"throttling WAL down on this session\n>> (backendWalInserted=%d, LSN=%X/%X flushingTo=%X/%X)\",\n>> + backendWalInserted,\n>> + LSN_FORMAT_ARGS(XactLastThrottledRecEnd),\n>> + LSN_FORMAT_ARGS(LastFullyWrittenXLogPage));\n>>\n>> + elog(WARNING, \"throttling WAL down on this session - end (%f\n>> ms)\", timediff);\n> \n> OK, that's entirely removed.\n> \n>> 13. BTW, we don't need to hold interrupts while waiting for sync\n>> replication ack as it may block query cancels or proc die pendings.\n>> You can remove these, unless there's a strong reason.\n>> + //HOLD_INTERRUPTS();\n>> + //RESUME_INTERRUPTS();\n> \n> Sure, removed. However, one problem I've discovered is that we were\n> hitting Assert(InterruptHoldoffCount > 0) in SyncRepWaitForLSN, so\n> I've fixed that too.\n> \n>> 14. Add this wait event in the documentation.\n>> + case WAIT_EVENT_SYNC_REP_THROTTLED:\n>> + event_name = \"SyncRepThrottled\";\n>> + break;\n>>\n>> 15. 
Why can XLogDelayPending not be a volatile atomic variable? Is it\n>> because it's not something being set within a signal handler?\n>> extern PGDLLIMPORT volatile sig_atomic_t IdleStatsUpdateTimeoutPending;\n>> +extern PGDLLIMPORT bool XLogDelayPending;\n> \n> Added a comment explaining that.\n> \n>> 16.\n>> + instr_time wal_throttle_time_start, wal_throttle_time_end;\n>> + double timediff;\n>> + XLogDelayPending = false;\n>> An extra line needed after variable declaration and assignment.\n> \n> Fixed.\n> \n>> 17. I think adding how many times a backend throttled WAL to\n>> pg_stat_activity is a good metric.\n> \n> Good idea, added; catversion and pgstat format id were bumped. I've\n> also added it to the per-query EXPLAIN (WAL) so it logs something like\n> \"WAL: records=500000 bytes=529500000 throttled=2016\" , however I would\n> appreciate a better name proposal on how to name that.\n> \n>> 18. Can you avoid the need of new functions SyncRepWaitForLSNInternal\n>> and SyncRepWaitForLSNThrottled by relying on the global throttling\n>> state to determine the correct waitevent in SyncRepWaitForLSN?\n> \n> Done.\n> \n>> 19. I think measuring the number of WAL flushes with and without this\n>> feature that the postgres generates is great to know this feature\n>> effects on IOPS. 
Probably it's even better with variations in\n>> synchronous_commit_flush_wal_after or the throttling limits.\n> \n> default =>\n> INSERT 0 500000\n> Time: 6996.340 ms (00:06.996)\n> -[ RECORD 1 ]----+------------------------------\n> wal_records | 500001\n> wal_fpi | 0\n> wal_bytes | 529500034\n> wal_buffers_full | 44036\n> wal_write | 44056\n> wal_sync | 40\n> wal_write_time | 317.991\n> wal_sync_time | 2690.425\n> wal_throttled | 0\n> stats_reset | 2023-02-01 10:31:44.475651+01\n> and 1 call to probe_postgres:XLogFlush\n> \n> set synchronous_commit_wal_throttle_threshold to '256kB'; =>\n> INSERT 0 500000\n> Time: 25476.155 ms (00:25.476)\n> -[ RECORD 1 ]----+------------------------------\n> wal_records | 500001\n> wal_fpi | 0\n> wal_bytes | 529500034\n> wal_buffers_full | 0\n> wal_write | 2062\n> wal_sync | 2062\n> wal_write_time | 180.177\n> wal_sync_time | 1409.522\n> wal_throttled | 2016\n> stats_reset | 2023-02-01 10:32:01.955513+01\n> and 2017 calls to probe_postgres:XLogFlush\n> \n> set synchronous_commit_wal_throttle_threshold to '1MB'; =>\n> INSERT 0 500000\n> Time: 10113.278 ms (00:10.113)\n> -[ RECORD 1 ]----+------------------------------\n> wal_records | 642864\n> wal_fpi | 0\n> wal_bytes | 537929315\n> wal_buffers_full | 0\n> wal_write | 560\n> wal_sync | 559\n> wal_write_time | 182.586\n> wal_sync_time | 988.72\n> wal_throttled | 504\n> stats_reset | 2023-02-01 10:32:36.250678+01\n> \n\nI'm not quite sure how to interpret these numbers and what conclusions\nto draw. ~520MB of WAL is about 2000 x 256kB chunks, so it's not\nsurprising we do ~2017 XLogFlush calls.\n\nBut does that mean we did more I/O operations (like fsyncs)? Not\nnecessarily, because something else might have done a flush too. For\nexample, if there are concurrent sessions doing commit, that'll trigger\nfsync. And if we back off a bit (e.g. 
to the LSN of the last complete\npage), we'll simply piggyback on that and won't do any extra fsync or\nother IO.\n\nOf course, if there is no other concurrent activity triggering flushes,\nthis will do extra (network) I/O and fsync. But that's expected and\nflushing data more often naturally reduces throughput. The impact is\nmostly proportional to network latency.\n\nThe way I see this is \"latency guarantee\" - not allowing more unflushed\n(not confirmed by sync replica) WAL than can be sent/confirmed within\nsome time limit. Of course, we don't have a way to specify this in time\ndirectly, so we have to specify amount of WAL instead. And stricter\nguarantees generally lead to lower throughput.\n\nI don't think there's a way around this, except for disabling the\nthrottling by default - if someone enables that, he/she intentionally\ndid that because latency is more important than throughput.\n\n> Maybe we should avoid calling fsyncs for WAL throttling? (by teaching\n> HandleXLogDelayPending()->XLogFlush()->XLogWrite() to NOT to sync when\n> we are flushing just because of WAL thortting ?) Would that still be\n> safe?\n\nIt's not clear to me how could this work and still be safe. I mean, we\n*must* flush the local WAL first, otherwise the replica could get ahead\n(if we send unflushed WAL to replica and then crash). Which would be\nreally bad, obviously.\n\nAnd we *have* to send the data to the sync replica, because that's the\nwhole point of this thread.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 1 Feb 2023 14:13:57 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Syncrep and improving latency due to WAL throttling" }, { "msg_contents": "On Wed, Feb 1, 2023 at 2:14 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n\n> > Maybe we should avoid calling fsyncs for WAL throttling? 
(by teaching\n> > HandleXLogDelayPending()->XLogFlush()->XLogWrite() to NOT to sync when\n> > we are flushing just because of WAL thortting ?) Would that still be\n> > safe?\n>\n> It's not clear to me how could this work and still be safe. I mean, we\n> *must* flush the local WAL first, otherwise the replica could get ahead\n> (if we send unflushed WAL to replica and then crash). Which would be\n> really bad, obviously.\n\nWell it was just a thought: in this particular test - with no other\nconcurrent activity happening - we are fsyncing() uncommitted\nHeap/INSERT data much earlier than the final Transaction/COMMIT WAL\nrecord comes into play. I agree that some other concurrent backend's\nCOMMIT could fsync it, but I was wondering if that's sensible\noptimization to perform (so that issue_fsync() would be called for\nonly commit/rollback records). I can imagine a scenario with 10 such\nconcurrent backends running - all of them with this $thread-GUC set -\nbut that would cause 20k unnecessary fsyncs (?) -- (assuming single\nHDD with IOlat=20ms and standby capable of sync-ack < 0.1ms , that\nwould be wasted close to 400s just due to local fsyncs?). I don't have\na strong opinion or in-depth on this, but that smells like IO waste.\n\n-J.\n\n\n", "msg_date": "Wed, 1 Feb 2023 14:40:18 +0100", "msg_from": "Jakub Wartak <jakub.wartak@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Syncrep and improving latency due to WAL throttling" }, { "msg_contents": "On 2/1/23 14:40, Jakub Wartak wrote:\n> On Wed, Feb 1, 2023 at 2:14 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n> \n>>> Maybe we should avoid calling fsyncs for WAL throttling? (by teaching\n>>> HandleXLogDelayPending()->XLogFlush()->XLogWrite() to NOT to sync when\n>>> we are flushing just because of WAL thortting ?) Would that still be\n>>> safe?\n>>\n>> It's not clear to me how could this work and still be safe. 
I mean, we\n>> *must* flush the local WAL first, otherwise the replica could get ahead\n>> (if we send unflushed WAL to replica and then crash). Which would be\n>> really bad, obviously.\n> \n> Well it was just a thought: in this particular test - with no other\n> concurrent activity happening - we are fsyncing() uncommitted\n> Heap/INSERT data much earlier than the final Transaction/COMMIT WAL\n> record comes into play.\n\nRight. I see it as testing (more or less) a worst-case scenario,\nmeasuring impact on commands generating a lot of WAL. I'm not sure the\nslowdown comes from the extra fsyncs, though - I'd bet it's more about\nthe extra waits for confirmations from the replica.\n\n> I agree that some other concurrent backend's\n> COMMIT could fsync it, but I was wondering if that's sensible\n> optimization to perform (so that issue_fsync() would be called for\n> only commit/rollback records). I can imagine a scenario with 10 such\n> concurrent backends running - all of them with this $thread-GUC set -\n> but that would cause 20k unnecessary fsyncs (?) -- (assuming single\n> HDD with IOlat=20ms and standby capable of sync-ack < 0.1ms , that\n> would be wasted close to 400s just due to local fsyncs?). I don't have\n> a strong opinion or in-depth on this, but that smells like IO waste.\n> \n\nNot sure what optimization you mean, but triggering the WAL flushes from\na separate process would be beneficial. But we already do that, more or\nless - that's what WAL writer is about, right? Maybe it's not aggressive\nenough or something, not sure.\n\nBut I think the backends still have to sleep at some point, so that they\ndon't queue too much unflushed WAL - that's kinda the whole point, no?\nThe issue is more about triggering the throttling too early, before we\nhit the bandwidth limit.
Which happens simply because we don't have a\nvery good way to decide whether the latency is growing, so the patch\njust throttles everything.\n\nConsider a replica on a network link with 10ms round trip. Then commit\nlatency can't really be better than 10ms, and throttling at that point\ncan't really improve anything, it just makes it slower. Instead, we\nshould measure the latency somehow, and only throttle when it increases.\nAnd probably make it proportional to the delta (so the higher it's from\nthe \"minimal\" latency, the more we'd throttle).\n\nI'd imagine we'd measure the latency (or the wait for sync replica) over\nreasonably short time windows (1/10 of a second?), and using that to\ndrive the throttling. If the latency is below some acceptable value,\ndon't throttle at all. If it increases, start throttling.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 2 Feb 2023 11:03:06 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Syncrep and improving latency due to WAL throttling" }, { "msg_contents": "On Thu, Feb 2, 2023 at 11:03 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n\n> > I agree that some other concurrent backend's\n> > COMMIT could fsync it, but I was wondering if that's sensible\n> > optimization to perform (so that issue_fsync() would be called for\n> > only commit/rollback records). I can imagine a scenario with 10 such\n> > concurrent backends running - all of them with this $thread-GUC set -\n> > but that would cause 20k unnecessary fsyncs (?) -- (assuming single\n> > HDD with IOlat=20ms and standby capable of sync-ack < 0.1ms , that\n> > would be wasted close to 400s just due to local fsyncs?). 
I don't have\n> > a strong opinion or in-depth on this, but that smells like IO waste.\n> >\n>\n> Not sure what optimization you mean,\n\nLet me clarify, let's say something like below (on top of the v3) just\nto save IOPS:\n\n--- a/src/backend/access/transam/xlog.c\n+++ b/src/backend/access/transam/xlog.c\n@@ -2340,6 +2340,7 @@ XLogWrite(XLogwrtRqst WriteRqst, TimeLineID tli,\nbool flexible)\n if (sync_method != SYNC_METHOD_OPEN &&\n sync_method != SYNC_METHOD_OPEN_DSYNC)\n {\n+ bool openedLogFile = false;\n if (openLogFile >= 0 &&\n !XLByteInPrevSeg(LogwrtResult.Write,\nopenLogSegNo,\n\nwal_segment_size))\n@@ -2351,9 +2352,15 @@ XLogWrite(XLogwrtRqst WriteRqst, TimeLineID\ntli, bool flexible)\n openLogTLI = tli;\n openLogFile = XLogFileOpen(openLogSegNo, tli);\n ReserveExternalFD();\n+ openedLogFile = true;\n }\n\n- issue_xlog_fsync(openLogFile, openLogSegNo, tli);\n+ /* can we bypass fsyncing() XLOG from the backend if\n+ * we have been called without commit request?\n+ * usually the feature will be off here\n(XLogDelayPending=false)\n+ */\n+ if(openedLogFile == true || XLogDelayPending == false)\n+ issue_xlog_fsync(openLogFile,\nopenLogSegNo, tli);\n }\n\n+ maybe some additional logic to ensure that this micro-optimization\nfor saving IOPS would be not enabled if the backend is calling that\nXLogFlush/Write() for actual COMMIT record\n\n> But I think the backends still have to sleep at some point, so that they\n> don't queue too much unflushed WAL - that's kinda the whole point, no?\n\nYes, but it can be flushed to standby, flushed locally but not fsynced\nlocally (?) - provided that it was not COMMIT - I'm just wondering\nwhether it makes sense (Question 1)\n\n> The issue is more about triggering the throttling too early, before we\n> hit the bandwidth limit. 
Which happens simply because we don't have a\n> very good way to decide whether the latency is growing, so the patch\n> just throttles everything.\n\nMaximum TCP bandwidth limit seems to be fluctuating in the real world\nI suppose, so it couldn't be a hard limit. On the other hand I can\nimagine operators setting\n\"throttle-those-backends-if-global-WALlatencyORrate>XXX\"\n(administrative decision). That would be cool to have but yes it would\nrequire WAL latency and rate measurement first (on its own that would\nmake a very nice addition to the pg_stat_replication). But one thing\nto note would be that there could be many potential latencies (& WAL\nthroughput rates) to consider (e.g. quorum of 3 standby sync having\ndifferent latencies) - which one to choose?\n\n(Question 2) I think we have reached simply a decision point on\nwhether the WIP/PoC is good enough as it is (like Andres wanted and\nyou +1 to this) or it should work as you propose or maybe keep it as\nan idea for the future?\n\n-J.\n\n\n", "msg_date": "Thu, 2 Feb 2023 12:12:44 +0100", "msg_from": "Jakub Wartak <jakub.wartak@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Syncrep and improving latency due to WAL throttling" }, { "msg_contents": "Hi,\n\nI keep getting occasional complaints about the impact of large/bulk\ntransactions on latency of small OLTP transactions, so I'd like to\nrevive this thread a bit and move it forward.\n\nAttached is a rebased v3, followed by 0002 patch with some review\ncomments, missing comments and minor tweaks. 
More about that later ...\n\nIt's been a couple months, and there's been a fair amount of discussion\nand changes earlier, so I guess it makes sense to post a summary,\nstating the purpose (and scope), and then go through the various open\nquestions etc.\n\n\ngoals\n-----\nThe goal is to limit the impact of large transactions (producing a lot\nof WAL) on small OLTP transactions, in a cluster with a sync replica.\nImagine a backend executing single-row inserts, or something like that.\nThe commit will wait for the replica to confirm the WAL, which may be\nexpensive, but it's well determined by the network roundtrip.\n\nBut then a large transaction comes, and inserts a lot of WAL (imagine a\nCOPY which inserts 100MB of data, VACUUM, CREATE INDEX and so on). A\nsmall transaction may insert a COMMIT record right after this WAL chunk,\nand locally that's (mostly) fine. But with the sync replica it's much\nworse - we don't send WAL until it's flushed locally, and then we need\nto wait for the WAL to be sent, applied and confirmed by the replica.\nThis takes time (depending on the bandwidth), and it may not happen\nuntil the small transaction does COMMIT (because we may not flush WAL\nfrom in-progress transaction very often).\n\nJakub Wartak presented some examples of the impact when he started this\nthread, and it can be rather bad. Particularly for latency-sensitive\napplications. I plan to do more experiments with the current patch, but\nI don't have the results yet.\n\n\nscope\n-----\nNow, let's talk about scope - what the patch does not aim to do. The\npatch is explicitly intended for syncrep clusters, not async. There have\nbeen proposals to also support throttling for async replicas, logical\nreplication etc. I suppose all of that could be implemented, and I do\nsee the benefit of defining some sort of maximum lag even for async\nreplicas. 
But the agreement was to focus on the syncrep case, where it's\nparticularly painful, and perhaps extend it in the future.\n\nI believe adding throttling for physical async replication should not be\ndifficult - in principle we need to determine how far the replica got,\nand compare it to the local LSN. But there's likely complexity with\ndefining which async replicas to look at, inventing a sensible way to\nconfigure this, etc. It'd be helpful if people interested in that\nfeature took a look at this patch and tried extending etc.\n\nIt's not clear to me what to do about disconnected replicas, though. We\nmay not even know about them, if there's no slot (and how would we know\nwhat the slot is for). So maybe this would need a new GUC listing the\ninteresting replicas, and all would need to be connected. But that's an\navailability issue, because then all replicas need to be connected.\n\nI'm not sure about logical replication, but I guess we could treat it\nsimilarly to async.\n\nBut what I think would need to be different is handling of small\ntransactions. For syncrep we automatically wait for those at commit,\nwhich means automatic throttling. But for async (and logical), it's\ntrivial to cause ever-increasing lag with only tiny transactions, thanks\nto the single-process replay, so maybe we'd need to throttle those too.\n(The recovery prefetching improved this for async quite a bit, ofc.)\n\n\nimplementation\n--------------\nThe implementation is fairly straightforward, and happens in two places.\nXLogInsertRecord() decides if a throttling might be needed for this\nbackend, and then HandleXLogDelayPending() does the wait.\n\nXLogInsertRecord() checks if the backend produced certain amount of WAL\n(might be 1MB, for example). We do this because we don't want to do the\nexpensive stuff in HandleXLogDelayPending() too often (e.g. 
after every\nXLOG record).\n\nHandleXLogDelayPending() picks a suitable LSN, flushes it and then also\nwaits for the sync replica, as if it was a commit. This limits the lag,\ni.e. the amount of WAL that the small transaction will need to wait for\nto be replicated and confirmed by the replica.\n\nThere was a fair amount of discussion about how to pick the LSN. I think\nthe agreement is we certainly can't pick the current LSN (because that\nwould lead to write amplification for the partially filled page), and we\nprobably even want to backoff a bit more, to make it more likely the LSN\nis already flushed. So for example with the threshold set to 8MB we\nmight go back 1MB, or something like that. That'd still limit the lag.\n\n\nproblems\n--------\nNow let's talk about some problems - both conceptual and technical\n(essentially review comments for the patch).\n\n1) The goal of the patch is to limit the impact on latency, but the\nrelationship between WAL amounts and latency may not be linear. But we\ndon't have a good way to predict latency, and WAL lag is the only thing\nwe have, so there's that. Ultimately, it's a best effort.\n\n2) The throttling is per backend. That makes it simple, but it means\nthat it's hard to enforce a global lag limit. Imagine the limit is 8MB,\nand with a single backend that works fine - the lag should not exceed\nthe 8MB value. But if there are N backends, the lag could be up to\nN-times 8MB, I believe. That's a bit annoying, but I guess the only\nsolution would be to have some autovacuum-like cost balancing, with all\nbackends (or at least those running large stuff) doing the checks more\noften. 
I'm not sure we want to do that.\n\n3) The actual throttling (flush and wait for syncrep) happens in\nProcessInterrupts(), which mostly works but has two drawbacks:\n\n * It may not happen \"early enough\" if the backend inserts a lot of\nXLOG records without processing interrupts in between.\n\n * It may happen \"too early\" if the backend inserts enough WAL to need\nthrottling (i.e. sets XLogDelayPending), but then after processing\ninterrupts it would be busy with other stuff, not inserting more WAL.\n\nI think ideally we'd do the throttling right before inserting the next\nXLOG record, but there's no convenient place, I think. We'd need to\nannotate a lot of places, etc. So maybe ProcessInterrupts() is a\nreasonable approximation.\n\nWe may need to add CHECK_FOR_INTERRUPTS() to a couple more places, but\nthat seems reasonable.\n\n4) I'm not sure I understand why we need XactLastThrottledRecEnd. Why\ncan't we just use XLogRecEnd?\n\n5) I think the way XLogFlush() resets backendWalInserted is a bit wrong.\nImagine a backend generates a fair amount of WAL, and then calls\nXLogFlush(lsn). Why is it OK to set backendWalInserted=0 when we don't\nknow if the generated WAL was before the \"lsn\"? I suppose we don't use\nvery old lsn values for flushing, but I don't know if this drift could\naccumulate over time, or cause some other issues.\n\n6) Why did XLogInsertRecord() skip SYNCHRONOUS_COMMIT_REMOTE_FLUSH?\n\n7) I find the \"synchronous_commit_wal_throttle_threshold\" name\nannoyingly long, so I renamed it to just \"wal_throttle_threshold\". I've\nalso renamed the GUC to \"wal_throttle_after\" and I wonder if maybe it\nshould be configured in blocks, i.e. in GUC_UNIT_BLOCKS just\nlike the other _after options? But those changes are more a matter of\ntaste, feel free to ignore this.\n\n\nmissing pieces\n--------------\nThe thing that's missing is that some processes (like aggressive\nanti-wraparound autovacuum) should not be throttled. 
If people set the\nGUC in the postgresql.conf, I guess that'll affect those processes too,\nso I guess we should explicitly reset the GUC for those processes. I\nwonder if there are other cases that should not be throttled.\n\n\ntangents\n--------\nWhile discussing this with Andres a while ago, he mentioned a somewhat\northogonal idea - sending unflushed data to the replica.\n\nWe currently never send unflushed data to the replica, which makes sense\nbecause this data is not durable and if the primary crashes/restarts,\nthis data will disappear. But it also means there may be a fairly large\nchunk of WAL data that we may need to send at COMMIT and wait for the\nconfirmation.\n\nHe suggested we might actually send the data to the replica, but the\nreplica would know this data is not flushed yet and so would not do the\nrecovery etc. And at commit we could just send a request to flush,\nwithout having to transfer the data at that moment.\n\nI don't have a very good intuition about how large the effect would be,\ni.e. how much unflushed WAL data could accumulate on the primary\n(kilobytes/megabytes?), and how big is the difference between sending a\ncouple kilobytes or just a request to flush.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Sat, 4 Nov 2023 20:00:46 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Syncrep and improving latency due to WAL throttling" }, { "msg_contents": "Hi,\n\nOn 2023-11-04 20:00:46 +0100, Tomas Vondra wrote:\n> scope\n> -----\n> Now, let's talk about scope - what the patch does not aim to do. The\n> patch is explicitly intended for syncrep clusters, not async. There have\n> been proposals to also support throttling for async replicas, logical\n> replication etc. I suppose all of that could be implemented, and I do\n> see the benefit of defining some sort of maximum lag even for async\n> replicas. 
But the agreement was to focus on the syncrep case, where it's\n> particularly painful, and perhaps extend it in the future.\n\nPerhaps we should take care to make the configuration extensible in that\ndirection in the future?\n\n\nHm - is this feature really tied to replication, at all? Pretty much the same\nsituation exists without. On an ok-ish local nvme I ran pgbench with 1 client\nand -P1. Guess where I started a VACUUM (on a fully cached table, so no\ncontinuous WAL flushes):\n\nprogress: 64.0 s, 634.0 tps, lat 1.578 ms stddev 0.477, 0 failed\nprogress: 65.0 s, 634.0 tps, lat 1.577 ms stddev 0.546, 0 failed\nprogress: 66.0 s, 639.0 tps, lat 1.566 ms stddev 0.656, 0 failed\nprogress: 67.0 s, 642.0 tps, lat 1.557 ms stddev 0.273, 0 failed\nprogress: 68.0 s, 556.0 tps, lat 1.793 ms stddev 0.690, 0 failed\nprogress: 69.0 s, 281.0 tps, lat 3.568 ms stddev 1.050, 0 failed\nprogress: 70.0 s, 282.0 tps, lat 3.539 ms stddev 1.072, 0 failed\nprogress: 71.0 s, 273.0 tps, lat 3.663 ms stddev 2.602, 0 failed\nprogress: 72.0 s, 261.0 tps, lat 3.832 ms stddev 1.889, 0 failed\nprogress: 73.0 s, 268.0 tps, lat 3.738 ms stddev 0.934, 0 failed\n\nAt 32 clients we go from ~10k to 2.5k, with a full 2s of 0.\n\nSubtracting pg_current_wal_flush_lsn() from pg_current_wal_insert_lsn() the\n\"good times\" show a delay of ~8kB (note that this includes WAL records that\nare still being inserted). Once the VACUUM runs, it's ~2-3MB.\n\nThe picture with more clients is similar.\n\nIf I instead severely limit the amount of outstanding (but not the amount of\nunflushed) WAL by setting wal_buffers to 128, latency dips quite a bit less\n(down to ~400 instead of ~260 at 1 client, ~10k to ~5k at 32). Of course\nthat's ridiculous and will completely trash performance in many other cases,\nbut it shows that limiting the amount of outstanding WAL could help without\nreplication as well. 
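As a side note, the lag measurement used here ("subtracting pg_current_wal_flush_lsn() from pg_current_wal_insert_lsn()") is plain byte arithmetic on LSNs. A tiny Python illustration of that arithmetic (my own sketch, nothing from the patch - it just decodes the textual 'X/Y' LSN format):

```python
def lsn_to_int(lsn: str) -> int:
    """Decode PostgreSQL's textual LSN ('X/Y', both hex) into a byte position."""
    hi, lo = lsn.split("/")
    return (int(hi, 16) << 32) | int(lo, 16)

def wal_lag_bytes(insert_lsn: str, flush_lsn: str) -> int:
    """Unflushed WAL: pg_current_wal_insert_lsn() - pg_current_wal_flush_lsn()."""
    return lsn_to_int(insert_lsn) - lsn_to_int(flush_lsn)

# "Good times" in the pgbench run above: ~8kB of unflushed WAL.
print(wal_lag_bytes("0/3000A000", "0/30008000"))  # -> 8192
```

(In a live session the same number comes directly from SELECT pg_current_wal_insert_lsn() - pg_current_wal_flush_lsn(), since subtracting two pg_lsn values yields a byte count.)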
With remote storage, that'd likely be a bigger\ndifference.\n\n\n\n\n> problems\n> --------\n> Now let's talk about some problems - both conceptual and technical\n> (essentially review comments for the patch).\n>\n> 1) The goal of the patch is to limit the impact on latency, but the\n> relationship between WAL amounts and latency may not be linear. But we\n> don't have a good way to predict latency, and WAL lag is the only thing\n> we have, so there's that. Ultimately, it's a best effort.\n\nIt's indeed probably not linear. Realistically, to do better, we probably need\nstatistics for the specific system in question - the latency impact will\ndiffer hugely between different storage/network.\n\n\n> 2) The throttling is per backend. That makes it simple, but it means\n> that it's hard to enforce a global lag limit. Imagine the limit is 8MB,\n> and with a single backend that works fine - the lag should not exceed\n> the 8MB value. But if there are N backends, the lag could be up to\n> N-times 8MB, I believe. That's a bit annoying, but I guess the only\n> solution would be to have some autovacuum-like cost balancing, with all\n> backends (or at least those running large stuff) doing the checks more\n> often. I'm not sure we want to do that.\n\nHm. The average case is likely fine - the throttling of the different backends\nwill intersperse and flush more frequently - but the worst case is presumably\npart of the issue here. I wonder if we could deal with this by somehow\noffsetting the points at which backends flush at somehow.\n\nI doubt we want to go for something autovacuum balancing like - that doesn't\nseem to work well - but I think we could take the amount of actually unflushed\nWAL into account when deciding whether to throttle. We have the necessary\nstate in local memory IIRC. We'd have to be careful to not throttle every\nbackend at the same time, or we'll introduce latency penalties that way. 
But\nwhat if we scaled synchronous_commit_wal_throttle_threshold depending on the\namount of unflushed WAL? By still taking backendWalInserted into account, we'd\navoid throttling everyone at the same time, but still would make throttling\nmore aggressive depending on the amount of unflushed/unreplicated WAL.\n\n\n> 3) The actual throttling (flush and wait for syncrep) happens in\n> ProcessInterrupts(), which mostly works but it has two drawbacks:\n>\n> * It may not happen \"early enough\" if the backends inserts a lot of\n> XLOG records without processing interrupts in between.\n\nDoes such code exist? And if so, is there a reason not to fix said code?\n\n\n> * It may happen \"too early\" if the backend inserts enough WAL to need\n> throttling (i.e. sets XLogDelayPending), but then after processing\n> interrupts it would be busy with other stuff, not inserting more WAL.\n\n> I think ideally we'd do the throttling right before inserting the next\n> XLOG record, but there's no convenient place, I think. We'd need to\n> annotate a lot of places, etc. So maybe ProcessInterrupts() is a\n> reasonable approximation.\n\nYea, I think there's no way to do that with reasonable effort. Starting to\nwait with a bunch of lwlocks held would obviously be bad.\n\n\n> We may need to add CHECK_FOR_INTERRUPTS() to a couple more places, but\n> that seems reasonable.\n\nAnd independently beneficial.\n\n\n> missing pieces\n> --------------\n> The thing that's missing is that some processes (like aggressive\n> anti-wraparound autovacuum) should not be throttled. If people set the\n> GUC in the postgresql.conf, I guess that'll affect those processes too,\n> so I guess we should explicitly reset the GUC for those processes. I\n> wonder if there are other cases that should not be throttled.\n\nHm, that's a bit hairy. 
If we just exempt it we'll actually slow down everyone\nelse even further, even though the goal of the feature might be the opposite.\nI don't think that's warranted for anti-wraparound vacuums - they're normal. I\nthink failsafe vacuums are a different story - there we really just don't care\nabout impacting other backends, the goal is to prevent moving the cluster to\nread only pretty soon.\n\n\n> tangents\n> --------\n> While discussing this with Andres a while ago, he mentioned a somewhat\n> orthogonal idea - sending unflushed data to the replica.\n>\n> We currently never send unflushed data to the replica, which makes sense\n> because this data is not durable and if the primary crashes/restarts,\n> this data will disappear. But it also means there may be a fairly large\n> chunk of WAL data that we may need to send at COMMIT and wait for the\n> confirmation.\n>\n> He suggested we might actually send the data to the replica, but the\n> replica would know this data is not flushed yet and so would not do the\n> recovery etc. And at commit we could just send a request to flush,\n> without having to transfer the data at that moment.\n>\n> I don't have a very good intuition about how large the effect would be,\n> i.e. how much unflushed WAL data could accumulate on the primary\n> (kilobytes/megabytes?),\n\nObviously heavily depends on the workloads. If you have anything with bulk\nwrites it can be many megabytes.\n\n\n> and how big is the difference between sending a couple kilobytes or just a\n> request to flush.\n\nObviously heavily depends on the network...\n\n\nI used netperf's tcp_rr between my workstation and my laptop on a local 10Gbit\nnetwork (albeit with a crappy external card for my laptop), to put some\nnumbers to this. 
I used -r $s,100 to test sending a variable sized data to the\nother size, with the other side always responding with 100 bytes (assuming\nthat'd more than fit a feedback response).\n\nCommand:\nfields=\"request_size,response_size,min_latency,mean_latency,max_latency,p99_latency,transaction_rate\"; echo $fields; for s in 10 100 1000 10000 100000 1000000;do netperf -P0 -t TCP_RR -l 3 -H alap5 -- -r $s,100 -o \"$fields\";done\n\n10gbe:\n\nrequest_size response_size min_latency mean_latency max_latency p99_latency transaction_rate\n10 100 43 64.30 390 96 15526.084\n100 100 57 75.12 428 122 13286.602\n1000 100 47 74.41 270 108 13412.125\n10000 100 89 114.63 712 152 8700.643\n100000 100 167 255.90 584 312 3903.516\n1000000 100 891 1015.99 2470 1143 983.708\n\n\nSame hosts, but with my workstation forced to use a 1gbit connection:\n\nrequest_size response_size min_latency mean_latency max_latency p99_latency transaction_rate\n10 100 78 131.18 2425 257 7613.416\n100 100 81 129.25 425 255 7727.473\n1000 100 100 162.12 1444 266 6161.388\n10000 100 310 686.19 1797 927 1456.204\n100000 100 1006 1114.20 1472 1199 896.770\n1000000 100 8338 8420.96 8827 8498 118.410\n\nI haven't checked, but I'd assume that 100bytes back and forth should easily\nfit a new message to update LSNs and the existing feedback response. 
Even just\nthe difference between sending 100 bytes and sending 10k (a bit more than a\nsingle WAL page) is pretty significant on a 1gbit network.\n\nOf course, the relatively low latency between these systems makes this more\npronounced than if this were a cross regional or even cross continental link,\nwere the roundtrip latency is more likely to be dominated by distance rather\nthan throughput.\n\nTesting between europe and western US:\nrequest_size response_size min_latency mean_latency max_latency p99_latency transaction_rate\n10 100 157934 167627.12 317705 160000 5.652\n100 100 161294 171323.59 324017 170000 5.530\n1000 100 161392 171521.82 324629 170000 5.524\n10000 100 163651 173651.06 328488 170000 5.456\n100000 100 166344 198070.20 638205 170000 4.781\n1000000 100 225555 361166.12 1302368 240000 2.568\n\n\nNo meaningful difference before getting to 100k. But it's pretty easy to lag\nby 100k on a longer distance link...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 7 Nov 2023 22:40:08 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Syncrep and improving latency due to WAL throttling" }, { "msg_contents": "On 11/8/23 07:40, Andres Freund wrote:\n> Hi,\n> \n> On 2023-11-04 20:00:46 +0100, Tomas Vondra wrote:\n>> scope\n>> -----\n>> Now, let's talk about scope - what the patch does not aim to do. The\n>> patch is explicitly intended for syncrep clusters, not async. There have\n>> been proposals to also support throttling for async replicas, logical\n>> replication etc. I suppose all of that could be implemented, and I do\n>> see the benefit of defining some sort of maximum lag even for async\n>> replicas. 
But the agreement was to focus on the syncrep case, where it's\n>> particularly painful, and perhaps extend it in the future.\n> \n> Perhaps we should take care to make the configuration extensible in that\n> direction in the future?\n> \n\nYes, if we can come up with a suitable configuration, that would work\nfor the other use cases. I don't have a very good idea what to do about\nreplicas that may not be connected, of have connected but need to catch\nup. IMHO it would be silly to turn this into \"almost a sync rep\".\n\n> \n> Hm - is this feature really tied to replication, at all? Pretty much the same\n> situation exists without. On an ok-ish local nvme I ran pgbench with 1 client\n> and -P1. Guess where I started a VACUUM (on a fully cached table, so no\n> continuous WAL flushes):\n> \n> progress: 64.0 s, 634.0 tps, lat 1.578 ms stddev 0.477, 0 failed\n> progress: 65.0 s, 634.0 tps, lat 1.577 ms stddev 0.546, 0 failed\n> progress: 66.0 s, 639.0 tps, lat 1.566 ms stddev 0.656, 0 failed\n> progress: 67.0 s, 642.0 tps, lat 1.557 ms stddev 0.273, 0 failed\n> progress: 68.0 s, 556.0 tps, lat 1.793 ms stddev 0.690, 0 failed\n> progress: 69.0 s, 281.0 tps, lat 3.568 ms stddev 1.050, 0 failed\n> progress: 70.0 s, 282.0 tps, lat 3.539 ms stddev 1.072, 0 failed\n> progress: 71.0 s, 273.0 tps, lat 3.663 ms stddev 2.602, 0 failed\n> progress: 72.0 s, 261.0 tps, lat 3.832 ms stddev 1.889, 0 failed\n> progress: 73.0 s, 268.0 tps, lat 3.738 ms stddev 0.934, 0 failed\n> \n> At 32 clients we go from ~10k to 2.5k, with a full 2s of 0.\n> \n> Subtracting pg_current_wal_flush_lsn() from pg_current_wal_insert_lsn() the\n> \"good times\" show a delay of ~8kB (note that this includes WAL records that\n> are still being inserted). 
Once the VACUUM runs, it's ~2-3MB.\n> \n> The picture with more clients is similar.\n> \n> If I instead severely limit the amount of outstanding (but not the amount of\n> unflushed) WAL by setting wal_buffers to 128, latency dips quite a bit less\n> (down to ~400 instead of ~260 at 1 client, ~10k to ~5k at 32). Of course\n> that's ridiculous and will completely trash performance in many other cases,\n> but it shows that limiting the amount of outstanding WAL could help without\n> replication as well. With remote storage, that'd likely be a bigger\n> difference.\n> \n\nYeah, that's an interesting idea. I think the idea of enforcing \"maximum\nlag\" is somewhat general, the difference is against what LSN the lag is\nmeasured. For the syncrep case it was about LSN confirmed by the\nreplica, what you described would measure it for either flush LSN or\nwrite LSN (which would be the \"outstanding\" case I think).\n\nI guess the remote storage is somewhat similar to the syncrep case, in\nthat the lag includes some network communication.\n\n> \n>> problems\n>> --------\n>> Now let's talk about some problems - both conceptual and technical\n>> (essentially review comments for the patch).\n>>\n>> 1) The goal of the patch is to limit the impact on latency, but the\n>> relationship between WAL amounts and latency may not be linear. But we\n>> don't have a good way to predict latency, and WAL lag is the only thing\n>> we have, so there's that. Ultimately, it's a best effort.\n> \n> It's indeed probably not linear. Realistically, to do better, we probably need\n> statistics for the specific system in question - the latency impact will\n> differ hugely between different storage/network.\n> \n\nTrue. I can imagine two ways to measure that.\n\nWe could have a standalone tool similar to pg_test_fsync that would\nmimic how we write/flush WAL, and measure the latency for different\namounts flushed data. 
The DBA would then be responsible for somehow\nusing this to configure the database (perhaps the tool could calculate\nsome \"optimal\" value to flush).\n\nAlternatively, we could collect timing in XLogFlush, so that we'd track\nthe amount of data to flush + timing, and then use that to calculate\nexpected latency (e.g. by binning by data size and using average latency\nfor each bin). And then use that, somehow.\n\nSo you could say - maximum commit latency is 10ms, and the system would\nbe able to estimate that the maximum amount of unflushed WAL is 256kB,\nand it'd enforce this distance.\n\nStill only best effort, no guarantees, of course.\n\n> \n>> 2) The throttling is per backend. That makes it simple, but it means\n>> that it's hard to enforce a global lag limit. Imagine the limit is 8MB,\n>> and with a single backend that works fine - the lag should not exceed\n>> the 8MB value. But if there are N backends, the lag could be up to\n>> N-times 8MB, I believe. That's a bit annoying, but I guess the only\n>> solution would be to have some autovacuum-like cost balancing, with all\n>> backends (or at least those running large stuff) doing the checks more\n>> often. I'm not sure we want to do that.\n> \n> Hm. The average case is likely fine - the throttling of the different backends\n> will intersperse and flush more frequently - but the worst case is presumably\n> part of the issue here. 
I wonder if we could deal with this by somehow\n> offsetting the points at which backends flush at somehow.\n> \n\nIf I understand correctly, you want to ensure the backends start\nmeasuring the WAL from different LSNs, in order to \"distribute\" them\nuniformly within the WAL (and not \"coordinate\" them, which leads to\nhigher lag).\n\nI guess we could do that, say by flushing only up to\n\n  hash(pid) % maximum_allowed_lag\n\nI'm not sure that'll really work, especially if the backends are\nsomewhat naturally \"correlated\".\n\nMaybe it could work if the backends explicitly coordinated the flushes\nto distribute them, but then how is that different from just doing what\nautovacuum-like costing does in principle.\n\nHowever, perhaps there's an \"adaptive\" way to do this - each backend\nknows how much WAL it produced since the last flush LSN, and it can\neasily measure the actual lag (unflushed WAL). It could compare those,\nand estimate what fraction of the lag it's likely responsible for. And\nthen adjust the \"flush distance\" based on that.\n\nImagine we aim for 1MB unflushed WAL, and you have two backends that\nhappen to execute at the same time. They both generate 1MB of WAL and\nhit the throttling code at about the same time. They discover the actual\nlag is not the desired 1MB but 2MB, so (requested_lag/actual_lag) = 0.5,\nand they'd adjust the flush distance to 1/2MB. And from that point we\nknow the lag is 1MB even with two backends.\n\nThen one of the backends terminates, and the other backend eventually\nhits the 1/2MB limit again, but the desired lag is 1MB, and it doubles\nthe distance again.\n\nOf course, in practice the behavior would be more complicated, thanks to\nbackends that generate WAL but don't really hit the threshold.\n\nThere'd probably also be some sort of ramp-up, i.e. the backend would not\nstart with the \"full\" 1MB limit, but perhaps something lower. 
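To put rough numbers on the feedback rule sketched above, here's a toy simulation (hypothetical Python, purely illustrative - the function name and the minimum-distance floor are my inventions, not anything from the patch):

```python
MB = 1024 * 1024

def adjust_distance(distance: int, requested_lag: int, actual_lag: int,
                    min_distance: int = 64 * 1024) -> int:
    """Scale the per-backend flush distance by (requested_lag / actual_lag)."""
    if actual_lag <= 0:
        return distance
    scaled = int(distance * requested_lag / actual_lag)
    return max(scaled, min_distance)

# Two backends hit the throttle at the same time: each produced 1MB,
# so the observed lag is 2MB against a requested 1MB -> halve the distance.
d = adjust_distance(1 * MB, requested_lag=1 * MB, actual_lag=2 * MB)
print(d)  # -> 524288 (1/2MB), so the combined lag stays at ~1MB

# One backend exits; the survivor now observes only its own 1/2MB of lag
# and doubles the distance back.
d = adjust_distance(d, requested_lag=1 * MB, actual_lag=d)
print(d)  # -> 1048576 (1MB)
```

A real implementation would also need the ramp-up mentioned above, so a freshly started backend doesn't begin with the full distance.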
Would need\nto be careful to be high enough to ignore the OLTP transactions, though.\n\n\n> I doubt we want to go for something autovacuum balancing like - that doesn't\n> seem to work well - but I think we could take the amount of actually unflushed\n> WAL into account when deciding whether to throttle. We have the necessary\n> state in local memory IIRC. We'd have to be careful to not throttle every\n> backend at the same time, or we'll introduce latency penalties that way. But\n> what if we scaled synchronous_commit_wal_throttle_threshold depending on the\n> amount of unflushed WAL? By still taking backendWalInserted into account, we'd\n> avoid throttling everyone at the same time, but still would make throttling\n> more aggressive depending on the amount of unflushed/unreplicated WAL.\n> \n\nOh! Perhaps similar to the adaptive behavior I explained above?\n\n> \n>> 3) The actual throttling (flush and wait for syncrep) happens in\n>> ProcessInterrupts(), which mostly works but it has two drawbacks:\n>>\n>> * It may not happen \"early enough\" if the backends inserts a lot of\n>> XLOG records without processing interrupts in between.\n> \n> Does such code exist? And if so, is there a reason not to fix said code?\n> \n\nNot sure. I thought maybe index builds might do something like that, but\nit doesn't seem to be the case (at least for the built-in indexes). But\nif adding CHECK_FOR_INTERRUPTS to more places is an acceptable fix, I'm\nOK with that.\n\n> \n>> * It may happen \"too early\" if the backend inserts enough WAL to need\n>> throttling (i.e. sets XLogDelayPending), but then after processing\n>> interrupts it would be busy with other stuff, not inserting more WAL.\n> \n>> I think ideally we'd do the throttling right before inserting the next\n>> XLOG record, but there's no convenient place, I think. We'd need to\n>> annotate a lot of places, etc. 
So maybe ProcessInterrupts() is a\n>> reasonable approximation.\n> \n> Yea, I think there's no way to do that with reasonable effort. Starting to\n> wait with a bunch of lwlocks held would obviously be bad.\n> \n\nOK.\n\n> \n>> We may need to add CHECK_FOR_INTERRUPTS() to a couple more places, but\n>> that seems reasonable.\n> \n> And independently beneficial.\n> \n\nOK.\n\n> \n>> missing pieces\n>> --------------\n>> The thing that's missing is that some processes (like aggressive\n>> anti-wraparound autovacuum) should not be throttled. If people set the\n>> GUC in the postgresql.conf, I guess that'll affect those processes too,\n>> so I guess we should explicitly reset the GUC for those processes. I\n>> wonder if there are other cases that should not be throttled.\n> \n> Hm, that's a bit hairy. If we just exempt it we'll actually slow down everyone\n> else even further, even though the goal of the feature might be the opposite.\n> I don't think that's warranted for anti-wraparound vacuums - they're normal. I\n> think failsafe vacuums are a different story - there we really just don't care\n> about impacting other backends, the goal is to prevent moving the cluster to\n> read only pretty soon.\n> \n\nRight, I confused those two autovacuum modes.\n\n> \n>> tangents\n>> --------\n>> While discussing this with Andres a while ago, he mentioned a somewhat\n>> orthogonal idea - sending unflushed data to the replica.\n>>\n>> We currently never send unflushed data to the replica, which makes sense\n>> because this data is not durable and if the primary crashes/restarts,\n>> this data will disappear. But it also means there may be a fairly large\n>> chunk of WAL data that we may need to send at COMMIT and wait for the\n>> confirmation.\n>>\n>> He suggested we might actually send the data to the replica, but the\n>> replica would know this data is not flushed yet and so would not do the\n>> recovery etc. 
And at commit we could just send a request to flush,\n>> without having to transfer the data at that moment.\n>>\n>> I don't have a very good intuition about how large the effect would be,\n>> i.e. how much unflushed WAL data could accumulate on the primary\n>> (kilobytes/megabytes?),\n> \n> Obviously heavily depends on the workloads. If you have anything with bulk\n> writes it can be many megabytes.\n> \n> \n>> and how big is the difference between sending a couple kilobytes or just a\n>> request to flush.\n> \n> Obviously heavily depends on the network...\n> \n\nI know it depends on workload/network. I'm merely saying I don't have a\nvery good intuition what value would be suitable for a particular\nworkload / network.\n\n> \n> I used netperf's tcp_rr between my workstation and my laptop on a local 10Gbit\n> network (albeit with a crappy external card for my laptop), to put some\n> numbers to this. I used -r $s,100 to test sending a variable sized data to the\n> other size, with the other side always responding with 100 bytes (assuming\n> that'd more than fit a feedback response).\n> \n> Command:\n> fields=\"request_size,response_size,min_latency,mean_latency,max_latency,p99_latency,transaction_rate\"; echo $fields; for s in 10 100 1000 10000 100000 1000000;do netperf -P0 -t TCP_RR -l 3 -H alap5 -- -r $s,100 -o \"$fields\";done\n> \n> 10gbe:\n> \n> request_size response_size min_latency mean_latency max_latency p99_latency transaction_rate\n> 10 100 43 64.30 390 96 15526.084\n> 100 100 57 75.12 428 122 13286.602\n> 1000 100 47 74.41 270 108 13412.125\n> 10000 100 89 114.63 712 152 8700.643\n> 100000 100 167 255.90 584 312 3903.516\n> 1000000 100 891 1015.99 2470 1143 983.708\n> \n> \n> Same hosts, but with my workstation forced to use a 1gbit connection:\n> \n> request_size response_size min_latency mean_latency max_latency p99_latency transaction_rate\n> 10 100 78 131.18 2425 257 7613.416\n> 100 100 81 129.25 425 255 7727.473\n> 1000 100 100 162.12 1444 266 
6161.388\n> 10000 100 310 686.19 1797 927 1456.204\n> 100000 100 1006 1114.20 1472 1199 896.770\n> 1000000 100 8338 8420.96 8827 8498 118.410\n> \n> I haven't checked, but I'd assume that 100bytes back and forth should easily\n> fit a new message to update LSNs and the existing feedback response. Even just\n> the difference between sending 100 bytes and sending 10k (a bit more than a\n> single WAL page) is pretty significant on a 1gbit network.\n> \n\nI'm on decaf so I may be a bit slow, but it's not very clear to me what\nconclusion to draw from these numbers. What is the takeaway?\n\nMy understanding is that in both cases the latency is initially fairly\nstable, independent of the request size. This applies to request up to\n~1000B. And then the latency starts increasing fairly quickly, even\nthough it shouldn't hit the bandwidth (except maybe the 1MB requests).\n\nI don't think it says we should be replicating WAL in tiny chunks,\nbecause if you need to send a chunk of data it's always more efficient\nto send it at once (compared to sending multiple smaller pieces). But if\nwe manage to send most of this \"in the background\", only leaving the\nlast small bit to be sent at the very end, that'd help.\n\n> Of course, the relatively low latency between these systems makes this more\n> pronounced than if this were a cross regional or even cross continental link,\n> were the roundtrip latency is more likely to be dominated by distance rather\n> than throughput.\n> \n> Testing between europe and western US:\n> request_size response_size min_latency mean_latency max_latency p99_latency transaction_rate\n> 10 100 157934 167627.12 317705 160000 5.652\n> 100 100 161294 171323.59 324017 170000 5.530\n> 1000 100 161392 171521.82 324629 170000 5.524\n> 10000 100 163651 173651.06 328488 170000 5.456\n> 100000 100 166344 198070.20 638205 170000 4.781\n> 1000000 100 225555 361166.12 1302368 240000 2.568\n> \n> \n> No meaningful difference before getting to 100k. 
But it's pretty easy to lag\n> by 100k on a longer distance link...\n> \n\nRight.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 8 Nov 2023 13:59:55 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Syncrep and improving latency due to WAL throttling" }, { "msg_contents": "Hi,\n\nOn 2023-11-08 13:59:55 +0100, Tomas Vondra wrote:\n> > I used netperf's tcp_rr between my workstation and my laptop on a local 10Gbit\n> > network (albeit with a crappy external card for my laptop), to put some\n> > numbers to this. I used -r $s,100 to test sending a variable sized data to the\n> > other size, with the other side always responding with 100 bytes (assuming\n> > that'd more than fit a feedback response).\n> >\n> > Command:\n> > fields=\"request_size,response_size,min_latency,mean_latency,max_latency,p99_latency,transaction_rate\"; echo $fields; for s in 10 100 1000 10000 100000 1000000;do netperf -P0 -t TCP_RR -l 3 -H alap5 -- -r $s,100 -o \"$fields\";done\n> >\n> > 10gbe:\n> >\n> > request_size response_size min_latency mean_latency max_latency p99_latency transaction_rate\n> > 10 100 43 64.30 390 96 15526.084\n> > 100 100 57 75.12 428 122 13286.602\n> > 1000 100 47 74.41 270 108 13412.125\n> > 10000 100 89 114.63 712 152 8700.643\n> > 100000 100 167 255.90 584 312 3903.516\n> > 1000000 100 891 1015.99 2470 1143 983.708\n> >\n> >\n> > Same hosts, but with my workstation forced to use a 1gbit connection:\n> >\n> > request_size response_size min_latency mean_latency max_latency p99_latency transaction_rate\n> > 10 100 78 131.18 2425 257 7613.416\n> > 100 100 81 129.25 425 255 7727.473\n> > 1000 100 100 162.12 1444 266 6161.388\n> > 10000 100 310 686.19 1797 927 1456.204\n> > 100000 100 1006 1114.20 1472 1199 896.770\n> > 1000000 100 8338 8420.96 8827 8498 118.410\n\nLooks like the 1gbit numbers were somewhat bogus-ified due 
to having configured\njumbo frames and some network component doing something odd with that\n(handling them in software maybe?).\n\n10gbe:\nrequest_size response_size min_latency mean_latency max_latency p99_latency transaction_rate\n10\t\t100\t\t56\t\t68.56\t\t483\t\t87\t\t14562.476\n100\t\t100\t\t57\t\t75.68\t\t353\t\t123\t\t13185.485\n1000\t\t100\t\t60\t\t71.97\t\t391\t\t94\t\t13870.659\n10000\t\t100\t\t58\t\t92.42\t\t489\t\t140\t\t10798.444\n100000\t\t100\t\t184\t\t260.48\t\t1141\t\t338\t\t3834.504\n1000000\t\t100\t\t926\t\t1071.46\t\t2012\t\t1466\t\t933.009\n\n1gbe\nrequest_size response_size min_latency mean_latency max_latency p99_latency transaction_rate\n10\t\t100\t\t77\t\t132.19\t\t1097\t\t257\t\t7555.420\n100\t\t100\t\t79\t\t127.85\t\t534\t\t249\t\t7810.862\n1000\t\t100\t\t98\t\t155.91\t\t966\t\t265\t\t6406.818\n10000\t\t100\t\t176\t\t235.37\t\t1451\t\t314\t\t4245.304\n100000\t\t100\t\t944\t\t1022.00\t\t1380\t\t1148\t\t977.930\n1000000\t\t100\t\t8649\t\t8768.42\t\t9018\t\t8895\t\t113.703\n\n\n> > I haven't checked, but I'd assume that 100bytes back and forth should easily\n> > fit a new message to update LSNs and the existing feedback response. Even just\n> > the difference between sending 100 bytes and sending 10k (a bit more than a\n> > single WAL page) is pretty significant on a 1gbit network.\n> >\n>\n> I'm on decaf so I may be a bit slow, but it's not very clear to me what\n> conclusion to draw from these numbers. What is the takeaway?\n>\n> My understanding is that in both cases the latency is initially fairly\n> stable, independent of the request size. This applies to request up to\n> ~1000B. And then the latency starts increasing fairly quickly, even\n> though it shouldn't hit the bandwidth (except maybe the 1MB requests).\n\nExcept for the smallest end, these are bandwidth related, I think. Converting\n1gbit/s to bytes/us is 125 bytes / us - before tcp/ip overhead. 
Even leaving\nthe overhead aside, 10kB/100kB outstanding take ~80us/800us to send on\n1gbit. If you subtract the minimum latency of about 130us, that's nearly all of\nthe latency.\n\nThe reason this matters is that the numbers show that the latency of having to\nsend a small message with updated positions is far smaller than having to send\nall the outstanding data. Even having to send a single WAL page over the\nnetwork ~doubles the latency of the response on 1gbit! Of course the impact\nis smaller on 10gbit, but even there latency substantially increases around\n100kB of outstanding data.\n\nIn a local pgbench with 32 clients I see WAL write sizes between 8kB and\n~220kB. Being able to stream those out before the local flush completed\ntherefore seems likely to reduce synchronous_commit overhead substantially.\n\n\n> I don't think it says we should be replicating WAL in tiny chunks,\n> because if you need to send a chunk of data it's always more efficient\n> to send it at once (compared to sending multiple smaller pieces).\n\nI don't think that's a very large factor for network data, once your minimal\ndata size is ~8kB (or ~4kB if we lower wal_block_size). TCP messages will\nget chunked into something smaller anyway and small messages don't need to get\nacknowledged individually. Sending more data at once is good for CPU\nefficiency (reducing syscall and network device overhead), but doesn't do much\nfor throughput.\n\nSending 4kB of data in each send() in a bandwidth oriented test already gets\nto ~9.3gbit/s in my network. That's close to the maximum attainable with normal\nframing. 
If I change the mtu back to 9000 I get 9.89 gbit/s, again very close\nto the theoretical max.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 8 Nov 2023 09:11:31 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Syncrep and improving latency due to WAL throttling" }, { "msg_contents": "\n\nOn 11/8/23 18:11, Andres Freund wrote:\n> Hi,\n> \n> On 2023-11-08 13:59:55 +0100, Tomas Vondra wrote:\n>>> I used netperf's tcp_rr between my workstation and my laptop on a local 10Gbit\n>>> network (albeit with a crappy external card for my laptop), to put some\n>>> numbers to this. I used -r $s,100 to test sending a variable sized data to the\n>>> other size, with the other side always responding with 100 bytes (assuming\n>>> that'd more than fit a feedback response).\n>>>\n>>> Command:\n>>> fields=\"request_size,response_size,min_latency,mean_latency,max_latency,p99_latency,transaction_rate\"; echo $fields; for s in 10 100 1000 10000 100000 1000000;do netperf -P0 -t TCP_RR -l 3 -H alap5 -- -r $s,100 -o \"$fields\";done\n>>>\n>>> 10gbe:\n>>>\n>>> request_size response_size min_latency mean_latency max_latency p99_latency transaction_rate\n>>> 10 100 43 64.30 390 96 15526.084\n>>> 100 100 57 75.12 428 122 13286.602\n>>> 1000 100 47 74.41 270 108 13412.125\n>>> 10000 100 89 114.63 712 152 8700.643\n>>> 100000 100 167 255.90 584 312 3903.516\n>>> 1000000 100 891 1015.99 2470 1143 983.708\n>>>\n>>>\n>>> Same hosts, but with my workstation forced to use a 1gbit connection:\n>>>\n>>> request_size response_size min_latency mean_latency max_latency p99_latency transaction_rate\n>>> 10 100 78 131.18 2425 257 7613.416\n>>> 100 100 81 129.25 425 255 7727.473\n>>> 1000 100 100 162.12 1444 266 6161.388\n>>> 10000 100 310 686.19 1797 927 1456.204\n>>> 100000 100 1006 1114.20 1472 1199 896.770\n>>> 1000000 100 8338 8420.96 8827 8498 118.410\n> \n> Looks like the 1gbit numbers were somewhat bogus-ified due having configured\n> jumbo 
frames and some network component doing something odd with that\n> (handling them in software maybe?).\n> \n> 10gbe:\n> request_size response_size min_latency mean_latency max_latency p99_latency transaction_rate\n> 10\t\t100\t\t56\t\t68.56\t\t483\t\t87\t\t14562.476\n> 100\t\t100\t\t57\t\t75.68\t\t353\t\t123\t\t13185.485\n> 1000\t\t100\t\t60\t\t71.97\t\t391\t\t94\t\t13870.659\n> 10000\t\t100\t\t58\t\t92.42\t\t489\t\t140\t\t10798.444\n> 100000\t\t100\t\t184\t\t260.48\t\t1141\t\t338\t\t3834.504\n> 1000000\t\t100\t\t926\t\t1071.46\t\t2012\t\t1466\t\t933.009\n> \n> 1gbe\n> request_size response_size min_latency mean_latency max_latency p99_latency transaction_rate\n> 10\t\t100\t\t77\t\t132.19\t\t1097\t\t257\t\t7555.420\n> 100\t\t100\t\t79\t\t127.85\t\t534\t\t249\t\t7810.862\n> 1000\t\t100\t\t98\t\t155.91\t\t966\t\t265\t\t6406.818\n> 10000\t\t100\t\t176\t\t235.37\t\t1451\t\t314\t\t4245.304\n> 100000\t\t100\t\t944\t\t1022.00\t\t1380\t\t1148\t\t977.930\n> 1000000\t\t100\t\t8649\t\t8768.42\t\t9018\t\t8895\t\t113.703\n> \n> \n>>> I haven't checked, but I'd assume that 100bytes back and forth should easily\n>>> fit a new message to update LSNs and the existing feedback response. Even just\n>>> the difference between sending 100 bytes and sending 10k (a bit more than a\n>>> single WAL page) is pretty significant on a 1gbit network.\n>>>\n>>\n>> I'm on decaf so I may be a bit slow, but it's not very clear to me what\n>> conclusion to draw from these numbers. What is the takeaway?\n>>\n>> My understanding is that in both cases the latency is initially fairly\n>> stable, independent of the request size. This applies to request up to\n>> ~1000B. And then the latency starts increasing fairly quickly, even\n>> though it shouldn't hit the bandwidth (except maybe the 1MB requests).\n> \n> Except for the smallest end, these are bandwidth related, I think. Converting\n> 1gbit/s to bytes/us is 125 bytes / us - before tcp/ip overhead. 
Even leaving\n> the overhead aside, 10kB/100kB outstanding take ~80us/800us to send on\n> 1gbit. If you subtract the minmum latency of about 130us, that's nearly all of\n> the latency.\n> \n\nMaybe I don't understand what you mean \"bandwidth related\" but surely\nthe smaller requests are not limited by bandwidth. I mean, 100B and 1kB\n(and even 10kB) requests have almost the same transaction rate, yet\nthere's an order of magnitude difference in bandwidth (sure, there's\noverhead, but this much magnitude?).\n\nOn the higher end, sure, that seems bandwidth related. But for 100kB,\nit's still just ~50% of the 1Gbps.\n\n> The reason this matters is that the numbers show that the latency of having to\n> send a small message with updated positions is far smaller than having to send\n> all the outstanding data. Even having to send a single WAL page over the\n> network ~doubles the latency of the response on 1gbit! Of course the impact\n> is smaller on 10gbit, but even there latency substantially increases around\n> 100kB of outstanding data.\n\nUnderstood. I wonder if this is one of the things we'd need to measure\nto adjust the write size (i.e. how eagerly to write WAL to disk / over\nnetwork). Essentially, we'd get the size where the latency starts\nincreasing much faster, and try to write WAL faster than that.\n\nI wonder if storage (not network) has a similar pattern.\n\n> In a local pgbench with 32 clients I see WAL write sizes between 8kB and\n> ~220kB. Being able to stream those out before the local flush completed\n> therefore seems likely to reduce synchronous_commit overhead substantially.\n> \n\nYeah, those writes are certainly too large. 
If we can write them\nearlier, and then only do smaller messages to write the remaining bit\nand the positions, that'd help a lot.\n\n> \n>> I don't think it says we should be replicating WAL in tiny chunks,\n>> because if you need to send a chunk of data it's always more efficient\n>> to send it at once (compared to sending multiple smaller pieces).\n> \n> I don't think that's a very large factor for network data, once your minimal\n> data sizes is ~8kB (or ~4kB if we lower wal_block_size). TCP messsages will\n> get chunked into something smaller anyway and small messages don't need to get\n> acknowledged individually. Sending more data at once is good for CPU\n> efficiency (reducing syscall and network device overhead), but doesn't do much\n> for throughput.\n> \n> Sending 4kB of data in each send() in a bandwidth oriented test already gets\n> to ~9.3gbit/s in my network. That's close to the maximum atainable with normal\n> framing. If I change the mtu back to 9000 I get 9.89 gbit/s, again very close\n> to the theoretical max.\n> \n\nGot it.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 8 Nov 2023 19:29:38 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Syncrep and improving latency due to WAL throttling" }, { "msg_contents": "Hi,\n\nOn 2023-11-08 19:29:38 +0100, Tomas Vondra wrote:\n> >>> I haven't checked, but I'd assume that 100bytes back and forth should easily\n> >>> fit a new message to update LSNs and the existing feedback response. Even just\n> >>> the difference between sending 100 bytes and sending 10k (a bit more than a\n> >>> single WAL page) is pretty significant on a 1gbit network.\n> >>>\n> >>\n> >> I'm on decaf so I may be a bit slow, but it's not very clear to me what\n> >> conclusion to draw from these numbers. 
What is the takeaway?\n> >>\n> >> My understanding is that in both cases the latency is initially fairly\n> >> stable, independent of the request size. This applies to request up to\n> >> ~1000B. And then the latency starts increasing fairly quickly, even\n> >> though it shouldn't hit the bandwidth (except maybe the 1MB requests).\n> >\n> > Except for the smallest end, these are bandwidth related, I think. Converting\n> > 1gbit/s to bytes/us is 125 bytes / us - before tcp/ip overhead. Even leaving\n> > the overhead aside, 10kB/100kB outstanding take ~80us/800us to send on\n> > 1gbit. If you subtract the minmum latency of about 130us, that's nearly all of\n> > the latency.\n> >\n>\n> Maybe I don't understand what you mean \"bandwidth related\" but surely\n> the smaller requests are not limited by bandwidth. I mean, 100B and 1kB\n> (and even 10kB) requests have almost the same transaction rate, yet\n> there's an order of magnitude difference in bandwidth (sure, there's\n> overhead, but this much magnitude?).\n\nWhat I mean is that bandwidth is the biggest factor determining latency in the\nnumbers I showed (due to decent sized packet and it being a local network). At\nline rate it takes ~80us to send 10kB over 1gbit ethernet. So a roundtrip\ncannot be faster than 80us, even if everything else added zero latency.\n\nThat's why my numbers show such a lower latency for the 10gbit network - it's\nsimply faster to put even small-ish amounts of data onto the wire.\n\nThat does not mean that the link is fully utilized over time - because we wait\nfor the other side to receive the data, wake up a user space process, send\nback 100 bytes, wait for the data to be transmitted, and then wake up a process,\nthere are periods where the link in one direction is largely idle. 
But in\ncase of a 10kB packet on the 1gbit network, yes, we are bandwidth limited for\n~80us (or perhaps more interestingly, we are bandwidth limited for 0.8ms when\nsending 100kB).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 8 Nov 2023 13:21:25 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Syncrep and improving latency due to WAL throttling" }, { "msg_contents": "Hi,\n\nSince the last patch version I've done a number of experiments with this\nthrottling idea, so let me share some of the ideas and results, and see\nwhere that gets us.\n\nThe patch versions so far tied everything to syncrep - commit latency\nwith sync replica was the original motivation, so this makes sense. But\nwhile thinking about this and discussing this with a couple people, I've\nbeen wondering why to limit this to just that particular option. There's\na couple other places in the WAL write path where we might do a similar\nthing (i.e. wait) or be a bit more aggressive (and do a write/flush),\ndepending on circumstances.\n\nIf I simplify this a bit, there are about 3 WAL positions that I could\nthink of:\n\n- write LSN (how far we wrote WAL to disk)\n- flush LSN (how far we flushed WAL to local disk)\n- syncrep LSN (how far the sync replica confirmed WAL)\n\nSo, why couldn't there be a similar \"throttling threshold\" for these\nevents too? 
Imagine we have three GUCs, with values satisfying this:\n\n  wal_write_after < wal_flush_after_local < wal_flush_after_remote\n\nand this meaning:\n\n  wal_write_after - if a backend generates this amount of WAL, it will\n                    write the completed WAL (but only whole pages)\n\n  wal_flush_after_local - if a backend generates this amount of WAL, it\n                          will not only write the WAL, but also issue a\n                          flush (if still needed)\n\n  wal_flush_after_remote - if this amount of WAL is generated, it will\n                           wait for syncrep to confirm the flushed LSN\n\nThe attached PoC patch does this, mostly the same way as earlier\npatches. XLogInsertRecord is where the decision whether throttling may\nbe needed is done, HandleXLogDelayPending then does the actual work\n(writing WAL, flushing it, waiting for syncrep).\n\nThe one new thing HandleXLogDelayPending also does is auto-tuning the\nvalues a bit. The idea is that with per-backend threshold, it's hard to\nenforce some sort of global limit, because it depends on the number of\nactive backends. If you set 1MB of WAL per backend, the total might be\n1MB or 1000MB, if there are 1000 backends. Who knows. So this tries to\nreduce the threshold (if the backend generated only a tiny fraction of\nthe WAL), or increase the threshold (if it generated most of it). I'm\nnot entirely sure this behaves sanely under all circumstances, but for a\nPoC patch it seems OK.\n\nThe first two GUCs remind me what walwriter is doing, and I've been\nasking myself if maybe making it more aggressive would have the same\neffect. But I don't think so, because a big part of this throttling\npatch is ... well, throttling. Making the backends sleep for a bit (or\nwait for something), to slow it down. And walwriter doesn't really do\nthat I think.\n\n\nIn a recent off-list discussion, someone asked if maybe this might be\nuseful to prevent emergencies due to archiver not keeping up and WAL\nfilling disk. 
A bit like enforcing a more \"strict\" limit on WAL than the\ncurrent max_wal_size GUC. I'm not sure about that, it's certainly a very\ndifferent use case than minimizing impact on OLTP latency. But it seems\nlike \"archived LSN\" might be another \"event\" the backends would wait\nfor, just like they wait for syncrep to confirm a LSN. Ideally it'd\nnever happen, ofc, and it seems a bit like a great footgun (outage on\narchiver may kill PROD), but if you're at risk of ENOSPACE on pg_wal,\nnot doing anything may be risky too ...\n\nFWIW I wonder if maybe we should frame this as a QoS feature, where\ninstead of \"minimize impact of bulk loads\" we'd try to \"guarantee\" or\n\"reserve\" some part of the capacity to certain backends/...\n\n\nNow, let's look at results from some of the experiments. I wanted to see\nhow effective this approach could be in minimizing impact of large bulk\nloads on small OLTP transactions in different setups. Thanks to the two\nnew GUCs this is not strictly about syncrep, so I decided to try three\ncases:\n\n1) local, i.e. single-node instance\n\n2) syncrep on the same switch, with 0.1ms latency (1Gbit)\n\n3) syncrep with 10ms latency (also 1Gbit)\n\nAnd for each configuration I ran a pgbench (30 minutes), either on\nits own, or concurrently with bulk COPY of 1GB data. 
The load was done\neither by a single backend (so one backend loading 1GB of data), or the\nfile was split into 10 files 100MB each, and this was loaded by 10\nconcurrent backends.\n\nAnd I did this test with three configurations:\n\n(a) master - unpatched, current behavior\n\n(b) throttle-1: patched with limits set like this:\n\n  # Add settings for extensions here\n  wal_write_after = '8kB'\n  wal_flush_after_local = '16kB'\n  wal_flush_after_remote = '32kB'\n\n(c) throttle-2: patched with throttling limits set to 4x of (b), i.e.\n\n  # Add settings for extensions here\n  wal_write_after = '32kB'\n  wal_flush_after_local = '64kB'\n  wal_flush_after_remote = '128kB'\n\nAnd I did this for the traditional three scales (small, medium, large),\nto hit different bottlenecks. And of course, I measured both throughput\nand latencies.\n\nThe full results are available here:\n\n[1] https://github.com/tvondra/wal-throttle-results/tree/master\n\nI'm not going to attach the files visualizing the results here, because\nit's like 1MB per file, which is not great for e-mail.\n\n\nhttps://github.com/tvondra/wal-throttle-results/blob/master/wal-throttling.pdf\n----------------------------------------------------------------------\n\nThe first file summarizes the throughput results for the three\nconfigurations, different scales etc. On the left is throughput, on the\nright is the number of load cycles completed.\n\nI think this behaves mostly as expected - with the bulk loads, the\nthroughput drops. How much depends on the configuration (for syncrep\nit's far more pronounced). The throttling recovers a lot of it, at the\nexpense of doing fewer loads - and it's quite a significant drop. But\nthat's expected, and it was kinda what this patch was about - prioritise\nthe small OLTP transactions by doing fewer loads. This is not a patch\nthat would magically inflate capacity of the system to do more things.\n\nI however agree this does not really represent a typical production OLTP\nsystem. 
Those systems don't run at 100% saturation, except for short\nperiods, certainly not if they're doing something latency sensitive. So\na somewhat realistic test would be pgbench throttled at 75% capacity,\nleaving some spare capacity for the bulk loads.\n\nI actually tried that, and there are results in [1], but the behavior is\npretty similar to what I'm describing here (except that the system does\nactually manage to do more bulk loads, ofc).\n\n\nhttps://raw.githubusercontent.com/tvondra/wal-throttle-results/master/syncrep/latencies-1000-full.eps\n-----------------------------------------------------------------------\nNow let's look at the second file, which shows latency percentiles for\nthe medium dataset on syncrep. The difference between master (on the\nleft) and the two throttling builds is pretty obvious. It's not exactly\nthe same as \"no concurrent bulk loads\" in the top row, but not far from it.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 4 Dec 2023 02:45:46 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Syncrep and improving latency due to WAL throttling" }, { "msg_contents": "Hey Tomas,\n\nShirisha had posted a recent re-attempt here [1] and then we were\ngraciously redirected here by Jakub.\n\nWe took a look at v5-0001-v4.patch and also a brief look at\nv5-0002-rework.patch. We feel that it might be worth considering\nthrottling based on the remote standby to begin with for simplicity (i.e.\nwal_flush_after_remote), and then we can add the other knobs incrementally?\n\nOur comments on v5-0001-v4.patch are below:\n\n> +\n> + /*\n> + * Decide if we need to throttle this backend, so that it does not write\n> + * WAL too fast, causing lag against the sync standby (which in turn\n> + * increases latency for standby confirmations). 
We may be holding locks\n> + * and blocking interrupts here, so we only make the decision, but the\n> + * wait (for sync standby confirmation) happens elsewhere.\n\nSlightly reworded:\n\n* wait (for sync standby confirmation) happens as part of the next\n* CHECK_FOR_INTERRUPTS(). See HandleXLogDelayPending() for details on\n* why the delay is deferred.\n\n> + *\n> + * The throttling is applied only to large transactions (producing more\n> + * than wal_throttle_threshold kilobytes of WAL). Throttled backends\n> + * can be identified by a new wait event SYNC_REP_THROTTLED.\n> + *\n> + * Small transactions (by amount of produced WAL) are still subject to\n> + * the sync replication, so the same wait happens at commit time.\n> + *\n> + * XXX Not sure this is the right place for a comment explaining how the\n> + * throttling works. This place is way too low level, and rather far from\n> + * the place where the wait actually happens.\n\nPerhaps the best course of action is to move the code and the comments\nabove into an inline function: SetAndCheckWALThrottle().\n\n> +\n> +/*\n> + * HandleXLogDelayPending\n> + * Throttle backends generating large amounts of WAL.\n> + *\n> + * The throttling is implemented by waiting for a sync replica confirmation for\n> + * a convenient LSN position. In particular, we do not wait for the current LSN,\n> + * which may be in a partially filled WAL page (and we don't want to write this\n> + * one out - we'd have to write it out again, causing write amplification).\n> + * Instead, we move back to the last fully WAL page.\n> + *\n> + * Called from ProcessMessageInterrupts() to avoid syncrep waits in XLogInsert(),\n> + * which happens in critical section and with blocked interrupts (so it would be\n> + * impossible to cancel the wait if it gets stuck). 
Also, there may be locks held\n> + * and we don't want to hold them longer just because of the wait.\n> + *\n> + * XXX Andres suggested we actually go back a couple pages, to increase the\n> + * probability the LSN was already flushed (obviously, this depends on how much\n> + * lag we allow).\n> + *\n> + * XXX Not sure why we use XactLastThrottledRecEnd and not simply XLogRecEnd?\n> + */\n> +void\n> +HandleXLogDelayPending()\n> +{\n> + XLogRecPtr lsn;\n> +\n> + /* calculate last fully filled page */\n> + lsn = XactLastThrottledRecEnd - (XactLastThrottledRecEnd % XLOG_BLCKSZ);\n> +\n> + Assert(wal_throttle_threshold > 0);\n> + Assert(backendWalInserted >= wal_throttle_threshold * 1024L);\n> + Assert(XactLastThrottledRecEnd != InvalidXLogRecPtr);\n> +\n> + XLogFlush(lsn);\n> + SyncRepWaitForLSN(lsn, false);\n> +\n> + XLogDelayPending = false;\n> +}\n\n\n(1) Can't we simply call SyncRepWaitForLSN(LogwrtResult.Flush, false); here?\nLogwrtResult.Flush will guarantee that we are waiting on something that\nhas already been flushed or will be flushed soon. Then we wouldn't need\nto maintain XactLastThrottledRecEnd, nor call XLogFlush() before calling\nSyncRepWaitForLSN(). LogwrtResult can be slightly stale, but does that\nreally matter here?\n\n(2) Also, to make things more understandable, instead of maintaining a\ncounter to track the number of WAL bytes written, maybe we should\nmaintain a LSN pointer called XLogDelayWindowStart. 
And then in here,\nwe can assert:\n\nAssert(LogwrtResult.Flush - XLogDelayWindowStart >\nwal_throttle_threshold * 1024);\n\nSimilarly, we can check the same conditional in SetAndCheckWALThrottle().\n\nAfter the wait is done and we reset XLogDelayPending = false, we can\nperhaps reset XLogDelayWindowStart = LogwrtResult.Flush.\n\nThe only downside probably is that if our standby is caught up enough, we may be\nrepeatedly and unnecessarily invoking a HandleXLogDelayPending(), where our\nSyncRepWaitForLSN() would be called with an older LSN and it would be a no-op\nearly exit.\n\n\n> + * XXX Should this be done even if XLogDelayPending is already set? Maybe\n> + * that should only update XactLastThrottledRecEnd, withoug incrementing\n> + * the pgWalUsage.wal_throttled counter?\n> + */\n> + backendWalInserted += rechdr->xl_tot_len;\n> +\n> + if ((synchronous_commit >= SYNCHRONOUS_COMMIT_REMOTE_WRITE) &&\n> + (wal_throttle_threshold > 0) &&\n> + (backendWalInserted >= wal_throttle_threshold * 1024L))\n> + {\n> + XactLastThrottledRecEnd = XactLastRecEnd;\n> + InterruptPending = true;\n> + XLogDelayPending = true;\n> + pgWalUsage.wal_throttled++;\nXLogDelayWindowStart = LogwrtResult.Flush;\n> + }\n\nYeah we shouldn't do all of this if XLogDelayPending is already set.\nWe can add a quick\n!XLogDelayPending leading conditional.\n\nAlso, [1] has a TAP test that may be useful.\n\nRegards,\nSoumyadeep & Shirisha\nBroadcom\n\n[1] https://www.postgresql.org/message-id/CAP3-t08umaBEUEppzBVY6%3D%3D3tbdLwG7b4wfrba73zfOAUrRsoQ%40mail.gmail.com\n\n\n", "msg_date": "Tue, 24 Sep 2024 23:00:18 -0700", "msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Syncrep and improving latency due to WAL throttling" } ]
[ { "msg_contents": "After I committed 1249371632 I thought that I should really go ahead and\ndo what I suggested and allow multiple exclude pattern files for\npgindent. One obvious case is to exclude an in tree meson build\ndirectory. I also sometimes have other in tree objects I'd like to be\nable to exclude.\n\nThe attached adds this ability. It also unifies the logic for finding\nthe regular exclude pattern file and the typedefs file.\n\nI took the opportunity to remove a badly thought out and dangerous\nfeature whereby the first non-option argument, if it's not a .c or .h\nfile, is taken as the typedefs file. That's particularly dangerous in a\nsituation where git is producing a list of files that have changed and\npassing it on the command line to pgindent. I also removed a number of\nextraneous blank lines.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Wed, 25 Jan 2023 08:59:44 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "More pgindent tweaks" }, { "msg_contents": "On Wed, Jan 25, 2023 at 08:59:44AM -0500, Andrew Dunstan wrote:\n> After I committed 1249371632 I thought that I should really go ahead and\n> do what I suggested and allow multiple exclude pattern files for\n> pgindent. One obvious case is to exclude an in tree meson build\n> directory. I also sometimes have other in tree objects I'd like to be\n> able exclude.\n> \n> The attached adds this ability. It also unifies the logic for finding\n> the regular exclude pattern file and the typedefs file.\n> \n> I took the opportunity to remove a badly thought out and dangerous\n> feature whereby the first non-option argument, if it's not a .c or .h\n> file, is taken as the typedefs file. That's particularly dangerous in a\n> situation where git is producing a list of files that have changed and\n> passing it on the command line to pgindent. 
I also removed a number of\n> extraneous blank lines.\n\nCan we make the pgindent options more visible, perhaps by adding them to\npgindent.man or at least saying type pgindent --help?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\nEmbrace your flaws. They make you human, rather than perfect,\nwhich you will never be.\n\n\n", "msg_date": "Wed, 25 Jan 2023 09:41:18 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: More pgindent tweaks" }, { "msg_contents": "\nOn 2023-01-25 We 09:41, Bruce Momjian wrote:\n> On Wed, Jan 25, 2023 at 08:59:44AM -0500, Andrew Dunstan wrote:\n>> After I committed 1249371632 I thought that I should really go ahead and\n>> do what I suggested and allow multiple exclude pattern files for\n>> pgindent. One obvious case is to exclude an in tree meson build\n>> directory. I also sometimes have other in tree objects I'd like to be\n>> able exclude.\n>>\n>> The attached adds this ability. It also unifies the logic for finding\n>> the regular exclude pattern file and the typedefs file.\n>>\n>> I took the opportunity to remove a badly thought out and dangerous\n>> feature whereby the first non-option argument, if it's not a .c or .h\n>> file, is taken as the typedefs file. That's particularly dangerous in a\n>> situation where git is producing a list of files that have changed and\n>> passing it on the command line to pgindent. I also removed a number of\n>> extraneous blank lines.\n> Can we make the pgindent options more visible, perhaps by adding them to\n> pgindent.man or at least saying type pgindent --help?\n\n\nSure, will do.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 25 Jan 2023 11:01:14 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: More pgindent tweaks" } ]
[ { "msg_contents": "Hi hackers,\n\nCurrently we allow self-conflicting inserts for ON CONFLICT DO NOTHING:\n\n```\nCREATE TABLE t (a INT UNIQUE, b INT);\nINSERT INTO t VALUES (1,1), (1,2) ON CONFLICT DO NOTHING;\n-- succeeds, inserting the first row and ignoring the second\n```\n... but not for ON CONFLICT .. DO UPDATE:\n\n```\nINSERT INTO t VALUES (1,1), (1,2) ON CONFLICT (a) DO UPDATE SET b = 0;\nERROR: ON CONFLICT DO UPDATE command cannot affect row a second time\nHINT: Ensure that no rows proposed for insertion within the same\ncommand have duplicate constrained values.\n```\n\nTom pointed out in 2016 that this is actually a bug [1] and I agree.\n\nThe proposed patch fixes this.\n\n[1]: https://www.postgresql.org/message-id/22438.1477265185%40sss.pgh.pa.us\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Wed, 25 Jan 2023 18:45:12 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "[PATCH] Make ON CONFLICT DO NOTHING and ON CONFLICT DO UPDATE\n consistent" }, { "msg_contents": "Hi,\n\nOn 2023-01-25 18:45:12 +0300, Aleksander Alekseev wrote:\n> Currently we allow self-conflicting inserts for ON CONFLICT DO NOTHING:\n> \n> ```\n> CREATE TABLE t (a INT UNIQUE, b INT);\n> INSERT INTO t VALUES (1,1), (1,2) ON CONFLICT DO NOTHING;\n> -- succeeds, inserting the first row and ignoring the second\n> ```\n> ... but not for ON CONFLICT .. DO UPDATE:\n> \n> ```\n> INSERT INTO t VALUES (1,1), (1,2) ON CONFLICT (a) DO UPDATE SET b = 0;\n> ERROR: ON CONFLICT DO UPDATE command cannot affect row a second time\n> HINT: Ensure that no rows proposed for insertion within the same\n> command have duplicate constrained values.\n> ```\n> \n> Tom pointed out in 2016 that this is actually a bug [1] and I agree.\n\nI don't think I agree with this being a bug.\n\nWe can't sensibly implement updating a row twice within a statement - hence\nerroring out for ON CONFLICT DO UPDATE affecting a row twice. 
But what's the\njustification for erroring out in the DO NOTHING case? ISTM that it's useful\nto be able to handle such duplicates, and I don't immediately see what\nsemantic confusion or implementation difficulty we avoid by erroring out.\n\nIt seems somewhat likely that a behavioural change will cause trouble for some\nof the uses of DO NOTHING out there.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 25 Jan 2023 10:34:47 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Make ON CONFLICT DO NOTHING and ON CONFLICT DO UPDATE\n consistent" }, { "msg_contents": "Hi Andres,\n\n> I don't think I agree with this being a bug.\n\nPerhaps that's not a bug especially considering the fact that the\ndocumentation describes this behavior, but in any case the fact that:\n\n```\nINSERT INTO t VALUES (1,1) ON CONFLICT (a) DO UPDATE SET b = 0;\nINSERT INTO t VALUES (1,2) ON CONFLICT (a) DO UPDATE SET b = 0;\n```\n\nand:\n\n```\nINSERT INTO t VALUES (1,1), (1,2) ON CONFLICT (a) DO NOTHING;\n``\n\n.. both work, and:\n\n```\nINSERT INTO t VALUES (1,1), (1,2) ON CONFLICT (a) DO UPDATE SET b = 0;\n```\n\n... doesn't is rather confusing. There is no reason why the latest\nquery shouldn't work except for a slight complication of the code.\nWhich seems to be a reasonable tradeoff, for me at least.\n\n> But what's the justification for erroring out in the DO NOTHING case?\n>\n> [...]\n>\n> It seems somewhat likely that a behavioural change will cause trouble for some\n> of the uses of DO NOTHING out there.\n\nJust to make sure we are on the same page. 
The patch doesn't break the\ncurrent DO NOTHING behavior but rather makes DO UPDATE work the same\nway DO NOTHING does.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Wed, 25 Jan 2023 22:00:50 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Make ON CONFLICT DO NOTHING and ON CONFLICT DO UPDATE\n consistent" }, { "msg_contents": "On Wed, Jan 25, 2023 at 11:01 AM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n> Just to make sure we are on the same page. The patch doesn't break the\n> current DO NOTHING behavior but rather makes DO UPDATE work the same\n> way DO NOTHING does.\n\nIt also makes DO UPDATE not work the same way as either UPDATE itself\n(which will silently skip a second or subsequent update of the same\nrow by the same UPDATE statement in RC mode), or MERGE (which has\nsimilar cardinality violations).\n\nDO NOTHING doesn't lock any conflicting row, and so won't have to\ndirty pages that have matching rows. It was always understood to be\nmore susceptible to certain issues (when in READ COMMITTED mode) as a\nresult. There are some halfway reasonable arguments against this sort\nof behavior, but I believe that we made the right trade-off.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 25 Jan 2023 11:08:04 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Make ON CONFLICT DO NOTHING and ON CONFLICT DO UPDATE\n consistent" }, { "msg_contents": "Hi Peter,\n\n> It also makes DO UPDATE not work the same way as either UPDATE itself\n> (which will silently skip a second or subsequent update of the same\n> row by the same UPDATE statement in RC mode), or MERGE (which has\n> similar cardinality violations).\n\nThat's true. 
On the flip side, UPDATE and MERGE are different\nstatements and arguably shouldn't behave the same way INSERT does.\n\nTo clarify, I'm merely proposing the change and playing the role of\nDevil's advocate here. I'm not arguing that the patch should be\nnecessarily accepted. In the end of the day it's up to the community\nto decide. Personally I think it would make the users a bit happier.\n\nThe actual reason why I made the patch is that a colleague of mine,\nSven Klemm, encountered this limitation recently and was puzzled by it\nand so was I. The only workaround the user currently has is to execute\nseveral INSERTs one by one which is expensive when you have a lot of\nINSERTs.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Wed, 25 Jan 2023 22:27:38 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Make ON CONFLICT DO NOTHING and ON CONFLICT DO UPDATE\n consistent" }, { "msg_contents": "Hi,\n\nOn 2023-01-25 22:00:50 +0300, Aleksander Alekseev wrote:\n> Perhaps that's not a bug especially considering the fact that the\n> documentation describes this behavior, but in any case the fact that:\n> \n> ```\n> INSERT INTO t VALUES (1,1) ON CONFLICT (a) DO UPDATE SET b = 0;\n> INSERT INTO t VALUES (1,2) ON CONFLICT (a) DO UPDATE SET b = 0;\n> ```\n> \n> and:\n> \n> ```\n> INSERT INTO t VALUES (1,1), (1,2) ON CONFLICT (a) DO NOTHING;\n> ``\n> \n> .. both work, and:\n> \n> ```\n> INSERT INTO t VALUES (1,1), (1,2) ON CONFLICT (a) DO UPDATE SET b = 0;\n> ```\n> \n> ... doesn't is rather confusing. There is no reason why the latest\n> query shouldn't work except for a slight complication of the code.\n> Which seems to be a reasonable tradeoff, for me at least.\n\nI don't agree that this is just about a \"slight complication\" of the code. 
I\nthink semantically the proposed new behaviour is pretty bogus.\n\n\nIt *certainly* can't be right to just continue with the update in heap_update,\nas you've done. You'd have to skip the update, not execute it. What am I\nmissing here?\n\nI think this'd completely break triggers, for example, because they won't be\nable to get the prior row version, since it won't actually be a row ever\nvisible (due to cmin=cmax).\n\nI suspect it might break unique constraints as well, because we'd end up with\nan invisible row in part of the ctid chain.\n\n\n\n> > But what's the justification for erroring out in the DO NOTHING case?\n> >\n> > [...]\n> >\n> > It seems somewhat likely that a behavioural change will cause trouble for some\n> > of the uses of DO NOTHING out there.\n> \n> Just to make sure we are on the same page. The patch doesn't break the\n> current DO NOTHING behavior but rather makes DO UPDATE work the same\n> way DO NOTHING does.\n\nI see that now - I somehow thought you were recommending to error out in both\ncases, rather than the other way round.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 25 Jan 2023 14:34:32 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Make ON CONFLICT DO NOTHING and ON CONFLICT DO UPDATE\n consistent" }, { "msg_contents": "Hi Andres,\n\n> It *certainly* can't be right to just continue with the update in heap_update,\n\nI see no reason why. What makes this case so different from updating a\ntuple created by the previous command?\n\n> as you've done. You'd have to skip the update, not execute it. 
What am I\n> missing here?\n\nSimply skipping updates in a statement that literally says DO UPDATE\ndoesn't seem to be the behavior a user would expect.\n\n> I think this'd completely break triggers, for example, because they won't be\n> able to get the prior row version, since it won't actually be a row ever\n> visible (due to cmin=cmax).\n>\n> I suspect it might break unique constraints as well, because we'd end up with\n> an invisible row in part of the ctid chain.\n\nThat's a reasonable concern, however I was unable to break unique\nconstraints or triggers so far:\n\n```\nCREATE TABLE t (a INT UNIQUE, b INT);\n\nCREATE OR REPLACE FUNCTION t_insert() RETURNS TRIGGER AS $$\nBEGIN\n RAISE NOTICE 't_insert triggered: new = %, old = %', NEW, OLD;\n RETURN NULL;\nEND\n$$ LANGUAGE 'plpgsql';\n\nCREATE OR REPLACE FUNCTION t_update() RETURNS TRIGGER AS $$\nBEGIN\n RAISE NOTICE 't_update triggered: new = %, old = %', NEW, OLD;\n RETURN NULL;\nEND\n$$ LANGUAGE 'plpgsql';\n\nCREATE TRIGGER t_insert_trigger\nAFTER INSERT ON t\nFOR EACH ROW EXECUTE PROCEDURE t_insert();\n\nCREATE TRIGGER t_insert_update\nAFTER UPDATE ON t\nFOR EACH ROW EXECUTE PROCEDURE t_update();\n\nINSERT INTO t VALUES (1,1), (1,2) ON CONFLICT (a) DO UPDATE SET b = 0;\n\nNOTICE: t_insert triggered: new = (1,1), old = <NULL>\nNOTICE: t_update triggered: new = (1,0), old = (1,1)\n\nINSERT INTO t VALUES (2,1), (2,2), (3,1) ON CONFLICT (a) DO UPDATE SET b = 0;\n\nNOTICE: t_insert triggered: new = (2,1), old = <NULL>\nNOTICE: t_update triggered: new = (2,0), old = (2,1)\nNOTICE: t_insert triggered: new = (3,1), old = <NULL>\n\n=# SELECT * FROM t;\n a | b\n---+---\n 1 | 0\n 2 | 0\n 3 | 1\n```\n\nPFA patch v2 that also includes the test shown above.\n\nAre there any other scenarios we should check?\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Thu, 26 Jan 2023 13:07:08 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Make ON 
CONFLICT DO NOTHING and ON CONFLICT DO UPDATE\n consistent" }, { "msg_contents": "Hi,\n\nOn 2023-01-26 13:07:08 +0300, Aleksander Alekseev wrote:\n> > It *certainly* can't be right to just continue with the update in heap_update,\n> \n> I see no reason why. What makes this case so different from updating a\n> tuple created by the previous command?\n\nTo me it's a pretty fundamental violation of how heap visibility works. I'm\nquite sure that there will be problems, but I don't feel like investing the\ntime to find a reproducer for something that I'm ready to reject on principle.\n\n\n> > as you've done. You'd have to skip the update, not execute it. What am I\n> > missing here?\n> \n> Simply skipping updates in a statement that literally says DO UPDATE\n> doesn't seem to be the behavior a user would expect.\n\nGiven that we skip the update in \"UPDATE\", your argument doesn't hold much\nwater.\n\n\n> > I think this'd completely break triggers, for example, because they won't be\n> > able to get the prior row version, since it won't actually be a row ever\n> > visible (due to cmin=cmax).\n> >\n> > I suspect it might break unique constraints as well, because we'd end up with\n> > an invisible row in part of the ctid chain.\n> \n> That's a reasonable concern, however I was unable to break unique\n> constraints or triggers so far:\n\nI think you'd have to do a careful analysis of a lot of code for that to hold\nany water.\n\n\nI continue to think that we should just reject this behavioural change.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 7 Feb 2023 10:27:25 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Make ON CONFLICT DO NOTHING and ON CONFLICT DO UPDATE\n consistent" }, { "msg_contents": "Hi,\n\n> To me it's a pretty fundamental violation of how heap visibility works.\n\nI don't think this has much to do with heap visibility. It's true that\ngenerally a command doesn't see its own tuples. 
This is done in order\nto avoid the Halloween problem which however can't happen in this\nparticular case.\n\nOther than that the heap doesn't care about the visibility, it merely\nstores the tuples. The visibility is determined by xmin/xmax, the\nisolation level, etc.\n\nIt's true that the patch changes visibility rules in one very\nparticular edge case. This alone is arguably not a good enough reason\nto reject a patch.\n\n> Given that we skip the update in \"UPDATE\", your argument doesn't hold much\n> water.\n\nPeter made this argument above too and I will give the same answer.\nThere is no reason why two completely different SQL statements should\nbehave the same.\n\n> > That's a reasonable concern, however I was unable to break unique\n> > constraints or triggers so far:\n>\n> I think you'd have to do a careful analysis of a lot of code for that to hold\n> any water.\n\nAlternatively we could work smarter, not harder, and let the hardware\nfind the bugs for us. Writing tests is much simpler and bullet-proof\nthan analyzing the code.\n\nAgain, to clarify, I'm merely playing the role of Devil's advocate\nhere. I'm not saying that the patch should necessarily be accepted,\nnor am I 100% sure that it has any undiscovered bugs. However the\narguments against received so far don't strike me personally as being\nparticularly convincing.\n\nAs an example, one could argue that there are applications that\n*expect* to get an ERROR in the case of self-conflicting inserts. And\nby changing this behavior we will break these applications. 
If the\nmajority believes that we seriously care about this it would be a good\nenough reason to withdraw the patch.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Wed, 8 Feb 2023 16:08:39 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Make ON CONFLICT DO NOTHING and ON CONFLICT DO UPDATE\n consistent" }, { "msg_contents": "Hi,\n\nOn 2023-02-08 16:08:39 +0300, Aleksander Alekseev wrote:\n> > To me it's a pretty fundamental violation of how heap visibility works.\n> \n> I don't think this has much to do with heap visibility. It's true that\n> generally a command doesn't see its own tuples. This is done in order\n> to avoid the Halloween problem which however can't happen in this\n> particular case.\n> \n> Other than that the heap doesn't care about the visibility, it merely\n> stores the tuples. The visibility is determined by xmin/xmax, the\n> isolation level, etc.\n\nYes, and the fact is that cmin == cmax is something that we don't normally\nproduce, yet you emit it now, without, as far as I can tell it, a convincing\nreason.\n\n\n> > > That's a reasonable concern, however I was unable to break unique\n> > > constraints or triggers so far:\n> >\n> > I think you'd have to do a careful analysis of a lot of code for that to hold\n> > any water.\n> \n> Alternatively we could work smarter, not harder, and let the hardware\n> find the bugs for us. Writing tests is much simpler and bullet-proof\n> than analyzing the code.\n\nThat's a spectactularly wrong argument in almost all cases. Unless you have a\nway to get to full branch coverage or use a model checker that basically does\nthe same, testing isn't going to give you a whole lot of confidence that you\nhaven't introduced bugs. 
This is particularly true for something like heapam,\nwhere a lot of the tricky behaviour requires complicated interactions between\nmultiple connections.\n\n\n> Again, to clarify, I'm merely playing the role of Devil's advocate\n> here. I'm not saying that the patch should necessarily be accepted,\n> nor am I 100% sure that it has any undiscovered bugs. However the\n> arguments against received so far don't strike me personally as being\n> particularly convincing.\n\nI've said my piece, as-is I vote to reject the patch.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 8 Feb 2023 07:49:50 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Make ON CONFLICT DO NOTHING and ON CONFLICT DO UPDATE\n consistent" }, { "msg_contents": "On Wed, Feb 8, 2023 at 5:08 AM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n> > To me it's a pretty fundamental violation of how heap visibility works.\n>\n> I don't think this has much to do with heap visibility. It's true that\n> generally a command doesn't see its own tuples. This is done in order\n> to avoid the Halloween problem which however can't happen in this\n> particular case.\n>\n> Other than that the heap doesn't care about the visibility, it merely\n> stores the tuples. The visibility is determined by xmin/xmax, the\n> isolation level, etc.\n\nI think that in a green field situation we would probably make READ\nCOMMITTED updates throw cardinality violations in the same way as ON\nCONFLICT DO UPDATE, while not changing anything about ON CONFLICT DO\nNOTHING. We made a deliberate trade-off with the design of DO NOTHING,\nwhich won't lock conflicting rows, and so won't dirty any heap pages\nthat it doesn't insert on to.\n\nI don't buy your argument about DO UPDATE needing to be brought into\nline with DO NOTHING. 
In any case I'm pretty sure that Tom's remarks\nin 2016 about a behavioral inconsistencies (which you cited) actually\ncalled for making DO NOTHING more like DO UPDATE -- not the other way\naround.\n\nTo me it seems as if allowing the same command to update the same row\nmore than once is just not desirable in general. It doesn't seem\nnecessary to bring low level arguments about cmin/cmax into it, nor\ndoes it seem necessary to talk about things like the Halloween\nproblem. To me the best argument is also the simplest: who would want\nus to allow it, and for what purpose?\n\nI suppose that we might theoretically prefer to throw a cardinality\nviolation for DO NOTHING, but I don't see a way to do that without\nlocking rows and dirtying heap pages. If somebody were to argue that\nwe should make DO NOTHING lock rows and throw similar errors now then\nI'd also disagree with them, but to a much lesser degree. I don't\nthink that this patch is a good idea.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 8 Feb 2023 09:34:59 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Make ON CONFLICT DO NOTHING and ON CONFLICT DO UPDATE\n consistent" }, { "msg_contents": "Hi,\n\n> Yes, and the fact is that cmin == cmax is something that we don't normally\n> produce\n\nNot sure if this is particularly relevant to this discussion but I\ncan't resist noticing that the heap doesn't even store cmin and\ncmax... There is only HeapTupleHeaderData.t_cid and flags. 
cmin/cmax\nare merely smoke and mirrors we use to trick a user.\n\nAnd yes, the patch doesn't seem to break much mirrors:\n\n```\n=# create table t (a int unique, b int);\n=# insert into t values (1,1), (1,2) on conflict (a) do update set b = 0;\n=# SELECT xmin, xmax, cmin, cmax, * FROM t;\n xmin | xmax | cmin | cmax | a | b\n------+------+------+------+---+---\n 731 | 0 | 0 | 0 | 1 | 0\n=# begin;\n=# insert into t values (2,1), (2,2), (3,1) on conflict (a) do update set b = 0;\n=# SELECT xmin, xmax, cmin, cmax, * FROM t;\n xmin | xmax | cmin | cmax | a | b\n------+------+------+------+---+---\n 731 | 0 | 0 | 0 | 1 | 0\n 732 | 0 | 0 | 0 | 2 | 0\n 732 | 0 | 0 | 0 | 3 | 1\n\n=# insert into t values (2,1), (2,2), (3,1) on conflict (a) do update set b = 0;\n=# SELECT xmin, xmax, cmin, cmax, * FROM t;\n xmin | xmax | cmin | cmax | a | b\n------+------+------+------+---+---\n 731 | 0 | 0 | 0 | 1 | 0\n 732 | 732 | 1 | 1 | 2 | 0\n 732 | 732 | 1 | 1 | 3 | 0\n\n=# commit;\n=# SELECT xmin, xmax, cmin, cmax, * FROM t;\n xmin | xmax | cmin | cmax | a | b\n------+------+------+------+---+---\n 731 | 0 | 0 | 0 | 1 | 0\n 732 | 732 | 1 | 1 | 2 | 0\n 732 | 732 | 1 | 1 | 3 | 0\n```\n\n> That's a spectactularly wrong argument in almost all cases. Unless you have a\n> way to get to full branch coverage or use a model checker that basically does\n> the same, testing isn't going to give you a whole lot of confidence that you\n> haven't introduced bugs.\n\nBut neither will reviewing a lot of code...\n\n> I've said my piece, as-is I vote to reject the patch.\n\nFair enough. I'm merely saying that rejecting a patch because it\ndoesn't include a TLA+ model is something novel :)\n\n> I don't buy your argument about DO UPDATE needing to be brought into\n> line with DO NOTHING. In any case I'm pretty sure that Tom's remarks\n> in 2016 about a behavioral inconsistencies (which you cited) actually\n> called for making DO NOTHING more like DO UPDATE -- not the other way\n> around.\n\nInteresting. 
Yep, we could use a bit of input from Tom on this one.\n\nThis of course would break backward compatibility. But we can always\ninvent something like:\n\n```\nINSERT INTO ..\nON CONFLICT DO [NOTHING|UPDATE .. ]\n[ALLOWING|FORBIDDING] SELF CONFLICTS;\n```\n\n... if we really want to.\n\n> problem. To me the best argument is also the simplest: who would want\n> us to allow it, and for what purpose?\n\nGood question.\n\nThis arguably has little use for application developers. As an\napplication developer you typically know your unique constraints and\nusing this knowledge you can rewrite the query as needed and add any\nother accompanying logic.\n\nHowever, extension developers, as an example, often don't know the\nunderlying unique constraints (more specifically, it's difficult to\nlook for them and process them manually) and often have to process any\ngarbage the application developer passes to an extension.\n\nThis of course is applicable not only to extensions, but to any\nmiddleware between the DBMS and the application.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Thu, 9 Feb 2023 13:06:04 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Make ON CONFLICT DO NOTHING and ON CONFLICT DO UPDATE\n consistent" }, { "msg_contents": "Hi,\n\n```\n=# commit;\n=# SELECT xmin, xmax, cmin, cmax, * FROM t;\n xmin | xmax | cmin | cmax | a | b\n------+------+------+------+---+---\n 731 | 0 | 0 | 0 | 1 | 0\n 732 | 732 | 1 | 1 | 2 | 0\n 732 | 732 | 1 | 1 | 3 | 0\n```\n\nOops, you got me :) This of course isn't right - the xmax transaction\nis committed but we still see the data, etc.\n\nIf we really are going to work on this, this part is going to require more work.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Thu, 9 Feb 2023 13:16:20 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Make ON CONFLICT DO 
NOTHING and ON CONFLICT DO UPDATE\n consistent" }, { "msg_contents": "Hi,\n\nOn 2023-02-09 13:06:04 +0300, Aleksander Alekseev wrote:\n> > Yes, and the fact is that cmin == cmax is something that we don't normally\n> > produce\n> \n> Not sure if this is particularly relevant to this discussion but I\n> can't resist noticing that the heap doesn't even store cmin and\n> cmax... There is only HeapTupleHeaderData.t_cid and flags. cmin/cmax\n> are merely smoke and mirrors we use to trick a user.\n\nNo, they're not just that. Yes, cmin/cmax aren't both stored on-disk, but if\nboth are needed, they *are* stored in-memory. We can do that because it's only\never needed from within a transaction.\n\n\n> > That's a spectactularly wrong argument in almost all cases. Unless you have a\n> > way to get to full branch coverage or use a model checker that basically does\n> > the same, testing isn't going to give you a whole lot of confidence that you\n> > haven't introduced bugs.\n> \n> But neither will reviewing a lot of code...\n\nAnd yet my review did figure out that your patch would have visibility\nproblems, which you did end up having, as you noticed yourself downthread :)\n\n\n> > I've said my piece, as-is I vote to reject the patch.\n> \n> Fair enough. I'm merely saying that rejecting a patch because it\n> doesn't include a TLA+ model is something novel :)\n\nI obviously am not suggesting that (although some things could probably\nbenefit). 
Just that not having an example showing something working, isn't\nsufficient to consider something suspicious OK.\n\nAnd changes affecting heapam.c visibility semantics need extremely careful\nreview, I have the battle scars to prove that to be true :P.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 9 Feb 2023 02:28:18 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Make ON CONFLICT DO NOTHING and ON CONFLICT DO UPDATE\n consistent" }, { "msg_contents": "Hi,\n\n> And yet my review did figure out that your patch would have visibility\n> problems, which you did end up having, as you noticed yourself downthread :)\n\nYep, this particular implementation turned out to be buggy.\n\n>> I don't buy your argument about DO UPDATE needing to be brought into\n>> line with DO NOTHING. In any case I'm pretty sure that Tom's remarks\n>> in 2016 about a behavioral inconsistencies (which you cited) actually\n>> called for making DO NOTHING more like DO UPDATE -- not the other way\n>> around.\n>\n> Interesting. Yep, we could use a bit of input from Tom on this one.\n>\n> This of course would break backward compatibility. But we can always\n> invent something like:\n>\n> ```\n> INSERT INTO ..\n> ON CONFLICT DO [NOTHING|UPDATE .. ]\n> [ALLOWING|FORBIDDING] SELF CONFLICTS;\n> ```\n>\n> ... 
if we really want to.\n\nI suggest we discuss if we even want to support something like this\nbefore proceeding further and then think about a particular\nimplementation if necessary.\n\nOne thing that occurred to me during the discussion is that we don't\nnecessarily have to physically write one tuple at a time to the heap.\nAlternatively we could use information about the existing unique\nconstraints and write only the needed tuples.\n\n> However, extension developers, as an example, often don't know the\n> underlying unique constraints (more specifically, it's difficult to\n> look for them and process them manually) and often have to process any\n> garbage the application developer passes to an extension.\n>\n> This of course is applicable not only to extensions, but to any\n> middleware between the DBMS and the application.\n\nThis however is arguably a niche use case. So maybe we don't want to\nspend time on this.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Thu, 9 Feb 2023 13:42:36 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Make ON CONFLICT DO NOTHING and ON CONFLICT DO UPDATE\n consistent" }, { "msg_contents": "On Thu, 9 Feb 2023 at 05:43, Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n>\n> >> I don't buy your argument about DO UPDATE needing to be brought into\n> >> line with DO NOTHING. In any case I'm pretty sure that Tom's remarks\n> >> in 2016 about a behavioral inconsistencies (which you cited) actually\n> >> called for making DO NOTHING more like DO UPDATE -- not the other way\n> >> around.\n> >\n> > Interesting. 
Yep, we could use a bit of input from Tom on this one.\n\nI realize there are still unanswered conceptual questions about this\npatch but with two votes against it seems unlikely to make much more\nprogress unless you rethink what you're trying to accomplish and\npackage it in a way that doesn't step on these more controversial\nissues.\n\nI'm going to mark the patch Returned With Feedback. If Tom or someone\nelse disagrees with Peter and Andres or has some new insights about\nhow to make it more palatable then we can always revisit that.\n\n> > This of course would break backward compatibility. But we can always\n> > invent something like:\n> >\n> > ```\n> > INSERT INTO ..\n> > ON CONFLICT DO [NOTHING|UPDATE .. ]\n> > [ALLOWING|FORBIDDING] SELF CONFLICTS;\n> > ```\n> >\n> > ... if we really want to.\n\nSomething like that might be what I mean about new insights though I\nsuspect this is overly complex. It looks like having the ON CONFLICT\nUPDATE happen before the row is already inserted might simplify things\nconceptually but then it might make the implementation prohibitively\ncomplex.\n\n\n-- \nGregory Stark\nAs Commitfest Manager\n\n\n", "msg_date": "Wed, 1 Mar 2023 14:42:14 -0500", "msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Make ON CONFLICT DO NOTHING and ON CONFLICT DO UPDATE\n consistent" } ]
[ { "msg_contents": "Hi hackers,\n\nI attempted to perform an upgrade from PG-14.5 to PG-15.1 with pg_upgrade and unfortunately it errors out because of a function that does not exist anymore in PG-15.1.\nThe function is ‘pg_catalog.close_lb’ and it exists in 14.5 but not in 15.1.\nIn our scenario we changed the permissions of this function in PG14.5 (via an automated tool) and then pg_upgrade tries to change the permissions in PG15.1 as well.\n\n\nSteps to reproduce:\n\n\n  1.  Run initdb for 14.5\n  2.  Run initdb for 15.1\n  3.  Run psql client on 14.5\n     *   postgres=# REVOKE ALL ON FUNCTION close_lb(line, box) FROM $USER;\n  4.  Run pg_upgrade from 14.5 to 15.1\n\nThis will error out because pg_upgrade will attempt to REVOKE the permissions on close_lb on 15.1.\nIs there a way to specify which functions/objects to exclude in pg_upgrade?\nThanks in advance!\n\nDimos\n(ServiceNow)", "msg_date": "Wed, 25 Jan 2023 18:38:55 +0000", "msg_from": "Dimos Stamatakis <dimos.stamatakis@servicenow.com>", "msg_from_op": true, "msg_subject": "pg_upgrade from PG-14.5 to PG-15.1 failing due to non-existing\n function" }, { 
"msg_contents": "## Dimos Stamatakis (dimos.stamatakis@servicenow.com):\n\n> In our scenario we changed the permissions of this function in PG14.5\n> (via an automated tool) and then pg_upgrade tries to change the\n> permissions in PG15.1 as well.\n\nGiven that this function wasn't even documented and did nothing but\nthrow an error \"function close_lb not implemented\" - couldn't you\nrevert that permissions change for the upgrade? (if it comes to the\nworst, a superuser could UPDATE pg_catalog.pg_proc and set proacl\nto NULL for that function, but that's not how you manage ACLs in\nproduction, it's for emergency fixing only).\n\nRegards,\nChristoph\n\n-- \nSpare Space\n\n\n", "msg_date": "Wed, 25 Jan 2023 20:06:50 +0100", "msg_from": "Christoph Moench-Tegeder <cmt@burggraben.net>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade from PG-14.5 to PG-15.1 failing due to non-existing\n function" }, { "msg_contents": "Hi,\n\nOn 1/25/23 19:38, Dimos Stamatakis wrote:\n>\n> Hi hackers,\n>\n> I attempted to perform an upgrade from PG-14.5 to PG-15.1 with \n> pg_upgrade and unfortunately it errors out because of a function that \n> does not exist anymore in PG-15.1.\n>\n> The function is ‘pg_catalog.close_lb’ and it exists in 14.5 but not in \n> 15.1.\n>\n> In our scenario we changed the permissions of this function in PG14.5 \n> (via an automated tool) and then pg_upgrade tries to change the \n> permissions in PG15.1 as well.\n>\nHere [1] is a very similar issue that has been reported in 2019.\n\nThe patch didn't make it in but it also seems to not fix the issue \nreported by Dimos. The patch in [1] seems to be concerned with changed \nfunction signatures rather than with dropped functions. 
Maybe [1] could \nbe revived and extended to also ignore dropped functions?\n\n[1] \nhttps://www.postgresql.org/message-id/flat/f85991ad-bbd4-ad57-fde4-e12f0661dbf0%40postgrespro.ru\n\n-- \nDavid Geier\n(ServiceNow)\n\n\n\n", "msg_date": "Thu, 26 Jan 2023 10:14:05 +0100", "msg_from": "David Geier <geidav.pg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade from PG-14.5 to PG-15.1 failing due to non-existing\n function" }, { "msg_contents": "## Dimos Stamatakis (dimos.stamatakis@servicenow.com):\r\n\r\n> In our scenario we changed the permissions of this function in PG14.5\r\n> (via an automated tool) and then pg_upgrade tries to change the\r\n> permissions in PG15.1 as well.\r\n\r\nGiven that this function wasn't even documented and did nothing but\r\nthrow an error \"function close_lb not implemented\" - couldn't you\r\nrevert that permissions change for the upgrade? (if it comes to the\r\nworst, a superuser could UPDATE pg_catalog.pg_proc and set proacl\r\nto NULL for that function, but that's not how you manage ACLs in\r\nproduction, it's for emergency fixing only).\r\n\r\nThanks Christoph! Actually, I already tried reverting the permissions but pg_upgrade attempts to replicate the revert SQL statement as well 😊\r\nIt would be nice to make pg_upgrade ignore some statements while upgrading.\r\nAs David mentions, we can alter the patch to ignore dropped functions.\r\n\r\nThanks,\r\nDimos\r\n(ServiceNow)\r\n\n\n\n\n\n\n\n\n\n\n\n## Dimos Stamatakis (dimos.stamatakis@servicenow.com):\n\r\n> In our scenario we changed the permissions of this function in PG14.5\r\n> (via an automated tool) and then pg_upgrade tries to change the\r\n> permissions in PG15.1 as well.\n\r\nGiven that this function wasn't even documented and did nothing but\r\nthrow an error \"function close_lb not implemented\" - couldn't you\r\nrevert that permissions change for the upgrade? 
(if it comes to the\r\nworst, a superuser could UPDATE pg_catalog.pg_proc and set proacl\r\nto NULL for that function, but that's not how you manage ACLs in\r\nproduction, it's for emergency fixing only).\n \nThanks Christoph! Actually, I already tried reverting the permissions but pg_upgrade attempts to replicate the revert SQL statement as well\r\n😊\r\nIt would be nice to make pg_upgrade ignore some statements while upgrading.\nAs David mentions, we can alter the patch to ignore dropped functions.\n\n\nThanks,\r\nDimos\r\n(ServiceNow)", "msg_date": "Thu, 26 Jan 2023 13:10:36 +0000", "msg_from": "Dimos Stamatakis <dimos.stamatakis@servicenow.com>", "msg_from_op": true, "msg_subject": "Re: pg_upgrade from PG-14.5 to PG-15.1 failing due to non-existing\n function" } ]
[ { "msg_contents": "The attached patch responds to the discussion at [1] about how\nwe ought to offer a way to set any server GUC from the initdb\ncommand line. Currently, if for some reason the server won't\nstart with default parameters, the only way to get through initdb\nis to change the installed version of postgresql.conf.sample.\nAnd even that is just a kluge, because the initial probes to\nchoose max_connections and shared_buffers will all fail, causing\ninitdb to choose rock-bottom-minimum values of those settings.\nYou can fix that up after the fact if you notice it, but you\nmight not.\n\nSo this invents an initdb switch \"-c NAME=VALUE\" just like the\none that the server itself has long had. The specified settings\nare applied on the command line of the initial probe calls\n(which happen before we've made any config files), and then they\nare added to postgresql.auto.conf, which causes them to take\neffect for the bootstrap backend runs as well as subsequent\npostmaster starts.\n\nI also invented \"--set NAME=VALUE\", mainly because just about\nevery other initdb switch has a long form. The server itself\ndoesn't have that spelling, so I'm not wedded to that part.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/17757-dbdfc1f1c954a6db%40postgresql.org", "msg_date": "Wed, 25 Jan 2023 16:25:19 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Set arbitrary GUC options during initdb" }, { "msg_contents": "Hi,\n\nOn 2023-01-25 16:25:19 -0500, Tom Lane wrote:\n> The attached patch responds to the discussion at [1] about how\n> we ought to offer a way to set any server GUC from the initdb\n> command line.\n\nAre you thinking of backpatching this, to offer the people affected by the\nissue in [1] a way out?\n\n\n> So this invents an initdb switch \"-c NAME=VALUE\" just like the\n> one that the server itself has long had.\n\nI still am +1 on the idea. 
I've actually wanted this for development purposes\na couple times...\n\n\n> The specified settings are applied on the command line of the initial probe\n> calls (which happen before we've made any config files), and then they are\n> added to postgresql.auto.conf, which causes them to take effect for the\n> bootstrap backend runs as well as subsequent postmaster starts.\n\nI think this means that if you set e.g. max_connections as an initdb\nparameter, the probes won't do much. Probably fine?\n\n\nPerhaps worth memorializing the priority of the -c options in a test?\nE.g. setting shared_buffers = 20MB or so and then testing that that's the\nvalue when starting the server?\n\n\n> I also invented \"--set NAME=VALUE\", mainly because just about\n> every other initdb switch has a long form. The server itself\n> doesn't have that spelling, so I'm not wedded to that part.\n\nFine with me, but also fine to leave out.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 25 Jan 2023 14:03:39 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Set arbitrary GUC options during initdb" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2023-01-25 16:25:19 -0500, Tom Lane wrote:\n>> The attached patch responds to the discussion at [1] about how\n>> we ought to offer a way to set any server GUC from the initdb\n>> command line.\n\n> Are you thinking of backpatching this, to offer the people affected by the\n> issue in [1] a way out?\n\nWe could ... it's a new feature for sure, but it seems quite unlikely\nto break things for anybody not using it.\n\n>> The specified settings are applied on the command line of the initial probe\n>> calls (which happen before we've made any config files), and then they are\n>> added to postgresql.auto.conf, which causes them to take effect for the\n>> bootstrap backend runs as well as subsequent postmaster starts.\n\n> I think this means that if you set e.g. 
max_connections as an initdb\n> parameter, the probes won't do much. Probably fine?\n\nRight, the probed value will be overridden.\n\n> Perhaps worth memorializing the priority of the -c options in a test?\n> E.g. setting shared_buffers = 20MB or so and then testing that that's the\n> value when starting the server?\n\nGiven that it's written into postgresql.auto.conf, I imagine that\nwe have test coverage of that point already.\n\nThere is a more subtle issue, which is that -c max_connections or\n-c shared_buffers should override the probe values *during the\nprobe steps*. My first thought about implementation had been to\ncreate postgresql.auto.conf right off the bat, but that would\nfail to have this property because server command line overrides\nconfig file. I can't think of any very portable way to check\nthat though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 25 Jan 2023 17:21:14 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Set arbitrary GUC options during initdb" }, { "msg_contents": "On 25.01.23 22:25, Tom Lane wrote:\n> So this invents an initdb switch \"-c NAME=VALUE\" just like the\n> one that the server itself has long had.\n\nThis seems useful.\n\n> The specified settings\n> are applied on the command line of the initial probe calls\n> (which happen before we've made any config files), and then they\n> are added to postgresql.auto.conf, which causes them to take\n> effect for the bootstrap backend runs as well as subsequent\n> postmaster starts.\n\nI would have expected them to be edited into postgresql.conf. What are \nthe arguments for one or the other?\n\nBtw., something that I have had in my notes for a while, but with this \nit would now be officially exposed: Not all options can be safely set \nduring bootstrap. For example,\n\n initdb -D data -c track_commit_timestamp=on\n\nwill fail an assertion. 
This might be an exception, or there might be \nothers.\n\n\n\n", "msg_date": "Fri, 27 Jan 2023 15:48:53 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Set arbitrary GUC options during initdb" }, { "msg_contents": "On Fri, 27 Jan 2023 at 09:49, Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> On 25.01.23 22:25, Tom Lane wrote:\n> > So this invents an initdb switch \"-c NAME=VALUE\" just like the\n> > one that the server itself has long had.\n>\n> This seems useful.\n>\n> > The specified settings\n> > are applied on the command line of the initial probe calls\n> > (which happen before we've made any config files), and then they\n> > are added to postgresql.auto.conf, which causes them to take\n> > effect for the bootstrap backend runs as well as subsequent\n> > postmaster starts.\n>\n> I would have expected them to be edited into postgresql.conf. What are\n> the arguments for one or the other?\n>\n\nThat would be my expectation also. I believe that is how it works now for\noptions which can be set by initdb, such as locale and port. I view\npostgresql.auto.conf being for temporary changes, or changes related to\ndifferent instances within a replication setup, or whatever other uses\npeople come up with - but not for the permanent configuration established\nby initdb.\n\nIn particular, I would be surprised if removing a postgresql.auto.conf\ncompletely disabled an instance. Obviously, in my replication setup\nexample, the replication would be broken, but the basic operation of the\ninstance would still be possible.\n\nAlso, if somebody wants to put a change in postgresql.auto.conf, they can\neasily do it using ALTER SYSTEM once the instance is running, or by just\nwriting out their own postgresql.auto.conf before starting it. 
Putting a\nchange in postgresql.conf programmatically is a bit of a pain.", "msg_date": "Fri, 27 Jan 2023 10:24:16 -0500", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Set arbitrary GUC options during initdb" }, { "msg_contents": "On Wed, Jan 25, 2023 at 4:26 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> So this invents an initdb switch \"-c NAME=VALUE\" just like the\n> one that the server itself has long had.\n\nHUGE +1 from me. This will, I think, be extremely convenient in many situations.\n\n> The specified settings\n> are applied on the command line of the initial probe calls\n> (which happen before we've made any config files), and then they\n> are added to postgresql.auto.conf, which causes them to take\n> effect for the bootstrap backend runs as well as subsequent\n> postmaster starts.\n\nI agree with others that it would seem more natural to edit them in\npostgresql.conf itself, but I also think it doesn't matter nearly as\nmuch as getting the feature in some form.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 27 Jan 2023 10:29:37 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Set arbitrary GUC options during initdb" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 25.01.23 22:25, Tom Lane wrote:\n>> The specified settings\n>> are applied on the command line of the initial probe calls\n>> (which happen before we've made any config files), and then they\n>> are added to postgresql.auto.conf, which causes them to take\n>> effect for the bootstrap backend runs as well as subsequent\n>> postmaster starts.\n\n> I would have expected them to be edited into postgresql.conf. 
What are \n> the arguments for one or the other?\n\nTBH, the driving reason was that the string-munging code we have in\ninitdb isn't up to snuff for that: it wants to substitute for an\nexactly-known string, which we won't have in the case of an\narbitrary GUC.\n\nOne idea if we want to make it work like that could be to stop\ntrying to edit out the default value, and instead make the file\ncontents look like, say,\n\n#huge_pages = try # on, off, or try\nhuge_pages = off # set by initdb\n\nThen we just need to be able to find the GUC's entry.\n\n> Btw., something that I have had in my notes for a while, but with this \n> it would now be officially exposed: Not all options can be safely set \n> during bootstrap. For example,\n> initdb -D data -c track_commit_timestamp=on\n> will fail an assertion. This might be an exception, or there might be \n> others.\n\nInteresting. We'd probably want to sprinkle some more\ndo-nothing-in-bootstrap-mode tests as we discover that sort of thing.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 27 Jan 2023 10:34:05 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Set arbitrary GUC options during initdb" }, { "msg_contents": "On Fri, Jan 27, 2023 at 10:34 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> One idea if we want to make it work like that could be to stop\n> trying to edit out the default value, and instead make the file\n> contents look like, say,\n>\n> #huge_pages = try # on, off, or try\n> huge_pages = off # set by initdb\n\nHow about just making replace_token() a little smarter, and maybe renaming it?\n\nThe idea is that instead of:\n\nreplace_token(conflines, \"#max_connections = 100\", repltok);\n\nYou'd write something like:\n\nreplace_guc_value(conflines, \"max_connections\", repltok);\n\nWhich would look for a line matching /^#max_connections\\s+=\\s/, and\nthen identify everything following that point up to the first #. 
It\nwould replace all that stuff with repltok, but if the replacement is\nshorter than the original, it would pad with spaces to get back to the\noriginal length. And otherwise it would add a single space, so that if\nyou set a super long GUC value there's still at least one space\nbetween the end of the value and the comment that follows.\n\nThere might be some quoting-related problems with this idea, not sure.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 27 Jan 2023 10:41:51 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Set arbitrary GUC options during initdb" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> The idea is that instead of:\n\n> replace_token(conflines, \"#max_connections = 100\", repltok);\n\n> You'd write something like:\n\n> replace_guc_value(conflines, \"max_connections\", repltok);\n\n> Which would look for a line matching /^#max_connections\\s+=\\s/, and\n> then identify everything following that point up to the first #. It\n> would replace all that stuff with repltok, but if the replacement is\n> shorter than the original, it would pad with spaces to get back to the\n> original length. And otherwise it would add a single space, so that if\n> you set a super long GUC value there's still at least one space\n> between the end of the value and the comment that follows.\n\nWell, yeah, I was trying to avoid writing that ;-). There's even\none more wrinkle: we might already have removed the initial '#',\nif one does say \"-c max_connections=N\", because this logic won't\nknow whether the -c switch matches one of initdb's predetermined\nsubstitutions.\n\n> There might be some quoting-related problems with this idea, not sure.\n\n'#' in a value might confuse it, but we could probably take the last '#'\nnot the first.\n\nAnyway, it seems like I gotta work harder. 
I'll produce a\nnew patch.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 27 Jan 2023 10:53:42 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Set arbitrary GUC options during initdb" }, { "msg_contents": "On Fri, Jan 27, 2023 at 8:53 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > The idea is that instead of:\n>\n> > replace_token(conflines, \"#max_connections = 100\", repltok);\n>\n> > You'd write something like:\n>\n> > replace_guc_value(conflines, \"max_connections\", repltok);\n>\n> > Which would look for a line matching /^#max_connections\\s+=\\s/, and\n> > then identify everything following that point up to the first #. It\n> > would replace all that stuff with repltok, but if the replacement is\n> > shorter than the original, it would pad with spaces to get back to the\n> > original length. And otherwise it would add a single space, so that if\n> > you set a super long GUC value there's still at least one space\n> > between the end of the value and the comment that follows.\n>\n> Well, yeah, I was trying to avoid writing that ;-). There's even\n> one more wrinkle: we might already have removed the initial '#',\n> if one does say \"-c max_connections=N\", because this logic won't\n> know whether the -c switch matches one of initdb's predetermined\n> substitutions.\n>\n> > There might be some quoting-related problems with this idea, not sure.\n>\n> '#' in a value might confuse it, but we could probably take the last '#'\n> not the first.\n>\n> Anyway, it seems like I gotta work harder. 
I'll produce a\n> new patch.\n>\n>\nHow about just adding a \"section\" to the end of the file as needed:\n\n# AdHoc Settings Specified During InitDB\nmax_connections=75\n...\n\nDavid J.\n", "msg_date": "Fri, 27 Jan 2023 09:01:51 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Set arbitrary GUC options during initdb" }, { "msg_contents": "\"David G. 
Johnston\" <david.g.johnston@gmail.com> writes:\n> On Fri, Jan 27, 2023 at 8:53 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Anyway, it seems like I gotta work harder. I'll produce a\n>> new patch.\n\n> How about just adding a \"section\" to the end of the file as needed:\n\n> # AdHoc Settings Specified During InitDB\n> max_connections=75\n> ...\n\nNah, I think that would be impossibly confusing. One way or another\nthe live setting has to be near where the GUC is documented.\n\nWe will have to do add-at-the-end for custom GUCs, of course,\nbut in that case there's no matching comment to confuse you.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 27 Jan 2023 11:35:48 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Set arbitrary GUC options during initdb" }, { "msg_contents": "I wrote:\n>>> Anyway, it seems like I gotta work harder. I'll produce a\n>>> new patch.\n\nThe string-hacking was fully as tedious as I expected. However, the\noutput looks pretty nice, and this does have the advantage that the\npre-programmed substitutions become a lot more robust: they are no\nlonger dependent on the initdb code exactly matching what is in\npostgresql.conf.sample.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 27 Jan 2023 15:02:04 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Set arbitrary GUC options during initdb" }, { "msg_contents": "On Fri, Jan 27, 2023 at 3:02 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The string-hacking was fully as tedious as I expected. However, the\n> output looks pretty nice, and this does have the advantage that the\n> pre-programmed substitutions become a lot more robust: they are no\n> longer dependent on the initdb code exactly matching what is in\n> postgresql.conf.sample.\n\nAwesome! 
Thank you very much.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 27 Jan 2023 15:05:20 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Set arbitrary GUC options during initdb" }, { "msg_contents": "On 27.01.23 21:02, Tom Lane wrote:\n> I wrote:\n>>>> Anyway, it seems like I gotta work harder. I'll produce a\n>>>> new patch.\n> \n> The string-hacking was fully as tedious as I expected. However, the\n> output looks pretty nice, and this does have the advantage that the\n> pre-programmed substitutions become a lot more robust: they are no\n> longer dependent on the initdb code exactly matching what is in\n> postgresql.conf.sample.\n\nThis patch looks good to me. It's a very nice simplification of the \ninitdb.c code, even without the new feature.\n\nI found that the addition of\n\n#include <ctype.h>\n\ndidn't appear to be necessary. Maybe it was required before \nguc_value_requires_quotes() was changed?\n\nI would remove the\n\n#if DEF_PGPORT != 5432\n\nThis was in the previous code too, but now if we remove it, then we \ndon't have any more hardcoded 5432 left, which seems like a nice \nimprovement in cleanliness.\n\n\n\n", "msg_date": "Wed, 22 Mar 2023 07:45:18 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Set arbitrary GUC options during initdb" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> This patch looks good to me. It's a very nice simplification of the \n> initdb.c code, even without the new feature.\n\nThanks for looking!\n\n> I found that the addition of\n> #include <ctype.h>\n> didn't appear to be necessary. Maybe it was required before \n> guc_value_requires_quotes() was changed?\n\nThere's still an isspace() added by the patch ... oh, the #include\nis not needed because port.h includes ctype.h. 
That's spectacularly\nawful from an include-footprint standpoint, but I can't say that\nI want to go fix it right this minute.\n\n> I would remove the\n> #if DEF_PGPORT != 5432\n> This was in the previous code too, but now if we remove it, then we \n> don't have any more hardcoded 5432 left, which seems like a nice \n> improvement in cleanliness.\n\nHm. That'll waste a few cycles during initdb; not sure if the extra\ncleanliness is worth it. It's not like that number is going to change.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 22 Mar 2023 13:04:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Set arbitrary GUC options during initdb" }, { "msg_contents": "I wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> I would remove the\n>> #if DEF_PGPORT != 5432\n>> This was in the previous code too, but now if we remove it, then we \n>> don't have any more hardcoded 5432 left, which seems like a nice \n>> improvement in cleanliness.\n\n> Hm. That'll waste a few cycles during initdb; not sure if the extra\n> cleanliness is worth it. It's not like that number is going to change.\n\nAfter further thought I did it as you suggest. I think the only case\nwhere we really care about shaving milliseconds from initdb is in debug\nbuilds (e.g. buildfarm), which very likely get built with nondefault\nDEF_PGPORT anyway.\n\nI did get a bee in my bonnet about how replace_token (and now\nreplace_guc_value) leak memory like there's no tomorrow. 
The leakage\namounts to about a megabyte per run according to valgrind, and it's\nnot going anywhere but up as we add more calls of those functions.\nSo I made a quick change to redefine them in a less leak-prone way.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 22 Mar 2023 14:33:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Set arbitrary GUC options during initdb" }, { "msg_contents": "Hi,\n\nThis commit unfortunately broke --wal-segsize. If I use a slightly larger than\nthe default setting, I get:\n\ninitdb --wal-segsize 64 somepath\nrunning bootstrap script ... 2023-03-22 13:06:41.282 PDT [639848] FATAL: \"min_wal_size\" must be at least twice \"wal_segment_size\"\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 22 Mar 2023 13:07:51 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Set arbitrary GUC options during initdb" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> This commit unfortunately broke --wal-segsize. If I use a slightly larger than\n> the default setting, I get:\n> initdb --wal-segsize 64 somepath\n> running bootstrap script ... 2023-03-22 13:06:41.282 PDT [639848] FATAL: \"min_wal_size\" must be at least twice \"wal_segment_size\"\n\n[ confused... ] Oh, I see the problem. 
This:\n\n\t/* set default max_wal_size and min_wal_size */\n\tsnprintf(repltok, sizeof(repltok), \"min_wal_size = %s\",\n\t\t\t pretty_wal_size(DEFAULT_MIN_WAL_SEGS));\n\tconflines = replace_token(conflines, \"#min_wal_size = 80MB\", repltok);\n\nlooks like it's setting a compile-time-constant value of min_wal_size;\nat least that's what I thought it was doing when I revised the code.\nBut it isn't, because somebody had the brilliant idea of making\npretty_wal_size() depend on the wal_segment_size_mb variable.\n\nWill fix, thanks for report.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 22 Mar 2023 16:29:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Set arbitrary GUC options during initdb" }, { "msg_contents": "On 27.01.23 15:48, Peter Eisentraut wrote:\n> Btw., something that I have had in my notes for a while, but with this \n> it would now be officially exposed:  Not all options can be safely set \n> during bootstrap.  For example,\n> \n>     initdb -D data -c track_commit_timestamp=on\n> \n> will fail an assertion.  This might be an exception, or there might be \n> others.\n\nI ran a test across all changeable boolean parameters with initdb \nsetting it to the opposite of their default. The only one besides \ntrack_commit_timestamp that caused initdb to not complete was \ndefault_transaction_read_only, which is to be expected.\n\nWe should fix track_commit_timestamp, but it doesn't look like there is \nwider impact. (Obviously, this tested only boolean settings. If \nsomeone wants to fuzz-test the others ...)\n\n\n\n", "msg_date": "Fri, 14 Apr 2023 10:29:53 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Set arbitrary GUC options during initdb" } ]
[ { "msg_contents": "Doing some work with extended query protocol, I encountered the same\r\nissue that was discussed in [1]. It appears when a client is using\r\nextended query protocol and sends an Execute message to a portal with\r\nmax_rows, and a portal is executed multiple times,\r\npg_stat_statements does not correctly track rows and calls.\r\n\r\nUsing the attached jdbc script, TEST.java, which can reproduce the issue\r\nwith setFetchSize of 100 with autocommit mode set to OFF. We can\r\nsee that although pg_class has 414 rows, the total call and\r\nrows returned is 14. the first 4 * 100 fetches did not get accounted for,.\r\n\r\npostgres=# select calls, rows, query from pg_stat_statements\r\npostgres-# where queryid = '-1905758228217333571';\r\ncalls | rows | query\r\n---------------------------------\r\n1 | 14 | select * from pg_class\r\n(1 row)\r\n\r\nThe execution work flow goes something like this:\r\nExecutorStart\r\nExecutorRun – which will be called multiple times to fetch from the\r\n portal until the caller Closes the portal or the portal\r\n runs out of rows.\r\nExecutorFinish\r\nExecutorEnd – portal is closed & pg_stat_statements stores the final rows processed\r\n\r\nWhere this breaks for pg_stat_statements is during ExecutorRun,\r\nes_processed is reset to 0 every iteration. So by the time the portal\r\nis closed, es_processed will only show the total from the last execute\r\nmessage.\r\n\r\nThis appears to be only an issue for portals fetched\r\nthrough extended query protocol and not explicit cursors\r\nthat go through simple query protocol (i.e. FETCH <cursor>)\r\n\r\nI attached a JDBC script to repro the issue.\r\n\r\nOne potential fix I see is to introduce 2 new counters in the\r\nExecutionState which will track the total rows processed\r\nand the number of calls. These counters can then be used\r\nby pg_stat_statements. 
Attached is an experimental patch\r\nwhich shows the correct number of rows and number of\r\ncalls.\r\n\r\npostgres=# select calls, rows, query from pg_stat_statements\r\npostgres-# where queryid = '-1905758228217333571';\r\ncalls | rows | query\r\n---------------------------------\r\n5 | 414 | select * from pg_class\r\n(1 row)\r\n\r\n[1] https://www.postgresql.org/message-id/flat/c90890e7-9c89-c34f-d3c5-d5c763a34bd8%40dunslane.net\r\n\r\nThanks\r\n\r\n–\r\nSami Imseih\r\nAmazon Web Services (AWS)", "msg_date": "Wed, 25 Jan 2023 23:22:04 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "[BUG] pg_stat_statements and extended query protocol" }, { "msg_contents": "On Wed, Jan 25, 2023 at 11:22:04PM +0000, Imseih (AWS), Sami wrote:\n> Doing some work with extended query protocol, I encountered the same\n> issue that was discussed in [1]. It appears when a client is using\n> extended query protocol and sends an Execute message to a portal with\n> max_rows, and a portal is executed multiple times,\n> pg_stat_statements does not correctly track rows and calls.\n\nWell, it is one of these areas where it seems to me we have never been\nable to put a definition on what should be the correct behavior when\nit comes to pg_stat_statements. Could it be possible to add some\nregression tests using the recently-added \\bind command and see how\nthis affects things? I would suggest splitting these into their own\nSQL file, following an effort I have been doing recently for the\nregression tests of pg_stat_statements. 
It would be good to know the\neffects of this change for pg_stat_statements.track = (top|all), as\nwell.\n\n@@ -657,7 +657,9 @@ typedef struct EState\n \n List *es_tupleTable; /* List of TupleTableSlots */\n \n- uint64 es_processed; /* # of tuples processed */\n+ uint64 es_processed; /* # of tuples processed at the top level only */\n+ uint64 es_calls; /* # of calls */\n+ uint64 es_total_processed; /* total # of tuples processed */\n\nSo the root of the logic is here. Anything that makes the executor\nstructures larger freaks me out, FWIW, and that's quite an addition.\n--\nMichael", "msg_date": "Thu, 2 Mar 2023 16:27:39 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [BUG] pg_stat_statements and extended query protocol" }, { "msg_contents": "> Well, it is one of these areas where it seems to me we have never been\r\n> able to put a definition on what should be the correct behavior when\r\n> it comes to pg_stat_statements. \r\n\r\nWhat needs to be defined here is how pgss should account for # of rows\r\nprocessed when A) a select goes through extended query (EP) protocol, and \r\nB) it requires multiple executes to complete a portal.\r\n\r\nThe patch being suggested will treat every 'E' ( execute message ) to the same\r\nportal as a new call ( pgss will increment the calls ) and the number of rows\r\nprocessed will be accumulated for every 'E' message.\r\n\r\nCurrently, only the rows fetched in the last 'E' call to the portal is tracked by \r\npgss. This is incorrect.\r\n\r\n> Could it be possible to add some\r\n> regression tests using the recently-added \\bind command and see how\r\n> this affects things? \r\n\r\n\\bind alone will not be enough as we also need a way to fetch from\r\na portal in batches. 
The code that needs to be exercised\r\nas part of the test is exec_execute_message with max_rows != 0.\r\n\r\n\\bind will call exec_execute_message with max_rows = 0 to fetch\r\nall the rows.\r\n\r\n> I would suggest splitting these into their own\r\n> SQL file, following an effort I have been doing recently for the\r\n> regression tests of pg_stat_statements. It would be good to know the\r\n> effects of this change for pg_stat_statements.track = (top|all), as\r\n> well.\r\n\r\nYes, I agree that proper test coverage is needed here. Will think\r\nabout how to accomplish this.\r\n\r\n> - uint64 es_processed; /* # of tuples processed */\r\n> + uint64 es_processed; /* # of tuples processed at the top level only */\r\n> + uint64 es_calls; /* # of calls */\r\n> + uint64 es_total_processed; /* total # of tuples processed */\r\n\r\n\r\n> So the root of the logic is here. Anything that makes the executor\r\n> structures larger freaks me out, FWIW, and that's quite an addition.\r\n\r\nI am not sure how to get around the changes to EState and fixing this\r\nissue. \r\n\r\nWe could potentially only need the es_total_processed field and\r\ncontinue to track calls in pgss. \r\n\r\nes_total_processed in EState however is still needed.\r\n\r\nRegards,\r\n\r\n--\r\n\r\nSami Imseih\r\nAmazon Web Services (AWS)\r\n\r\n\r\n\r\n", "msg_date": "Mon, 6 Mar 2023 14:19:46 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: [BUG] pg_stat_statements and extended query protocol" }, { "msg_contents": "\n> Yes, I agree that proper test coverage is needed here. Will think\n> about how to accomplish this.\n\nTried to apply this patch to current master branch and the build was ok; \nhowever, it crashed during initdb with a message like below.\n\n\"performing post-bootstrap initialization ... 
Segmentation fault (core \ndumped)\"\n\nIf I remove this patch and recompile again, then \"initdb -D $PGDATA\" works.\n\nThanks,\n\nDavid\n\n\n\n", "msg_date": "Fri, 10 Mar 2023 12:58:23 -0800", "msg_from": "David Zhang <david.zhang@highgo.ca>", "msg_from_op": false, "msg_subject": "Re: [BUG] pg_stat_statements and extended query protocol" }, { "msg_contents": "> If I remove this patch and recompile again, then \"initdb -D $PGDATA\" works.\r\n\r\nIt appears you must \"make clean; make install\" to correctly compile after\r\napplying the patch.\r\n\r\nRegards,\r\n\r\nSami Imseih\r\nAmazon Web Services (AWS)\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n", "msg_date": "Sat, 11 Mar 2023 23:55:22 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: [BUG] pg_stat_statements and extended query protocol" }, { "msg_contents": "On Sat, Mar 11, 2023 at 11:55:22PM +0000, Imseih (AWS), Sami wrote:\n> It appears you must \"make clean; make install\" to correctly compile after\n> applying the patch.\n\nIn a git repository, I've learnt to rely on this simple formula, even\nif it means extra cycles when running ./configure:\ngit clean -d -x -f\n--\nMichael", "msg_date": "Sun, 12 Mar 2023 11:57:36 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [BUG] pg_stat_statements and extended query protocol" }, { "msg_contents": "Hi,\n\nOn 3/2/23 8:27 AM, Michael Paquier wrote:\n> On Wed, Jan 25, 2023 at 11:22:04PM +0000, Imseih (AWS), Sami wrote:\n>> Doing some work with extended query protocol, I encountered the same\n>> issue that was discussed in [1]. 
It appears when a client is using\n>> extended query protocol and sends an Execute message to a portal with\n>> max_rows, and a portal is executed multiple times,\n>> pg_stat_statements does not correctly track rows and calls.\n> \n> Well, it is one of these areas where it seems to me we have never been\n> able to put a definition on what should be the correct behavior when\n> it comes to pg_stat_statements. Could it be possible to add some\n> regression tests using the recently-added \\bind command and see how\n> this affects things? I would suggest splitting these into their own\n> SQL file, following an effort I have been doing recently for the\n> regression tests of pg_stat_statements. It would be good to know the\n> effects of this change for pg_stat_statements.track = (top|all), as\n> well.\n> \n> @@ -657,7 +657,9 @@ typedef struct EState\n> \n> List *es_tupleTable; /* List of TupleTableSlots */\n> \n> - uint64 es_processed; /* # of tuples processed */\n> + uint64 es_processed; /* # of tuples processed at the top level only */\n> + uint64 es_calls; /* # of calls */\n> + uint64 es_total_processed; /* total # of tuples processed */\n> \n> So the root of the logic is here. Anything that makes the executor\n> structures larger freaks me out, FWIW, and that's quite an addition.\n> --\n> Michael\n\nI wonder if we can't \"just\" make use of the \"count\" parameter passed to the\nExecutorRun_hook.\n\nSomething like this?\n\n- Increment an \"es_total_processed\" counter in pgss based on the count received in pgss_ExecutorRun()\n- In pgss_ExecutorEnd(): subtract the last count we received in pgss_ExecutorRun() and add queryDesc->estate->es_processed? 
(we'd\nneed to be able to distinguish when we should apply this rule or not).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 13 Mar 2023 10:54:16 +0100", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] pg_stat_statements and extended query protocol" }, { "msg_contents": ">> It appears you must \"make clean; make install\" to correctly compile after\n>> applying the patch.\n> In a git repository, I've learnt to rely on this simple formula, even\n> if it means extra cycles when running ./configure:\n> git clean -d -x -f\n>\nThank you all for pointing out that it needs make clean first. After \nmake clean followed by recompile with the patch then both make check \nfrom regression test and pg_stat_statements extension report all test \npassed. So the current existing test cases can't really detect any \nchange from this patch, then it would be better to add some test cases \nto cover this.\n\nBest regards,\n\nDavid\n\n\n\n\n", "msg_date": "Mon, 13 Mar 2023 14:30:55 -0700", "msg_from": "David Zhang <david.zhang@highgo.ca>", "msg_from_op": false, "msg_subject": "Re: [BUG] pg_stat_statements and extended query protocol" }, { "msg_contents": "Sorry about the delay in response about this.\r\n\r\nI was thinking about this and it seems to me we can avoid\r\nadding new fields to Estate. 
I think a better place to track\r\nrows and calls is in the Instrumentation struct.\r\n\r\n--- a/src/include/executor/instrument.h\r\n+++ b/src/include/executor/instrument.h\r\n@@ -88,6 +88,8 @@ typedef struct Instrumentation\r\n double nfiltered2; /* # of tuples removed by \"other\" quals */\r\n BufferUsage bufusage; /* total buffer usage */\r\n WalUsage walusage; /* total WAL usage */\r\n+ int64 calls;\r\n+ int64 rows_processed;\r\n } Instrumentation; \r\n\r\n\r\nIf this is more palatable, I can prepare the patch.\r\n\r\nThanks for your feedback.\r\n\r\nRegards.\r\n\r\nSami Imseih\r\nAmazon Web Services (AWS)\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n", "msg_date": "Mon, 20 Mar 2023 21:41:12 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: [BUG] pg_stat_statements and extended query protocol" }, { "msg_contents": "On Mon, Mar 20, 2023 at 09:41:12PM +0000, Imseih (AWS), Sami wrote:\n> I was thinking about this and it seems to me we can avoid\n> adding new fields to Estate. I think a better place to track\n> rows and calls is in the Instrumentation struct.\n> \n> If this is more palatable, I can prepare the patch.\n\nThis indeed feels a bit more natural seen from here, after looking at\nthe code paths using an Instrumentation in the executor and explain,\nfor example. At least, this stresses me much less than adding 16\nbytes to EState for something restricted to the extended protocol when\nit comes to monitoring capabilities.\n--\nMichael", "msg_date": "Tue, 21 Mar 2023 10:04:46 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [BUG] pg_stat_statements and extended query protocol" }, { "msg_contents": "> This indeed feels a bit more natural seen from here, after looking at\r\n> the code paths using an Instrumentation in the executor and explain,\r\n> for example. 
At least, this stresses me much less than adding 16\r\n> bytes to EState for something restricted to the extended protocol when\r\n> it comes to monitoring capabilities.\r\n\r\nAttached is the patch that uses Instrumentation. \r\n\r\nI did not add any new tests, and we do not have any way now\r\nof setting a row count when going through the Execute message.\r\nI think this may need to be addressed separately since there\r\nseems to be a gap in extended query protocol testing.\r\n\r\nFor this fix, however, the JDBC test does show correct results.\r\n\r\n\r\nRegards,\r\n\r\nSami Imseih\r\nAmazon Web Services (AWS)", "msg_date": "Tue, 21 Mar 2023 13:16:29 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: [BUG] pg_stat_statements and extended query protocol" }, { "msg_contents": "Hi,\n\nOn 3/21/23 2:16 PM, Imseih (AWS), Sami wrote:\n>> This indeed feels a bit more natural seen from here, after looking at\n>> the code paths using an Instrumentation in the executor and explain,\n>> for example. 
That seems more appropriate to me (even if\nqueryDesc->totaltime->calls will be passed (which is int64), but that's already\nalso the case for the \"rows\" argument and queryDesc->totaltime->rows_processed)\n\n@@ -88,6 +88,8 @@ typedef struct Instrumentation\n double nfiltered2; /* # of tuples removed by \"other\" quals */\n BufferUsage bufusage; /* total buffer usage */\n WalUsage walusage; /* total WAL usage */\n+ int64 calls; /* # of total calls to ExecutorRun */\n+ int64 rows_processed; /* # of total rows processed in ExecutorRun */\n\nI'm not sure it's worth mentioning that the new counters are \"currently\" used with the ExecutorRun.\n\nWhat about just \"total calls\" and \"total rows processed\" (or \"total rows\", see below)?\n\nAlso, I wonder if \"rows\" (and not rows_processed) would not be a better naming.\n\nThose last comments regarding the Instrumentation are done because ISTM that at the end their usage\ncould vary depending of the use case of the Instrumentation.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 22 Mar 2023 12:04:23 +0100", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] pg_stat_statements and extended query protocol" }, { "msg_contents": "> What about using an uint64 for calls? That seems more appropriate to me (even if\r\n> queryDesc->totaltime->calls will be passed (which is int64), but that's already\r\n> also the case for the \"rows\" argument and queryDesc->totaltime->rows_processed)\r\n\r\nThat's fair\r\n\r\n\r\n> I'm not sure it's worth mentioning that the new counters are \"currently\" used with the ExecutorRun.\r\n\r\nSure, I suppose these fields could be used outside of ExecutorRun. 
Good point.\r\n\r\n\r\n> Also, I wonder if \"rows\" (and not rows_processed) would not be a better naming.\r\n\r\nAgree.\r\n\r\nI went with rows_processed initially, since it was accumulating es_processed,\r\nbut as the previous point, this instrumentation could be used outside of\r\nExecutorRun.\r\n\r\nv3 addresses the comments.\r\n\r\n\r\nRegards,\r\n\r\n\r\n--\r\nSami Imseih\r\nAmazon Web Services (AWS)", "msg_date": "Wed, 22 Mar 2023 21:35:23 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: [BUG] pg_stat_statements and extended query protocol" }, { "msg_contents": "Hi,\n\nOn 3/22/23 10:35 PM, Imseih (AWS), Sami wrote:\n>> What about using an uint64 for calls? That seems more appropriate to me (even if\n>> queryDesc->totaltime->calls will be passed (which is int64), but that's already\n>> also the case for the \"rows\" argument and queryDesc->totaltime->rows_processed)\n> \n> That's fair\n> \n> \n>> I'm not sure it's worth mentioning that the new counters are \"currently\" used with the ExecutorRun.\n> \n> Sure, I suppose these fields could be used outside of ExecutorRun. Good point.\n> \n> \n>> Also, I wonder if \"rows\" (and not rows_processed) would not be a better naming.\n> \n> Agree.\n> \n> I went with rows_processed initially, since it was accumulating es_processed,\n> but as the previous point, this instrumentation could be used outside of\n> ExecutorRun.\n> \n> v3 addresses the comments.\n> \n\nThanks! 
LGTM and also do confirm that, with the patch, the JDBC test does show the correct results.\n\nThat said, not having a test (for the reasons you explained up-thread) associated with the patch worries me a bit.\n\nBut, I'm tempted to say that adding new tests could be addressed separately though (as this patch looks pretty straightforward).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 23 Mar 2023 09:33:16 +0100", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] pg_stat_statements and extended query protocol" }, { "msg_contents": "On Thu, Mar 23, 2023 at 09:33:16AM +0100, Drouvot, Bertrand wrote:\n> Thanks! LGTM and also do confirm that, with the patch, the JDBC test\n> does show the correct results.\n\nHow does JDBC test that? Does it have a dependency on\npg_stat_statements?\n> \n> That said, not having a test (for the reasons you explained\n> up-thread) associated with the patch worries me a bit.\n\nSame impression here.\n\n> But, I'm tempted to say that adding new tests could be addressed\n> separately though (as this patch looks pretty straightforward).\n\nEven small patches can have gotchas. I think that this should have\ntests in-core rather than just depend on JDBC and hope for the best.\nEven if \\bind does not allow that, we could use an approach similar to\nlibpq_pipeline, for example, depending on pg_stat_statements for the\nvalidation with a test module in src/test/modules/?\n--\nMichael", "msg_date": "Thu, 23 Mar 2023 18:21:21 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [BUG] pg_stat_statements and extended query protocol" }, { "msg_contents": "> How does JDBC test that? 
Does it have a dependency on\r\n> pg_stat_statements?\r\n\r\nNo, at the start of the thread, a sample JDBC script was attached.\r\nBut I agree, we need to add test coverage. See below.\r\n\r\n\r\n>> But, I'm tempted to say that adding new tests could be addressed\r\n>> separately though (as this patch looks pretty straightforward).\r\n\r\n\r\n> Even small patches can have gotchas. I think that this should have\r\n> tests in-core rather than just depend on JDBC and hope for the best.\r\n> Even if \\bind does not allow that, we could use an approach similar to\r\n> libpq_pipeline, for example, depending on pg_stat_statements for the\r\n> validation with a test module in src/test/modules/?\r\n\r\nYes, that is possible but we will need to add a libpq API\r\nthat allows the caller to pass in a \"fetch size\".\r\nPQsendQueryParams does not take in a \"fetch size\",\r\nso it always returns all rows:\r\n\r\nhttps://github.com/postgres/postgres/blob/master/src/interfaces/libpq/fe-exec.c#L1882\r\n\r\nAdding such an API that takes in a \"fetch size\" will be beneficial \r\nnot just for this test, but I can see it enabling another psql meta-command,\r\nsimilar to \\bind but that takes in a \"fetch size\".\r\n\r\nRegards\r\n\r\n--\r\nSami Imseih\r\nAmazon Web Services (AWS)\r\n\r\n\r\n\r\n", "msg_date": "Thu, 23 Mar 2023 13:54:05 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: [BUG] pg_stat_statements and extended query protocol" }, { "msg_contents": "On Thu, 23 Mar 2023 09:33:16 +0100\n\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com> wrote:\n\n> Hi,\n> \n> On 3/22/23 10:35 PM, Imseih (AWS), Sami wrote:\n> >> What about using a uint64 for calls? 
That seems more appropriate to me (even if\n> >> queryDesc->totaltime->calls will be passed (which is int64), but that's already\n> >> also the case for the \"rows\" argument and queryDesc->totaltime->rows_processed)\n> > \n> > That's fair\n> > \n> > \n> >> I'm not sure it's worth mentioning that the new counters are \"currently\" used with the ExecutorRun.\n> > \n> > Sure, I suppose these fields could be used outside of ExecutorRun. Good point.\n> > \n> > \n> >> Also, I wonder if \"rows\" (and not rows_processed) would not be a better naming.\n> > \n> > Agree.\n> > \n> > I went with rows_processed initially, since it was accumulating es_processed,\n> > but as the previous point, this instrumentation could be used outside of\n> > ExecutorRun.\n> > \n> > v3 addresses the comments.\n\nI would note that this patch changes the meaning of \"calls\" in the pg_stat_statements\nview a bit; previously it was \"Number of times the statement was executed\" as\ndescribed in the documentation, but currently this means \"Number of times the\nportal was executed\". I'm worried that this makes users confused. For example,\na user may think the average number of rows returned by a statement is given by\nrows/calls, but it is not always correct because some statements could be executed\nwith multiple portal runs. \n\nAlthough it might not be a big issue to users, I think it is better to add an explanation\nto the doc for clarification.\n\nRegards,\nYugo Nagata\n\n> > \n> \n> Thanks! 
LGTM and also do confirm that, with the patch, the JDBC test does show the correct results.\n> \n> That said, not having a test (for the reasons you explained up-thread) associated with the patch worry me a bit.\n> \n> But, I'm tempted to say that adding new tests could be addressed separately though (as this patch looks pretty straightforward).\n> \n> Regards,\n> \n> -- \n> Bertrand Drouvot\n> PostgreSQL Contributors Team\n> RDS Open Source Databases\n> Amazon Web Services: https://aws.amazon.com\n> \n> \n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Fri, 24 Mar 2023 12:21:44 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: [BUG] pg_stat_statements and extended query protocol" }, { "msg_contents": "> I wonder that this patch changes the meaning of \"calls\" in the pg_stat_statement\r\n> view a bit; previously it was \"Number of times the statement was executed\" as\r\n> described in the documentation, but currently this means \"Number of times the\r\n> portal was executed\". I'm worried that this makes users confused. 
For example,\r\n> a user may think the average numbers of rows returned by a statement is given by\r\n> rows/calls, but it is not always correct because some statements could be executed\r\n> with multiple portal runs.\r\n\r\nI don't think it changes the meaning of \"calls\" in pg_stat_statements, since every\r\ntime the app fetches X amount of rows from a portal, it's still done in a separate\r\nexecution, and thus a separate call.\r\n\r\nI agree, the meaning of \"calls\" should be clarified in docs.\r\n\r\n\r\nRegards,\r\n\r\nSami Imseih\r\nAmazon Web Services (AWS)\r\n\r\n", "msg_date": "Fri, 24 Mar 2023 13:32:43 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: [BUG] pg_stat_statements and extended query protocol" }, { "msg_contents": "On Thu, Mar 23, 2023 at 01:54:05PM +0000, Imseih (AWS), Sami wrote:\n> Yes, that is possible but we will need to add a libpq API\n> that allows the caller to pass in a \"fetch size\".\n> PQsendQueryParams does not take in a \"fetch size\", \n> so it returns all rows, through PQsendQueryParams\n> \n> https://github.com/postgres/postgres/blob/master/src/interfaces/libpq/fe-exec.c#L1882\n> \n> Adding such an API that takes in a \"fetch size\" will be beneficial \n> not just for this test, but I can see it enabling another psql meta command,\n> similar to \\bind but that takes in a \"fetch size\".\n\nSo... The idea here is to set a custom fetch size so as the number of\ncalls can be deterministic in the tests, still more than 1 for the\ntests we'd have. 
And your point is that libpq enforces always 0 when\nsending the EXECUTE message causing it to always return all the rows\nfor any caller of PQsendQueryGuts().\n\nThe extended protocol allows that, so you would like a libpq API to\nhave more control of what we send with EXECUTE:\nhttps://www.postgresql.org/docs/current/protocol-overview.html#PROTOCOL-QUERY-CONCEPTS\n\nThe extended query protocol would require multiple 'E' messages, but\nwe would not need multiple describe or bind messages, meaning that\nthis cannot just be an extra flavor of PQsendQueryParams(). Am I getting\nthat right? The correct API design seems tricky, to say the least.\nPerhaps requiring this much extra work in libpq for the purpose of\nhaving some tests in this thread is not a brilliant idea.. Or perhaps\nwe could just do it and have something a-la-JDBC with two routines?\nThat would be one libpq routine for describe/bind and one for execute\nwhere the limit can be given by the caller in the latter case, similar\nto sendDescribeStatement() and sendExecute() in\nQueryExecutorImpl.java.\n--\nMichael", "msg_date": "Wed, 29 Mar 2023 15:59:20 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [BUG] pg_stat_statements and extended query protocol" }, { "msg_contents": "> So... The idea here is to set a custom fetch size so as the number of\r\n> calls can be deterministic in the tests, still more than 1 for the\r\n> tests we'd have. 
And your point is that libpq enforces always 0 when\r\n> sending the EXECUTE message causing it to always return all the rows\r\n> for any caller of PQsendQueryGuts().\r\n\r\nThat is correct.\r\n\r\n> The extended protocol allows that, so you would like a libpq API to\r\n> have more control of what we send with EXECUTE:\r\n> https://www.postgresql.org/docs/current/protocol-overview.html#PROTOCOL-QUERY-CONCEPTS\r\n\r\n\r\n> The extended query protocol would require multiple 'E' messages, but\r\n> we would not need multiple describe or bind messages, meaning that\r\n> this cannot just be an extra flavor of PQsendQueryParams(). Am I getting\r\n> that right? \r\n\r\nCorrect, there will need to be separate APIs for Parse/Bind, Execute\r\nand Close of a Portal.\r\n\r\n\r\n> The correct API design seems tricky, to say the least.\r\n> Perhaps requiring this much extra work in libpq for the purpose of\r\n> having some tests in this thread is not a brilliant idea.. Or perhaps\r\n> we could just do it and have something a-la-JDBC with two routines?\r\n> That would be one libpq routine for describe/bind and one for execute\r\n> where the limit can be given by the caller in the latter case, similar\r\n> to sendDescribeStatement() and sendExecute() in\r\n> QueryExecutorImpl.java.\r\n\r\nI am not too clear on your point here. ISTM you are suggesting adding\r\nnew libpq APIs similar to JDBC, which is what I am also suggesting.\r\n\r\nDid I understand correctly?\r\n\r\n\r\n--\r\nSami Imseih\r\nAmazon Web Services (AWS)\r\n\r\n\r\n\r\n", "msg_date": "Fri, 31 Mar 2023 18:06:46 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: [BUG] pg_stat_statements and extended query protocol" }, { "msg_contents": "\"Imseih (AWS), Sami\" <simseih@amazon.com> writes:\n>> So... The idea here is to set a custom fetch size so as the number of\n>> calls can be deterministic in the tests, still more than 1 for the\n>> tests we'd have. 
And your point is that libpq enforces always 0 when\n>> sending the EXECUTE message causing it to always return all the rows\n>> for any caller of PQsendQueryGuts().\n\n> That is correct.\n\nHi, I took a quick look through this thread, and I have a couple of\nthoughts:\n\n* Yeah, it'd be nice to have an in-core test, but it's folly to insist\non one that works via libpq and psql. That requires a whole new set\nof features that you're apparently designing on-the-fly with no other\nuse cases in mind. I don't think that will accomplish much except to\nensure that this bug fix doesn't make it into v16.\n\n* I don't understand why it was thought good to add two new counters\nto struct Instrumentation. In EXPLAIN ANALYZE cases those will be\nwasted space *per plan node*, not per Query.\n\n* It also seems quite bizarre to add counters to a core data structure\nand then leave it to pg_stat_statements to maintain them. (BTW, I didn't\nmuch care for putting that maintenance into pgss_ExecutorRun without\nupdating its header comment.)\n\nI'm inclined to think that adding the counters to struct EState is\nfine. That's 304 bytes already on 64-bit platforms, another 8 or 16\nwon't matter.\n\nAlso, I'm doubtful that counting calls this way is a great idea,\nwhich would mean you only need one new counter field not two. 
The\nfact that you're having trouble defining what it means certainly\nsuggests that the implementation is out front of the design.\n\nIn short, what I think I'd suggest is adding an es_total_processed\nfield to EState and having standard_ExecutorRun do \"es_total_processed\n+= es_processed\" near the end, then just change pg_stat_statements to\nuse es_total_processed not es_processed.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 03 Apr 2023 12:43:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [BUG] pg_stat_statements and extended query protocol" }, { "msg_contents": "> * Yeah, it'd be nice to have an in-core test, but it's folly to insist\r\n> on one that works via libpq and psql. That requires a whole new set\r\n> of features that you're apparently designing on-the-fly with no other\r\n> use cases in mind. I don't think that will accomplish much except to\r\n> ensure that this bug fix doesn't make it into v16.\r\n\r\nI agree; solving the lack of in-core testing for this area belongs in a\r\ndifferent discussion. \r\n\r\n\r\n> * I don't understand why it was thought good to add two new counters\r\n> to struct Instrumentation. In EXPLAIN ANALYZE cases those will be\r\n> wasted space *per plan node*, not per Query.\r\n\r\nIndeed, looking at ExplainNode, the Instrumentation struct is allocated\r\nper node and the new fields will be wasted space. Thanks for highlighting\r\nthis.\r\n\r\n> * It also seems quite bizarre to add counters to a core data structure\r\n> and then leave it to pg_stat_statements to maintain them. \r\n\r\nThat is a fair point.\r\n\r\n> I'm inclined to think that adding the counters to struct EState is\r\n> fine. That's 304 bytes already on 64-bit platforms, another 8 or 16\r\n> won't matter.\r\n\r\nWith the point you raise about Instrumentation per node, EState \r\nis the better place for the new counters.\r\n\r\n\r\n> Also, I'm doubtful that counting calls this way is a great idea,\r\n> which would mean you only need one new counter field not two. The\r\n> fact that you're having trouble defining what it means certainly\r\n> suggests that the implementation is out front of the design.\r\n\r\nISTM you are not in agreement that a call count should be incremented \r\nafter every ExecutorRun, but should only be incremented after \r\nthe portal is closed, at ExecutorEnd. Is that correct?\r\n\r\nFWIW, the rationale for incrementing calls in ExecutorRun is that calls refers \r\nto the number of times a client executes a portal, whether partially or to completion.\r\n\r\nClients can also fetch rows from portals at various rates, so to determine the\r\n\"rows per call\" accurately from pg_stat_statements, we should track calls as \r\nthe number of times ExecutorRun was called on a portal.\r\n\r\n> In short, what I think I'd suggest is adding an es_total_processed\r\n> field to EState and having standard_ExecutorRun do \"es_total_processed\r\n> += es_processed\" near the end, then just change pg_stat_statements to\r\n> use es_total_processed not es_processed.\r\n\r\nThe original proposal in 0001-correct-pg_stat_statements-tracking-of-portals.patch\r\nwas to add a \"calls\" and \"es_total_processed\" field to EState.\r\n\r\n\r\nRegards,\r\n\r\nSami Imseih\r\nAmazon Web Services (AWS)\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n", "msg_date": "Tue, 4 Apr 2023 02:19:46 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: [BUG] pg_stat_statements and extended query protocol" }, { "msg_contents": "\"Imseih (AWS), Sami\" <simseih@amazon.com> writes:\n>> Also, I'm doubtful that counting calls this way is a great idea,\n>> which would mean you 
only need one new counter field not two. The\n>> fact that you're having trouble defining what it means certainly\n>> suggests that the implementation is out front of the design.\n\n> ISTM you are not in agreement that a call count should be incremented \n> after every executorRun, but should only be incremented after \n> the portal is closed, at executorEnd. Is that correct?\n\nRight. That makes the \"call count\" equal to the number of times the\nquery is invoked.\n\n> FWIW, The rationale for incrementing calls in executorRun is that calls refers \n> to the number of times a client executes a portal, whether partially or to completion.\n\nWhy should that be the definition? Partial execution of a portal\nmight be something that is happening at the driver level, behind the\nuser's back. You can't make rational calculations of, say, plan\ntime versus execution time if that's how \"calls\" is measured.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 03 Apr 2023 22:47:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [BUG] pg_stat_statements and extended query protocol" }, { "msg_contents": "> Why should that be the definition? Partial execution of a portal\r\n> might be something that is happening at the driver level, behind the\r\n> user's back. You can't make rational calculations of, say, plan\r\n> time versus execution time if that's how \"calls\" is measured.\r\n\r\nCorrect, and there are also drivers that implement fetch size using\r\ncursor statements, i.e. 
DECLARE CURSOR, FETCH CURSOR,\r\nand each FETCH gets counted as 1 call.\r\n\r\nI wonder if the right answer here is to track fetches as \r\na separate counter in pg_stat_statements, in which fetch\r\nrefers to the number of times a portal is executed?\r\n\r\nRegards,\r\n\r\nSami Imseih\r\nAmazon Web Services (AWS)\r\n\r\n\r\n\r\n\r\n\r\n\r\n", "msg_date": "Tue, 4 Apr 2023 03:01:05 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: [BUG] pg_stat_statements and extended query protocol" }, { "msg_contents": "\"Imseih (AWS), Sami\" <simseih@amazon.com> writes:\n> I wonder if the right answer here is to track fetches as \n> a separate counter in pg_stat_statements, in which fetch\n> refers to the number of times a portal is executed?\n\nMaybe, but is there any field demand for that?\n\nIMV, the existing behavior is that we count one \"call\" per overall\nquery execution (that is, per ExecutorEnd invocation). The argument\nthat that's a bug and we should change it seems unsupportable to me,\nand even the argument that we should also count ExecutorRun calls\nseems quite lacking in evidence. 
We clearly do need to fix the\nreported rowcount for cases where ExecutorRun is invoked more than\nonce per ExecutorEnd call; but I think that's sufficient.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 03 Apr 2023 23:13:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [BUG] pg_stat_statements and extended query protocol" }, { "msg_contents": "> Maybe, but is there any field demand for that?\r\n\r\nI don't think there is.\r\n\r\n> We clearly do need to fix the\r\n> reported rowcount for cases where ExecutorRun is invoked more than\r\n> once per ExecutorEnd call; but I think that's sufficient.\r\n\r\nSure, the original proposed fix, but with tracking the es_total_processed\r\nonly in Estate should be enough for now.\r\n\r\nRegards,\r\n\r\nSami Imseih\r\nAmazon Web Services (AWS)\r\n\r\n\r\n\r\n", "msg_date": "Tue, 4 Apr 2023 03:29:07 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: [BUG] pg_stat_statements and extended query protocol" }, { "msg_contents": "On Tue, Apr 04, 2023 at 03:29:07AM +0000, Imseih (AWS), Sami wrote:\n>> We clearly do need to fix the\n>> reported rowcount for cases where ExecutorRun is invoked more than\n>> once per ExecutorEnd call; but I think that's sufficient.\n> \n> Sure, the original proposed fix, but with tracking the es_total_processed\n> only in Estate should be enough for now.\n\nI was looking back at this thread, and the suggestion to use one field\nin EState sounds fine to me. Sami, would you like to send a new\nversion of the patch (simplified version based on v1)?\n--\nMichael", "msg_date": "Tue, 4 Apr 2023 21:15:14 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [BUG] pg_stat_statements and extended query protocol" }, { "msg_contents": "> I was looking back at this thread, and the suggestion to use one field\r\n> in EState sounds fine to me. 
Sami, would you like to send a new\r\n> version of the patch (simplified version based on v1)?\r\n\r\nHere is v4.\r\n\r\nThe \"calls\" tracking is removed from Estate. Unlike v1 however,\r\nI added a check for the operation type. Inside ExecutorRun,\r\nes_total_processed is incremented when the operation is\r\na SELECT. This check is done for es_processed as well inside\r\nexecutorRun -> ExecutePlan.\r\n\r\nFor non select operations, es_total_processed is set to\r\nes_processed in executorfinish. This is because the modify\r\nplan nodes set es_processed outside of execMain.c\r\n\r\n\r\nRegards,\r\n\r\nSami Imseih\r\nAmazon Web Services (AWS)", "msg_date": "Tue, 4 Apr 2023 21:48:17 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: [BUG] pg_stat_statements and extended query protocol" }, { "msg_contents": "On Tue, Apr 04, 2023 at 09:48:17PM +0000, Imseih (AWS), Sami wrote:\n> The \"calls\" tracking is removed from Estate. Unlike v1 however,\n> I added a check for the operation type. Inside ExecutorRun,\n> es_total_processed is incremented when the operation is\n> a SELECT. This check is done for es_processed as well inside\n> executorRun -> ExecutePlan.\n\nI see. This seems in line with ExecutePlan() where es_processed is\nincremented only for SELECT queries.\n\n> For non select operations, es_total_processed is set to\n> es_processed in executorfinish. This is because the modify\n> plan nodes set es_processed outside of execMain.c\n\nMy java is very rusty but (after some time reminding myself how to do\na CLASSPATH) I can see the row counts being right for things like\nSELECT, INSERT RETURNING, WITH queries for both of them and\nautocommit, and the proposed fix influences SELECT only depending on\nthe fetch size. 
Doing nothing for calls now is fine by me, though I\nagree that this could be improved at some point, as seeing only 1\nrather than N for each fetch depending on the size is a bit confusing.\n\n * There is no return value, but output tuples (if any) are sent to\n * the destination receiver specified in the QueryDesc; and the number\n * of tuples processed at the top level can be found in\n * estate->es_processed.\n\nDoesn't this comment at the top of ExecutorRun() need an update? It\nseems to me that this comment should mention both es_total_processed\nand es_processed, telling that es_processed is the number of rows\nprocessed in a single call, while es_total_processed is the sum of all\ntuples processed in the ExecutorRun() calls.\n\n@@ -441,6 +451,13 @@ standard_ExecutorFinish(QueryDesc *queryDesc)\n if (queryDesc->totaltime)\n InstrStopNode(queryDesc->totaltime, 0);\n \n+ /*\n+ * For non-SELECT operations, es_total_processed will always be\n+ * equal to es_processed.\n+ */\n+ if (operation != CMD_SELECT)\n+ queryDesc->estate->es_total_processed = queryDesc->estate->es_processed;\n\nThere is no need for this part in ExecutorFinish(), actually, as long\nas we always increment es_total_processed at the end ExecutorRun() for\nall the operation types? If the portal does not have a store, we\nwould do one ExecutorRun() call per execute fetch. If the portal has\na store, we'd do only one ExecutorRun(). Both cases call once\nExecutorFinish(), but the finish would happen during the first\nexecute when filling a portal's tuple store, and during the last\nexecute fetch if the portal has no store. This is Tom's point in [1],\nfrom what I can see. That seems simpler to me, as well.\n\n- uint64 es_processed; /* # of tuples processed */\n+ uint64 es_processed; /* # of tuples processed for a single\n+ * execution of a portal */\n+ uint64 es_total_processed; /* # of tuples processed for all\n+ * executions of a portal */\n\nHmm. 
This does not reflect completely the reality for non-SELECT\nstatements, no? For SELECT statements, that's correct, because\nes_processed is reset in standard_ExecutorFinish() each time the\nbackend does an execute fetch thanks to PortalRunSelect(). For\nnon-SELECT statements, the extended query protocol uses the tuples\nstored in the portal after one execution, meaning that we run through\nthe executor once with both es_processed and es_total_processed set to\ntheir final numbers from the start, before any fetches. I would\nsuggest something like that to document both fields:\n- es_processed: number of tuples processed during one ExecutorRun()\ncall.\n- es_total_processed: total number of tuples aggregated across all\nExecutorRun() calls.\n\nAt the end, I'm OK with the proposal after a closer look, but I think\nthat we should do a much better job at describing es_processed and\nes_total_processed in execnodes.h, particularly in the case of a\nportal holding a store where es_processed may not reflect the number\nof rows for a single portal execution, and it seems to me that the\nproposal of incrementing es_total_processed at the end of\nExecutorRun() for all commands is simpler, based on what I have\ntested.\n\n[1]: https://www.postgresql.org/message-id/1311773.1680577992@sss.pgh.pa.us\n--\nMichael", "msg_date": "Wed, 5 Apr 2023 11:48:51 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [BUG] pg_stat_statements and extended query protocol" }, { "msg_contents": "> Doing nothing for calls now is fine by me, though I\r\n> agree that this could be improved at some point, as seeing only 1\r\n> rather than N for each fetch depending on the size is a bit confusing.\r\n\r\nI think we will need to clearly define what \"calls\" is. Perhaps as mentioned\r\nabove, we may need separate counters for \"calls\" vs \"fetches\". 
This is\r\ndefinitely a separate thread.\r\n\r\n\r\n> Doesn't this comment at the top of ExecutorRun() need an update? It\r\n> seems to me that this comment should mention both es_total_processed\r\n\r\nYes, updated in v5.\r\n\r\n\r\n> There is no need for this part in ExecutorFinish(), actually, as long\r\n> as we always increment es_total_processed at the end ExecutorRun() for\r\n> all the operation types? \r\n\r\nAh, correct. I changed that and tested again.\r\n\r\n> - es_processed: number of tuples processed during one ExecutorRun()\r\n> call.\r\n> - es_total_processed: total number of tuples aggregated across all\r\n> ExecutorRun() calls.\r\n\r\nI thought hard about this point and for some reason I did not want to\r\nmention ExecutorRun in the comment. But, I agree with what you suggest.\r\nIt's more clear as to the intention of the fields.\r\n\r\nAttached is v5 addressing the comments.\r\n\r\n\r\nRegards,\r\n\r\nSami Imseih\r\nAmazon Web Services (AWS)", "msg_date": "Wed, 5 Apr 2023 04:07:21 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: [BUG] pg_stat_statements and extended query protocol" }, { "msg_contents": "On Wed, Apr 05, 2023 at 04:07:21AM +0000, Imseih (AWS), Sami wrote:\n>> - es_processed: number of tuples processed during one ExecutorRun()\n>> call.\n>> - es_total_processed: total number of tuples aggregated across all\n>> ExecutorRun() calls.\n> \n> I thought hard about this point and for some reason I did not want to\n> mention ExecutorRun in the comment. But, I agree with what you suggest.\n> It's more clear as to the intention of the fields.\n> \n> Attached is v5 addressing the comments.\n\nThanks, this should be enough to persist the number of tuples tracked\nacross multiple ExecutorRun() calls. 
This looks pretty good to me.\n\nWe should do something about providing more control over that to\nlibpq in the long run, IMO, and have more test coverage, but let's see\nabout that in 17~.\n--\nMichael", "msg_date": "Wed, 5 Apr 2023 13:58:51 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [BUG] pg_stat_statements and extended query protocol" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Wed, Apr 05, 2023 at 04:07:21AM +0000, Imseih (AWS), Sami wrote:\n>> Attached is v5 addressing the comments.\n\n> Thanks, this should be enough to persist the number of tuples tracked\n> across multiple ExecutorRun() calls. This looks pretty good to me.\n\nv5 seems OK to me except I think CreateExecutorState() should explicitly\nzero the new es_total_processed field, alongside zeroing es_processed.\n(I realize that the makeNode would have done it already, but our\ncoding conventions generally run towards not relying on that. This is\nmainly for greppability, so you can find where a field is initialized.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 05 Apr 2023 17:39:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [BUG] pg_stat_statements and extended query protocol" }, { "msg_contents": "On Wed, Apr 05, 2023 at 05:39:13PM -0400, Tom Lane wrote:\n> v5 seems OK to me except I think CreateExecutorState() should explicitly\n> zero the new es_total_processed field, alongside zeroing es_processed.\n> (I realize that the makeNode would have done it already, but our\n> coding conventions generally run towards not relying on that. This is\n> mainly for greppability, so you can find where a field is initialized.)\n\nMakes sense to me. 
I'll look at that again today, potentially apply\nthe fix on HEAD.\n--\nMichael", "msg_date": "Thu, 6 Apr 2023 07:09:37 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [BUG] pg_stat_statements and extended query protocol" }, { "msg_contents": "> Makes sense to me. I'll look at that again today, potentially apply\r\n> the fix on HEAD.\r\n\r\nHere is v6. That was my mistake not to zero out the es_total_processed.\r\nI had it in the first version.\r\n\r\n--\r\nRegards,\r\n\r\nSami Imseih\r\nAmazon Web Services (AWS)", "msg_date": "Wed, 5 Apr 2023 22:16:19 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: [BUG] pg_stat_statements and extended query protocol" }, { "msg_contents": "Hi,\n\nOn 2023-04-06 07:09:37 +0900, Michael Paquier wrote:\n> I'll look at that again today, potentially apply the fix on HEAD.\n\nSeems like a complicated enough facility to benefit from a test or two? Peter\nEisentraut added support for the extended query protocol to psql, so it\nshouldn't be too hard...\n\ncommit 5b66de3433e\nAuthor: Peter Eisentraut <peter@eisentraut.org>\nDate: 2022-11-15 13:50:27 +0100\n \n psql: Add command to use extended query protocol\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 5 Apr 2023 17:39:35 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [BUG] pg_stat_statements and extended query protocol" }, { "msg_contents": "On Wed, Apr 05, 2023 at 10:16:19PM +0000, Imseih (AWS), Sami wrote:\n> Here is v6. That was my mistake not to zero out the es_total_processed.\n> I had it in the first version.\n\nThe update of es_total_processed in standard_ExecutorRun() felt a bit\nlonely, so I have added an extra comment, ran an indentation, and\napplied the result. 
Thanks Sami for the patch, and everyone else for\nthe feedback!\n--\nMichael", "msg_date": "Thu, 6 Apr 2023 09:39:36 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [BUG] pg_stat_statements and extended query protocol" }, { "msg_contents": "On Wed, Apr 05, 2023 at 05:39:35PM -0700, Andres Freund wrote:\n> Seems like a complicated enough facility to benefit from a test or two? Peter\n> Eisentraut added support for the extended query protocol to psql, so it\n> shouldn't be too hard...\n\nPQsendQueryGuts() does not split yet the bind/describe phase and the\nexecute phases, so we'd need a couple more libpq APIs to do that, with\nmore tracking of the state we're currently on when looping across\nmultiple execute fetches. My guess is that it is possible to follow a\nmodel similar to JDBC here. I don't think that's necessarily\ncomplicated, but it is not as straight-forward as it looks. \\bind was \nmuch more straight-forward than that, as it can feed on a single call\nof PQsendQueryParams() after saving a set of parameters. An \\exec\nwould not completely do that.\n\nAttaching one of the scripts I've played with, in a very rusty java\nwith no classes or such, for future reference. Just update CLASSPATH\nto point to a copy of the JDBC driver, run it with a java command, and\nthen look at rows, query in pg_stat_statements.\n--\nMichael", "msg_date": "Thu, 6 Apr 2023 09:53:50 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [BUG] pg_stat_statements and extended query protocol" } ]
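The counter semantics the thread above settles on (es_processed reflecting a single ExecutorRun() call, es_total_processed accumulated across all calls for a portal) can be illustrated with a small standalone simulation. This is a hypothetical sketch in Python, not PostgreSQL source; the field and function names are borrowed from the discussion only for readability.

```python
# Standalone sketch (not PostgreSQL code) of the row-count semantics
# discussed above: a portal executed over the extended query protocol
# with a fetch size runs the executor once per execute-fetch, so only
# a counter accumulated across runs reports the true row count.

class EState:
    def __init__(self):
        self.es_processed = 0        # tuples from one ExecutorRun() call
        self.es_total_processed = 0  # tuples summed over all calls


def executor_run(estate, rows_this_fetch):
    """One partial execution of a portal (one execute-fetch message)."""
    estate.es_processed = rows_this_fetch          # reset on each run
    estate.es_total_processed += rows_this_fetch   # the fix: accumulate


def run_portal(total_rows, fetch_size):
    """Drive the portal to completion in fetch_size batches."""
    estate = EState()
    remaining = total_rows
    while remaining > 0:
        batch = min(fetch_size, remaining)
        executor_run(estate, batch)
        remaining -= batch
    return estate


if __name__ == "__main__":
    estate = run_portal(total_rows=10, fetch_size=3)
    # Pre-fix behavior: reporting es_processed sees only the last batch.
    print("last-run rows:", estate.es_processed)       # 1
    # Fixed behavior: es_total_processed covers the whole portal.
    print("total rows:", estate.es_total_processed)    # 10
```

With a fetch size of 3 over 10 rows, the last ExecutorRun() processes a single tuple, which is why the unpatched extension under-reported rows while still counting one call per ExecutorEnd().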
[ { "msg_contents": "Hi,\n\nIt seems that the planner currently elides an Append/MergeAppend that\nhas run-time pruning info (part_prune_index) set, but which I think is\na bug. Here's an example:\n\ncreate table p (a int) partition by list (a);\ncreate table p1 partition of p for values in (1);\nset plan_cache_mode to force_generic_plan ;\nprepare q as select * from p where a = $1;\nexplain execute q (0);\n QUERY PLAN\n------------------------------------------------------\n Seq Scan on p1 p (cost=0.00..41.88 rows=13 width=4)\n Filter: (a = $1)\n(2 rows)\n\nBecause the Append is elided in this case, run-time pruning doesn't\nkick in to prune p1, even though PartitionPruneInfo to do so has been\ngenerated.\n\nAttached find a patch to fix that. There are some expected output\ndiffs in partition_prune suite, though they all look sane to me.\n\nThoughts?\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 26 Jan 2023 21:27:43 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "wrong Append/MergeAppend elision?" }, { "msg_contents": "On Fri, 27 Jan 2023 at 01:30, Amit Langote <amitlangote09@gmail.com> wrote:\n> It seems that the planner currently elides an Append/MergeAppend that\n> has run-time pruning info (part_prune_index) set, but which I think is\n> a bug.\n\nThis is actually how I intended it to work. Whether it was a good idea\nor not, I'm currently unsure. I mentioned it in [1].\n\nI think the plan shapes I was talking about were some ordered paths\nfrom partial paths per what is being added right at the end of\nadd_paths_to_append_rel(). However, now that I look at it again, I'm\nnot sure why it wouldn't be correct to still have those paths with a\nsingle-child Append. 
Certainly, the \"if (list_length(live_childrels)\n== 1)\" test made in add_paths_to_append_rel() is no longer aligned to\nthe equivalent test in set_append_references(), so it's possible even\nnow that we make a plan that uses the extra sorted partial paths added\nin add_paths_to_append_rel() and still have the Append in the final\nplan.\n\nThere is still the trade-off of having to pull tuples through the\nAppend node for when run-time pruning is unable to prune the last\npartition. So your proposal to leave the Append alone when there's\nrun-time pruning info is certainly not a no-brainer.\n\n[1] https://www.postgresql.org/message-id/CAKJS1f_utf1Mbp8UeoByAarziO4e4qb4Z8FksurpaM+3Q_HOmQ@mail.gmail.com\n\n\n", "msg_date": "Fri, 27 Jan 2023 09:13:09 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wrong Append/MergeAppend elision?" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Fri, 27 Jan 2023 at 01:30, Amit Langote <amitlangote09@gmail.com> wrote:\n>> It seems that the planner currently elides an Append/MergeAppend that\n>> has run-time pruning info (part_prune_index) set, but which I think is\n>> a bug.\n\n> There is still the trade-off of having to pull tuples through the\n> Append node for when run-time pruning is unable to prune the last\n> partition. So your proposal to leave the Append alone when there's\n> run-time pruning info is certainly not a no-brainer.\n\nYeah. Amit's proposal amounts to optimizing for the case that all\npartitions get pruned, which does not seem to me to be the way\nto bet. I'm inclined to think it's fine as-is.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 26 Jan 2023 15:43:25 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: wrong Append/MergeAppend elision?" 
}, { "msg_contents": "On Fri, Jan 27, 2023 at 5:43 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > On Fri, 27 Jan 2023 at 01:30, Amit Langote <amitlangote09@gmail.com> wrote:\n> >> It seems that the planner currently elides an Append/MergeAppend that\n> >> has run-time pruning info (part_prune_index) set, but which I think is\n> >> a bug.\n>\n> > There is still the trade-off of having to pull tuples through the\n> > Append node for when run-time pruning is unable to prune the last\n> > partition. So your proposal to leave the Append alone when there's\n> > run-time pruning info is certainly not a no-brainer.\n>\n> Yeah. Amit's proposal amounts to optimizing for the case that all\n> partitions get pruned, which does not seem to me to be the way\n> to bet. I'm inclined to think it's fine as-is.\n\nFair enough. I thought for a second that maybe it was simply an\noversight but David confirms otherwise. This was interacting badly\nwith the other patch I'm working on and I just figured out the problem\nwas with that other patch.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 27 Jan 2023 10:39:42 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: wrong Append/MergeAppend elision?" } ]
[ { "msg_contents": "The symptom being exhibited by Michael's new BF animal tanager\nis perfectly reproducible elsewhere.\n\n$ cat /home/postgres/tmp/temp_config\n#default_toast_compression = lz4\nwal_compression = lz4\n$ export TEMP_CONFIG=/home/postgres/tmp/temp_config\n$ cd ~/pgsql/src/test/recovery\n$ make check PROVE_TESTS=t/011_crash_recovery.pl\n...\n+++ tap check in src/test/recovery +++\nt/011_crash_recovery.pl .. 1/? \n# Failed test 'new xid after restart is greater'\n# at t/011_crash_recovery.pl line 53.\n# '729'\n# >\n# '729'\n\n# Failed test 'xid is aborted after crash'\n# at t/011_crash_recovery.pl line 57.\n# got: 'committed'\n# expected: 'aborted'\n# Looks like you failed 2 tests of 3.\n\nMaybe this is somehow the test script's fault, but I don't see how.\n\nIt fails the same way with 'wal_compression = pglz', so I think it's\ngeneric to that whole feature rather than specific to LZ4.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 26 Jan 2023 14:43:29 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Something is wrong with wal_compression" }, { "msg_contents": "On Thu, Jan 26, 2023 at 02:43:29PM -0500, Tom Lane wrote:\n> The symptom being exhibited by Michael's new BF animal tanager\n> is perfectly reproducible elsewhere.\n\nI think these tests have always failed with wal_compression ?\n\nhttps://www.postgresql.org/message-id/20210308.173242.463790587797836129.horikyota.ntt%40gmail.com\nhttps://www.postgresql.org/message-id/20210313012820.GJ29463@telsasoft.com\nhttps://www.postgresql.org/message-id/20220222231948.GJ9008@telsasoft.com\n\nhttps://www.postgresql.org/message-id/YNqWd2GSMrnqWIfx@paquier.xyz\n|My buildfarm machine has been changed to use wal_compression = lz4,\n|while on it for HEAD runs.\n\n\n\n", "msg_date": "Thu, 26 Jan 2023 14:08:27 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Something is wrong with wal_compression" }, { 
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Thu, Jan 26, 2023 at 02:43:29PM -0500, Tom Lane wrote:\n>> The symptom being exhibited by Michael's new BF animal tanager\n>> is perfectly reproducible elsewhere.\n\n> I think these tests have always failed with wal_compression ?\n\nIf that's a known problem, and we've done nothing about it,\nthat is pretty horrid. That test case is demonstrating fundamental\ndatabase corruption after a crash.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 26 Jan 2023 15:12:16 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Something is wrong with wal_compression" }, { "msg_contents": "On Thu, Jan 26, 2023 at 02:08:27PM -0600, Justin Pryzby wrote:\n> On Thu, Jan 26, 2023 at 02:43:29PM -0500, Tom Lane wrote:\n> > The symptom being exhibited by Michael's new BF animal tanager\n> > is perfectly reproducible elsewhere.\n> \n> I think these tests have always failed with wal_compression ?\n> \n> https://www.postgresql.org/message-id/20210308.173242.463790587797836129.horikyota.ntt%40gmail.com\n> https://www.postgresql.org/message-id/20210313012820.GJ29463@telsasoft.com\n> https://www.postgresql.org/message-id/20220222231948.GJ9008@telsasoft.com\n\n+ https://www.postgresql.org/message-id/c86ce84f-dd38-9951-102f-13a931210f52%40dunslane.net\n\n\n", "msg_date": "Thu, 26 Jan 2023 14:15:56 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Something is wrong with wal_compression" }, { "msg_contents": "On Thu, Jan 26, 2023 at 12:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> That test case is demonstrating fundamental\n> database corruption after a crash.\n>\n\nNot exactly corruption. XID was not persisted and buffer data did not\nhit a disk. Database is in the correct state.\n\nIt was discussed long before WAL compression here [0]. 
The thing is it\nis easier to reproduce with compression, but compression has nothing\nto do with it, as far as I understand.\n\nProposed fix is here[1], but I think it's better to fix the test. It\nshould not verify Xid, but rather side effects of \"CREATE TABLE mine(x\ninteger);\".\n\n\nBest regards, Andrey Borodin.\n\n[0] https://www.postgresql.org/message-id/flat/565FB155-C6B0-41E2-8C44-7B514DC25132%2540yandex-team.ru\n[1] https://www.postgresql.org/message-id/flat/20210313012820.GJ29463%40telsasoft.com#0f18d3a4d593ea656fdc761e026fee81\n\n\n", "msg_date": "Thu, 26 Jan 2023 13:28:50 -0800", "msg_from": "Andrey Borodin <amborodin86@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Something is wrong with wal_compression" }, { "msg_contents": "Andrey Borodin <amborodin86@gmail.com> writes:\n> On Thu, Jan 26, 2023 at 12:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> That test case is demonstrating fundamental\n>> database corruption after a crash.\n\n> Not exactly corruption. XID was not persisted and buffer data did not\n> hit a disk. Database is in the correct state.\n\nReally? 
I don't see how this part is even a little bit okay:\n\n[00:40:50.744](0.046s) not ok 3 - xid is aborted after crash\n[00:40:50.745](0.001s) \n[00:40:50.745](0.000s) # Failed test 'xid is aborted after crash'\n# at t/011_crash_recovery.pl line 57.\n[00:40:50.746](0.001s) # got: 'committed'\n# expected: 'aborted'\n\nIf any tuples made by that transaction had reached disk,\nwe'd have a problem.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 26 Jan 2023 17:14:45 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Something is wrong with wal_compression" }, { "msg_contents": "On Fri, Jan 27, 2023 at 11:14 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Andrey Borodin <amborodin86@gmail.com> writes:\n> > On Thu, Jan 26, 2023 at 12:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> That test case is demonstrating fundamental\n> >> database corruption after a crash.\n>\n> > Not exactly corruption. XID was not persisted and buffer data did not\n> > hit a disk. Database is in the correct state.\n>\n> Really? I don't see how this part is even a little bit okay:\n>\n> [00:40:50.744](0.046s) not ok 3 - xid is aborted after crash\n> [00:40:50.745](0.001s)\n> [00:40:50.745](0.000s) # Failed test 'xid is aborted after crash'\n> # at t/011_crash_recovery.pl line 57.\n> [00:40:50.746](0.001s) # got: 'committed'\n> # expected: 'aborted'\n>\n> If any tuples made by that transaction had reached disk,\n> we'd have a problem.\n\nThe problem is that the WAL wasn't flushed, allowing the same xid to\nbe allocated again after crash recovery. But for any data pages to\nhit the disk, we'd have to flush WAL first, so then it couldn't\nhappen, no? 
FWIW I also re-complained about the dangers of anyone\nrelying on pg_xact_status() for its stated purpose after seeing\ntanager's failure[1].\n\n[1] https://www.postgresql.org/message-id/CA%2BhUKGJ9p2JPPMA4eYAKq%3Dr9d_4_8vziet_tS1LEBbiny5-ypA%40mail.gmail.com\n\n\n", "msg_date": "Fri, 27 Jan 2023 11:50:10 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Something is wrong with wal_compression" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Fri, Jan 27, 2023 at 11:14 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> If any tuples made by that transaction had reached disk,\n>> we'd have a problem.\n\n> The problem is that the WAL wasn't flushed, allowing the same xid to\n> be allocated again after crash recovery. But for any data pages to\n> hit the disk, we'd have to flush WAL first, so then it couldn't\n> happen, no?\n\nAh, now I get the point: the \"committed xact\" seen after restart\nisn't the same one as we saw before the crash, but a new one that\nwas given the same XID because nothing about the old one had made\nit to disk yet.\n\n> FWIW I also re-complained about the dangers of anyone\n> relying on pg_xact_status() for its stated purpose after seeing\n> tanager's failure[1].\n\nIndeed, it seems like this behavior makes pg_xact_status() basically\nuseless as things stand.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 26 Jan 2023 18:04:42 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Something is wrong with wal_compression" }, { "msg_contents": "On Thu, Jan 26, 2023 at 3:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Indeed, it seems like this behavior makes pg_xact_status() basically\n> useless as things stand.\n>\n\nIf we agree that xid allocation is not something persistent, let's fix\nthe test? 
We can replace a check with select * from pg_class or,\nmaybe, add an amcheck run.\nAs far as I recollect, this test was introduced to test this new\nfunction in 857ee8e391f.\n\n\nBest regards, Andrey Borodin.\n\n\n", "msg_date": "Thu, 26 Jan 2023 16:14:57 -0800", "msg_from": "Andrey Borodin <amborodin86@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Something is wrong with wal_compression" }, { "msg_contents": "Andrey Borodin <amborodin86@gmail.com> writes:\n> On Thu, Jan 26, 2023 at 3:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Indeed, it seems like this behavior makes pg_xact_status() basically\n>> useless as things stand.\n\n> If we agree that xid allocation is not something persistent, let's fix\n> the test?\n\nIf we're not going to fix this behavior, we need to fix the docs\nto disclaim that pg_xact_status() is of use for what it's said\nto be good for.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 26 Jan 2023 19:23:04 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Something is wrong with wal_compression" }, { "msg_contents": "On Thu, Jan 26, 2023 at 04:14:57PM -0800, Andrey Borodin wrote:\n> If we agree that xid allocation is not something persistent, let's fix\n> the test? We can replace a check with select * from pg_class or,\n> maybe, add an amcheck run.\n> As far as I recollect, this test was introduced to test this new\n> function in 857ee8e391f.\n\nMy opinion would be to make this function more reliable, FWIW, even if\nthat involves a performance impact when called in a close loop by\nforcing more WAL flushes to ensure its report durability and\nconsistency. As things stand, this is basically unreliable, and we\ndocument it as something applications can *use*. Adding a note in the\ndocs to say that this function can be unstable for some edge cases\ndoes not make much sense to me, either. 
Commit 857ee8e itself says\nthat we can use it if a database connection is lost, which could\nhappen on a crash..\n--\nMichael", "msg_date": "Fri, 27 Jan 2023 09:30:37 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Something is wrong with wal_compression" }, { "msg_contents": "On Fri, Jan 27, 2023 at 1:30 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Thu, Jan 26, 2023 at 04:14:57PM -0800, Andrey Borodin wrote:\n> > If we agree that xid allocation is not something persistent, let's fix\n> > the test? We can replace a check with select * from pg_class or,\n> > maybe, add an amcheck run.\n> > As far as I recollect, this test was introduced to test this new\n> > function in 857ee8e391f.\n>\n> My opinion would be to make this function more reliable, FWIW, even if\n> that involves a performance impact when called in a close loop by\n> forcing more WAL flushes to ensure its report durability and\n> consistency. As things stand, this is basically unreliable, and we\n> document it as something applications can *use*. Adding a note in the\n> docs to say that this function can be unstable for some edge cases\n> does not make much sense to me, either. Commit 857ee8e itself says\n> that we can use it if a database connection is lost, which could\n> happen on a crash..\n\nYeah, the other thread has a patch for that. But it would hurt some\nworkloads. A better patch would do some kind of amortisation\n(reserving N xids at a time or some such scheme, while being careful\nto make sure the right CLOG pages etc exist if you crash and skip a\nbunch of xids on recovery) but be more complicated. 
For the record,\nback before release 13 added the 64 bit xid allocator, these functions\n(or rather their txid_XXX ancestors) were broken in a different way:\nthey didn't track epochs reliably, the discovery of which led to the\nnew xid8-based functions, so that might provide a natural\nback-patching range, if a back-patchable solution can be agreed on.\n\n\n", "msg_date": "Fri, 27 Jan 2023 13:46:09 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Something is wrong with wal_compression" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Fri, Jan 27, 2023 at 1:30 PM Michael Paquier <michael@paquier.xyz> wrote:\n>> My opinion would be to make this function more reliable, FWIW, even if\n>> that involves a performance impact when called in a close loop by\n>> forcing more WAL flushes to ensure its report durability and\n>> consistency.\n\n> Yeah, the other thread has a patch for that. But it would hurt some\n> workloads.\n\nI think we need to get the thing correct first and worry about\nperformance later. What's wrong with simply making pg_xact_status\nwrite and flush a record of the XID's existence before returning it?\nYeah, it will cost you if you use that function, but not if you don't.\n\n> A better patch would do some kind of amortisation\n> (reserving N xids at a time or some such scheme, while being careful\n> to make sure the right CLOG pages etc exist if you crash and skip a\n> bunch of xids on recovery) but be more complicated.\n\nMaybe that would be appropriate for HEAD, but I'd be wary of adding\nanything complicated to the back branches. 
This is clearly a very\nunder-tested area.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 26 Jan 2023 21:04:04 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Something is wrong with wal_compression" }, { "msg_contents": "On Fri, Jan 27, 2023 at 3:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > On Fri, Jan 27, 2023 at 1:30 PM Michael Paquier <michael@paquier.xyz> wrote:\n> >> My opinion would be to make this function more reliable, FWIW, even if\n> >> that involves a performance impact when called in a close loop by\n> >> forcing more WAL flushes to ensure its report durability and\n> >> consistency.\n>\n> > Yeah, the other thread has a patch for that. But it would hurt some\n> > workloads.\n>\n> I think we need to get the thing correct first and worry about\n> performance later. What's wrong with simply making pg_xact_status\n> write and flush a record of the XID's existence before returning it?\n> Yeah, it will cost you if you use that function, but not if you don't.\n\nIt would be pg_current_xact_id() that would have to pay the cost of\nthe WAL flush, not pg_xact_status() itself, but yeah that's what the\npatch does (with some optimisations). I guess one question is whether\nthere are any other reasonable real world uses of\npg_current_xact_id(), other than the original goal[1]. If not, then\nat least you are penalising the right users, even though they probably\nonly actually call pg_xact_status() in extremely rare circumstances\n(if COMMIT hangs up). But I wouldn't be surprised if people have\nfound other reasons to be interested in xid observability, related to\ndistributed transactions and snapshots and suchlike. There is no\ndoubt that the current situation is unacceptable, though, so maybe we\nreally should just do it and make a faster one later. 
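[Editor's note: for readers following along, the client-side pattern that pg_xact_status() (commit 857ee8e) was designed for looks roughly like the sketch below. The xid value shown is illustrative, and on releases before 13 the same flow uses txid_current()/txid_status() instead.]

```sql
-- Sketch of the intended usage: capture the transaction's xid before
-- COMMIT, so the outcome can be checked after a lost connection.
BEGIN;
SELECT pg_current_xact_id();   -- suppose this returns 1234; the client stores it
-- ... do work ...
COMMIT;                        -- if the connection dies here, the outcome is unknown

-- After reconnecting, the client asks what happened:
SELECT pg_xact_status('1234'::xid8);   -- 'committed', 'aborted' or 'in progress'
```

The problem discussed in this thread is that after a crash, recovery can hand out the same xid again, so the answer may describe a different transaction entirely.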
Anyone else\nwant to vote on this?\n\n[1] https://www.postgresql.org/message-id/flat/CAMsr%2BYHQiWNEi0daCTboS40T%2BV5s_%2Bdst3PYv_8v2wNVH%2BXx4g%40mail.gmail.com\n\n\n", "msg_date": "Fri, 27 Jan 2023 16:15:08 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Something is wrong with wal_compression" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Fri, Jan 27, 2023 at 3:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I think we need to get the thing correct first and worry about\n>> performance later. What's wrong with simply making pg_xact_status\n>> write and flush a record of the XID's existence before returning it?\n>> Yeah, it will cost you if you use that function, but not if you don't.\n\n> It would be pg_current_xact_id() that would have to pay the cost of\n> the WAL flush, not pg_xact_status() itself,\n\nRight, typo on my part.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 26 Jan 2023 22:22:25 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Something is wrong with wal_compression" }, { "msg_contents": "On Fri, 2023-01-27 at 16:15 +1300, Thomas Munro wrote:\n> On Fri, Jan 27, 2023 at 3:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Thomas Munro <thomas.munro@gmail.com> writes:\n> > > On Fri, Jan 27, 2023 at 1:30 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > > > My opinion would be to make this function more reliable, FWIW, even if\n> > > > that involves a performance impact when called in a close loop by\n> > > > forcing more WAL flushes to ensure its report durability and\n> > > > consistency.\n> > \n> > > Yeah, the other thread has a patch for that.  But it would hurt some\n> > > workloads.\n> > \n> > I think we need to get the thing correct first and worry about\n> > performance later.  
What's wrong with simply making pg_xact_status\n> > write and flush a record of the XID's existence before returning it?\n> > Yeah, it will cost you if you use that function, but not if you don't.\n> \n> There is no\n> doubt that the current situation is unacceptable, though, so maybe we\n> really should just do it and make a faster one later.  Anyone else\n> want to vote on this?\n\nI wasn't aware of the existence of pg_xact_status, so I suspect that it\nis not a widely known and used feature. After reading the documentation,\nI'd say that anybody who uses it will want it to give a reliable answer.\nSo I'd agree that it is better to make it more expensive, but live up to\nits promise.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Fri, 27 Jan 2023 06:06:05 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Something is wrong with wal_compression" }, { "msg_contents": "On Fri, Jan 27, 2023 at 06:06:05AM +0100, Laurenz Albe wrote:\n> On Fri, 2023-01-27 at 16:15 +1300, Thomas Munro wrote:\n>> There is no\n>> doubt that the current situation is unacceptable, though, so maybe we\n>> really should just do it and make a faster one later.  Anyone else\n>> want to vote on this?\n> \n> I wasn't aware of the existence of pg_xact_status, so I suspect that it\n> is not a widely known and used feature. After reading the documentation,\n> I'd say that anybody who uses it will want it to give a reliable answer.\n> So I'd agree that it is better to make it more expensive, but live up to\n> its promise.\n\nA code search within the Debian packages (codesearch.debian.net) and\ngithub does not show that it is actually used; pg_xact_status() is\nreported only as part of copies of the Postgres code in the regression\ntests.\n\nFWIW, my vote goes for a more expensive but reliable function even in\nstable branches. 
Even 857ee8e mentions that this could be used on a\nlost connection, so we don't even satisfy the use case of the original\ncommit as things stand (right?), because lost connection could just be\na result of a crash, and if crash recovery reassigns the XID, then the\nclient gets it wrong.\n--\nMichael", "msg_date": "Sat, 28 Jan 2023 11:38:50 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Something is wrong with wal_compression" }, { "msg_contents": "Hi,\n\nOn 2023-01-27 16:15:08 +1300, Thomas Munro wrote:\n> It would be pg_current_xact_id() that would have to pay the cost of\n> the WAL flush, not pg_xact_status() itself, but yeah that's what the\n> patch does (with some optimisations). I guess one question is whether\n> there are any other reasonable real world uses of\n> pg_current_xact_id(), other than the original goal[1].\n\ntxid_current() is a lot older than pg_current_xact_id(), and they're backed by\nthe same code afaict. 8.4 I think.\n\nUnfortunately txid_current() is used in plenty montiring setups IME.\n\nI don't think it's a good idea to make a function that was quite cheap for 15\nyears, suddenly be several orders of magnitude more expensive...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 27 Jan 2023 18:57:58 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Something is wrong with wal_compression" }, { "msg_contents": "On Fri, Jan 27, 2023, 18:58 Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2023-01-27 16:15:08 +1300, Thomas Munro wrote:\n> > It would be pg_current_xact_id() that would have to pay the cost of\n> > the WAL flush, not pg_xact_status() itself, but yeah that's what the\n> > patch does (with some optimisations). 
I guess one question is whether\n> > there are any other reasonable real world uses of\n> > pg_current_xact_id(), other than the original goal[1].\n>\n> txid_current() is a lot older than pg_current_xact_id(), and they're\n> backed by\n> the same code afaict. 8.4 I think.\n>\n> Unfortunately txid_current() is used in plenty montiring setups IME.\n>\n> I don't think it's a good idea to make a function that was quite cheap for\n> 15\n> years, suddenly be several orders of magnitude more expensive...\n\n\nAs someone working on a monitoring tool that uses it (well, both), +1. We'd\nhave to rethink a few things if this becomes a performance concern.\n\nThanks,\nMaciek\n\n\n", "msg_date": "Fri, 27 Jan 2023 19:07:40 -0800", "msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Something is wrong with wal_compression" }, { "msg_contents": "Hi,\n\nOn 2023-01-28 11:38:50 +0900, Michael Paquier wrote:\n> On Fri, Jan 27, 2023 at 06:06:05AM +0100, Laurenz Albe wrote:\n> > On Fri, 2023-01-27 at 16:15 +1300, Thomas Munro wrote:\n> >> There is no\n> >> doubt that the current situation is unacceptable, though, so maybe we\n> >> really should just do it and make a faster one later. Anyone else\n> >> want to vote on this?\n> > \n> > I wasn't aware of the existence of pg_xact_status, so I suspect that it\n> > is not a widely known and used feature. After reading the documentation,\n> > I'd say that anybody who uses it will want it to give a reliable answer.\n> > So I'd agree that it is better to make it more expensive, but live up to\n> > its promise.\n\n> A code search within the Debian packages (codesearch.debian.net) and\n> github does not show that it is not actually used, pg_xact_status() is\n> reported as parts of copies of the Postgres code in the regression\n> tests.\n\nNot finding a user at codesearch.debian.net provides useful information for C\nAPIs, but a negative result for an SQL exposed function doesn't provide any\ninformation. Those callers will largely be in application code, which largely\nwon't be in debian.\n\nAnd as noted two messages up, we wouldn't need to flush in pg_xact_status(),\nwe'd need to flush in pg_current_xact_id()/txid_current().\n\n\n> FWIW, my vote goes for a more expensive but reliable function even in\n> stable branches.\n\nI very strenuously object. 
If we make txid_current() (by way of\npg_current_xact_id()) flush WAL, we'll cause outages.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 27 Jan 2023 19:07:51 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Something is wrong with wal_compression" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2023-01-28 11:38:50 +0900, Michael Paquier wrote:\n>> FWIW, my vote goes for a more expensive but reliable function even in\n>> stable branches.\n\n> I very strenuously object. If we make txid_current() (by way of\n> pg_current_xact_id()) flush WAL, we'll cause outages.\n\nWhat are you using it for, that you don't care whether the answer\nis trustworthy?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 27 Jan 2023 22:39:56 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Something is wrong with wal_compression" }, { "msg_contents": "Hi,\n\nOn 2023-01-27 22:39:56 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2023-01-28 11:38:50 +0900, Michael Paquier wrote:\n> >> FWIW, my vote goes for a more expensive but reliable function even in\n> >> stable branches.\n> \n> > I very strenuously object. If we make txid_current() (by way of\n> > pg_current_xact_id()) flush WAL, we'll cause outages.\n> \n> What are you using it for, that you don't care whether the answer\n> is trustworthy?\n\nIt's quite commonly used as part of trigger based replication tools (IIRC\nthat's its origin), monitoring, as part of client side logging, as part of\nsnapshot management.\n\ntxid_current() predates pg_xact_status() by well over 10 years. 
Clearly we had\nlots of uses for it before pg_xact_status() was around.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 27 Jan 2023 19:49:17 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Something is wrong with wal_compression" }, { "msg_contents": "On Fri, Jan 27, 2023 at 7:40 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> What are you using it for, that you don't care whether the answer\n> is trustworthy?\n>\n\nIt's not trustworthy anyway. Xid wraparound might happen during\nreconnect. I suspect we can design a test that will show that it does\nnot always show correct results during xid->2pc conversion (there is a\npoint in time when xid is not in regular and not in 2pc, and I'm not\nsure ProcArrayLock is held). Maybe there are other edge cases.\n\nAnyway, if a user wants to know the status of xid in case of\ndisconnection they have prepared xacts.\n\nBest regards, Andrey Borodin.\n\n\n", "msg_date": "Fri, 27 Jan 2023 19:57:35 -0800", "msg_from": "Andrey Borodin <amborodin86@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Something is wrong with wal_compression" }, { "msg_contents": "Hi,\n\nOn 2023-01-27 19:49:17 -0800, Andres Freund wrote:\n> It's quite commonly used as part of trigger based replication tools (IIRC\n> that's its origin), monitoring, as part of client side logging, as part of\n> snapshot management.\n\nForgot one: Queues.\n\nThe way it's used for trigger based replication, queues and also some\nmaterialized aggregation tooling, is that there's a trigger that inserts into\na \"log\" table. And that log table has a column into which txid_current() will\nbe inserted. Together with txid_current_snapshot() etc that's used to get a\n(at least semi) \"transactional\" order out of such log tables.\n\nI believe that's originally been invented by londiste / skytool, later slony\nmigrated to it. 
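[Editor's note: the trigger-based log-table pattern described above can be sketched as follows. All object names are hypothetical; txid_current(), txid_current_snapshot() and txid_snapshot_xmin() are the actual built-in functions in play.]

```sql
-- A "log" table that records the xid of the transaction that wrote each row.
CREATE TABLE change_log (
    id    bigserial PRIMARY KEY,
    txid  bigint NOT NULL DEFAULT txid_current(),
    tbl   text   NOT NULL,
    op    text   NOT NULL
);

-- The trigger inserts one log row per change in the same transaction.
CREATE FUNCTION log_change() RETURNS trigger AS $$
BEGIN
    INSERT INTO change_log (tbl, op) VALUES (TG_TABLE_NAME, TG_OP);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER log_it AFTER INSERT OR UPDATE OR DELETE ON some_table
    FOR EACH ROW EXECUTE PROCEDURE log_change();

-- A consumer can then pick out log rows whose transactions are safely in
-- the past of every transaction still running:
SELECT * FROM change_log
WHERE txid < txid_snapshot_xmin(txid_current_snapshot());
```

This is why a WAL flush in txid_current() would be felt: every logged change would pay for an extra flush on top of the commit's own.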
The necessary C code was added as contrib/txid in 1f92630fc4e\n2007-10-07 and then moved into core a few days later in 18e3fcc31e7.\n\n\nFor those cases making txid_current() flush would approximately double the WAL\nflush rate.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 27 Jan 2023 20:11:38 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Something is wrong with wal_compression" }, { "msg_contents": "Hi,\n\nOn 2023-01-27 19:57:35 -0800, Andrey Borodin wrote:\n> On Fri, Jan 27, 2023 at 7:40 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > What are you using it for, that you don't care whether the answer\n> > is trustworthy?\n> >\n> \n> It's not trustworthy anyway. Xid wraparound might happen during\n> reconnect.\n\nI think that part would be approximately fine, as long as you can live with\nan answer of \"too old\". The xid returned by txid_status/pg_current_xact_id()\nis 64bit, and there is code to verify that the relevant range is covered by\nthe clog.\n\nHowever - there's nothing preventing the xid to become too old in case of a\ncrash.\n\nIf you have an open connection, you can prevent the clog from being truncated\nby having an open snapshot. But you can't really without using e.g. 2PC if you\nwant to handle crashes - obviously snapshots don't survive them.\n\n\nI really don't think txid_status() can be used for anything but informational\nprobing of the clog / procarray.\n\n\n\n> I suspect we can design a test that will show that it does not always show\n> correct results during xid->2pc conversion (there is a point in time when\n> xid is not in regular and not in 2pc, and I'm not sure ProcArrayLock is\n> held). Maybe there are other edge cases.\n\nUnless I am missing something, that would be very bad [TM], completely\nindependent of txid_status(). 
The underlying functions like\nTransactionIdIsInProgress() are used for MVCC.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 27 Jan 2023 20:26:25 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Something is wrong with wal_compression" }, { "msg_contents": "On Sat, Jan 28, 2023 at 4:57 PM Andrey Borodin <amborodin86@gmail.com> wrote:\n> It's not trustworthy anyway. Xid wraparound might happen during\n> reconnect. I suspect we can design a test that will show that it does\n> not always show correct results during xid->2pc conversion (there is a\n> point in time when xid is not in regular and not in 2pc, and I'm not\n> sure ProcArrayLock is held). Maybe there are other edge cases.\n\nI'm not sure I understand the edge cases, but it is true that this can\nonly give you the answer until the CLOG is truncated, which is pretty\narbitrary and you could be unlucky. I guess a reliable version of\nthis would have new policies about CLOG retention, and CLOG segment\nfilenames derived from 64 bit xids so they don't wrap around.\n\n> Anyway, if a user wants to know the status of xid in case of\n> disconnection they have prepared xacts.\n\nYeah. The original proposal mentioned that, but that this was a\n\"lighter\" alternative.\n\nReading Andres's comments and realising how relatively young\ntxid_status() is compared to txid_current(), I'm now wondering if we\nshouldn't just disclaim the whole thing in back branches. 
Maybe if we\nwant to rescue it in master, there could be a \"reliable\" argument,\ndefaulting to false, or whatever, and we could eventually make the\namortisation improvement.\n\n\n", "msg_date": "Sat, 28 Jan 2023 17:56:58 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Something is wrong with wal_compression" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Reading Andres's comments and realising how relatively young\n> txid_status() is compared to txid_current(), I'm now wondering if we\n> shouldn't just disclaim the whole thing in back branches.\n\nMy thoughts were trending in that direction too. It's starting\nto sound like we aren't going to be able to make a fix that\nwe'd be willing to risk back-patching, even if it were completely\ncompatible at the user level.\n\nStill, the idea that txid_status() isn't trustworthy is rather\nscary. I wonder whether there is a failure mode here that's\nexhibitable without using that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 28 Jan 2023 00:02:23 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Something is wrong with wal_compression" }, { "msg_contents": "On Sat, Jan 28, 2023 at 12:02:23AM -0500, Tom Lane wrote:\n> My thoughts were trending in that direction too. It's starting\n> to sound like we aren't going to be able to make a fix that\n> we'd be willing to risk back-patching, even if it were completely\n> compatible at the user level.\n> \n> Still, the idea that txid_status() isn't trustworthy is rather\n> scary. 
I wonder whether there is a failure mode here that's\n> exhibitable without using that.\n\nOkay, as far as I can see, the consensus would be to not do anything\nabout the performance impact of these functions:\n20210305.115011.558061052471425531.horikyota.ntt@gmail.com\n\nThree of my buildfarm machines are unstable because of that, they need\nsomething for stable branches as well, and I'd like them to stress\ntheir options.\n\nBased on what's been mentioned, we can:\n1) tweak the test with an extra checkpoint to make sure that the XIDs\nare flushed, like in the patch posted on [1].\n2) tweak the test to rely on a state of the table, as\nmentioned by Andrey.\n3) remove entirely the test, because as introduced it does not\nactually test what it should.\n\n2) is not really interesting, IMO, because the test checks for two\nthings:\n- an in-progress XID, which we already do in the main regression test\nsuite.\n- a post-crash state, and switching to an approach where some data is\nfor example scanned is no different than a lot of the other recovery\ntests.\n\n1) means more test cycles, and perhaps we could enforce compression of\nWAL while on it? At the end, my vote would just go for 3) and drop\nthe whole scenario, though there may be an argument in 1).\n\n[1]: https://www.postgresql.org/message-id/20210305.115011.558061052471425531.horikyota.ntt@gmail.com\n--\nMichael", "msg_date": "Mon, 30 Jan 2023 14:57:13 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Something is wrong with wal_compression" }, { "msg_contents": "On Mon, Jan 30, 2023 at 02:57:13PM +0900, Michael Paquier wrote:\n> 1) means more test cycles, and perhaps we could enforce compression of\n> WAL while on it? 
At the end, my vote would just go for 3) and drop\n> the whole scenario, though there may be an argument in 1).\n\nAnd actually I was under the impression that 1) is not completely\nstable either in the test because we rely on the return result of\ntxid_current() with IPC::Run::start, so a checkpoint forcing a flush\nmay not be able to do its work. In order to bring all my animals back\nto green, I have removed the test.\n--\nMichael", "msg_date": "Tue, 31 Jan 2023 12:50:47 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Something is wrong with wal_compression" } ]
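[Editor's note: as a footnote to the thread above, option 1) — the extra checkpoint — works because a checkpoint durably captures the advanced nextXid, not because the XID assignment itself writes WAL. A sketch of the idea, with the session split being illustrative:]

```sql
-- Session 1: assign an XID without writing any WAL of its own.
BEGIN;
SELECT txid_current();   -- the XID assignment exists only in shared memory so far

-- Session 2: the checkpoint record includes the advanced nextXid, so even
-- after an immediate crash, recovery will not hand out the same XID again
-- and txid_status() on the value captured above stays meaningful.
CHECKPOINT;
```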
[ { "msg_contents": "Hi,\n\nI received an alert that dikkop (my rpi4 buildfarm animal running freebsd 14)\nhad not reported any results for a couple of days, and it seems it got into\nan infinite loop in REL_11_STABLE when building the hash table in a parallel\nhashjoin, or something like that.\n\nIt seems to be progressing now, probably because I attached gdb to the\nworkers to get backtraces, which does signals etc.\n\nAnyway, in 'ps ax' I saw this:\n\n94545 - Ss 0:03.39 postgres: buildfarm regression [local] SELECT\n94627 - Is 0:00.03 postgres: parallel worker for PID 94545\n94628 - Is 0:00.02 postgres: parallel worker for PID 94545\n\nand the backend was stuck waiting on this query:\n\n select final > 1 as multibatch\n from hash_join_batches(\n $$\n select count(*) from join_foo\n left join (select b1.id, b1.t from join_bar b1 join join_bar\nb2 using (id)) ss\n on join_foo.id < ss.id + 1 and join_foo.id > ss.id - 1;\n $$);\n\nThis started on 2023-01-20 23:23:18.125, and the next log (after I did\nthe gdb stuff) is from 2023-01-26 20:05:16.751. Quite a bit of time.\n\nIt seems all three processes are doing WaitEventSetWait, either through\na ConditionVariable, or WaitLatch. But I don't have any good idea of\nwhat might have broken - and as it got \"unstuck\" I can't investigate\nmore. But I see there's nodeHash and parallelism, and I recall there are a\nlot of gotchas due to how the backends cooperate when building the hash\ntable, etc. 
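[Editor's note: the wait states of stuck processes like these can also be inspected from another session, without the signal side effects of attaching a debugger. A sketch, using the PIDs from the ps output above:]

```sql
-- What are the leader and the two parallel workers waiting on?
-- (wait_event_type/wait_event are available in PostgreSQL 9.6 and later.)
SELECT pid, wait_event_type, wait_event, state
FROM pg_stat_activity
WHERE pid IN (94545, 94627, 94628);
```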
Thomas, any idea what might be wrong?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Thu, 26 Jan 2023 21:36:06 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "lockup in parallel hash join on dikkop (freebsd 14.0-current)" }, { "msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> I received an alert dikkop (my rpi4 buildfarm animal running freebsd 14)\n> did not report any results for a couple days, and it seems it got into\n> an infinite loop in REL_11_STABLE when building hash table in a parallel\n> hashjoin, or something like that.\n\n> It seems to be progressing now, probably because I attached gdb to the\n> workers to get backtraces, which does signals etc.\n\nThat reminds me of cases that I saw several times on my now-deceased\nanimal florican:\n\nhttps://www.postgresql.org/message-id/flat/2245838.1645902425%40sss.pgh.pa.us\n\nThere's clearly something rotten somewhere in there, but whether\nit's our bug or FreeBSD's isn't clear.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 26 Jan 2023 15:49:29 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: lockup in parallel hash join on dikkop (freebsd 14.0-current)" }, { "msg_contents": "On Fri, Jan 27, 2023 at 9:49 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> > I received an alert dikkop (my rpi4 buildfarm animal running freebsd 14)\n> > did not report any results for a couple days, and it seems it got into\n> > an infinite loop in REL_11_STABLE when building hash table in a parallel\n> > hashjoin, or something like that.\n>\n> > It seems to be progressing now, probably because I attached gdb to the\n> > workers to get backtraces, which does signals etc.\n>\n> That reminds me of cases that I saw several times on my now-deceased\n> animal florican:\n>\n> 
https://www.postgresql.org/message-id/flat/2245838.1645902425%40sss.pgh.pa.us\n>\n> There's clearly something rotten somewhere in there, but whether\n> it's our bug or FreeBSD's isn't clear.\n\nAnd if it's ours, it's possibly in latch code and not anything higher\n(I mean, not in condition variables, barriers, or parallel hash join)\nbecause I saw a similar hang in the shm_mq stuff which uses the latch\nAPI directly. Note that 13 switched to kqueue but still used the\nself-pipe, and 14 switched to a signal event, and this hasn't been\nreported in those releases or later, which makes the poll() code path\na key suspect.\n\n\n", "msg_date": "Fri, 27 Jan 2023 09:57:02 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: lockup in parallel hash join on dikkop (freebsd 14.0-current)" }, { "msg_contents": "On Fri, Jan 27, 2023 at 9:57 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Fri, Jan 27, 2023 at 9:49 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> > > I received an alert dikkop (my rpi4 buildfarm animal running freebsd 14)\n> > > did not report any results for a couple days, and it seems it got into\n> > > an infinite loop in REL_11_STABLE when building hash table in a parallel\n> > > hashjoin, or something like that.\n> >\n> > > It seems to be progressing now, probably because I attached gdb to the\n> > > workers to get backtraces, which does signals etc.\n> >\n> > That reminds me of cases that I saw several times on my now-deceased\n> > animal florican:\n> >\n> > https://www.postgresql.org/message-id/flat/2245838.1645902425%40sss.pgh.pa.us\n> >\n> > There's clearly something rotten somewhere in there, but whether\n> > it's our bug or FreeBSD's isn't clear.\n>\n> And if it's ours, it's possibly in latch code and not anything higher\n> (I mean, not in condition variables, barriers, or parallel hash join)\n> because I saw a similar hang in the shm_mq stuff 
which uses the latch\n> API directly. Note that 13 switched to kqueue but still used the\n> self-pipe, and 14 switched to a signal event, and this hasn't been\n> reported in those releases or later, which makes the poll() code path\n> a key suspect.\n\nAlso, 14 changed the flag/memory barrier dance (maybe_sleeping), but\n13 did it the same way as 11 + 12. So between 12 and 13 we have just\nthe poll -> kqueue change.\n\n\n", "msg_date": "Fri, 27 Jan 2023 10:06:45 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: lockup in parallel hash join on dikkop (freebsd 14.0-current)" }, { "msg_contents": "After 1000 make check loops, and 1000 make -C src/test/modules/test_shm_mq\ncheck loops, on the same FBSD 13.1 machine as elver which has failed\nlike this once before, I haven't been able to reproduce this on\nREL_12_STABLE. Not really sure how to chase this, but if you see this\nsituation again, I'd be interested to see the output of fstat -p PID\n(shows bytes in pipes) and procstat -j PID (shows pending signals) for\nall PIDs involved (before connecting a debugger or doing anything else\nthat might make it return with EINTR, after which we know it continues\nhappily because it then sees latch->is_set next time around the loop).\nIf poll() is not returning when there are bytes ready to read from the\nself-pipe, which fstat can show, I think that'd indicate a kernel bug.\nIf procstat -j shows signals pending but somehow it's still blocked in\nthe syscall, that would also point to a kernel problem. 
Otherwise, it might indicate a compiler or postgres bug,\nbut I don't have any particular theories.\n\n\n", "msg_date": "Fri, 27 Jan 2023 22:23:58 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: lockup in parallel hash join on dikkop (freebsd 14.0-current)" }, { "msg_contents": "Hi,\n\nOn 2023-01-27 22:23:58 +1300, Thomas Munro wrote:\n> After 1000 make check loops, and 1000 make -C src/test/modules/test_shm_mq\n> check loops, on the same FBSD 13.1 machine as elver which has failed\n> like this once before, I haven't been able to reproduce this on\n> REL_12_STABLE.\n\nDid you use the same compiler / compilation flags as when elver hit it?\nClearly Tomas' case was with at least some optimizations enabled.\n\nExcept that you're saying that you hit this on elver (amd64), I think it'd be\ninteresting that we see the failure on an arm host, which has a less strict\nmemory order model than x86.\n\nIIUC elver previously hit this on 12?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 27 Jan 2023 19:42:47 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: lockup in parallel hash join on dikkop (freebsd 14.0-current)" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Except that you're saying that you hit this on elver (amd64), I think it'd be\n> interesting that we see the failure on an arm host, which has a less strict\n> memory order model than x86.\n\nI also saw it on florican, which is/was an i386 machine using clang and\npretty standard build options other than\n\t'CFLAGS' => '-msse2 -O2',\nso I think this isn't too much about machine architecture or compiler\nflags.\n\nMachine speed might matter though. elver is a good deal faster than\nflorican was, and dikkop is slower yet. 
I gather Thomas has seen this\nonly once on elver, but I saw it maybe a dozen times over a couple of\nyears on florican, and now dikkop has hit it after not so many runs.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 27 Jan 2023 23:18:39 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: lockup in parallel hash join on dikkop (freebsd 14.0-current)" }, { "msg_contents": "Hi,\n\nOn 2023-01-27 23:18:39 -0500, Tom Lane wrote:\n> I also saw it on florican, which is/was an i386 machine using clang and\n> pretty standard build options other than\n> \t'CFLAGS' => '-msse2 -O2',\n> so I think this isn't too much about machine architecture or compiler\n> flags.\n\nAh. Florican dropped off the BF status page and I was too lazy to look\ndeeper. You have a penchant for odd architectures, so it didn't seem too crazy\n:)\n\n\n> Machine speed might matter though. elver is a good deal faster than\n> florican was, and dikkop is slower yet. I gather Thomas has seen this\n> only once on elver, but I saw it maybe a dozen times over a couple of\n> years on florican, and now dikkop has hit it after not so many runs.\n\nRe-reading the old thread, it is interesting that you tried hard to reproduce\nit outside of the BF, without success:\nhttps://postgr.es/m/2398828.1646000688%40sss.pgh.pa.us\n\n
Last time I hit such a case was\nhttps://postgr.es/m/20220325052654.3xpbmntatyofau2w%40alap3.anarazel.de\nbut I can't see anything like that being the issue here.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 27 Jan 2023 20:53:07 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: lockup in parallel hash join on dikkop (freebsd 14.0-current)" }, { "msg_contents": "On Sat, Jan 28, 2023 at 4:42 PM Andres Freund <andres@anarazel.de> wrote:\n> Did you use the same compiler / compilation flags as when elver hit it?\n> Clearly Tomas' case was with at least some optimizations enabled.\n\nI did use the same compiler version and optimisation level, clang\nllvmorg-13.0.0-0-gd7b669b3a303 at -O2.\n\n\n", "msg_date": "Sat, 28 Jan 2023 18:34:20 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: lockup in parallel hash join on dikkop (freebsd 14.0-current)" }, { "msg_contents": "\n\nOn 1/28/23 05:53, Andres Freund wrote:\n> Hi,\n> \n> On 2023-01-27 23:18:39 -0500, Tom Lane wrote:\n>> I also saw it on florican, which is/was an i386 machine using clang and\n>> pretty standard build options other than\n>> \t'CFLAGS' => '-msse2 -O2',\n>> so I think this isn't too much about machine architecture or compiler\n>> flags.\n> \n> Ah. Florican dropped of the BF status page and I was too lazy to look\n> deeper. You have a penchant for odd architectures, so it didn't seem too crazy\n> :)\n> \n> \n>> Machine speed might matter though. elver is a good deal faster than\n>> florican was, and dikkop is slower yet. 
I gather Thomas has seen this\n>> only once on elver, but I saw it maybe a dozen times over a couple of\n>> years on florican, and now dikkop has hit it after not so many runs.\n> \n> Re-reading the old thread, it is interesting that you tried hard to reproduce\n> it outside of the BF, without success:\n> https://postgr.es/m/2398828.1646000688%40sss.pgh.pa.us\n> \n> Such problems are quite annoying. Last time I hit such a case was\n> https://postgr.es/m/20220325052654.3xpbmntatyofau2w%40alap3.anarazel.de\n> but I can't see anything like that being the issue here.\n> \n\nFWIW I'll wait for dikkop to finish the current buildfarm run (it's\ncurrently chewing on HEAD) and then will try to do runs of the 'joins'\ntest in a loop. That's where dikkop got stuck before.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sat, 28 Jan 2023 13:05:20 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: lockup in parallel hash join on dikkop (freebsd 14.0-current)" }, { "msg_contents": "On 1/28/23 13:05, Tomas Vondra wrote:\n>\n> FWIW I'll wait for dikkop to finish the current buildfarm run (it's\n> currently chewing on HEAD) and then will try to do runs of the 'joins'\n> test in a loop. That's where dikkop got stuck before.\n>\n\nSo I did that - same configure options as the buildfarm client, and a\n'make check' (with only tests up to the 'join' suite, because that's\nwhere it got stuck before). 
And it took only ~15 runs (~1h) to hit this\nagain on dikkop.\n\nAs before, there are three processes - leader + 2 workers, but the query\nis different - this time it's this one:\n\n -- A couple of other hash join tests unrelated to work_mem management.\n -- Check that EXPLAIN ANALYZE has data even if the leader doesn't\nparticipate\n savepoint settings;\n set local max_parallel_workers_per_gather = 2;\n set local work_mem = '4MB';\n set local parallel_leader_participation = off;\n select * from hash_join_batches(\n $$\n select count(*) from simple r join simple s using (id);\n $$);\n\nI managed to collect the fstat/procstat stuff Thomas asked for, and the\nbacktraces - attached. I still have the core files, in case we look at\nsomething. As before, running gcore on the second worker (29081) gets\nthis unstuck - it sends some signal that apparently wakes it up.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Sun, 29 Jan 2023 13:53:10 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: lockup in parallel hash join on dikkop (freebsd 14.0-current)" }, { "msg_contents": "On Mon, Jan 30, 2023 at 1:53 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> So I did that - same configure options as the buildfarm client, and a\n> 'make check' (with only tests up to the 'join' suite, because that's\n> where it got stuck before). And it took only ~15 runs (~1h) to hit this\n> again on dikkop.\n\nThat's good news.\n\n> I managed to collect the fstat/procstat stuff Thomas asked for, and the\n> backtraces - attached. I still have the core files, in case we look at\n> something. As before, running gcore on the second worker (29081) gets\n> this unstuck - it sends some signal that apparently wakes it up.\n\nThanks! 
As expected, no bytes in the pipe for any of those processes.\nUnfortunately I gave the wrong procstat command, it should be -i, not\n-j. Does \"procstat -i /path/to/core | grep USR1\" show P (pending) for\nthat stuck process? Silly question really, I don't really expect\npoll() to be misbehaving in such a basic way.\n\nI was talking to Andres on IM about this yesterday and he pointed out\na potential out-of-order hazard: WaitEventSetWait() sets \"waiting\" (to\ntell the signal handler to write to the self-pipe) and then reads\nlatch->is_set with neither compiler nor memory barrier, which doesn't\nseem right because we might see a value of latch->is_set from before\n\"waiting\" was true, and yet the signal handler might also have run\nwhile \"waiting\" was false so the self-pipe doesn't save us, despite\nthe length of the comment about that. Can you reproduce it with this\nchange?\n\n--- a/src/backend/storage/ipc/latch.c\n+++ b/src/backend/storage/ipc/latch.c\n@@ -1011,6 +1011,7 @@ WaitEventSetWait(WaitEventSet *set, long timeout,\n * ordering, so that we cannot miss seeing is_set if a notificat\nion\n * has already been queued.\n */\n+ pg_memory_barrier();\n if (set->latch && set->latch->is_set)\n {\n occurred_events->fd = PGINVALID_SOCKET;\n\n\n", "msg_date": "Mon, 30 Jan 2023 06:26:02 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: lockup in parallel hash join on dikkop (freebsd 14.0-current)" }, { "msg_contents": "\n\nOn 1/29/23 18:26, Thomas Munro wrote:\n> On Mon, Jan 30, 2023 at 1:53 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>> So I did that - same configure options as the buildfarm client, and a\n>> 'make check' (with only tests up to the 'join' suite, because that's\n>> where it got stuck before). 
And it took only ~15 runs (~1h) to hit this\n>> again on dikkop.\n> \n> That's good news.\n> \n>> I managed to collect the fstat/procstat stuff Thomas asked for, and the\n>> backtraces - attached. I still have the core files, in case we look at\n>> something. As before, running gcore on the second worker (29081) gets\n>> this unstuck - it sends some signal that apparently wakes it up.\n> \n> Thanks! As expected, no bytes in the pipe for any those processes.\n> Unfortunately I gave the wrong procstat command, it should be -i, not\n> -j. Does \"procstat -i /path/to/core | grep USR1\" show P (pending) for\n> that stuck process? Silly question really, I don't really expect\n> poll() to be misbehaving in such a basic way.\n> \n\nIt shows \"--C\" for all three processes, which should mean \"will be caught\".\n\n> I was talking to Andres on IM about this yesterday and he pointed out\n> a potential out-of-order hazard: WaitEventSetWait() sets \"waiting\" (to\n> tell the signal handler to write to the self-pipe) and then reads\n> latch->is_set with neither compiler nor memory barrier, which doesn't\n> seem right because we might see a value of latch->is_set from before\n> \"waiting\" was true, and yet the signal handler might also have run\n> while \"waiting\" was false so the self-pipe doesn't save us, despite\n> the length of the comment about that. Can you reproduce it with this\n> change?\n> \n\nWill do, but I'll wait for another lockup to see how frequent it\nactually is. 
I'm now at ~90 runs total, and it didn't happen again yet.\nSo hitting it after 15 runs might have been a bit of a luck.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sun, 29 Jan 2023 18:39:05 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: lockup in parallel hash join on dikkop (freebsd 14.0-current)" }, { "msg_contents": "Hi,\n\nOn 2023-01-30 06:26:02 +1300, Thomas Munro wrote:\n> On Mon, Jan 30, 2023 at 1:53 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n> > So I did that - same configure options as the buildfarm client, and a\n> > 'make check' (with only tests up to the 'join' suite, because that's\n> > where it got stuck before). And it took only ~15 runs (~1h) to hit this\n> > again on dikkop.\n> \n> That's good news.\n\nIndeed.\n\nAs annoying as it is, it might be worth reproing it once or twice more, just\nto have a feeling for how long we need to run to have confidence in a fix.\n\n\n> I was talking to Andres on IM about this yesterday and he pointed out\n> a potential out-of-order hazard: WaitEventSetWait() sets \"waiting\" (to\n> tell the signal handler to write to the self-pipe) and then reads\n> latch->is_set with neither compiler nor memory barrier, which doesn't\n> seem right because we might see a value of latch->is_set from before\n> \"waiting\" was true, and yet the signal handler might also have run\n> while \"waiting\" was false so the self-pipe doesn't save us, despite\n> the length of the comment about that. 
Can you reproduce it with this\n> change?\n> \n> --- a/src/backend/storage/ipc/latch.c\n> +++ b/src/backend/storage/ipc/latch.c\n> @@ -1011,6 +1011,7 @@ WaitEventSetWait(WaitEventSet *set, long timeout,\n> * ordering, so that we cannot miss seeing is_set if a notificat\n> ion\n> * has already been queued.\n> */\n> + pg_memory_barrier();\n> if (set->latch && set->latch->is_set)\n> {\n> occurred_events->fd = PGINVALID_SOCKET;\n\nI think we need a barrier in SetLatch(), after is_set = true. We have that in\nsome of the newer branches (due to the maybe_sleeping logic), but not in the\nolder branches.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 29 Jan 2023 09:41:10 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: lockup in parallel hash join on dikkop (freebsd 14.0-current)" }, { "msg_contents": "Hi,\n\nOn 2023-01-29 18:39:05 +0100, Tomas Vondra wrote:\n> Will do, but I'll wait for another lockup to see how frequent it\n> actually is. I'm now at ~90 runs total, and it didn't happen again yet.\n> So hitting it after 15 runs might have been a bit of a luck.\n\nWas there a difference in how much load there was on the machine between\n\"reproduced in 15 runs\" and \"not reproed in 90\"? If indeed lack of barriers\nis related to the issue, an increase in context switches could substantially\nchange the behaviour (in both directions). More intra-process context\nswitches can amount to \"probabilistic barriers\" because that'll be a\nbarrier. 
At the same time it can make it more likely that the relatively\nnarrow window in WaitEventSetWait() is hit, or lead to larger delays\nprocessing signals.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 29 Jan 2023 09:53:36 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: lockup in parallel hash join on dikkop (freebsd 14.0-current)" }, { "msg_contents": "\n\nOn 1/29/23 18:53, Andres Freund wrote:\n> Hi,\n> \n> On 2023-01-29 18:39:05 +0100, Tomas Vondra wrote:\n>> Will do, but I'll wait for another lockup to see how frequent it\n>> actually is. I'm now at ~90 runs total, and it didn't happen again yet.\n>> So hitting it after 15 runs might have been a bit of a luck.\n> \n> Was there a difference in how much load there was on the machine between\n> \"reproduced in 15 runs\" and \"not reproed in 90\"? If indeed lack of barriers\n> is related to the issue, an increase in context switches could substantially\n> change the behaviour (in both directions). More intra-process context\n> switches can amount to \"probabilistic barriers\" because that'll be a\n> barrier. At the same time it can make it more likely that the relatively\n> narrow window in WaitEventSetWait() is hit, or lead to larger delays\n> processing signals.\n> \n\nNo. The only thing the machine is doing is\n\n while /usr/bin/true; do\n make check\n done\n\nI can't reduce the workload further, because the \"join\" test is in a\nseparate parallel group (I cut down parallel_schedule). 
I could make the\nmachine busier, of course.\n\nHowever, the other lockup I saw was when using serial_schedule, so I\nguess lower concurrency makes it more likely.\n\nBut who knows ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sun, 29 Jan 2023 19:08:36 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: lockup in parallel hash join on dikkop (freebsd 14.0-current)" }, { "msg_contents": "On Mon, Jan 30, 2023 at 7:08 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> However, the other lockup I saw was when using serial_schedule, so I\n> guess lower concurrency makes it more likely.\n\nFWIW \"psql db -f src/test/regress/sql/join_hash.sql | cat\" also works\n(I mean, it's self-contained and doesn't need anything else from make\ncheck; pipe to cat just disables the pager); that's how I've been\ntrying (and failing) to reproduce this on various computers. I also\ndid a lot of \"make -C src/test/module/test_shm_mq installcheck\" loops,\nat the same time, because that's where my animal hung.\n\n\n", "msg_date": "Mon, 30 Jan 2023 07:20:41 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: lockup in parallel hash join on dikkop (freebsd 14.0-current)" }, { "msg_contents": "On Mon, Jan 30, 2023 at 6:26 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> out-of-order hazard\n\nI've been trying to understand how that could happen, but my CPU-fu is\nweak. Let me try to write an argument for why it can't happen, so\nthat later I can look back at how stupid and naive I was. We have A\nB, and if the CPU sees no dependency and decides to execute B A\n(pipelined), shouldn't an interrupt either wait for the whole\nschemozzle to commit first (if not in a hurry), or nuke it, handle the\nIPI and restart, or something? 
After an hour of reviewing random\nslides from classes on out-of-order execution and reorder buffers and\nthe like, I think the term for making sure that interrupts run with\nthe illusion of in-order execution maintained is called \"precise\ninterrupts\", and it is expected in all modern architectures, after the\nearly OoO pioneers lost their minds trying to program without it. I\nguess generally you want that because it would otherwise run your\ninterrupt handler in a completely uncertain environment, and\nspecifically in this case it would reach our signal handler which\nreads A's output (waiting) and writes to B's input (is_set), so B IPI\nA surely shouldn't be allowed?\n\nAs for compiler barriers, I see that elver's compiler isn't reordering the code.\n\nMaybe it's a much dumber sort of a concurrency problem: stale cache\nline due to missing barrier, but... commit db0f6cad488 made us also\nset our own latch (a second time) when someone sets our latch in\nreleases 9.something to 13. Which should mean that we're guaranteed\nto see is_set = true in the scenario described, because we'll clobber\nit ourselves if we have to, for good measure.\n\nIf our secondary SetLatch() sees it's already set and decides not to\nset it, then it's possible that the code we interrupted was about to\nrun ResetLatch(), but any code doing that must next check its expected\nexit condition (or it has a common-or-garden latch protocol bug, as\nhas been discovered from time in the tree...).\n\n/me wanders away with a renewed fear of computers and the vast\ncomplexities they hide\n\n\n", "msg_date": "Mon, 30 Jan 2023 15:22:34 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: lockup in parallel hash join on dikkop (freebsd 14.0-current)" }, { "msg_contents": "Hi,\n\nOn 2023-01-30 15:22:34 +1300, Thomas Munro wrote:\n> On Mon, Jan 30, 2023 at 6:26 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > out-of-order hazard\n>\n> I've been trying 
to understand how that could happen, but my CPU-fu is\n> weak. Let me try to write an argument for why it can't happen, so\n> that later I can look back at how stupid and naive I was. We have A\n> B, and if the CPU sees no dependency and decides to execute B A\n> (pipelined), shouldn't an interrupt either wait for the whole\n> schemozzle to commit first (if not in a hurry), or nuke it, handle the\n> IPI and restart, or something?\n\nIn a core local view, yes, I think so. But I don't think that's how it can\nwork on multi-core, and even more so, multi-socket machines. Imagine how it'd\ninfluence latency if every interrupt on any CPU would prevent all out-of-order\nexecution on any CPU.\n\n\n> After an hour of reviewing random\n> slides from classes on out-of-order execution and reorder buffers and\n> the like, I think the term for making sure that interrupts run with\n> the illusion of in-order execution maintained is called \"precise\n> interrupts\", and it is expected in all modern architectures, after the\n> early OoO pioneers lost their minds trying to program without it. I\n> guess generally you want that because it would otherwise run your\n> interrupt handler in a completely uncertain environment, and\n> specifically in this case it would reach our signal handler which\n> reads A's output (waiting) and writes to B's input (is_set), so B IPI\n> A surely shouldn't be allowed?\n\nUserspace signals aren't delivered synchronously during hardware interrupts\nafaik - and I don't think they even possibly could be (after all the process\npossibly isn't scheduled).\n\nI think what you're talking about with precise interrupts above is purely\nabout the single-core view, and mostly about hardware interrupts for faults\netc. 
The CPU will unwind state from speculatively executed code etc on\ninterrupt, sure - but I think that's separate from guaranteeing that you can't\nhave stale cache contents *due to work by another CPU*.\n\n\nI'm not even sure that userspace signals are generally delivered via an\nimmediate hardware interrupt, or whether they're processed at the next\nscheduler tick. After all, we know that multiple signals are coalesced, which\ncertainly isn't compatible with synchronous execution. But it could be that\nthat just happens when the target of a signal is not currently scheduled.\n\n\n> Maybe it's a much dumber sort of a concurrency problem: stale cache\n> line due to missing barrier, but... commit db0f6cad488 made us also\n> set our own latch (a second time) when someone sets our latch in\n> releases 9.something to 13.\n\nBut this part does indeed put a crimp on some potential theories.\n\nTBH, I'd be in favor of just adding the barriers for good measure, even if we\ndon't know if it's a live bug today - it seems incredibly fragile.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 29 Jan 2023 21:36:50 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: lockup in parallel hash join on dikkop (freebsd 14.0-current)" }, { "msg_contents": "On Mon, Jan 30, 2023 at 6:36 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2023-01-30 15:22:34 +1300, Thomas Munro wrote:\n> > On Mon, Jan 30, 2023 at 6:26 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > > out-of-order hazard\n> >\n> > I've been trying to understand how that could happen, but my CPU-fu is\n> > weak. Let me try to write an argument for why it can't happen, so\n> > that later I can look back at how stupid and naive I was. 
We have A\n> > B, and if the CPU sees no dependency and decides to execute B A\n> > (pipelined), shouldn't an interrupt either wait for the whole\n> > schemozzle to commit first (if not in a hurry), or nuke it, handle the\n> > IPI and restart, or something?\n>\n> In a core local view, yes, I think so. But I don't think that's how it can\n> work on multi-core, and even more so, multi-socket machines. Imagine how it'd\n> influence latency if every interrupt on any CPU would prevent all out-of-order\n> execution on any CPU.\n\nGood. Yeah, I was talking only about a single thread/core.\n\n> > After an hour of reviewing random\n> > slides from classes on out-of-order execution and reorder buffers and\n> > the like, I think the term for making sure that interrupts run with\n> > the illusion of in-order execution maintained is called \"precise\n> > interrupts\", and it is expected in all modern architectures, after the\n> > early OoO pioneers lost their minds trying to program without it. I\n> > guess generally you want that because it would otherwise run your\n> > interrupt handler in a completely uncertain environment, and\n> > specifically in this case it would reach our signal handler which\n> > reads A's output (waiting) and writes to B's input (is_set), so B IPI\n> > A surely shouldn't be allowed?\n>\n> Userspace signals aren't delivered synchronously during hardware interrupts\n> afaik - and I don't think they even possibly could be (after all the process\n> possibly isn't scheduled).\n\nYeah, they're not synchronous and the target might not even be\nrunning. BUT if a suitable thread is running then AFAICT an IPI is\ndelivered to that sucker to get it running the handler ASAP, at least\non the three OSes I looked at. (See breadcrumbs below).\n\n> I think what you're talking about with precise interrupts above is purely\n> about the single-core view, and mostly about hardware interrupts for faults\n> etc. 
The CPU will unwind state from speculatively executed code etc on\n> interrupt, sure - but I think that's separate from guaranteeing that you can't\n> have stale cache contents *due to work by another CPU*.\n\nYeah. I get the cache problem, a separate issue that does indeed look\npretty dodgy. I guess I wrote my email out-of-order: at the end I\nspeculated that cache coherency probably can't explain this failure at\nleast in THAT bit of the source, because of that funky extra\nself-SetLatch(). I just got spooked by the mention of out-of-order\nexecution and I wanted to chase it down and straighten out my\nunderstanding.\n\n> I'm not even sure that userspace signals are generally delivered via an\n> immediate hardware interrupt, or whether they're processed at the next\n> scheduler tick. After all, we know that multiple signals are coalesced, which\n> certainly isn't compatible with synchronous execution. But it could be that\n> that just happens when the target of a signal is not currently scheduled.\n\nFreeBSD: By default, they are when possible, eg if the process is\ncurrently running a suitable thread. You can set sysctl\nkern.smp.forward_signal_enabled=0 to turn that off, and then it works\nmore like the way you imagined (checking for pending signals at\nvarious arbitrary times, not sure). See tdsigwakeup() ->\nforward_signal() -> ipi_cpu().\n\nLinux: Well it certainly smells approximately similar. See\nsignal_wake_up_state() -> kick_process() -> smp_send_reschedule() ->\nsmp_cross_call() -> __ipi_send_mask(). The comment for kick_process()\nexplains that it's using the scheduler IPI to get signals handled\nASAP.\n\nDarwin: ... -> cpu_signal() -> something that talks about IPIs\n\nCoalescing is happening not only at the pending signal level (an\ninvention of the OS), and then for the inter-processor wakeups there\nis also interrupt coalescing. 
It's latches all the way down.\n\n\n", "msg_date": "Mon, 30 Jan 2023 21:43:01 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: lockup in parallel hash join on dikkop (freebsd 14.0-current)" }, { "msg_contents": "On 1/29/23 19:08, Tomas Vondra wrote:\n> \n> \n> On 1/29/23 18:53, Andres Freund wrote:\n>> Hi,\n>>\n>> On 2023-01-29 18:39:05 +0100, Tomas Vondra wrote:\n>>> Will do, but I'll wait for another lockup to see how frequent it\n>>> actually is. I'm now at ~90 runs total, and it didn't happen again yet.\n>>> So hitting it after 15 runs might have been a bit of a luck.\n>>\n>> Was there a difference in how much load there was on the machine between\n>> \"reproduced in 15 runs\" and \"not reproed in 90\"? If indeed lack of barriers\n>> is related to the issue, an increase in context switches could substantially\n>> change the behaviour (in both directions). More intra-process context\n>> switches can amount to \"probabilistic barriers\" because that'll be a\n>> barrier. At the same time it can make it more likely that the relatively\n>> narrow window in WaitEventSetWait() is hit, or lead to larger delays\n>> processing signals.\n>>\n> \n> No. The only thing the machine is doing is\n> \n> while /usr/bin/true; do\n> make check\n> done\n> \n> I can't reduce the workload further, because the \"join\" test is in a\n> separate parallel group (I cut down parallel_schedule). 
I could make the\n> machine busier, of course.\n> \n> However, the other lockup I saw was when using serial_schedule, so I\n> guess lower concurrency makes it more likely.\n> \n\nFWIW the machine is now on run ~2700 without any further lockups :-/\n\nSeems it was quite lucky we hit it twice in a handful of attempts.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 6 Feb 2023 19:51:19 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: lockup in parallel hash join on dikkop (freebsd 14.0-current)" }, { "msg_contents": "Hi,\n\nOn 2023-02-06 19:51:19 +0100, Tomas Vondra wrote:\n> > No. The only thing the machine is doing is\n> > \n> > while /usr/bin/true; do\n> > make check\n> > done\n> > \n> > I can't reduce the workload further, because the \"join\" test is in a\n> > separate parallel group (I cut down parallel_schedule). I could make the\n> > machine busier, of course.\n> > \n> > However, the other lockup I saw was when using serial_schedule, so I\n> > guess lower concurrency makes it more likely.\n> > \n> \n> FWIW the machine is now on run ~2700 without any further lockups :-/\n> \n> Seems it was quite lucky we hit it twice in a handful of attempts.\n\nDid you cut down the workload before you reproduced it the first time, or\nafter? It's quite possible that it's not reproducible in isolation.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 6 Feb 2023 11:20:15 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: lockup in parallel hash join on dikkop (freebsd 14.0-current)" }, { "msg_contents": "\n\nOn 2/6/23 20:20, Andres Freund wrote:\n> Hi,\n> \n> On 2023-02-06 19:51:19 +0100, Tomas Vondra wrote:\n>>> No. 
The only thing the machine is doing is\n>>>\n>>> while /usr/bin/true; do\n>>> make check\n>>> done\n>>>\n>>> I can't reduce the workload further, because the \"join\" test is in a\n>>> separate parallel group (I cut down parallel_schedule). I could make the\n>>> machine busier, of course.\n>>>\n>>> However, the other lockup I saw was when using serial_schedule, so I\n>>> guess lower concurrency makes it more likely.\n>>>\n>>\n>> FWIW the machine is now on run ~2700 without any further lockups :-/\n>>\n>> Seems it was quite lucky we hit it twice in a handful of attempts.\n> \n> Did you cut down the workload before you reproduced it the first time, or\n> after? It's quite possible that it's not reproducible in isolation.\n> \n\nNo, I left the workload as it was for the first lockup, so `make check`\nruns everything as is up until the \"join\" test suite.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 7 Feb 2023 00:46:12 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: lockup in parallel hash join on dikkop (freebsd 14.0-current)" }, { "msg_contents": "On Tue, Feb 7, 2023 at 12:46 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> No, I left the workload as it was for the first lockup, so `make check`\n> runs everything as is up until the \"join\" test suite.\n\nWait, shouldn't that be join_hash?\n\n\n", "msg_date": "Tue, 7 Feb 2023 12:48:38 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: lockup in parallel hash join on dikkop (freebsd 14.0-current)" }, { "msg_contents": "On 2/7/23 00:48, Thomas Munro wrote:\n> On Tue, Feb 7, 2023 at 12:46 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>> No, I left the workload as it was for the first lockup, so `make check`\n>> runs everything as is up until the \"join\" test suite.\n> \n> Wait, shouldn't that be 
join_hash?\n\nNo, because join_hash does not exist on 11 (it was added in 12). Also,\nit actually locked up like this - that's the lockup I reported on 28/1.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 7 Feb 2023 01:06:35 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: lockup in parallel hash join on dikkop (freebsd 14.0-current)" }, { "msg_contents": "On Tue, Feb 7, 2023 at 1:06 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> On 2/7/23 00:48, Thomas Munro wrote:\n> > On Tue, Feb 7, 2023 at 12:46 PM Tomas Vondra\n> > <tomas.vondra@enterprisedb.com> wrote:\n> >> No, I left the workload as it was for the first lockup, so `make check`\n> >> runs everything as is up until the \"join\" test suite.\n> >\n> > Wait, shouldn't that be join_hash?\n>\n> No, because join_hash does not exist on 11 (it was added in 12). Also,\n> it actually locked up like this - that's the lockup I reported on 28/1.\n\nOh, good. I had been trying to repro with 12 here and forgot that you\nwere looking at 11...\n\n\n", "msg_date": "Tue, 7 Feb 2023 13:09:08 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: lockup in parallel hash join on dikkop (freebsd 14.0-current)" }, { "msg_contents": "On 2/7/23 01:09, Thomas Munro wrote:\n> On Tue, Feb 7, 2023 at 1:06 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>> On 2/7/23 00:48, Thomas Munro wrote:\n>>> On Tue, Feb 7, 2023 at 12:46 PM Tomas Vondra\n>>> <tomas.vondra@enterprisedb.com> wrote:\n>>>> No, I left the workload as it was for the first lockup, so `make check`\n>>>> runs everything as is up until the \"join\" test suite.\n>>>\n>>> Wait, shouldn't that be join_hash?\n>>\n>> No, because join_hash does not exist on 11 (it was added in 12). 
Also,\n>> it actually locked up like this - that's the lockup I reported on 28/1.\n> \n> Oh, good. I had been trying to repro with 12 here and forgot that you\n> were looking at 11...\n\nFYI it happened again, on a regular run of regression tests (I gave up\non trying to reproduce this - after some initial hits I didn't hit it in\na couple thousand tries so I just added the machine back to buildfarm).\n\nAnyway, same symptoms - lockup in join_hash on PG11, leader waiting on\nWaitLatch and both workers waiting on BarrierArriveAndWait. I forgot\nrunning gdb on the second worker will get it unstuck, so I haven't been\nable to collect more info.\n\nWhat else do you think would be useful to collect next time?\n\nIt's hard to draw conclusions due to the low probability of the issue,\nbut it's pretty weird this only ever happened on 11 so far.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Sun, 18 Jun 2023 03:03:48 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: lockup in parallel hash join on dikkop (freebsd 14.0-current)" }, { "msg_contents": "Hi,\n\nI have another case of this on dikkop (on 11 again). Is there anything\nelse we'd want to try? Or maybe someone would want access to the machine\nand do some investigation directly?\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 30 Aug 2023 14:16:47 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: lockup in parallel hash join on dikkop (freebsd 14.0-current)" }, { "msg_contents": "On Thu, Aug 31, 2023 at 12:16 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> I have another case of this on dikkop (on 11 again). Is there anything\n> else we'd want to try? 
Or maybe someone would want access to the machine\n> and do some investigation directly?\n\nSounds interesting -- I'll ping you off-list.\n\n\n", "msg_date": "Thu, 31 Aug 2023 14:32:43 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: lockup in parallel hash join on dikkop (freebsd 14.0-current)" }, { "msg_contents": "On Thu, Aug 31, 2023 at 2:32 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Thu, Aug 31, 2023 at 12:16 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n> > I have another case of this on dikkop (on 11 again). Is there anything\n> > else we'd want to try? Or maybe someone would want access to the machine\n> > and do some investigation directly?\n\nHmm. No conclusion tonight but I think it's weird. We have:\n\nbsd@generic:/mnt/data/buildfarm $ ps x -O wchan | grep 52663\n52663 select - Is 0:07.40 postgres: bsd regression [local] SELECT (postgres)\n52731 select - Is 0:00.09 postgres: parallel worker for PID 52663\n (postgres)\n52732 select - Is 0:00.06 postgres: parallel worker for PID 52663\n (postgres)\n81525 piperd 0 S+ 0:00.01 grep 52663\n\nwchan=select means sleeping in poll()/select().\n\nbsd@generic:/mnt/data/buildfarm $ procstat -i 52732 | grep USR1\n52732 postgres USR1 P-C\nbsd@generic:/mnt/data/buildfarm $ procstat -j 52732 | grep USR1\n52732 100121 postgres USR1 --\n\nWe have a signal that is pending and not blocked, so I don't\nimmediately know why poll() hasn't returned control.\n\n\n", "msg_date": "Thu, 31 Aug 2023 23:15:20 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: lockup in parallel hash join on dikkop (freebsd 14.0-current)" }, { "msg_contents": "Hello Thomas,\n\n31.08.2023 14:15, Thomas Munro wrote:\n\n> We have a signal that is pending and not blocked, so I don't\n> immediately know why poll() hasn't returned control.\n\nWhen I worked at the Postgres Pro company, we observed a similar lockup\nunder rather 
specific conditions (we used Elbrus CPU and the specific Elbrus\ncompiler (lcc) based on edg).\nI managed to reproduce that lockup and Anton Voloshin investigated it.\nThe issue was caused by the compiler optimization in WaitEventSetWait():\n     waiting = true;\n...\n     while (returned_events == 0)\n     {\n...\n         if (set->latch && set->latch->is_set)\n         {\n...\n             break;\n         }\n\nIn that case, compiler decided that it may place the read\n\"set->latch->is_set\" before the write \"waiting = true\".\n(Placing \"pg_compiler_barrier();\" just after \"waiting = true;\" fixed the\nissue for us.)\nI can't provide more details for now, but maybe you could look at the binary\ncode generated on the target platform to confirm or reject my guess.\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Fri, 1 Sep 2023 11:00:00 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: lockup in parallel hash join on dikkop (freebsd 14.0-current)" }, { "msg_contents": "On 9/1/23 10:00, Alexander Lakhin wrote:\n> Hello Thomas,\n> \n> 31.08.2023 14:15, Thomas Munro wrote:\n> \n>> We have a signal that is pending and not blocked, so I don't\n>> immediately know why poll() hasn't returned control.\n> \n> When I worked at the Postgres Pro company, we observed a similar lockup\n> under rather specific conditions (we used Elbrus CPU and the specific\n> Elbrus\n> compiler (lcc) based on edg).\n> I managed to reproduce that lockup and Anton Voloshin investigated it.\n> The issue was caused by the compiler optimization in WaitEventSetWait():\n>     waiting = true;\n> ...\n>     while (returned_events == 0)\n>     {\n> ...\n>         if (set->latch && set->latch->is_set)\n>         {\n> ...\n>             break;\n>         }\n> \n> In that case, compiler decided that it may place the read\n> \"set->latch->is_set\" before the write \"waiting = true\".\n> (Placing \"pg_compiler_barrier();\" just after \"waiting = true;\" 
fixed the\n> issue for us.)\n> I can't provide more details for now, but maybe you could look at the\n> binary\n> code generated on the target platform to confirm or reject my guess.\n> \n\nHmmm, I'm not very good at reading the binary code, but here's what\nobjdump produced for WaitEventSetWait. Maybe someone will see what the\nissue is.\n\nI thought about maybe just adding the barrier in the code, but then how\nwould we know it's the issue and this fixed it? It happens so rarely we\ncan't make any conclusions from a couple runs of tests.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Fri, 1 Sep 2023 15:00:29 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: lockup in parallel hash join on dikkop (freebsd 14.0-current)" }, { "msg_contents": "Hello Tomas,\n\n01.09.2023 16:00, Tomas Vondra wrote:\n> Hmmm, I'm not very good at reading the binary code, but here's what\n> objdump produced for WaitEventSetWait. 
Maybe someone will see what the\n> issue is.\n\nAt first glance, I can't see anything suspicious in the disassembly.\nIIUC, waiting = true presented there as:\n   805c38: b902ad18      str     w24, [x8, #684] // pgstat_report_wait_start(): proc->wait_event_info = wait_event_info;\n// end of pgstat_report_wait_start(wait_event_info);\n\n   805c3c: b0ffdb09      adrp    x9, 0x366000 <dsm_segment_address+0x24>\n   805c40: b0ffdb0a      adrp    x10, 0x366000 <dsm_segment_address+0x28>\n   805c44: f0000eeb      adrp    x11, 0x9e4000 <PMSignalShmemInit+0x4>\n\n   805c48: 52800028      mov     w8, #1 // true\n   805c4c: 52800319      mov     w25, #24\n   805c50: 5280073a      mov     w26, #57\n   805c54: fd446128      ldr     d8, [x9, #2240]\n   805c58: 90000d7b      adrp    x27, 0x9b1000 <ModifyWaitEvent+0xb0>\n   805c5c: fd415949      ldr     d9, [x10, #688]\n   805c60: f9071d68      str     x8, [x11, #3640] // waiting = true (x8 = w8)\nSo there are two simple mov's and two load operations performed in parallel,\nbut I don't think it's similar to what we had in that case.\n\n> I thought about maybe just adding the barrier in the code, but then how\n> would we know it's the issue and this fixed it? It happens so rarely we\n> can't make any conclusions from a couple runs of tests.\n\nProbably I could construct a reproducer for the lockup if I had access to\nthe such machine for a day or two.\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Fri, 1 Sep 2023 22:00:00 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: lockup in parallel hash join on dikkop (freebsd 14.0-current)" }, { "msg_contents": "On Fri, Sep 1, 2023 at 6:13 AM Alexander Lakhin <exclusion@gmail.com> wrote:\n> (Placing \"pg_compiler_barrier();\" just after \"waiting = true;\" fixed the\n> issue for us.)\n\nMaybe it'd be worth trying something stronger, like\npg_memory_barrier(). 
A compiler barrier doesn't prevent the CPU from\nreordering loads and stores as it goes, and ARM64 has weak memory\nordering.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 1 Sep 2023 16:21:42 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: lockup in parallel hash join on dikkop (freebsd 14.0-current)" }, { "msg_contents": "Hello Robert,\n\n01.09.2023 23:21, Robert Haas wrote:\n> On Fri, Sep 1, 2023 at 6:13 AM Alexander Lakhin<exclusion@gmail.com> wrote:\n>> (Placing \"pg_compiler_barrier();\" just after \"waiting = true;\" fixed the\n>> issue for us.)\n> Maybe it'd be worth trying something stronger, like\n> pg_memory_barrier(). A compiler barrier doesn't prevent the CPU from\n> reordering loads and stores as it goes, and ARM64 has weak memory\n> ordering.\n\nIndeed, thank you for the tip!\nSo maybe here we deal with not compiler's, but with CPU's optimization.\nThe wider code fragment is:\n   805c48: 52800028      mov     w8, #1 // true\n   805c4c: 52800319      mov     w25, #24\n   805c50: 5280073a      mov     w26, #57\n   805c54: fd446128      ldr     d8, [x9, #2240]\n   805c58: 90000d7b      adrp    x27, 0x9b1000 <ModifyWaitEvent+0xb0>\n   805c5c: fd415949      ldr     d9, [x10, #688]\n   805c60: f9071d68      str     x8, [x11, #3640] // waiting = true (x8 = w8)\n   805c64: f90003f3      str     x19, [sp]\n   805c68: 14000010      b       0x805ca8 <WaitEventSetWait+0x108>\n\n   805ca8: f9400a88      ldr     x8, [x20, #16] // if (set->latch && set->latch->is_set)\n   805cac: b4000068      cbz     x8, 0x805cb8 <WaitEventSetWait+0x118>\n   805cb0: f9400108      ldr     x8, [x8]\n   805cb4: b5001248      cbnz    x8, 0x805efc <WaitEventSetWait+0x35c>\n   805cb8: f9401280      ldr     x0, [x20, #32]\n\nIf that CPU can delay the writing to the variable waiting\n(str x8, [x11, #3640]) in it's internal form like\n\"store 1 to [address]\" to 805cb0 or a later instruction, then we can 
get the\nbehavior discussed. Something like that is shown in the ARM documentation:\nhttps://developer.arm.com/documentation/102336/0100/Memory-ordering?lang=en\nI'll try to test this guess on the target machine...\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Sun, 3 Sep 2023 00:00:00 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: lockup in parallel hash join on dikkop (freebsd 14.0-current)" }, { "msg_contents": "I agree that the code lacks barriers. I haven't been able to figure\nout how any reordering could cause this hang, though, because in these\nold branches procsignal_sigusr1_handler is used for latch wakeups, and\nit also calls SetLatch(MyLatch) itself, right at the end. That is,\nSetLatch() gets called twice, first in the waker process and then\nagain in the awoken process, so it should be impossible for the latter\nnot to see MyLatch->is_set == true after procsignal_sigusr1_handler\ncompletes.\n\nThat made me think the handler didn't run, which is consistent with\nprocstat -i showing it as pending ('P'). Which made me start to\nsuspect a kernel bug, unless we can explain what we did to block it...\n\nBut... perhaps I am confused about that and did something wrong when\nlooking into it. It's hard to investigate when you aren't allowed to\ntake core files or connect a debugger (both will reliably trigger\nEINTR).\n\n\n", "msg_date": "Sun, 3 Sep 2023 11:06:20 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: lockup in parallel hash join on dikkop (freebsd 14.0-current)" }, { "msg_contents": "Hello,\n\n03.09.2023 00:00, Alexander Lakhin wrote:\n> I'll try to test this guess on the target machine...\n>\n\nI got access to dikkop thanks to Tomas Vondra, and started reproducing the\nissue. It was rather difficult to catch the lockup as Tomas and Tom\nnoticed before. 
I tried to use stress-ng to affect reproduction somehow\nand gradually managed to reduce the join_hash test to just a single\nquery (if I were venturous enough, I could reduce that test to it in\njust one step, but I choose to move little by little). At the end I\nconstructed a reproducing script that could catch the lockup on that ARM\nmachine during an hour on average.\nToday I've booted FreeBSD-14.0-ALPHA4 on my secondary AMD (Ryzen 7 3700X)\nmachine and managed to reproduce the issue with that script (see\nattachment). (I run the script just after `gmake && gmake check`.)\nTo reproduce it faster on my machine, I run it on tmpfs and use parallel:\nfor ((i=1;i<=4;i++)); do echo \"ITERATION $i\"; parallel --halt now,fail=1 -j8 --linebuffer --tag time .../repro+.sh {} \n::: 01 02 03 04 || break; done\n\nIt produces the lockup as follows:\nITERATION 1\n...\n02      waiting for server to shut down.... done\n02      server stopped\n02\n02      real    3m2.420s\n02      user    0m2.896s\n02      sys     0m1.685s\n04      TIMEOUT on iteration 448\n04\n04      real    3m16.212s\n04      user    0m2.322s\n04      sys     0m1.904s\n...\npsql -p 15435 regression -c \"SELECT * FROM pg_stat_activity;\"\n  16384 | regression | 53696 |            |       10 | user    | psql             |             |                 \n|          -1 | 2023-09-08 18:44:27.572426+00 | 2023-09-08 18:44:27.573633+00 | 2023-09-08 18:44:27.795518+00 | \n2023-09-08 18:44:27.795519+00 | IPC             | HashBuildHashOuter  | active | |          731 |          | explain \n(analyze)  select count(*) from simple r join simple s using (id); | client backend\n  16384 | regression | 53894 |      53696 |       10 | user    | psql             |             |                 \n|             | 2023-09-08 18:44:27.796749+00 | 2023-09-08 18:44:27.573633+00 | 2023-09-08 18:44:27.795518+00 | \n2023-09-08 18:44:27.799261+00 | IPC             | HashBuildHashOuter  | active | |          731 |          | explain 
\n(analyze)  select count(*) from simple r join simple s using (id); | parallel worker\n  16384 | regression | 53896 |      53696 |       10 | user    | psql             |             |                 \n|             | 2023-09-08 18:44:27.797436+00 | 2023-09-08 18:44:27.573633+00 | 2023-09-08 18:44:27.795518+00 | \n2023-09-08 18:44:27.799291+00 | IPC             | HashBuildHashInner  | active | |          731 |          | explain \n(analyze)  select count(*) from simple r join simple s using (id); | parallel worker\n\nprocstat -i 53896\n53896 postgres         URG      P-C\n\ntail server04.log\n2023-09-08 18:44:27.777 UTC|user|regression|53696|64fb6b8b.d1c0|LOG:  statement: explain (analyze)  select count(*) from \nsimple r join simple s using (id);\n2023-09-08 18:44:27.786 UTC|user|regression|53696|64fb6b8b.d1c0|LOG:  statement: explain (analyze)  select count(*) from \nsimple r join simple s using (id);\n2023-09-08 18:44:27.795 UTC|user|regression|53696|64fb6b8b.d1c0|LOG:  statement: explain (analyze)  select count(*) from \nsimple r join simple s using (id);\n2023-09-08 18:45:38.685 UTC|[unknown]|[unknown]|66915|64fb6bd2.10563|LOG:  connection received: host=[local]\n2023-09-08 18:45:38.685 UTC|user|regression|66915|64fb6bd2.10563|LOG:  connection authorized: user=user \ndatabase=regression application_name=psql\n2023-09-08 18:45:38.686 UTC|user|regression|66915|64fb6bd2.10563|LOG:  statement: SELECT * FROM pg_stat_activity;\n\nIt takes less than 10 minutes on average for me. I checked\nREL_12_STABLE, REL_13_STABLE, and REL_14_STABLE (with HAVE_KQUEUE undefined\nforcefully) — they all are affected.\nI could not reproduce the lockup on my Ubuntu box (with HAVE_SYS_EPOLL_H\nundefined manually). 
And surprisingly for me, I could not reproduce it on\nmaster and REL_16_STABLE.\n`git bisect` for this behavior change pointed at 7389aad63 (though maybe it\njust greatly decreased probability of the failure; I'm going to double-check\nthis).\nIn particular, that commit changed this:\n-    /*\n-     * Ignore SIGURG for now.  Child processes may change this (see\n-     * InitializeLatchSupport), but they will not receive any such signals\n-     * until they wait on a latch.\n-     */\n-    pqsignal_pm(SIGURG, SIG_IGN);   /* ignored */\n-#endif\n+    /* This may configure SIGURG, depending on platform. */\n+    InitializeLatchSupport();\n+    InitProcessLocalLatch();\n\nWith debugging logging added I see (on 7389aad63~1) that one process\nreally sends SIGURG to another, and the latter reaches poll(), but it\njust got no signal, it's signal handler not called and poll() just waits...\n\nSo it looks like the ARM weak memory model is not the root cause of the\nissue. But as far as I can see, it's still specific to FreeBSD (but not\nspecific to a compiler — I used gcc and clang with the same success).\n\nBest regards,\nAlexander", "msg_date": "Fri, 8 Sep 2023 22:00:00 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: lockup in parallel hash join on dikkop (freebsd 14.0-current)" }, { "msg_contents": "On Sat, Sep 9, 2023 at 7:00 AM Alexander Lakhin <exclusion@gmail.com> wrote:\n> It takes less than 10 minutes on average for me. I checked\n> REL_12_STABLE, REL_13_STABLE, and REL_14_STABLE (with HAVE_KQUEUE undefined\n> forcefully) — they all are affected.\n> I could not reproduce the lockup on my Ubuntu box (with HAVE_SYS_EPOLL_H\n> undefined manually). 
And surprisingly for me, I could not reproduce it on\n> master and REL_16_STABLE.\n> `git bisect` for this behavior change pointed at 7389aad63 (though maybe it\n> just greatly decreased probability of the failure; I'm going to double-check\n> this).\n> In particular, that commit changed this:\n> - /*\n> - * Ignore SIGURG for now. Child processes may change this (see\n> - * InitializeLatchSupport), but they will not receive any such signals\n> - * until they wait on a latch.\n> - */\n> - pqsignal_pm(SIGURG, SIG_IGN); /* ignored */\n> -#endif\n> + /* This may configure SIGURG, depending on platform. */\n> + InitializeLatchSupport();\n> + InitProcessLocalLatch();\n>\n> With debugging logging added I see (on 7389aad63~1) that one process\n> really sends SIGURG to another, and the latter reaches poll(), but it\n> just got no signal, it's signal handler not called and poll() just waits...\n\nThanks for working so hard on this Alexander. That is a surprising\ndiscovery! So changes to the signal handler arrangements in the\n*postmaster* before the child was forked affected this?\n\n> So it looks like the ARM weak memory model is not the root cause of the\n> issue. But as far as I can see, it's still specific to FreeBSD (but not\n> specific to a compiler — I used gcc and clang with the same success).\n\nIdea: FreeBSD 13 introduced a new mechanism called sigfastblock[1],\nwhich lets system libraries control signal blocking with atomic memory\ntricks in a word of user space memory. I have no particular theory\nfor why it would be going wrong here (I don't expect us to be using\nany of the stuff that would use it, though I don't understand it in\ndetail so that doesn't say much), but it occurred to me that all\nreports so far have been on 13.x or 14. I wonder... 
If you have a\ngood fast recipe for reproducing this, could you also try it on\nFreeBSD 12.4?\n\n[1] https://man.freebsd.org/cgi/man.cgi?query=sigfastblock&sektion=2&manpath=FreeBSD+13.0-current\n\n\n", "msg_date": "Sat, 9 Sep 2023 07:39:45 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: lockup in parallel hash join on dikkop (freebsd 14.0-current)" }, { "msg_contents": "Hi Thomas,\n\n08.09.2023 22:39, Thomas Munro wrote:\n>> With debugging logging added I see (on 7389aad63~1) that one process\n>> really sends SIGURG to another, and the latter reaches poll(), but it\n>> just got no signal, it's signal handler not called and poll() just waits...\n> Thanks for working so hard on this Alexander. That is a surprising\n> discovery! So changes to the signal handler arrangements in the\n> *postmaster* before the child was forked affected this?\n\nYes, I think we deal with something like that. I can try to deduce a minimum\nchange that affects reproducing the issue, but may be it's not that important.\nPerhaps we now should think of escalating the problem to FreeBSD developers?\nI wonder, what kind of reproducer they find acceptable. A standalone C\nprogram only or maybe a script that compiles/installs postgres and runs\nour test will do too?\n\n>> So it looks like the ARM weak memory model is not the root cause of the\n>> issue. But as far as I can see, it's still specific to FreeBSD (but not\n>> specific to a compiler — I used gcc and clang with the same success).\n> Idea: FreeBSD 13 introduced a new mechanism called sigfastblock[1],\n> which lets system libraries control signal blocking with atomic memory\n> tricks in a word of user space memory. 
I have no particular theory\n> for why it would be going wrong here (I don't expect us to be using\n> any of the stuff that would use it, though I don't understand it in\n> detail so that doesn't say much), but it occurred to me that all\n> reports so far have been on 13.x or 14. I wonder... If you have a\n> good fast recipe for reproducing this, could you also try it on\n> FreeBSD 12.4?\n\nIt was a happy guess! I checked the reproduction on\nFreeBSD 13.1-RELEASE releng/13.1-n250148-fc952ac2212\nand got the same results as on FreeBSD 14:\nREL_12_STABLE - failed on iteration 3\nREL_15_STABLE - failed on iteration 1\nREL_16_STABLE - 10 iterations with no failure\n\nBut on FreeBSD 12.4-RELEASE r372781:\nREL_12_STABLE - 20 iterations with no failure\nREL_15_STABLE - 20 iterations with no failure\n\nBTW, I also retested 7389aad63 on FreeBSD 14 and got no failure for 100\niterations.\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Sat, 9 Sep 2023 12:00:00 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: lockup in parallel hash join on dikkop (freebsd 14.0-current)" }, { "msg_contents": "On Sat, Sep 9, 2023 at 9:00 PM Alexander Lakhin <exclusion@gmail.com> wrote:\n> Yes, I think we deal with something like that. I can try to deduce a minimum\n> change that affects reproducing the issue, but may be it's not that important.\n> Perhaps we now should think of escalating the problem to FreeBSD developers?\n> I wonder, what kind of reproducer they find acceptable. A standalone C\n> program only or maybe a script that compiles/installs postgres and runs\n> our test will do too?\n\nWe discussed this a bit off-list and I am following up on that. 
My\nguess is that this will turn out to be a bad interaction between that\noptimisation and our (former) habit of forking background workers from\ninside a signal handler, but let's see...\n\nFTR If someone is annoyed by this and just wants their build farm\nanimal not to hang on REL_12_STABLE, via Alexander's later experiments\nwe learned that sysctl kern.sigfastblock_fetch_always=1 fixes the\nproblem.\n\n\n", "msg_date": "Tue, 12 Sep 2023 09:04:59 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: lockup in parallel hash join on dikkop (freebsd 14.0-current)" } ]
[ { "msg_contents": "I have found an odd behavior --- a query in the target list that assigns\nto a partitioned column causes queries that would normally be volatile\nto return always zero.\n\nIn the first query, no partitioning is used:\n\n\t d1 | d2\n\t----+----\n\t 1 | 0\n\t 2 | 0\n\t 2 | 1\n\t 1 | 0\n\t 1 | 2\n\t 1 | 2\n\t 1 | 0\n\t 0 | 2\n\t 2 | 0\n\t 2 | 2\n\nIn the next query, 'd1' is a partition key and it gets a constant value\nof zero for all rows:\n\n\t d1 | d2\n\t----+----\n-->\t 0 | 1\n-->\t 0 | 2\n\t 0 | 2\n\t 0 | 1\n\t 0 | 2\n\t 0 | 1\n\t 0 | 2\n\t 0 | 2\n\t 0 | 2\n\t 0 | 2\n\nThe self-contained query is attached. The value is _always_ zero, which\nsuggests random() is not being called; calling setseed() does not\nchange that. If I change \"SELECT x\" with \"SELECT 2\", the \"2\" is used. \nI see this behavior back to PG 11.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\nEmbrace your flaws. They make you human, rather than perfect,\nwhich you will never be.", "msg_date": "Thu, 26 Jan 2023 19:07:29 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Partition key causes problem for volatile target list query" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> I have found an odd behavior --- a query in the target list that assigns\n> to a partitioned column causes queries that would normally be volatile\n> to return always zero.\n\nWell, if you looked further than the first few rows, it wouldn't be\n\"always zero\". 
But the select from the partitioned table will read\nthe first partition first, and that partition will have the rows\nwith d1=0, by definition.\n\n=# explain select * from case_test2 limit 10;\n QUERY PLAN \n \n--------------------------------------------------------------------------------\n-----------\n Limit (cost=0.00..0.19 rows=10 width=8)\n -> Append (cost=0.00..1987.90 rows=102260 width=8)\n -> Seq Scan on case_test2_0 case_test2_1 (cost=0.00..478.84 rows=3318\n4 width=8)\n -> Seq Scan on case_test2_1 case_test2_2 (cost=0.00..480.86 rows=3328\n6 width=8)\n -> Seq Scan on case_test2_2 case_test2_3 (cost=0.00..484.30 rows=3353\n0 width=8)\n -> Seq Scan on case_test2_3 case_test2_4 (cost=0.00..32.60 rows=2260 \nwidth=8)\n(6 rows)\n\nThe result appears sorted by d1, but that's an implementation artifact.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 26 Jan 2023 19:21:16 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Partition key causes problem for volatile target list query" }, { "msg_contents": "On Thu, Jan 26, 2023 at 07:21:16PM -0500, Tom Lane wrote:\n> Well, if you looked further than the first few rows, it wouldn't be\n> \"always zero\". 
But the select from the partitioned table will read\n> the first partition first, and that partition will have the rows\n> with d1=0, by definition.\n> \n> =# explain select * from case_test2 limit 10;\n> QUERY PLAN \n> \n> --------------------------------------------------------------------------------\n> -----------\n> Limit (cost=0.00..0.19 rows=10 width=8)\n> -> Append (cost=0.00..1987.90 rows=102260 width=8)\n> -> Seq Scan on case_test2_0 case_test2_1 (cost=0.00..478.84 rows=3318\n> 4 width=8)\n> -> Seq Scan on case_test2_1 case_test2_2 (cost=0.00..480.86 rows=3328\n> 6 width=8)\n> -> Seq Scan on case_test2_2 case_test2_3 (cost=0.00..484.30 rows=3353\n> 0 width=8)\n> -> Seq Scan on case_test2_3 case_test2_4 (cost=0.00..32.60 rows=2260 \n> width=8)\n> (6 rows)\n> \n> The result appears sorted by d1, but that's an implementation artifact.\n\nWow, thanks. Not sure how I missed something so obvious. I just saw it\nmyself by generating only 10 rows and noticing the numbers were always\nincreasing.\n\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\nEmbrace your flaws. They make you human, rather than perfect,\nwhich you will never be.\n\n\n", "msg_date": "Thu, 26 Jan 2023 19:30:26 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: Partition key causes problem for volatile target list query" } ]
[ { "msg_contents": "Hi,\n\nI was going through the comments [1] mentioned in\ninit_toast_snapshot() and based on the comments understood that the\nerror \"cannot fetch toast data without an active snapshot\" will occur\nif a procedure fetches a toasted value into a local variable, commits,\nand then tries to detoast the value. I would like to know the sample\nquery which causes such behaviour. I checked the test cases. Looks\nlike such a case is not present in the regression suite. It is better\nto add one.\n\n\n[1]:\n /*\n * GetOldestSnapshot returns NULL if the session has no active snapshots.\n * We can get that if, for example, a procedure fetches a toasted value\n * into a local variable, commits, and then tries to detoast the value.\n * Such coding is unsafe, because once we commit there is nothing to\n * prevent the toast data from being deleted. Detoasting *must* happen in\n * the same transaction that originally fetched the toast pointer. Hence,\n * rather than trying to band-aid over the problem, throw an error. (This\n * is not very much protection, because in many scenarios the procedure\n * would have already created a new transaction snapshot, preventing us\n * from detecting the problem. But it's better than nothing, and for sure\n * we shouldn't expend code on masking the problem more.)\n */\n\nThanks & Regards,\nNitin Jadhav\n\n\n", "msg_date": "Fri, 27 Jan 2023 18:26:28 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Add a test case related to the error \"cannot fetch toast data without\n an active snapshot\"" }, { "msg_contents": "> if a procedure fetches a toasted value into a local variable, commits,\n> and then tries to detoast the value.\n\nI spent some time and tried to reproduce this error by using [1]\nqueries. But the error did not occur. Not sure whether I followed what\nis mentioned in the above comment. 
Please correct me if I am wrong.\n\n[1]:\nCREATE TABLE toasted(id serial primary key, data text);\nINSERT INTO toasted(data) VALUES((SELECT string_agg(random()::text,\n':') FROM generate_series(1, 1000)));\nINSERT INTO toasted(data) VALUES((SELECT string_agg(random()::text,\n':') FROM generate_series(1, 1000)));\n\nDO $$\nDECLARE v_r record;\nDECLARE vref_cursor REFCURSOR;\nBEGIN\nOPEN vref_cursor FOR SELECT data FROM toasted;\nLOOP\nfetch vref_cursor into v_r;\nINSERT INTO toasted(data) VALUES(v_r.data);\nCOMMIT;\nEND LOOP;\nEND;$$;\n\n\nThanks & Regards,\nNitin Jadhav\n\nOn Fri, Jan 27, 2023 at 6:26 PM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n>\n> Hi,\n>\n> I was going through the comments [1] mentioned in\n> init_toast_snapshot() and based on the comments understood that the\n> error \"cannot fetch toast data without an active snapshot\" will occur\n> if a procedure fetches a toasted value into a local variable, commits,\n> and then tries to detoast the value. I would like to know the sample\n> query which causes such behaviour. I checked the test cases. Looks\n> like such a case is not present in the regression suit. It is better\n> to add one.\n>\n>\n> [1]:\n> /*\n> * GetOldestSnapshot returns NULL if the session has no active snapshots.\n> * We can get that if, for example, a procedure fetches a toasted value\n> * into a local variable, commits, and then tries to detoast the value.\n> * Such coding is unsafe, because once we commit there is nothing to\n> * prevent the toast data from being deleted. Detoasting *must* happen in\n> * the same transaction that originally fetched the toast pointer. Hence,\n> * rather than trying to band-aid over the problem, throw an error. (This\n> * is not very much protection, because in many scenarios the procedure\n> * would have already created a new transaction snapshot, preventing us\n> * from detecting the problem. 
But it's better than nothing, and for sure\n> * we shouldn't expend code on masking the problem more.)\n> */\n>\n> Thanks & Regards,\n> Nitin Jadhav\n\n\n", "msg_date": "Tue, 7 Feb 2023 15:47:00 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add a test case related to the error \"cannot fetch toast data\n without an active snapshot\"" } ]
[ { "msg_contents": "Hi,\n\nI've been puzzled by this message:\n\n~~~\nLOG:  fetching timeline history file for timeline 17 from primary server\nFATAL:  could not receive timeline history file from the primary server: \nERROR:  could not open file \"pg_xlog/00000011.history\": No such file or \ndirectory\n~~~\n\nIt took me a while to understand that the timeline id 11 in hexadecimal \nis the same as the timeline id 17 in decimal.\n\nIt appears that the first message is formatted with %u instead of %X, \nand there are some other places with the same format, while WAL filenames \nand history files use hexadecimal.\n\nThere is another place where timeline id is used : pg_waldump, and in \nthese tools, timeline id ( -t or --timeline ) should be given in \ndecimal, while filename gives it in hexadecimal : imho, it's not \nuser-friendly, and can lead to user's bad input for timeline id.\n\nThe attached patch proposes to change the format of timelineid from %u \nto %X.\n\nRegarding .po files, I don't know how to manage them. Is there any \nroutine to spread the modifications? Or should I identify and change \neach message?\n\n\nbest regards,\n\n-- \nSébastien", "msg_date": "Fri, 27 Jan 2023 14:52:19 +0100", "msg_from": "=?UTF-8?Q?S=c3=a9bastien_Lardi=c3=a8re?= <sebastien@lardiere.net>", "msg_from_op": true, "msg_subject": "Timeline ID hexadecimal format" }, { "msg_contents": "On 27.01.23 14:52, Sébastien Lardière wrote:\n> The attached patch proposes to change the format of timelineid from %u \n> to %X.\n\nI think your complaint has merit. But note that if we did a change like \nthis, then log files or reports from different versions would have \ndifferent meaning without a visual difference, which is kind of what you \ncomplained about in the first place. At least we should do something \nlike 0x%X.\n\n> Regarding .po files, I don't know how to manage them. Is there any \n> routine to spread the modifications? 
Or should I identify and change \n> each message?\n\nDon't worry about this. This is handled elsewhere.\n\n\n\n", "msg_date": "Fri, 27 Jan 2023 15:55:11 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Timeline ID hexadecimal format" }, { "msg_contents": "On 27/01/2023 15:55, Peter Eisentraut wrote:\n> On 27.01.23 14:52, Sébastien Lardière wrote:\n>> The attached patch proposes to change the format of timelineid from \n>> %u to %X.\n>\n> I think your complaint has merit.  But note that if we did a change \n> like this, then log files or reports from different versions would \n> have different meaning without a visual difference, which is kind of \n> what you complained about in the first place.  At least we should do \n> something like 0x%X.\n\nIndeed, but the messages that puzzled was in one log file, just \ntogether, not in some differents versions.\n\nBut yes, it should be documented somewhere, actually, I can't find any \ngood place for that,\n\nWhile digging, It seems that recovery_target_timeline should be given in \ndecimal, not in hexadecimal, which seems odd to me ; and pg_controldata \nuse decimal too, not hexadecimal…\n\nSo, if this idea is correct, the given patch is not enough.\n\nAnyway, do you think it is a good idea or not ?\n\n\n>\n>> Regarding .po files, I don't know how to manage them. Is there any \n>> routine to spread the modifications? Or should I identify and change \n>> each message?\n>\n> Don't worry about this.  
This is handled elsewhere.\n>\n\nnice,\n\n\nregards,\n\n\n-- \nSébastien\n\n\n\n", "msg_date": "Fri, 27 Jan 2023 17:17:35 +0100", "msg_from": "=?UTF-8?Q?S=c3=a9bastien_Lardi=c3=a8re?= <sebastien@lardiere.net>", "msg_from_op": true, "msg_subject": "Re: Timeline ID hexadecimal format" }, { "msg_contents": "On 27/01/2023 15:55, Peter Eisentraut wrote:\n> On 27.01.23 14:52, Sébastien Lardière wrote:\n>> The attached patch proposes to change the format of timelineid from \n>> %u to %X.\n>\n> I think your complaint has merit.  But note that if we did a change \n> like this, then log files or reports from different versions would \n> have different meaning without a visual difference, which is kind of \n> what you complained about in the first place.  At least we should do \n> something like 0x%X.\n>\nHi,\n\nHere's the patch with the suggested format ; plus, I add some note in \nthe documentation about recovery_target_timeline, because I don't get \nhow strtoul(), with the special 0 base parameter can work without 0x \nprefix ; I suppose that nobody use it.\n\nI also change pg_controldata and the usage of this output by pg_upgrade. \nI let internal usages unchanded : content of backup manifest and content \nof history file.\n\nShould I open a commitfest entry, or is it too soon ?\n\nregards,\n\n-- \nSébastien", "msg_date": "Mon, 30 Jan 2023 17:05:36 +0100", "msg_from": "=?UTF-8?Q?S=c3=a9bastien_Lardi=c3=a8re?= <sebastien@lardiere.net>", "msg_from_op": true, "msg_subject": "Re: Timeline ID hexadecimal format" }, { "msg_contents": "On 30.01.23 17:05, Sébastien Lardière wrote:\n> \n> Here's the patch with the suggested format ; plus, I add some note in \n> the documentation about recovery_target_timeline, because I don't get \n> how strtoul(), with the special 0 base parameter can work without 0x \n> prefix ; I suppose that nobody use it.\n> \n> I also change pg_controldata and the usage of this output by pg_upgrade. 
\n> I let internal usages unchanded : content of backup manifest and content \n> of history file.\n> \n> Should I open a commitfest entry, or is it too soon ?\n\nIt is not too soon. (The next commitfest is open for new patch \nsubmissions as soon as the current one is \"in progress\", which closes it \nfor new patches.)\n\n\n", "msg_date": "Tue, 31 Jan 2023 10:53:21 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Timeline ID hexadecimal format" }, { "msg_contents": "On Mon, Jan 30, 2023 at 9:35 PM Sébastien Lardière\n<sebastien@lardiere.net> wrote:\n>\n> On 27/01/2023 15:55, Peter Eisentraut wrote:\n> > On 27.01.23 14:52, Sébastien Lardière wrote:\n> >> The attached patch proposes to change the format of timelineid from\n> >> %u to %X.\n> >\n> > I think your complaint has merit. But note that if we did a change\n> > like this, then log files or reports from different versions would\n> > have different meaning without a visual difference, which is kind of\n> > what you complained about in the first place. At least we should do\n> > something like 0x%X.\n> >\n> Hi,\n>\n> Here's the patch with the suggested format ; plus, I add some note in\n> the documentation about recovery_target_timeline, because I don't get\n> how strtoul(), with the special 0 base parameter can work without 0x\n> prefix ; I suppose that nobody use it.\n>\n> I also change pg_controldata and the usage of this output by pg_upgrade.\n> I let internal usages unchanded : content of backup manifest and content\n> of history file.\n\nThe patch seems to have some special/unprintable characters in it. I\nsee a lot ^[[ in there. 
I can't read the patch because of that.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Tue, 31 Jan 2023 16:56:39 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Timeline ID hexadecimal format" }, { "msg_contents": "On 31/01/2023 12:26, Ashutosh Bapat wrote:\n> On Mon, Jan 30, 2023 at 9:35 PM Sébastien Lardière\n> <sebastien@lardiere.net> wrote:\n>> On 27/01/2023 15:55, Peter Eisentraut wrote:\n>>> On 27.01.23 14:52, Sébastien Lardière wrote:\n>>>> The attached patch proposes to change the format of timelineid from\n>>>> %u to %X.\n>>> I think your complaint has merit. But note that if we did a change\n>>> like this, then log files or reports from different versions would\n>>> have different meaning without a visual difference, which is kind of\n>>> what you complained about in the first place. At least we should do\n>>> something like 0x%X.\n>>>\n>> Hi,\n>>\n>> Here's the patch with the suggested format ; plus, I add some note in\n>> the documentation about recovery_target_timeline, because I don't get\n>> how strtoul(), with the special 0 base parameter can work without 0x\n>> prefix ; I suppose that nobody use it.\n>>\n>> I also change pg_controldata and the usage of this output by pg_upgrade.\n>> I let internal usages unchanded : content of backup manifest and content\n>> of history file.\n> The patch seems to have some special/unprintable characters in it. I\n> see a lot ^[[ in there. 
I can't read the patch because of that.\n>\nSorry for that, it was the --color from git diff, it's fixed, I hope, \nthank you\n\nregards,\n\n-- \nSébastien", "msg_date": "Tue, 31 Jan 2023 13:52:57 +0100", "msg_from": "=?UTF-8?Q?S=c3=a9bastien_Lardi=c3=a8re?= <sebastien@lardiere.net>", "msg_from_op": true, "msg_subject": "Re: Timeline ID hexadecimal format" }, { "msg_contents": "On 31/01/2023 10:53, Peter Eisentraut wrote:\n> On 30.01.23 17:05, Sébastien Lardière wrote:\n>>\n>> Here's the patch with the suggested format ; plus, I add some note in \n>> the documentation about recovery_target_timeline, because I don't get \n>> how strtoul(), with the special 0 base parameter can work without 0x \n>> prefix ; I suppose that nobody use it.\n>>\n>> I also change pg_controldata and the usage of this output by \n>> pg_upgrade. I let internal usages unchanded : content of backup \n>> manifest and content of history file.\n>>\n>> Should I open a commitfest entry, or is it too soon ?\n>\n> It is not too soon.  (The next commitfest is open for new patch \n> submissions as soon as the current one is \"in progress\", which closes \n> it for new patches.)\n\n\nDone : https://commitfest.postgresql.org/42/4155/\n\n\n-- \nSébastien\n\n\n\n", "msg_date": "Tue, 31 Jan 2023 14:03:29 +0100", "msg_from": "=?UTF-8?Q?S=c3=a9bastien_Lardi=c3=a8re?= <sebastien@lardiere.net>", "msg_from_op": true, "msg_subject": "Re: Timeline ID hexadecimal format" }, { "msg_contents": "I actually find it kind of annoying that we use hex strings for a lot\nof things where they don't add any value. Namely Transaction ID and\nLSNs. As a result it's always a bit of a pain to ingest these in other\ntools or do arithmetic on them. 
Neither is referring to memory or\nanything where powers of 2 are significant so it really doesn't buy\nanything in making them easier to interpret either.\n\nI don't see any advantage in converting every place where we refer to\ntimelines into hex and then having to refer to things like timeline\n1A. It doesn't seem any more intuitive to someone understanding what's\ngoing on than referring to timeline 26.\n\nThe fact that the *filename* has it encoded in hex is an\nimplementation detail and really gets exposed here because it's giving\nyou the underlying system error that caused the problem. The confusion\nonly arises when the two are juxtaposed. A hint or something just in\nthat case might be enough?\n\n\n", "msg_date": "Tue, 31 Jan 2023 14:16:06 -0500", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: Timeline ID hexadecimal format" }, { "msg_contents": "On 31/01/2023 20:16, Greg Stark wrote:\n> The fact that the *filename* has it encoded in hex is an\n> implementation detail and really gets exposed here because it's giving\n> you the underlying system error that caused the problem.\n\n\nIt's an implementation detail, but an exposed detail, so, people refer \nto the filename to find the timeline ID (That's why it happened to me)\n\n\n> The confusion\n> only arises when the two are juxtaposed. A hint or something just in\n> that case might be enough?\n>\n>\n\nThanks, i got your point.\n\n  Note that my proposal was to remove the ambiguous notation which \nhappen in some case (as in 11 <-> 17). A hint is useless in most of the \ncase, because there is no ambiguous. 
That's why I thought of formatting it \nin hexadecimal everywhere.\n\n\nAt least, can I propose to improve the documentation to expose the fact \nthat the timeline ID is expressed in hexadecimal in filenames but must be \nused in decimal in recovery_target_timeline and pg_waldump ?\n\n\nregards,\n\n-- \nSébastien\n\n\n\n", "msg_date": "Wed, 1 Feb 2023 17:54:43 +0100", "msg_from": "=?UTF-8?Q?S=c3=a9bastien_Lardi=c3=a8re?= <sebastien@lardiere.net>", "msg_from_op": true, "msg_subject": "Re: Timeline ID hexadecimal format" }, { "msg_contents": "On 31/01/2023 20:16, Greg Stark wrote:\n> A hint or something just in\n> that case might be enough?\n\nIt seems to be a -1 ;\n\nlet's try to improve the documentation, with the attached patch\n\nbest regards,\n\n-- \nSébastien", "msg_date": "Fri, 24 Feb 2023 17:27:21 +0100", "msg_from": "=?UTF-8?Q?S=c3=a9bastien_Lardi=c3=a8re?= <sebastien@lardiere.net>", "msg_from_op": true, "msg_subject": "Re: Timeline ID hexadecimal format" }, { "msg_contents": "On 24.02.23 17:27, Sébastien Lardière wrote:\n> diff --git a/doc/src/sgml/backup.sgml b/doc/src/sgml/backup.sgml\n> index be05a33205..7e26b51031 100644\n> --- a/doc/src/sgml/backup.sgml\n> +++ b/doc/src/sgml/backup.sgml\n> @@ -1332,7 +1332,8 @@ restore_command = 'cp /mnt/server/archivedir/%f %p'\n> you like, add comments to a history file to record your own notes about\n> how and why this particular timeline was created. Such comments will be\n> especially valuable when you have a thicket of different timelines as\n> - a result of experimentation.\n> + a result of experimentation. In both WAL segment file names and history files,\n> + the timeline ID number is expressed in hexadecimal.\n> </para>\n> \n> <para>\n\nI think here it would be more helpful to show actual examples.
Like, \nhere is a possible file name, this is what the different parts mean.\n\n> diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml\n> index e5c41cc6c6..3b5d041d92 100644\n> --- a/doc/src/sgml/config.sgml\n> +++ b/doc/src/sgml/config.sgml\n> @@ -4110,7 +4110,9 @@ restore_command = 'copy \"C:\\\\server\\\\archivedir\\\\%f\" \"%p\"' # Windows\n> current when the base backup was taken. The\n> value <literal>latest</literal> recovers\n> to the latest timeline found in the archive, which is useful in\n> - a standby server. <literal>latest</literal> is the default.\n> + a standby server. A numerical value expressed in hexadecimal must be\n> + prefixed with <literal>0x</literal>, for example <literal>0x11</literal>.\n> + <literal>latest</literal> is the default.\n> </para>\n> \n> <para>\n\nThis applies to all configuration parameters, so it doesn't need to be \nmentioned explicitly for individual ones.\n\n> diff --git a/doc/src/sgml/ref/pg_waldump.sgml b/doc/src/sgml/ref/pg_waldump.sgml\n> index 343f0482a9..4ae8f2ebdd 100644\n> --- a/doc/src/sgml/ref/pg_waldump.sgml\n> +++ b/doc/src/sgml/ref/pg_waldump.sgml\n> @@ -215,7 +215,8 @@ PostgreSQL documentation\n> <para>\n> Timeline from which to read WAL records. The default is to use the\n> value in <replaceable>startseg</replaceable>, if that is specified; otherwise, the\n> - default is 1.\n> + default is 1. 
The value must be expressed in decimal, contrary to the hexadecimal\n> + value given in WAL segment file names and history files.\n> </para>\n> </listitem>\n> </varlistentry>\n\nMaybe this could be fixed instead?\n\n\n\n", "msg_date": "Thu, 2 Mar 2023 09:12:41 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Timeline ID hexadecimal format" }, { "msg_contents": "On 02/03/2023 09:12, Peter Eisentraut wrote:\n> On 24.02.23 17:27, Sébastien Lardière wrote:\n>> diff --git a/doc/src/sgml/backup.sgml b/doc/src/sgml/backup.sgml\n>> index be05a33205..7e26b51031 100644\n>> --- a/doc/src/sgml/backup.sgml\n>> +++ b/doc/src/sgml/backup.sgml\n>> @@ -1332,7 +1332,8 @@ restore_command = 'cp/mnt/server/archivedir/%f %p'\n>>       you like, add comments to a history file to record your own \n>> notes about\n>>       how and why this particular timeline was created.  Such \n>> comments will be\n>>       especially valuable when you have a thicket of different \n>> timelines as\n>> -    a result of experimentation.\n>> +    a result of experimentation. In both WAL segment file names and \n>> history files,\n>> +    the timeline ID number is expressed in hexadecimal.\n>>      </para>\n>>        <para>\n>\n> I think here it would be more helpful to show actual examples. Like, \n> here is a possible file name, this is what the different parts mean.\n\nSo you mean explain the WAL filename and the history filename ? Is it \nthe good place for it ?\n\n\n>\n>> diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml\n>> index e5c41cc6c6..3b5d041d92 100644\n>> --- a/doc/src/sgml/config.sgml\n>> +++ b/doc/src/sgml/config.sgml\n>> @@ -4110,7 +4110,9 @@ restore_command = 'copy \n>> \"C:\\\\server\\\\archivedir\\\\%f\" \"%p\"'  # Windows\n>>           current when the base backup was taken.  
The\n>>           value <literal>latest</literal> recovers\n>>           to the latest timeline found in the archive, which is \n>> useful in\n>> -        a standby server. <literal>latest</literal> is the default.\n>> +        a standby server. A numerical value expressed in hexadecimal \n>> must be\n>> +        prefixed with <literal>0x</literal>, for example \n>> <literal>0x11</literal>.\n>> +        <literal>latest</literal> is the default.\n>>          </para>\n>>            <para>\n>\n> This applies to all configuration parameters, so it doesn't need to be \n> mentioned explicitly for individual ones.\n\nProbably, but is there another parameter with the same consequence ?\n\nworth it to document this point globally ?\n\n\n>\n>> diff --git a/doc/src/sgml/ref/pg_waldump.sgml \n>> b/doc/src/sgml/ref/pg_waldump.sgml\n>> index 343f0482a9..4ae8f2ebdd 100644\n>> --- a/doc/src/sgml/ref/pg_waldump.sgml\n>> +++ b/doc/src/sgml/ref/pg_waldump.sgml\n>> @@ -215,7 +215,8 @@ PostgreSQL documentation\n>>          <para>\n>>           Timeline from which to read WAL records. The default is to \n>> use the\n>>           value in <replaceable>startseg</replaceable>, if that is \n>> specified; otherwise, the\n>> -        default is 1.\n>> +        default is 1. 
The value must be expressed in decimal, \n>> contrary to the hexadecimal\n>> +        value given in WAL segment file names and history files.\n>>          </para>\n>>         </listitem>\n>>        </varlistentry>\n>\n> Maybe this could be fixed instead?\n>\n>\n\nIndeed, and strtoul is probably a better option than sscanf, don't you \nthink ?\n\n\n\n-- \nSébastien\n\n\n\n", "msg_date": "Fri, 3 Mar 2023 16:52:01 +0100", "msg_from": "=?UTF-8?Q?S=c3=a9bastien_Lardi=c3=a8re?= <sebastien@lardiere.net>", "msg_from_op": true, "msg_subject": "Re: Timeline ID hexadecimal format" }, { "msg_contents": "On Tue, Jan 31, 2023 at 2:16 PM Greg Stark <stark@mit.edu> wrote:\n> I don't see any advantage in converting every place where we refer to\n> timelines into hex and then having to refer to things like timeline\n> 1A. It doesn't seem any more intuitive to someone understanding what's\n> going on than referring to timeline 26.\n\nThe point, though, is that the WAL files we have on disk already say\n1A. 
If we change the log messages to match, that's easier for users.\nWe could alternatively change the naming convention for WAL files on\ndisk, but that feels like a much bigger compatibility break.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 3 Mar 2023 11:04:14 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Timeline ID hexadecimal format" }, { "msg_contents": "On 03.03.23 16:52, Sébastien Lardière wrote:\n> On 02/03/2023 09:12, Peter Eisentraut wrote:\n>> On 24.02.23 17:27, Sébastien Lardière wrote:\n>>> diff --git a/doc/src/sgml/backup.sgml b/doc/src/sgml/backup.sgml\n>>> index be05a33205..7e26b51031 100644\n>>> --- a/doc/src/sgml/backup.sgml\n>>> +++ b/doc/src/sgml/backup.sgml\n>>> @@ -1332,7 +1332,8 @@ restore_command = 'cp/mnt/server/archivedir/%f %p'\n>>>       you like, add comments to a history file to record your own \n>>> notes about\n>>>       how and why this particular timeline was created.  Such \n>>> comments will be\n>>>       especially valuable when you have a thicket of different \n>>> timelines as\n>>> -    a result of experimentation.\n>>> +    a result of experimentation. In both WAL segment file names and \n>>> history files,\n>>> +    the timeline ID number is expressed in hexadecimal.\n>>>      </para>\n>>>        <para>\n>>\n>> I think here it would be more helpful to show actual examples. Like, \n>> here is a possible file name, this is what the different parts mean.\n> \n> So you mean explain the WAL filename and the history filename ? Is it \n> the good place for it ?\n\nWell, your patch says, by the way, the timeline ID in the file is \nhexadecimal. Then one might ask, what file, what is a timeline, what \nare the other numbers in the file, etc. It seems very specific in this \ncontext. 
I don't know if the format of these file names is actually \ndocumented somewhere.\n\n>>> diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml\n>>> index e5c41cc6c6..3b5d041d92 100644\n>>> --- a/doc/src/sgml/config.sgml\n>>> +++ b/doc/src/sgml/config.sgml\n>>> @@ -4110,7 +4110,9 @@ restore_command = 'copy \n>>> \"C:\\\\server\\\\archivedir\\\\%f\" \"%p\"'  # Windows\n>>>           current when the base backup was taken.  The\n>>>           value <literal>latest</literal> recovers\n>>>           to the latest timeline found in the archive, which is \n>>> useful in\n>>> -        a standby server. <literal>latest</literal> is the default.\n>>> +        a standby server. A numerical value expressed in hexadecimal \n>>> must be\n>>> +        prefixed with <literal>0x</literal>, for example \n>>> <literal>0x11</literal>.\n>>> +        <literal>latest</literal> is the default.\n>>>          </para>\n>>>            <para>\n>>\n>> This applies to all configuration parameters, so it doesn't need to be \n>> mentioned explicitly for individual ones.\n> \n> Probably, but is there another parameter with the same consequence ?\n> \n> worth it to document this point globally ?\n\nIt's ok to mention it again. We do something similar for example at \nunix_socket_permissions. But maybe with more context, like \"If you want \nto specify a timeline ID hexadecimal (for example, if extracted from a \nWAL file name), then prefix it with a 0x\".\n\n>>> diff --git a/doc/src/sgml/ref/pg_waldump.sgml \n>>> b/doc/src/sgml/ref/pg_waldump.sgml\n>>> index 343f0482a9..4ae8f2ebdd 100644\n>>> --- a/doc/src/sgml/ref/pg_waldump.sgml\n>>> +++ b/doc/src/sgml/ref/pg_waldump.sgml\n>>> @@ -215,7 +215,8 @@ PostgreSQL documentation\n>>>          <para>\n>>>           Timeline from which to read WAL records. The default is to \n>>> use the\n>>>           value in <replaceable>startseg</replaceable>, if that is \n>>> specified; otherwise, the\n>>> -        default is 1.\n>>> +        default is 1. 
The value must be expressed in decimal, \n>>> contrary to the hexadecimal\n>>> +        value given in WAL segment file names and history files.\n>>>          </para>\n>>>         </listitem>\n>>>        </varlistentry>\n>>\n>> Maybe this could be fixed instead?\n> \n> Indeed, and strtoul is probably a better option than sscanf, don't you \n> think ?\n\nYeah, the use of sscanf() is kind of weird here. We have been moving \nthe option parsing to use option_parse_int(). Maybe hex support could \nbe added there. Or just use strtoul().\n\n\n", "msg_date": "Mon, 6 Mar 2023 18:04:08 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Timeline ID hexadecimal format" }, { "msg_contents": "On 06/03/2023 18:04, Peter Eisentraut wrote:\n> On 03.03.23 16:52, Sébastien Lardière wrote:\n>> On 02/03/2023 09:12, Peter Eisentraut wrote:\n>>>\n>>> I think here it would be more helpful to show actual examples. Like, \n>>> here is a possible file name, this is what the different parts mean.\n>>\n>> So you mean explain the WAL filename and the history filename ? Is it \n>> the good place for it ?\n>\n> Well, your patch says, by the way, the timeline ID in the file is \n> hexadecimal.  Then one might ask, what file, what is a timeline, what \n> are the other numbers in the file, etc.  It seems very specific in \n> this context.  
I don't know if the format of these file names is \n> actually documented somewhere.\n\n\nWell, in the context of this patch, the usage both filename are \nexplained juste before, so it seems understandable to me\n\nTimelines are explained in this place : \nhttps://www.postgresql.org/docs/current/continuous-archiving.html#BACKUP-TIMELINES \nso the patch explains the format there\n\n\n>\n>>>>\n>>>\n>>> This applies to all configuration parameters, so it doesn't need to \n>>> be mentioned explicitly for individual ones.\n>>\n>> Probably, but is there another parameter with the same consequence ?\n>>\n>> worth it to document this point globally ?\n>\n> It's ok to mention it again.  We do something similar for example at \n> unix_socket_permissions.  But maybe with more context, like \"If you \n> want to specify a timeline ID hexadecimal (for example, if extracted \n> from a WAL file name), then prefix it with a 0x\".\n\n\nOk, I've improved the message\n\n\n>\n>>>\n>>> Maybe this could be fixed instead?\n>>\n>> Indeed, and strtoul is probably a better option than sscanf, don't \n>> you think ?\n>\n> Yeah, the use of sscanf() is kind of weird here.  We have been moving \n> the option parsing to use option_parse_int().  Maybe hex support could \n> be added there.  
Or just use strtoul().\n\n\nI've made the change with strtoul\n\nAbout option_parse_int(), actually, strtoint() is used, do we need a \noption_parse_ul() fonction ?\n\npatch attached,\n\nbest regards,\n\n\n-- \nSébastien", "msg_date": "Tue, 7 Mar 2023 18:14:58 +0100", "msg_from": "=?UTF-8?Q?S=c3=a9bastien_Lardi=c3=a8re?= <sebastien@lardiere.net>", "msg_from_op": true, "msg_subject": "Re: Timeline ID hexadecimal format" }, { "msg_contents": "I have committed the two documentation changes, with some minor adjustments.\n\nOn 07.03.23 18:14, Sébastien Lardière wrote:\n>>>> Maybe this could be fixed instead?\n>>>\n>>> Indeed, and strtoul is probably a better option than sscanf, don't \n>>> you think ?\n>>\n>> Yeah, the use of sscanf() is kind of weird here.  We have been moving \n>> the option parsing to use option_parse_int().  Maybe hex support could \n>> be added there.  Or just use strtoul().\n> \n> \n> I've made the change with strtoul\n> \n> About option_parse_int(), actually, strtoint() is used, do we need a \n> option_parse_ul() fonction ?\n\nFor the option parsing, I propose the attached patch. This follows the \nstructure of option_parse_int(), so in the future it could be extracted \nand refactored in the same way, if there is more need.", "msg_date": "Mon, 20 Mar 2023 09:17:27 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Timeline ID hexadecimal format" }, { "msg_contents": "On 20/03/2023 09:17, Peter Eisentraut wrote:\n> I have committed the two documentation changes, with some minor \n> adjustments.\n\n\nThank you,\n\n\n>\n> On 07.03.23 18:14, Sébastien Lardière wrote:\n>>>>> Maybe this could be fixed instead?\n>>>>\n>>>> Indeed, and strtoul is probably a better option than sscanf, don't \n>>>> you think ?\n>>>\n>>> Yeah, the use of sscanf() is kind of weird here.  We have been \n>>> moving the option parsing to use option_parse_int().  Maybe hex \n>>> support could be added there.  
Or just use strtoul().\n>>\n>>\n>> I've made the change with strtoul\n>>\n>> About option_parse_int(), actually, strtoint() is used, do we need a \n>> option_parse_ul() fonction ?\n>\n> For the option parsing, I propose the attached patch.  This follows \n> the structure of option_parse_int(), so in the future it could be \n> extracted and refactored in the same way, if there is more need.\n\n\nok for me, it accept 0x values and refuse wrong values\n\nthank you,\n\n\n-- \nSébastien\n\n\n\n", "msg_date": "Mon, 20 Mar 2023 10:40:31 +0100", "msg_from": "=?UTF-8?Q?S=c3=a9bastien_Lardi=c3=a8re?= <sebastien@lardiere.net>", "msg_from_op": true, "msg_subject": "Re: Timeline ID hexadecimal format" }, { "msg_contents": "On 20.03.23 10:40, Sébastien Lardière wrote:\n>>> About option_parse_int(), actually, strtoint() is used, do we need a \n>>> option_parse_ul() fonction ?\n>>\n>> For the option parsing, I propose the attached patch.  This follows \n>> the structure of option_parse_int(), so in the future it could be \n>> extracted and refactored in the same way, if there is more need.\n> \n> \n> ok for me, it accept 0x values and refuse wrong values\n\ncommitted\n\n\n\n", "msg_date": "Tue, 21 Mar 2023 08:15:22 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Timeline ID hexadecimal format" }, { "msg_contents": "On 21/03/2023 08:15, Peter Eisentraut wrote:\n> On 20.03.23 10:40, Sébastien Lardière wrote:\n>>>> About option_parse_int(), actually, strtoint() is used, do we need \n>>>> a option_parse_ul() fonction ?\n>>>\n>>> For the option parsing, I propose the attached patch.  
This follows \n>>> the structure of option_parse_int(), so in the future it could be \n>>> extracted and refactored in the same way, if there is more need.\n>>\n>>\n>> ok for me, it accept 0x values and refuse wrong values\n>\n> committed\n>\nthanks,\n\n\n-- \nSébastien\n\n\n\n", "msg_date": "Tue, 21 Mar 2023 08:37:45 +0100", "msg_from": "=?UTF-8?Q?S=c3=a9bastien_Lardi=c3=a8re?= <sebastien@lardiere.net>", "msg_from_op": true, "msg_subject": "Re: Timeline ID hexadecimal format" } ]
[ { "msg_contents": "Hi all,\n\nI am investigating the benefits of different profile-guided optimizations\n(PGO) and link-time optimizations (LTO) versus binary optimizers (e.g.\nBOLT) for applications such as PostgreSQL.\n\nI am facing issues when applying LTO to PostgreSQL as the produced binary\nseems broken (the server dies quickly after it has started). This is\ndefinitely a compiler bug, but I was wondering if anyone here have\nexperimented with LTO for PostgreSQL.\n\nThanks,\n\n-- \nJoão Paulo L. de Carvalho\nPh.D Computer Science | IC-UNICAMP | Campinas , SP - Brazil\nPostdoctoral Research Fellow | University of Alberta | Edmonton, AB - Canada\njoao.carvalho@ic.unicamp.br\njoao.carvalho@ualberta.ca\n\nHi all,I am investigating the benefits of different profile-guided optimizations (PGO) and link-time optimizations (LTO) versus binary optimizers (e.g. BOLT) for applications such as PostgreSQL.I am facing issues when applying LTO to PostgreSQL as the produced binary seems broken (the server dies quickly after it has started). This is definitely a compiler bug, but I was wondering if anyone here  have experimented with LTO for PostgreSQL.Thanks,-- João Paulo L. de CarvalhoPh.D Computer Science |  IC-UNICAMP | Campinas , SP - BrazilPostdoctoral Research Fellow | University of Alberta | Edmonton, AB - Canadajoao.carvalho@ic.unicamp.brjoao.carvalho@ualberta.ca", "msg_date": "Fri, 27 Jan 2023 10:05:09 -0700", "msg_from": "=?UTF-8?Q?Jo=C3=A3o_Paulo_Labegalini_de_Carvalho?=\n <jaopaulolc@gmail.com>", "msg_from_op": true, "msg_subject": "Optimizing PostgreSQL with LLVM's PGO+LTO" }, { "msg_contents": "Hi,\n\nWe have implemented LTO in PostGIS's build system a couple releases ago. It\ndefinitely gives +10% on heavy maths. 
Unfortunately we did not manage to\nget it running under FreeBSD because of default system linker issues so we\nhad to hide it under --with-lto switch which we recommend to everyone.\n\nI did not experiment with Postgres itself but there are definitely traces\nof numerous LTO-enabled private builds on the web.\n\nOn Fri, Jan 27, 2023 at 8:05 PM João Paulo Labegalini de Carvalho <\njaopaulolc@gmail.com> wrote:\n\n> Hi all,\n>\n> I am investigating the benefits of different profile-guided optimizations\n> (PGO) and link-time optimizations (LTO) versus binary optimizers (e.g.\n> BOLT) for applications such as PostgreSQL.\n>\n> I am facing issues when applying LTO to PostgreSQL as the produced binary\n> seems broken (the server dies quickly after it has started). This is\n> definitely a compiler bug, but I was wondering if anyone here have\n> experimented with LTO for PostgreSQL.\n>\n> Thanks,\n>\n> --\n> João Paulo L. de Carvalho\n> Ph.D Computer Science | IC-UNICAMP | Campinas , SP - Brazil\n> Postdoctoral Research Fellow | University of Alberta | Edmonton, AB -\n> Canada\n> joao.carvalho@ic.unicamp.br\n> joao.carvalho@ualberta.ca\n>\n", "msg_date": "Fri, 27 Jan 2023 22:09:14 +0300", "msg_from": "=?UTF-8?Q?Darafei_=22Kom=D1=8Fpa=22_Praliaskouski?= <me@komzpa.net>", "msg_from_op": false, "msg_subject": "Re: Optimizing PostgreSQL with LLVM's PGO+LTO" }, { "msg_contents": "=?UTF-8?Q?Jo=C3=A3o_Paulo_Labegalini_de_Carvalho?= <jaopaulolc@gmail.com> writes:\n> I am facing issues when applying LTO to PostgreSQL as the produced binary\n> seems broken (the server dies quickly after it has started). This is\n> definitely a compiler bug, but I was wondering if anyone here have\n> experimented with LTO for PostgreSQL.\n\nThere are a lot of places where we're implicitly relying on\ncross-compilation-unit optimizations NOT happening, because the\ncode isn't adequately decorated with memory barriers and the like.\nSo I wouldn't necessarily assume that the misbehavior you're seeing\nrepresents anything that the compiler folks would consider a bug.\n\nIn the long run we might be interested in trying to make this\nwork better, but I don't know of anyone working on it now.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 27 Jan 2023 15:06:37 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Optimizing PostgreSQL with LLVM's PGO+LTO" }, { "msg_contents": "Hi,\n\nOn 2023-01-27 10:05:09 -0700, João Paulo Labegalini de Carvalho wrote:\n> I am investigating the benefits of different profile-guided optimizations\n> (PGO) and link-time optimizations (LTO) versus binary optimizers (e.g.\n> BOLT) for applications such as PostgreSQL.\n> \n> I am facing issues when applying LTO to PostgreSQL as the produced binary\n> seems broken (the server dies
quickly after it has started). This is\n> definitely a compiler bug, but I was wondering if anyone here have\n> experimented with LTO for PostgreSQL.\n\nWhat compiler / version / flags / OS did you try?\n\n\nFWIW, I've experimented with LTO and PGO a bunch, both with gcc and clang. I\ndid hit a crash in gcc, but that did turn out to be a compiler bug, and\nactually reduced to something not even needing LTO.\n\nI saw quite substantial speedups with PGO, but I only tested very specific\nworkloads. IIRC it was >15% gain in concurrent readonly pgbench.\n\n\nI dimly recall failing to get some benefit out of bolt for some reason that I\nunfortunately don't even vaguely recall.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 27 Jan 2023 15:07:52 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Optimizing PostgreSQL with LLVM's PGO+LTO" }, { "msg_contents": "Hi,\n\nOn 2023-01-27 15:06:37 -0500, Tom Lane wrote:\n> There are a lot of places where we're implicitly relying on\n> cross-compilation-unit optimizations NOT happening, because the code isn't\n> adequately decorated with memory barriers and the like.\n\nWe have a fallback compiler barrier implementation doing that, but it\nshouldn't be used on any halfway reasonable compiler. Cross-compilation-unit\ncalls don't provide a memory barrier - I assume you're thinking about a\ncompiler barrier?\n\nI'm sure we have a few places that aren't that careful, but I would hope it's\nnot a large number. 
Are you thinking of specific \"patterns\" we've repeated all\nover, or just a few cases you recall?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 27 Jan 2023 15:08:50 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Optimizing PostgreSQL with LLVM's PGO+LTO" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2023-01-27 15:06:37 -0500, Tom Lane wrote:\n>> There are a lot of places where we're implicitly relying on\n>> cross-compilation-unit optimizations NOT happening, because the code isn't\n>> adequately decorated with memory barriers and the like.\n\n> We have a fallback compiler barrier implementation doing that, but it\n> shouldn't be used on any halfway reasonable compiler. Cross-compilation-unit\n> calls don't provide a memory barrier - I assume you're thinking about a\n> compiler barrier?\n\nSorry, yeah, I was being sloppy there.\n\n> I'm sure we have a few places that aren't that careful, but I would hope it's\n> not a large number. Are you thinking of specific \"patterns\" we've repeated all\n> over, or just a few cases you recall?\n\nI recall that we used to have dependencies on, for example, the LWLock\nfunctions being out-of-line. Probably that specific pain point has\nbeen cleaned up, but it surprises me not at all to hear that there\nare more.\n\nI agree that there are probably not a huge number of places that would\nneed to be fixed, but I'm not sure how we'd go about finding them.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 27 Jan 2023 18:28:16 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Optimizing PostgreSQL with LLVM's PGO+LTO" }, { "msg_contents": "Hi,\n\nOn 2023-01-27 18:28:16 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I'm sure we have a few places that aren't that careful, but I would hope it's\n> > not a large number. 
Are you thinking of specific \"patterns\" we've repeated all\n> > over, or just a few cases you recall?\n> \n> I recall that we used to have dependencies on, for example, the LWLock\n> functions being out-of-line. Probably that specific pain point has\n> been cleaned up, but it surprises me not at all to hear that there\n> are more.\n\nWe did clean up a fair bit, some via \"infrastructure\" fixes. E.g. our\nspinlocks didn't use to be a barrier a good while back (c.f. 0709b7ee72e), and\nthat required putting volatile on things that couldn't move across the lock\nboundaries. I think that in turn was what caused the LWLock issue you\nmention, as back then lwlocks used spinlocks.\n\nThe increased use of atomics instead of \"let's just do a dirty read\", fixed a\nfew instances too.\n\n\n> I agree that there are probably not a huge number of places that would\n> need to be fixed, but I'm not sure how we'd go about finding them.\n\nYea, that's the annoying part...\n\n\nOne thing we can look for is the use of volatile, which we used to use a lot\nfor preventing code rearrangement (for lack of barrier primitives in the bad\nold days). Both Robert and I removed a bunch of that kind of use of volatile,\nand from memory some of them wouldn't have been safe with LTO.\n\nIt's really too bad that we [have to] use volatile around signal handlers and\nfor PG_TRY too, otherwise it'd be easier to search for.\n\nKinda wondering if we ought to add a sig_volatile, err_volatile or such.\n\n\nBut the main thing probably is to just regularly test LTO and look for\nproblems. 
Perhaps worth adding a BF animal that uses -O3 + LTO?\n\nI don't immediately see how to squeeze using PGO into the BF build process\n(since we'd have to build without PGO, run some workload, build with PGO -\nwithout any source modifications inbetween)...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 27 Jan 2023 16:45:31 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Optimizing PostgreSQL with LLVM's PGO+LTO" }, { "msg_contents": "> What compiler / version / flags / OS did you try?\n>\n\nI am running experiment on a machine with:\n\n - Intel(R) Xeon(R) Platinum 8268 CPU @ 2.90GHz\n - Ubuntu 18.04.6 LTS\n - LLVM/Clang 15.0.6 (build from source)\n\nThese are the flags I am using:\n\nCFLAGS = -O3 -fuse-ld=lld -gline-tables-only -fprofile-instr-generate\nLDFLAGS = -fuse-ld=lld -Wl,-q\n\n\nFWIW, I've experimented with LTO and PGO a bunch, both with gcc and clang. I\n> did hit a crash in gcc, but that did turn out to be a compiler bug, and\n> actually reduced to something not even needing LTO.\n>\n\nGood to hear that it works. I just need to figure out what is going wrong\non my end then.\n\n\n> I saw quite substantial speedups with PGO, but I only tested very specific\n> workloads. IIRC it was >15% gain in concurrent readonly pgbench.\n>\n\nI successfully applied PGO only and obtained similar gains with TPC-C &\nTPC-H workloads.\n\nI dimly recall failing to get some benefit out of bolt for some reason that\n> I\n> unfortunately don't even vaguely recall.\n>\n\nI got similar gains slightly higher than PGO with BOLT, but not for all\nqueries in TPC-H. In fact, I observed small (2-4%) regressions with BOLT.\n\n-- \nJoão Paulo L. 
de Carvalho\nPh.D Computer Science | IC-UNICAMP | Campinas , SP - Brazil\nPostdoctoral Research Fellow | University of Alberta | Edmonton, AB - Canada\njoao.carvalho@ic.unicamp.br\njoao.carvalho@ualberta.ca"
, "msg_date": "Mon, 30 Jan 2023 10:24:02 -0700", "msg_from": "=?UTF-8?Q?Jo=C3=A3o_Paulo_Labegalini_de_Carvalho?=\n <jaopaulolc@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Optimizing PostgreSQL with LLVM's PGO+LTO" }, { "msg_contents": "Hi,\n\nOn 2023-01-30 10:24:02 -0700, João Paulo Labegalini de Carvalho wrote:\n> > What compiler / version / flags / OS did you try?\n> >\n> \n> I am running experiment on a machine with:\n> \n> - Intel(R) Xeon(R) Platinum 8268 CPU @ 2.90GHz\n> - Ubuntu 18.04.6 LTS\n> - LLVM/Clang 15.0.6 (build from source)\n> \n> These are the flags I am using:\n> \n> CFLAGS = -O3 -fuse-ld=lld -gline-tables-only -fprofile-instr-generate\n> LDFLAGS = -fuse-ld=lld -Wl,-q\n\nFor some reason my notes for using LTO include changing RANLIB to point to\ngcc/llvm-ranlib of the appropriate version. Won't even be used on HEAD, but\nbefore that it can make a difference.\n\nDepending on how you built clang, it could be that the above recipe ends up\nusing the system lld, which might be too old.\n\nWhat are the crashes you're getting?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 30 Jan 2023 09:47:48 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Optimizing PostgreSQL with LLVM's PGO+LTO" }, { "msg_contents": "On Mon, Jan 30, 2023 at 10:47 AM Andres Freund <andres@anarazel.de> wrote:\n\n> For some reason my notes for using LTO include changing RANLIB to point to\n> gcc/llvm-ranlib of the appropriate version. 
Won't even be used on HEAD, but\n> before that it can make a difference.\n>\n\nI will try that.\n\n\n> Depending on how you built clang, it could be that the above recipe ends up\n> using the system lld, which might be too old.\n>\n\nI double checked and I am using the lld that I built from source.\n\n\n> What are the crashes you're getting?\n>\n\nWhen I run make check, the server starts up fine but the test queries seem\nto not execute. I don't see any errors, the check step just quits after a\nwhile.\n\n2023-02-01 13:00:38.703 EST postmaster[28750] LOG:  starting PostgreSQL\n14.5 on x86_64-pc-linux-gnu, compiled by clang version 15.0.6, 64-bit\n2023-02-01 13:00:38.703 EST postmaster[28750] LOG:  listening on Unix\nsocket \"/tmp/pg_regress-h8Fmqu/.s.PGSQL.58085\"\n2023-02-01 13:00:38.704 EST startup[28753] LOG:  database system was shut\ndown at 2023-02-01 13:00:38 EST\n2023-02-01 13:00:38.705 EST postmaster[28750] LOG:  database system is\nready to accept connections\n\n-- \nJoão Paulo L. de Carvalho\nPh.D Computer Science | IC-UNICAMP | Campinas , SP - Brazil\nPostdoctoral Research Fellow | University of Alberta | Edmonton, AB - Canada\njoao.carvalho@ic.unicamp.br\njoao.carvalho@ualberta.ca"
, "msg_date": "Wed, 1 Feb 2023 11:19:55 -0700", "msg_from": "=?UTF-8?Q?Jo=C3=A3o_Paulo_Labegalini_de_Carvalho?=\n <jaopaulolc@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Optimizing PostgreSQL with LLVM's PGO+LTO" } ]
[ { "msg_contents": "This patch adds the n_tup_newpage_upd to all the table stat views.\n\nJust as we currently track HOT updates, it should be beneficial to track\nupdates where the new tuple cannot fit on the existing page and must go to\na different one.\n\nHopefully this can give users some insight as to whether their current\nfillfactor settings need to be adjusted.\n\nMy chosen implementation replaces the hot-update boolean with an\nupdate_type which is currently a three-value enum. I favored that\nonly slightly over adding a separate newpage-update boolean because the two\nevents are mutually exclusive and fewer parameters is less overhead and one\nless assertion check. The relative wisdom of this choice may not come to\nlight until we add a new measurement and see whether that new measurement\noverlaps either is-hot or is-new-page.", "msg_date": "Fri, 27 Jan 2023 18:23:39 -0500", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": true, "msg_subject": "Add n_tup_newpage_upd to pg_stat table views" }, { "msg_contents": "Hi,\n\nOn 2023-01-27 18:23:39 -0500, Corey Huinker wrote:\n> This patch adds the n_tup_newpage_upd to all the table stat views.\n>\n> Just as we currently track HOT updates, it should be beneficial to track\n> updates where the new tuple cannot fit on the existing page and must go to\n> a different one.\n\nI like that idea.\n\n\nI wonder if it's quite detailed enough - we can be forced to do out-of-page\nupdates because we actually are out of space, or because we reach the max\nnumber of line pointers we allow in a page. HOT pruning can't remove dead line\npointers, so that doesn't necessarily help.\n\nWhich e.g. means that:\n\n> Hopefully this can give users some insight as to whether their current\n> fillfactor settings need to be adjusted.\n\nIsn't that easy, because you can have a page with just a visible single tuple\non, but still be unable to do a same-page update. 
The fix instead is to VACUUM\n(more aggressively).\n\n\nOTOH, just seeing that there's high percentage \"out-of-page updates\" provides\nmore information than we have right now. And the alternative would be to add\nyet another counter.\n\n\nSimilarly, it's a bit sad that we can't distinguish between the number of\npotential-HOT out-of-page updates and the other out-of-page updates. But\nthat'd mean even more counters.\n\n\nI guess we could try to add tracepoints to allow to distinguish between those\ncases instead? Not a lot of people use those though.\n\n\n\n> @@ -372,8 +372,11 @@ pgstat_count_heap_update(Relation rel, bool hot)\n> \t\tpgstat_info->trans->tuples_updated++;\n>\n> \t\t/* t_tuples_hot_updated is nontransactional, so just advance it */\n> -\t\tif (hot)\n> +\t\tif (hut == PGSTAT_HEAPUPDATE_HOT)\n> \t\t\tpgstat_info->t_counts.t_tuples_hot_updated++;\n> +\t\telse if (hut == PGSTAT_HEAPUPDATE_NEW_PAGE)\n> +\t\t\tpgstat_info->t_counts.t_tuples_newpage_updated++;\n> +\n> \t}\n> }\n>\n\nI think this might cause some trouble for existing monitoring setups after an\nupgrade. Suddenly the number of updates will appear way lower than\nbefore... 
And if we end up eventually distinguishing between different reasons\nfor out-of-page updates, or hot/non-hot out-of-page that'll happen again.\n\nI wish we'd included HOT updates in the total number of updates, and just kept\nHOT updates a separate counter that'd always be less than updates in total.\n\n\n From that angle: Perhaps it'd be better to have counter for how many times a\npage is found to be full during an update?\n\n\n> --- a/src/backend/access/heap/heapam.c\n> +++ b/src/backend/access/heap/heapam.c\n> @@ -3155,7 +3155,8 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,\n> \t\t\t\tpagefree;\n> \tbool\t\thave_tuple_lock = false;\n> \tbool\t\tiscombo;\n> -\tbool\t\tuse_hot_update = false;\n> +\tPgStat_HeapUpdateType update_type = PGSTAT_HEAPUPDATE_NON_HOT;\n> +\n> \tbool\t\tkey_intact;\n> \tbool\t\tall_visible_cleared = false;\n> \tbool\t\tall_visible_cleared_new = false;\n> @@ -3838,10 +3839,11 @@ l2:\n> \t\t * changed.\n> \t\t */\n> \t\tif (!bms_overlap(modified_attrs, hot_attrs))\n> -\t\t\tuse_hot_update = true;\n> +\t\t\tupdate_type = PGSTAT_HEAPUPDATE_HOT;\n> \t}\n> \telse\n> \t{\n> +\t\tupdate_type = PGSTAT_HEAPUPDATE_NEW_PAGE;\n> \t\t/* Set a hint that the old page could use prune/defrag */\n> \t\tPageSetFull(page);\n> \t}\n> @@ -3875,7 +3877,7 @@ l2:\n> \t */\n> \tPageSetPrunable(page, xid);\n>\n> -\tif (use_hot_update)\n> +\tif (update_type == PGSTAT_HEAPUPDATE_HOT)\n\nIt's a bit weird that heap_update() uses a pgstat type to decide what to\ndo. 
But not sure there's a much better alternative.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 27 Jan 2023 15:55:14 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Add n_tup_newpage_upd to pg_stat table views" }, { "msg_contents": "On Fri, Jan 27, 2023 at 3:55 PM Andres Freund <andres@anarazel.de> wrote:\n> I wonder if it's quite detailed enough - we can be forced to do out-of-page\n> updates because we actually are out of space, or because we reach the max\n> number of line pointers we allow in a page. HOT pruning can't remove dead line\n> pointers, so that doesn't necessarily help.\n\nIt would be hard to apply that kind of information, I suspect. Maybe\nit's still worth having, though.\n\n> Similarly, it's a bit sad that we can't distinguish between the number of\n> potential-HOT out-of-page updates and the other out-of-page updates. But\n> that'd mean even more counters.\n\nISTM that it would make more sense to do that at the index level\ninstead. It wouldn't be all that hard to teach ExecInsertIndexTuples()\nto remember whether each index received the indexUnchanged hint used\nby bottom-up deletion, which is approximately the same thing, but\nworks at the index level.\n\nThis is obviously more useful, because you have index-granularity\ninformation that can guide users in how to index to maximize the\nnumber of HOT updates. And, even if changing things around didn't lead\nto the hoped-for improvement in the rate of HOT updates, it would at\nleast still allow the indexes on the table to use bottom-up deletion\nmore often, on average.\n\nAdmittedly this has some problems. 
The index_unchanged_by_update()\nlogic probably isn't as sophisticated as it ought to be because it's\ndriven by the statement-level extraUpdatedCols bitmap set, and not a\nper-tuple test, like the HOT safety test in heap_update() is.\nBut...that should probably be fixed anyway.\n\n> I think this might cause some trouble for existing monitoring setups after an\n> upgrade. Suddenly the number of updates will appear way lower than\n> before... And if we end up eventually distinguishing between different reasons\n> for out-of-page updates, or hot/non-hot out-of-page that'll happen again.\n\nUh...no it won't? The new counter is totally independent of the existing\nHOT counter, and the transactional all-updates counter. It's just that\nthere is an enum that encodes which of the two non-transactional \"sub\ncounters\" to use (either for HOT updates or new-page-migration\nupdates).\n\n> I wish we'd included HOT updates in the total number of updates, and just kept\n> HOT updates a separate counter that'd always be less than updates in total.\n\nUh...we did in fact do it that way to begin with?\n\n> From that angle: Perhaps it'd be better to have counter for how many times a\n> page is found to be full during an update?\n\nDidn't Corey propose a patch to add just that? Do you mean something\nmore specific, like a tracker for when an UPDATE leaves a page full,\nwithout needing to go to a new page itself?\n\nIf so, then that does require defining what that really means, because\nit isn't trivial. 
Do you assume that all updates have a successor\nversion that is equal in size to that of the UPDATE that gets counted\nby this hypothetical other counter of yours?\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Fri, 27 Jan 2023 17:59:32 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Add n_tup_newpage_upd to pg_stat table views" }, { "msg_contents": "Hi,\n\nOn 2023-01-27 17:59:32 -0800, Peter Geoghegan wrote:\n> > I think this might cause some trouble for existing monitoring setups after an\n> > upgrade. Suddenly the number of updates will appear way lower than\n> > before... And if we end up eventually distinguishing between different reasons\n> > for out-of-page updates, or hot/non-hot out-of-page that'll happen again.\n> \n> Uh...no it won't? The new counter is totally independent of the existing\n> HOT counter, and the transactional all-updates counter. It's just that\n> there is an enum that encodes which of the two non-transactional \"sub\n> counters\" to use (either for HOT updates or new-page-migration\n> updates).\n>\n> > I wish we'd included HOT updates in the total number of updates, and just kept\n> > HOT updates a separate counter that'd always be less than updates in total.\n> \n> Uh...we did in fact do it that way to begin with?\n\nSorry, I misread the diff, and then misremembered some old issue.\n\n\n> > From that angle: Perhaps it'd be better to have counter for how many times a\n> > page is found to be full during an update?\n> \n> Didn't Corey propose a patch to add just that? Do you mean something\n> more specific, like a tracker for when an UPDATE leaves a page full,\n> without needing to go to a new page itself?\n\nNope, I just had a brainfart.\n\n\n> > Similarly, it's a bit sad that we can't distinguish between the number of\n> > potential-HOT out-of-page updates and the other out-of-page updates. 
But\n> > that'd mean even more counters.\n> \n> ISTM that it would make more sense to do that at the index level\n> instead. It wouldn't be all that hard to teach ExecInsertIndexTuples()\n> to remember whether each index received the indexUnchanged hint used\n> by bottom-up deletion, which is approximately the same thing, but\n> works at the index level.\n\nI don't think that'd make it particularly easy to figure out how often\nout-of-space causes non-HOT updates to go out of page, and how often it causes\npotential HOT updates to go out of page. If you just have a single index,\nit's not too hard, but after that seems decidedly nontrival.\n\nBut I might just be missing what you're suggesting.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 27 Jan 2023 18:44:32 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Add n_tup_newpage_upd to pg_stat table views" }, { "msg_contents": "On Fri, Jan 27, 2023 at 6:44 PM Andres Freund <andres@anarazel.de> wrote:\n> I don't think that'd make it particularly easy to figure out how often\n> out-of-space causes non-HOT updates to go out of page, and how often it causes\n> potential HOT updates to go out of page. If you just have a single index,\n> it's not too hard, but after that seems decidedly nontrival.\n>\n> But I might just be missing what you're suggesting.\n\nIt would be useless for that, of course. But it would be a good proxy\nfor what specific indexes force non-hot updates due to HOT safety\nissues. This would work independently of the issue of what's going on\nin the heap. 
That matters too, of course, but in practice the main\nproblem is likely the specific combination of indexes and updates.\n(Maybe it would just be an issue with heap fill factor, at times, but\neven then you'd still want to rule out basic HOT safety issues first.)\n\nIf you see one particular index that gets a far larger number of\nnon-hot updates that are reported as \"logical changes to the indexed\ncolumns\", then dropping that index has the potential to make the HOT\nupdate situation far better.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Fri, 27 Jan 2023 18:51:09 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Add n_tup_newpage_upd to pg_stat table views" }, { "msg_contents": "On Fri, Jan 27, 2023 at 6:55 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2023-01-27 18:23:39 -0500, Corey Huinker wrote:\n> > This patch adds the n_tup_newpage_upd to all the table stat views.\n> >\n> > Just as we currently track HOT updates, it should be beneficial to track\n> > updates where the new tuple cannot fit on the existing page and must go\n> to\n> > a different one.\n>\n> I like that idea.\n>\n>\n> I wonder if it's quite detailed enough - we can be forced to do out-of-page\n> updates because we actually are out of space, or because we reach the max\n> number of line pointers we allow in a page. HOT pruning can't remove dead\n> line\n> pointers, so that doesn't necessarily help.\n>\n\nI must be missing something, I only see the check for running out of space,\nnot the check for exhausting line pointers. I agree dividing them would be\ninteresting.\n\n\n\n> Similarly, it's a bit sad that we can't distinguish between the number of\n> potential-HOT out-of-page updates and the other out-of-page updates. 
But\n> that'd mean even more counters.\n>\n\nI wondered that too, but the combinations of \"would have been HOT but not\nno space\" and \"key update suggested not-HOT but it was id=id so today's\nyour lucky HOT\" combinations started to get away from me.\n\nI wondered if there was interest in knowing if the tuple had to get\nTOASTed, and further wondered if we would be interested in the number of\nupdates that had to wait on a lock. 
Again, more counters.", "msg_date": "Mon, 30 Jan 2023 13:40:15 -0500", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add n_tup_newpage_upd to pg_stat table views" }, { "msg_contents": "Hi,\n\nOn 2023-01-30 13:40:15 -0500, Corey Huinker wrote:\n> I must be missing something, I only see the check for running out of space,\n> not the check for exhausting line pointers. I agree dividing them would be\n> interesting.\n\nSee PageGetHeapFreeSpace(), particularly the header comment and the\nMaxHeapTuplesPerPage check.\n\n\n> > Similarly, it's a bit sad that we can't distinguish between the number of\n> > potential-HOT out-of-page updates and the other out-of-page updates. But\n> > that'd mean even more counters.\n>\n> I wondered that too, but the combinations of \"would have been HOT but not\n> no space\" and \"key update suggested not-HOT but it was id=id so today's\n> your lucky HOT\" combinations started to get away from me.\n\nNot sure I follow the second part. Are you just worried about explaining when\na HOT update is possible?\n\n\n> I wondered if there was interest in knowing if the tuple had to get\n> TOASTed, and further wondered if we would be interested in the number of\n> updates that had to wait on a lock. 
Again, more counters.\n\nThose seem a lot less actionable / related to the topic at hand to me.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 30 Jan 2023 10:45:28 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Add n_tup_newpage_upd to pg_stat table views" }, { "msg_contents": "On Fri, Jan 27, 2023 at 3:23 PM Corey Huinker <corey.huinker@gmail.com> wrote:\n> This patch adds the n_tup_newpage_upd to all the table stat views.\n\nI think that this is pretty close to being committable already.\n\nI'll move on that early next week, barring any objections.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 17 Mar 2023 15:22:54 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Add n_tup_newpage_upd to pg_stat table views" }, { "msg_contents": "On Fri, Mar 17, 2023 at 3:22 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I think that this is pretty close to being committable already.\n\nAttached revision has some small tweaks by me. Going to commit this\nrevised version tomorrow morning.\n\nChanges:\n\n* No more dedicated struct to carry around the type of an update.\n\nWe just use two boolean arguments to the pgstats function instead. The\nstruct didn't seem to be adding much, and it was distracting to track\nthe information this way within heap_update().\n\n* Small adjustments to the documentation.\n\nNearby related items were tweaked slightly to make everything fit\ntogether a bit better. 
For example, the description of n_tup_hot_upd\nis revised to make it obvious that n_tup_hot_upd counts row updates\nthat can never get counted under the new n_tup_newpage_upd counter.\n\n--\nPeter Geoghegan", "msg_date": "Wed, 22 Mar 2023 17:14:08 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Add n_tup_newpage_upd to pg_stat table views" }, { "msg_contents": "On Wed, Mar 22, 2023 at 05:14:08PM -0700, Peter Geoghegan wrote:\n> * Small adjustments to the documentation.\n> \n> Nearby related items were tweaked slightly to make everything fit\n> together a bit better. For example, the description of n_tup_hot_upd\n> is revised to make it obvious that n_tup_hot_upd counts row updates\n> that can never get counted under the new n_tup_newpage_upd counter.\n\n@@ -168,6 +168,7 @@ typedef struct PgStat_TableCounts\n PgStat_Counter t_tuples_updated;\n PgStat_Counter t_tuples_deleted;\n PgStat_Counter t_tuples_hot_updated;\n+ PgStat_Counter t_tuples_newpage_updated;\n bool t_truncdropped;\n\nI have in the works something that's going to rename these fields to\nnot have the \"t_\" prefix anymore, to ease some global refactoring in\npgstatfuncs.c so as we have less repetitive code with the functions \nthat grab these counters. I don't think that's something you need to\nname without the prefix here, just a FYI that this is going to be\nimmediately renamed ;)\n--\nMichael", "msg_date": "Thu, 23 Mar 2023 09:24:12 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add n_tup_newpage_upd to pg_stat table views" }, { "msg_contents": ">\n>\n> * No more dedicated struct to carry around the type of an update.\n>\n> We just use two boolean arguments to the pgstats function instead. 
The\n> struct didn't seem to be adding much, and it was distracting to track\n> the information this way within heap_update().\n>\n\nThat's probably a good move, especially if we start tallying updates that\nuse TOAST.\n\n", "msg_date": "Thu, 23 Mar 2023 01:38:42 -0400", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add n_tup_newpage_upd to pg_stat table views" }, { "msg_contents": "On Wed, Mar 22, 2023 at 10:38 PM Corey Huinker <corey.huinker@gmail.com> wrote:\n> That's probably a good move, especially if we start tallying updates that use TOAST.\n\nOkay, pushed.\n\nThanks\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 23 Mar 2023 11:18:08 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Add n_tup_newpage_upd to pg_stat table views" } ]
[ { "msg_contents": "Hi,\n\nI found what appears to be a small harmless error in numeric.c,\nthat seems worthwhile to fix only because it's currently causes confusion.\n\nIt hasn't caused any problems, since the incorrect formula happens to\nalways produce the same result for DEC_DIGITS==4.\n\nHowever, for other DEC_DIGITS values, it causes an undesired variation in the\nprecision of the results returned by sqrt().\n\nTo understand the problem, let's look at the equivalent formula for sweight,\nwhen replacing DEC_DIGITS with the values 1, 2, 4:\n\nHEAD, unpatched:\n sweight = (arg.weight + 1) * DEC_DIGITS / 2 - 1\nRewritten:\n (arg.weight + 1) * 1 / 2 - 1 <=> arg.weight / 2 - 1 / 2\n (arg.weight + 1) * 2 / 2 - 1 <=> arg.weight\n (arg.weight + 1) * 4 / 2 - 1 <=> 2 * arg.weight + 1\n\nHEAD, patched:\n sweight = (arg.weight * DEC_DIGITS) / 2 + 1\nRewritten:\n (arg.weight * 1) / 2 + 1 <=> arg.weight / 2 + 1\n (arg.weight * 2) / 2 + 1 <=> arg.weight + 1\n (arg.weight * 4) / 2 + 1 <=> 2 * arg.weight + 1\n\nAs we can see, the equivalent formula for the patched version is arg.weight\ntimes half the DEC_DIGITS, plus one.\n\nThe first part of the formula is the same but note how the patched version\ngives a constant addition of `+ 1` regardless of the DEC_DIGITS value,\nwhereas the unpatched version gives strange subtractions/additions\nsuch as `- 1 / 2` and `+ 3`.\n\nDemonstration of the undesired result digit precision variation effect:\n\nHEAD, unpatched:\nDEC_DIGITS sqrt(2::numeric)\n4 1.414213562373095\n2 1.4142135623730950\n1 1.41421356237309505\n\nHEAD, patched:\nDEC_DIGITS sqrt(2::numeric)\n4 1.414213562373095\n2 1.414213562373095\n1 1.414213562373095\n\nThe patched version consistently returns 16 significant digits for sqrt(2::numeric)\nwhen DEC_DIGITS is 1, 2 and 4, whereas the unpatched version surprisingly\ngives 18 sig. digits for DEC_DIGITS==1 and 17 sig. 
digits for DEC_DIGITS==2.\n\nNote, however, that it's still possible to find examples of when sqrt(numeric)\nproduce results with different precision for different DEC_DIGITS/NBASE values,\nbut in such cases, it's intentional, and due to getting additional precision\nfor free, since the larger the NBASE, the more decimal digits are produced\nat the same time per iteration in the calculation.\n\nExample:\n\nHEAD, unpatched\nDEC_DIGITS sqrt(102::numeric)\n4 10.09950493836208\n2 10.099504938362078\n1 10.0995049383620780\n\nHEAD, patched:\nDEC_DIGITS sqrt(102::numeric)\n4 10.099504938362078\n2 10.09950493836208\n1 10.09950493836208\n\nAccording to the comment in numeric_sqrt(), the goal is to give at least\nNUMERIC_MIN_SIG_DIGITS (16) significant digits.\n\nSince 10.09950493836208 has 16 significant digits, we can see above how\nDEC_DIGITS==2 causes an additional unnecessary significant digit to be computed,\nand for DEC_DIGITS==1, two additional unnecessary significant digits are\ncomputed.\n\nThe patched version returns 16 significant digits as expected for DEC_DIGITS==2\nand DEC_DIGITS==1, and for DEC_DIGITS==4 we get an additional digit for free.\n\nTo see why we should get an additional digit for the DEC_DIGITS==4 case,\nlet's enable NUMERIC_DEBUG and look at the result:\n\nSELECT sqrt(102::numeric);\nmake_result(): NUMERIC w=0 d=0 POS 0102\nmake_result(): NUMERIC w=0 d=15 POS 0010 0995 0493 8362 0780\n sqrt\n--------------------\n10.099504938362078\n(1 row)\n\nSince 10.099504938362 has only 14 sig. digits, we need one more NBASE digit\nin the result, thus 0780 is computed, and we get an extra decimal digit for\nfree.\n\nCompare this to DEC_DIGITS==2, which for the patched version correctly\nreturns 10.09950493836208, since the last produced NBASE digit `08`\nis sufficient, i.e. with it, the result has 16 sig. 
decimal digits,\nwhich is enough, since NUMERIC_MIN_SIG_DIGITS==16.\n\nIn conclusion, the proposed patch fixes a harmless problem, but is important\nto fix, since otherwise, anyone who want to experiment with different\nDEC_DIGITS/NBASE combinations by changing the `#if 0` preprocessor values\nin the top of numeric.c will get surprising results from sqrt().\n\nIn passing, also add pow10[] values for DEC_DIGITS==2 and DEC_DIGITS==1,\nsince otherwise it's not possible to compile such DEC_DIGITS values\ndue to the assert:\n\n    StaticAssertDecl(lengthof(pow10) == DEC_DIGITS, \"mismatch with DEC_DIGITS\");\n\n/Joel\n", "msg_date": "Sat, 28 Jan 2023 23:13:47 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "[PATCH] Fix old thinko in formula to compute sweight in\n numeric_sqrt()." }, { "msg_contents": "On Sat, 28 Jan 2023 at 22:14, Joel Jacobson <joel@compiler.org> wrote:\n>\n> I found what appears to be a small harmless error in numeric.c,\n> that seems worthwhile to fix only because it's currently causes confusion.\n>\n\nShrug. Looking at git blame, it's been like that for about 20 years,\nand I wasn't aware of it causing confusion.\n\n> HEAD, patched:\n> sweight = (arg.weight * DEC_DIGITS) / 2 + 1\n>\n\nYou haven't actually said why this formula is more correct than the\ncurrent one. 
I believe that it is when arg.weight >= 0, but I'm not\nconvinced it's correct for arg.weight < 0.\n\nGiven that this is only an approximate computation, which ensures\nroughly 16 significant digits in the result, but can't guarantee\nexactly 16 digits, I'm not convinced of the benefits of changing it.\n\n> Note, however, that it's still possible to find examples of when sqrt(numeric)\n> produce results with different precision for different DEC_DIGITS/NBASE values,\n> but in such cases, it's intentional, and due to getting additional precision\n> for free, since the larger the NBASE, the more decimal digits are produced\n> at the same time per iteration in the calculation.\n>\n> Example:\n>\n> HEAD, unpatched\n> DEC_DIGITS sqrt(102::numeric)\n> 4 10.09950493836208\n> 2 10.099504938362078\n> 1 10.0995049383620780\n>\n> HEAD, patched:\n> DEC_DIGITS sqrt(102::numeric)\n> 4 10.099504938362078\n> 2 10.09950493836208\n> 1 10.09950493836208\n>\n> According to the comment in numeric_sqrt(), the goal is to give at least\n> NUMERIC_MIN_SIG_DIGITS (16) significant digits.\n>\n> Since 10.09950493836208 has 16 significant digits, we can see above how\n> DEC_DIGITS==2 causes an additional unnecessary significant digit to be computed,\n> and for DEC_DIGITS==1, two additional unnecessary significant digits are\n> computed.\n>\n> The patched version returns 16 significant digits as expected for DEC_DIGITS==2\n> and DEC_DIGITS==1, and for DEC_DIGITS==4 we get an additional digit for free.\n>\n\nYou lost me here. In unpatched HEAD, sqrt(102::numeric) produces\n10.099504938362078, not 10.09950493836208 (with DEC_DIGITS = 4). 
And\nwasn't your previous point that when DEC_DIGITS = 4, the new formula\nis the same as the old one?\n\n> In conclusion, the proposed patch fixes a harmless problem, but is important\n> to fix, since otherwise, anyone who want to experiment with different\n> DEC_DIGITS/NBASE combinations by changing the `#if 0` preprocessor values\n> in the top of numeric.c will get surprising results from sqrt().\n>\n\nAnyone changing DEC_DIGITS/NBASE will find hundreds of regression test\nfailures due to changes in result precision all over the place (and\nfailures due to the overflow limit changing). I don't see why\nnumeric_sqrt() should be singled out for fixing.\n\n> In passing, also add pow10[] values for DEC_DIGITS==2 and DEC_DIGITS==1,\n> since otherwise it's not possible to compile such DEC_DIGITS values\n> due to the assert:\n>\n> StaticAssertDecl(lengthof(pow10) == DEC_DIGITS, \"mismatch with DEC_DIGITS\");\n>\n\nThat might be worth doing, to ensure that the code still compiles for\nother DEC_DIGITS/NBASE values. I'm not sure how useful that really is\nanymore though. As the comment at the top says, it's kept mostly for\nhistorical reasons.\n\nRegards,\nDean\n\n\n", "msg_date": "Sun, 29 Jan 2023 13:33:22 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix old thinko in formula to compute sweight in\n numeric_sqrt()." }, { "msg_contents": "Hi,\n\nOn Sun, Jan 29, 2023, at 14:33, Dean Rasheed wrote:\n> On Sat, 28 Jan 2023 at 22:14, Joel Jacobson <joel@compiler.org> wrote:\n>> HEAD, patched:\n>> sweight = (arg.weight * DEC_DIGITS) / 2 + 1\n>\n> You haven't actually said why this formula is more correct than the\n> current one. I believe that it is when arg.weight >= 0, but I'm not\n> convinced it's correct for arg.weight < 0.\n\nOps, I totally failed to consider arg.weight < 0. 
Nice catch, thanks!\n\nIt seems that,\nwhen arg.weight < 0, the current sweight formula in HEAD is correct, and,\nwhen arg.weight >= 0, the formula I suggested seems to be an improvement.\n\nI think this is what we want:\n\n\tif (arg.weight < 0)\n\t\tsweight = (arg.weight + 1) * DEC_DIGITS / 2 - 1;\n\telse\n\t\tsweight = arg.weight * DEC_DIGITS / 2 + 1;\n\nFor DEC_DIGITS == 4, they are both the same, and become (arg.weight * 2 + 1).\nBut when DEC_DIGITS != 4 && arg.weight >= 0, it's an improvement,\nas we then don't need to generate unnecessarily many sig. digits,\nand more often get exactly NUMERIC_MIN_SIG_DIGITS in the result,\nwhile still never getting fewer than NUMERIC_MIN_SIG_DIGITS.\n\nWhen DEC_DIGITS == 4, the compiler optimizes away the if/else,\nand compiles it to exactly the same code as the\ncurrent sweight formula, tested with Godbolt:\n\nTest code:\n#define DEC_DIGITS 4\nint sweight(weight) {\n\tif (weight < 0)\n\t\treturn (weight + 1) * DEC_DIGITS / 2 - 1;\n\telse\n\t\treturn weight * DEC_DIGITS / 2 + 1;\n}\n\nOutput for x86-64 gcc (trunk):\n\nsweight:\n lea eax, [rdi+1+rdi]\n ret\n\nSo, the extra if/else shouldn't cause any overhead, when DEC_DIGITS == 4.\n\nNot sure how/if it can be mathematically proven why it's more correct\nwhen DEC_DIGITS != 4, i.e., when it makes a difference.\nI derived it based on the insight that the square root weight\nshould naturally be half the arg weight, and that the integer divison\nmeans we must adjust for when the weight is not evenly divisable by two.\n\nBut it seems possible to gain some confidence about the correctness of the\nimprovement by experimentally testing a wide range of negative/positive\narg.weight.\n\nI wrote the attached script, test_sweight_formulas.sh, for that purpose.\nIt compares the number of sig. 
digits in the result for \n\n sqrt(trim_scale(2::numeric*10::numeric^exp))\n\nwhere exp is -40..40, that is, for args between\n 0.0000000000000000000000000000000000000002\n 0.000000000000000000000000000000000000002\n...\n 2000000000000000000000000000000000000000\n 20000000000000000000000000000000000000000\n\nThe last column shows the diff in number of sig. figs between HEAD and fix-sweight-v2.\n\nAs expected, there is no difference for DEC_DIGITS == 4.\nBut note the differences for DEC_DIGITS == 2 and DEC_DIGITS == 1 further down.\n\n--\n-- Comparison of sig_figs(sqrt(n)) for HEAD vs fix-sweight-v2\n-- when DEC_DIGITS == 4\n--\n DEC_DIGITS | n | HEAD | fix-sweight-v2 | diff\n------------+--------------+------+----------------+------\n 4 | 2e-40 | 21 | 21 | 0\n 4 | 2e-39 | 20 | 20 | 0\n 4 | 2e-38 | 20 | 20 | 0\n 4 | 2e-37 | 19 | 19 | 0\n 4 | 2e-36 | 19 | 19 | 0\n 4 | 2e-35 | 18 | 18 | 0\n 4 | 2e-34 | 18 | 18 | 0\n 4 | 2e-33 | 17 | 17 | 0\n 4 | 2e-32 | 17 | 17 | 0\n 4 | 2e-31 | 16 | 16 | 0\n 4 | 2e-30 | 17 | 17 | 0\n 4 | 2e-29 | 17 | 17 | 0\n 4 | 2e-28 | 16 | 16 | 0\n 4 | 2e-27 | 16 | 16 | 0\n 4 | 2e-26 | 17 | 17 | 0\n 4 | 2e-25 | 17 | 17 | 0\n 4 | 2e-24 | 16 | 16 | 0\n 4 | 2e-23 | 16 | 16 | 0\n 4 | 2e-22 | 17 | 17 | 0\n 4 | 2e-21 | 17 | 17 | 0\n 4 | 2e-20 | 16 | 16 | 0\n 4 | 2e-19 | 16 | 16 | 0\n 4 | 2e-18 | 17 | 17 | 0\n 4 | 2e-17 | 17 | 17 | 0\n 4 | 2e-16 | 16 | 16 | 0\n 4 | 2e-15 | 16 | 16 | 0\n 4 | 2e-14 | 17 | 17 | 0\n 4 | 2e-13 | 17 | 17 | 0\n 4 | 2e-12 | 16 | 16 | 0\n 4 | 2e-11 | 16 | 16 | 0\n 4 | 0.0000000002 | 17 | 17 | 0\n 4 | 0.000000002 | 17 | 17 | 0\n 4 | 0.00000002 | 16 | 16 | 0\n 4 | 0.0000002 | 16 | 16 | 0\n 4 | 0.000002 | 17 | 17 | 0\n 4 | 0.00002 | 17 | 17 | 0\n 4 | 0.0002 | 16 | 16 | 0\n 4 | 0.002 | 16 | 16 | 0\n 4 | 0.02 | 17 | 17 | 0\n 4 | 0.2 | 17 | 17 | 0\n 4 | 2 | 16 | 16 | 0\n 4 | 20 | 16 | 16 | 0\n 4 | 200 | 17 | 17 | 0\n 4 | 2000 | 17 | 17 | 0\n 4 | 20000 | 16 | 16 | 0\n 4 | 200000 | 16 | 16 | 0\n 4 | 2000000 | 17 | 17 | 0\n 4 | 20000000 | 
17 | 17 | 0\n 4 | 200000000 | 16 | 16 | 0\n 4 | 2000000000 | 16 | 16 | 0\n 4 | 20000000000 | 17 | 17 | 0\n 4 | 2e+11 | 17 | 17 | 0\n 4 | 2e+12 | 16 | 16 | 0\n 4 | 2e+13 | 16 | 16 | 0\n 4 | 2e+14 | 17 | 17 | 0\n 4 | 2e+15 | 17 | 17 | 0\n 4 | 2e+16 | 16 | 16 | 0\n 4 | 2e+17 | 16 | 16 | 0\n 4 | 2e+18 | 17 | 17 | 0\n 4 | 2e+19 | 17 | 17 | 0\n 4 | 2e+20 | 16 | 16 | 0\n 4 | 2e+21 | 16 | 16 | 0\n 4 | 2e+22 | 17 | 17 | 0\n 4 | 2e+23 | 17 | 17 | 0\n 4 | 2e+24 | 16 | 16 | 0\n 4 | 2e+25 | 16 | 16 | 0\n 4 | 2e+26 | 17 | 17 | 0\n 4 | 2e+27 | 17 | 17 | 0\n 4 | 2e+28 | 16 | 16 | 0\n 4 | 2e+29 | 16 | 16 | 0\n 4 | 2e+30 | 17 | 17 | 0\n 4 | 2e+31 | 17 | 17 | 0\n 4 | 2e+32 | 17 | 17 | 0\n 4 | 2e+33 | 17 | 17 | 0\n 4 | 2e+34 | 18 | 18 | 0\n 4 | 2e+35 | 18 | 18 | 0\n 4 | 2e+36 | 19 | 19 | 0\n 4 | 2e+37 | 19 | 19 | 0\n 4 | 2e+38 | 20 | 20 | 0\n 4 | 2e+39 | 20 | 20 | 0\n 4 | 2e+40 | 21 | 21 | 0\n(81 rows)\n\n--\n-- Comparison of sig_figs(sqrt(n)) for HEAD vs fix-sweight-v2\n-- when DEC_DIGITS == 2\n--\n DEC_DIGITS | n | HEAD | fix-sweight-v2 | diff\n------------+--------------+------+----------------+------\n 2 | 2e-40 | 21 | 21 | 0\n 2 | 2e-39 | 20 | 20 | 0\n 2 | 2e-38 | 20 | 20 | 0\n 2 | 2e-37 | 19 | 19 | 0\n 2 | 2e-36 | 19 | 19 | 0\n 2 | 2e-35 | 18 | 18 | 0\n 2 | 2e-34 | 18 | 18 | 0\n 2 | 2e-33 | 17 | 17 | 0\n 2 | 2e-32 | 17 | 17 | 0\n 2 | 2e-31 | 17 | 17 | 0\n 2 | 2e-30 | 17 | 17 | 0\n 2 | 2e-29 | 17 | 17 | 0\n 2 | 2e-28 | 17 | 17 | 0\n 2 | 2e-27 | 17 | 17 | 0\n 2 | 2e-26 | 17 | 17 | 0\n 2 | 2e-25 | 17 | 17 | 0\n 2 | 2e-24 | 17 | 17 | 0\n 2 | 2e-23 | 17 | 17 | 0\n 2 | 2e-22 | 17 | 17 | 0\n 2 | 2e-21 | 17 | 17 | 0\n 2 | 2e-20 | 17 | 17 | 0\n 2 | 2e-19 | 17 | 17 | 0\n 2 | 2e-18 | 17 | 17 | 0\n 2 | 2e-17 | 17 | 17 | 0\n 2 | 2e-16 | 17 | 17 | 0\n 2 | 2e-15 | 17 | 17 | 0\n 2 | 2e-14 | 17 | 17 | 0\n 2 | 2e-13 | 17 | 17 | 0\n 2 | 2e-12 | 17 | 17 | 0\n 2 | 2e-11 | 17 | 17 | 0\n 2 | 0.0000000002 | 17 | 17 | 0\n 2 | 0.000000002 | 17 | 17 | 0\n 2 | 0.00000002 | 17 | 17 | 0\n 2 | 0.0000002 | 17 
| 17 | 0\n 2 | 0.000002 | 17 | 17 | 0\n 2 | 0.00002 | 17 | 17 | 0\n 2 | 0.0002 | 17 | 17 | 0\n 2 | 0.002 | 17 | 17 | 0\n 2 | 0.02 | 17 | 17 | 0\n 2 | 0.2 | 17 | 17 | 0\n 2 | 2 | 17 | 16 | -1\n 2 | 20 | 17 | 16 | -1\n 2 | 200 | 17 | 16 | -1\n 2 | 2000 | 17 | 16 | -1\n 2 | 20000 | 17 | 16 | -1\n 2 | 200000 | 17 | 16 | -1\n 2 | 2000000 | 17 | 16 | -1\n 2 | 20000000 | 17 | 16 | -1\n 2 | 200000000 | 17 | 16 | -1\n 2 | 2000000000 | 17 | 16 | -1\n 2 | 20000000000 | 17 | 16 | -1\n 2 | 2e+11 | 17 | 16 | -1\n 2 | 2e+12 | 17 | 16 | -1\n 2 | 2e+13 | 17 | 16 | -1\n 2 | 2e+14 | 17 | 16 | -1\n 2 | 2e+15 | 17 | 16 | -1\n 2 | 2e+16 | 17 | 16 | -1\n 2 | 2e+17 | 17 | 16 | -1\n 2 | 2e+18 | 17 | 16 | -1\n 2 | 2e+19 | 17 | 16 | -1\n 2 | 2e+20 | 17 | 16 | -1\n 2 | 2e+21 | 17 | 16 | -1\n 2 | 2e+22 | 17 | 16 | -1\n 2 | 2e+23 | 17 | 16 | -1\n 2 | 2e+24 | 17 | 16 | -1\n 2 | 2e+25 | 17 | 16 | -1\n 2 | 2e+26 | 17 | 16 | -1\n 2 | 2e+27 | 17 | 16 | -1\n 2 | 2e+28 | 17 | 16 | -1\n 2 | 2e+29 | 17 | 16 | -1\n 2 | 2e+30 | 17 | 16 | -1\n 2 | 2e+31 | 17 | 16 | -1\n 2 | 2e+32 | 17 | 17 | 0\n 2 | 2e+33 | 17 | 17 | 0\n 2 | 2e+34 | 18 | 18 | 0\n 2 | 2e+35 | 18 | 18 | 0\n 2 | 2e+36 | 19 | 19 | 0\n 2 | 2e+37 | 19 | 19 | 0\n 2 | 2e+38 | 20 | 20 | 0\n 2 | 2e+39 | 20 | 20 | 0\n 2 | 2e+40 | 21 | 21 | 0\n(81 rows)\n\n--\n-- Comparison of sig_figs(sqrt(n)) for HEAD vs fix-sweight-v2\n-- when DEC_DIGITS == 1\n--\n DEC_DIGITS | n | HEAD | fix-sweight-v2 | diff\n------------+--------------+------+----------------+------\n 1 | 2e-40 | 21 | 21 | 0\n 1 | 2e-39 | 20 | 20 | 0\n 1 | 2e-38 | 20 | 20 | 0\n 1 | 2e-37 | 19 | 19 | 0\n 1 | 2e-36 | 19 | 19 | 0\n 1 | 2e-35 | 18 | 18 | 0\n 1 | 2e-34 | 18 | 18 | 0\n 1 | 2e-33 | 17 | 17 | 0\n 1 | 2e-32 | 17 | 17 | 0\n 1 | 2e-31 | 17 | 17 | 0\n 1 | 2e-30 | 17 | 17 | 0\n 1 | 2e-29 | 17 | 17 | 0\n 1 | 2e-28 | 17 | 17 | 0\n 1 | 2e-27 | 17 | 17 | 0\n 1 | 2e-26 | 17 | 17 | 0\n 1 | 2e-25 | 17 | 17 | 0\n 1 | 2e-24 | 17 | 17 | 0\n 1 | 2e-23 | 17 | 17 | 0\n 1 | 2e-22 | 17 | 17 | 0\n 1 | 2e-21 
| 17 | 17 | 0\n 1 | 2e-20 | 17 | 17 | 0\n 1 | 2e-19 | 17 | 17 | 0\n 1 | 2e-18 | 17 | 17 | 0\n 1 | 2e-17 | 17 | 17 | 0\n 1 | 2e-16 | 17 | 17 | 0\n 1 | 2e-15 | 17 | 17 | 0\n 1 | 2e-14 | 17 | 17 | 0\n 1 | 2e-13 | 17 | 17 | 0\n 1 | 2e-12 | 17 | 17 | 0\n 1 | 2e-11 | 17 | 17 | 0\n 1 | 0.0000000002 | 17 | 17 | 0\n 1 | 0.000000002 | 17 | 17 | 0\n 1 | 0.00000002 | 17 | 17 | 0\n 1 | 0.0000002 | 17 | 17 | 0\n 1 | 0.000002 | 17 | 17 | 0\n 1 | 0.00002 | 17 | 17 | 0\n 1 | 0.0002 | 17 | 17 | 0\n 1 | 0.002 | 17 | 17 | 0\n 1 | 0.02 | 17 | 17 | 0\n 1 | 0.2 | 17 | 17 | 0\n 1 | 2 | 18 | 16 | -2\n 1 | 20 | 17 | 16 | -1\n 1 | 200 | 18 | 16 | -2\n 1 | 2000 | 17 | 16 | -1\n 1 | 20000 | 18 | 16 | -2\n 1 | 200000 | 17 | 16 | -1\n 1 | 2000000 | 18 | 16 | -2\n 1 | 20000000 | 17 | 16 | -1\n 1 | 200000000 | 18 | 16 | -2\n 1 | 2000000000 | 17 | 16 | -1\n 1 | 20000000000 | 18 | 16 | -2\n 1 | 2e+11 | 17 | 16 | -1\n 1 | 2e+12 | 18 | 16 | -2\n 1 | 2e+13 | 17 | 16 | -1\n 1 | 2e+14 | 18 | 16 | -2\n 1 | 2e+15 | 17 | 16 | -1\n 1 | 2e+16 | 18 | 16 | -2\n 1 | 2e+17 | 17 | 16 | -1\n 1 | 2e+18 | 18 | 16 | -2\n 1 | 2e+19 | 17 | 16 | -1\n 1 | 2e+20 | 18 | 16 | -2\n 1 | 2e+21 | 17 | 16 | -1\n 1 | 2e+22 | 18 | 16 | -2\n 1 | 2e+23 | 17 | 16 | -1\n 1 | 2e+24 | 18 | 16 | -2\n 1 | 2e+25 | 17 | 16 | -1\n 1 | 2e+26 | 18 | 16 | -2\n 1 | 2e+27 | 17 | 16 | -1\n 1 | 2e+28 | 18 | 16 | -2\n 1 | 2e+29 | 17 | 16 | -1\n 1 | 2e+30 | 18 | 16 | -2\n 1 | 2e+31 | 17 | 16 | -1\n 1 | 2e+32 | 18 | 17 | -1\n 1 | 2e+33 | 17 | 17 | 0\n 1 | 2e+34 | 18 | 18 | 0\n 1 | 2e+35 | 18 | 18 | 0\n 1 | 2e+36 | 19 | 19 | 0\n 1 | 2e+37 | 19 | 19 | 0\n 1 | 2e+38 | 20 | 20 | 0\n 1 | 2e+39 | 20 | 20 | 0\n 1 | 2e+40 | 21 | 21 | 0\n(81 rows)\n\n> You lost me here. In unpatched HEAD, sqrt(102::numeric) produces\n> 10.099504938362078, not 10.09950493836208 (with DEC_DIGITS = 4). 
And\n> wasn't your previous point that when DEC_DIGITS = 4, the new formula\n> is the same as the old one?\n\nMy apologies, I failed to correctly copy/paste the output from my terminal,\ninto the right DEC_DIGITS example, which made it look like\nthe output changed for DEC_DIGITS == 4 even though it didn't.\nNote to myself to always write a test script to ensure results are reproducible,\nwhich they now are thanks to the new test_sweight_formulas.sh script.\n\n>> In passing, also add pow10[] values for DEC_DIGITS==2 and DEC_DIGITS==1,\n>> since otherwise it's not possible to compile such DEC_DIGITS values\n>> due to the assert:\n>>\n>> StaticAssertDecl(lengthof(pow10) == DEC_DIGITS, \"mismatch with DEC_DIGITS\");\n>>\n>\n> That might be worth doing, to ensure that the code still compiles for\n> other DEC_DIGITS/NBASE values. I'm not sure how useful that really is\n> anymore though. As the comment at the top says, it's kept mostly for\n> historical reasons.\n\nAttached patch fix-pow10-assert.patch\n\n/Joel", "msg_date": "Tue, 31 Jan 2023 08:59:22 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Fix old thinko in formula to compute sweight in\n numeric_sqrt()." }, { "msg_contents": "On Tue, 31 Jan 2023 at 08:00, Joel Jacobson <joel@compiler.org> wrote:\n>\n> I think this is what we want:\n>\n> if (arg.weight < 0)\n> sweight = (arg.weight + 1) * DEC_DIGITS / 2 - 1;\n> else\n> sweight = arg.weight * DEC_DIGITS / 2 + 1;\n>\n\nThat's still not right. 
If you want a proper mathematically justified\nformula, it's fairly easy to derive.\n\nLet \"n\" be the decimal weight of the input, taken to be the number of\ndecimal digits before the decimal point (or minus the number of zeros\nafter the decimal point, for inputs with no digits before the decimal\npoint).\n\nSimilarly, let \"sweight\" be the decimal weight of the square root.\nThen the relationship between sweight and n can be seen from a few\nsimple examples (to 4 significant digits):\n\nn arg sqrt(arg) sweight\n-3 0.0001 .. 0.0009999 0.01 .. 0.03162 -1\n-2 0.001 .. 0.009999 0.03162 .. 0.09999 -1\n-1 0.01 .. 0.09999 0.1 .. 0.3162 0\n0 0.1 .. 0.9999 0.3162 .. 0.9999 0\n1 1 .. 9.999 1 .. 3.162 1\n2 10 .. 99.99 3.16 .. 9.999 1\n3 100 .. 999.9 10 .. 31.62 2\n4 1000 ... 9999 31.62 .. 99.99 2\n\nand the general formula is:\n\n sweight = floor((n+1) / 2)\n\nIn our case, since the base is NBASE, not 10, and since we only\nrequire an approximation, we don't take the trouble to compute n\nexactly, we just use the fact that it lies in the range\n\n arg.weight * DEC_DIGITS + 1 <= n <= (arg.weight + 1) * DEC_DIGITS\n\nSince we want to ensure at least a certain number of significant\ndigits in the result, we're only interested in the lower bound.\nPlugging that into the formula above gives:\n\n sweight >= floor(arg.weight * DEC_DIGITS / 2 + 1)\n\nor equivalently, in code with truncated integer division:\n\n if (arg.weight >= 0)\n sweight = arg.weight * DEC_DIGITS / 2 + 1;\n else\n sweight = 1 - (1 - arg.weight * DEC_DIGITS) / 2;\n\nThis is not the same as your formula. For example, when DEC_DIGITS = 1\nand arg.weight = -1, yours gives sweight = -1 which isn't right, it\nshould be 0.\n\nWhen DEC_DIGITS = 4, this formula also reduces to sweight = 2 *\narg.weight + 1, but neither gcc nor clang is smart enough to spot that\n(clang doesn't simplify your formula either, BTW). 
So even though I\nbelieve that the above is mathematically correct, and won't change any\nresults for DEC_DIGITS = 4, I'm still hesitant to use it, because it\nwill have a (small) performance impact, and I don't believe it does\nanything to improve code readability (and certainly not without an\nexplanatory comment).\n\nWhen DEC_DIGITS = 1, it does guarantee that the result has exactly 16\nsignificant digits (or more if the input scale is larger), but that's\nonly really of theoretical interest to anyone.\n\nAs I noted above, when DEC_DIGITS > 1, this formula is only an\napproximation, since it's not using the exact input decimal weight. So\nmy inclination is to leave the code as-is. It does guarantee that the\nresult has at least 16 significant digits, which is the intention.\n\nRegards,\nDean\n\n\n", "msg_date": "Tue, 31 Jan 2023 13:40:05 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix old thinko in formula to compute sweight in\n numeric_sqrt()." }, { "msg_contents": "Hi,\n\nOn Tue, Jan 31, 2023, at 14:40, Dean Rasheed wrote:\n> That's still not right. If you want a proper mathematically justified\n> formula, it's fairly easy to derive.\n...\n> or equivalently, in code with truncated integer division:\n>\n> if (arg.weight >= 0)\n> sweight = arg.weight * DEC_DIGITS / 2 + 1;\n> else\n> sweight = 1 - (1 - arg.weight * DEC_DIGITS) / 2;\n\nBeautiful! Thank you for magnificent analysis and extraordinary good explanation, now I finally get it.\n\n> When DEC_DIGITS = 4, this formula also reduces to sweight = 2 *\n> arg.weight + 1, but neither gcc nor clang is smart enough to spot that\n> (clang doesn't simplify your formula either, BTW).\n\nOh, that's a shame. 
:-(\n\n> So even though I\n> believe that the above is mathematically correct, and won't change any\n> results for DEC_DIGITS = 4, I'm still hesitant to use it, because it\n> will have a (small) performance impact, and I don't believe it does\n> anything to improve code readability (and certainly not without an\n> explanatory comment).\n\nI also think the performance impact no matter how small isn't worth it,\nbut a comment based on your comments would be very valuable IMO.\n\nBelow is an attempt at summarising your text, and to avoid the performance impact,\nmaybe an #if so we get the general correct formula for DEC_DIGITS 1 or 2,\nand the reduced hand-optimised form for DEC_DIGITS 4?\nThat could also improve readabilty, since readers perhaps more easily would see\nthe relation between sweight and arg.weight, for the only DEC_DIGITS case we care about.\n\nSuggestion:\n\n\t/*\n\t * Here we approximate the decimal weight of the square root (sweight),\n\t * given the NBASE-weight (arg.weight) of the input argument.\n\t *\n\t * The lower bound of the decimal weight of the input argument is used to\n\t * calculate the decimal weight of the square root, with integer division\n\t * being truncated.\n\t * \n\t * The general formula is:\n\t *\n\t * sweight = floor((n+1) / 2)\n\t * \n\t * In our case, since the base is NBASE, not 10, and since we only\n\t * require an approximation, we don't take the trouble to compute n\n\t * exactly, we just use the fact that it lies in the range\n\t * \n\t * arg.weight * DEC_DIGITS + 1 <= n <= (arg.weight + 1) * DEC_DIGITS\n\t *\n\t * Since we want to ensure at least a certain number of significant\n\t * digits in the result, we're only interested in the lower bound.\n\t * Plugging that into the formula above gives:\n\t * \n\t * sweight >= floor(arg.weight * DEC_DIGITS / 2 + 1)\n\t *\n\t * Which leads us to the formula below with truncated integer division.\n\t */\n#if DEC_DIGITS == 1 || DEC_DIGITS == 2\n\n\tif (arg.weight >= 
0)\n\t\tsweight = arg.weight * DEC_DIGITS / 2 + 1;\n\telse\n\t\tsweight = 1 - (1 - arg.weight * DEC_DIGITS) / 2;\n\n#elif DEC_DIGITS == 4\n\n\t/*\n\t * Neither gcc nor clang is smart enough to spot that\n\t * the formula above neatly reduces to the below\n\t * when DEC_DIGITS == 4.\n\t */\n\tsweight = 2 * arg.weight + 1;\n\n#else\n#error unsupported NBASE\n#endif\n\n/Joel\n\n\n", "msg_date": "Tue, 31 Jan 2023 16:05:28 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Fix old thinko in formula to compute sweight in\n numeric_sqrt()." }, { "msg_contents": "On Tue, 31 Jan 2023 at 15:05, Joel Jacobson <joel@compiler.org> wrote:\n>\n> I also think the performance impact no matter how small isn't worth it,\n> but a comment based on your comments would be very valuable IMO.\n>\n> Below is an attempt at summarising your text, and to avoid the performance impact,\n> maybe an #if so we get the general correct formula for DEC_DIGITS 1 or 2,\n> and the reduced hand-optimised form for DEC_DIGITS 4?\n> That could also improve readabilty, since readers perhaps more easily would see\n> the relation between sweight and arg.weight, for the only DEC_DIGITS case we care about.\n>\n\nThat seems a bit wordy, given the context of this comment. I think\nit's sufficient to just give the formula, and note that it simplifies\nwhen DEC_DIGITS is even (not just 4):\n\n /*\n * Assume the input was normalized, so arg.weight is accurate. The result\n * then has at least sweight = floor(arg.weight * DEC_DIGITS / 2 + 1)\n * digits before the decimal point. 
When DEC_DIGITS is even, we can save\n * a few cycles, since the division is exact and there is no need to\n * round down.\n */\n#if DEC_DIGITS == ((DEC_DIGITS / 2) * 2)\n sweight = arg.weight * DEC_DIGITS / 2 + 1;\n#else\n if (arg.weight >= 0)\n sweight = arg.weight * DEC_DIGITS / 2 + 1;\n else\n sweight = 1 - (1 - arg.weight * DEC_DIGITS) / 2;\n#endif\n\nRegards,\nDean\n\n\n", "msg_date": "Tue, 31 Jan 2023 19:25:37 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix old thinko in formula to compute sweight in\n numeric_sqrt()." }, { "msg_contents": "On Tue, Jan 31, 2023, at 20:25, Dean Rasheed wrote:\n> That seems a bit wordy, given the context of this comment. I think\n> it's sufficient to just give the formula, and note that it simplifies\n> when DEC_DIGITS is even (not just 4):\n>\n> /*\n> * Assume the input was normalized, so arg.weight is accurate. The result\n> * then has at least sweight = floor(arg.weight * DEC_DIGITS / 2 + 1)\n> * digits before the decimal point. When DEC_DIGITS is even, we can save\n> * a few cycles, since the division is exact and there is no need to\n> * round down.\n> */\n> #if DEC_DIGITS == ((DEC_DIGITS / 2) * 2)\n> sweight = arg.weight * DEC_DIGITS / 2 + 1;\n> #else\n> if (arg.weight >= 0)\n> sweight = arg.weight * DEC_DIGITS / 2 + 1;\n> else\n> sweight = 1 - (1 - arg.weight * DEC_DIGITS) / 2;\n> #endif\n\nNice, you managed to simplify it even further.\nI think the comment and the code now are crystal clear together.\n\nI've tested it successfully, test report attached. In summary:\nDEC_DIGITS=1 now produce 16 sig. figs. in the range sqrt(2e-31) .. sqrt(2e+32), which before had a mix of 17 and 18 sig. figs in the result.\nDEC_DIGTIS=2 now produce 16 sig. figs. in the range sqrt(2e-31) .. sqrt(2e+31), which before always had 17 sig. 
figs in the result.\nDEC_DIGITS=4 is unchanged.\n\nExact tested patch attached, code copy/pasted verbatim from your email.\n\nTest", "msg_date": "Tue, 31 Jan 2023 22:59:10 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Fix old thinko in formula to compute sweight in\n numeric_sqrt()." }, { "msg_contents": "On Tue, 31 Jan 2023 at 21:59, Joel Jacobson <joel@compiler.org> wrote:\n>\n> Nice, you managed to simplify it even further.\n> I think the comment and the code now are crystal clear together.\n>\n> I've tested it successfully, test report attached.\n>\n\nCool. Thanks for testing.\nCommitted.\n\nRegards,\nDean\n\n\n", "msg_date": "Thu, 2 Feb 2023 09:49:33 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix old thinko in formula to compute sweight in\n numeric_sqrt()." } ]
[ { "msg_contents": "I spent a little time investigating bug #17759 [1] in more detail.\nInitially, I thought that it had been fixed by 3f7836ff65, but it\nturns out that's not the case.\n\n[1] https://www.postgresql.org/message-id/17759-e76d9bece1b5421c%40postgresql.org\n\nThe immediate cause of the bug was that, before 3f7836ff65, the set of\ngenerated columns to be updated depended on extraUpdatedCols from the\ntarget RTE, and for MERGE, this was not being populated. 3f7836ff65\nappeared to fix that (it fixes the test case in the bug report) by no\nlonger relying on rte->extraUpdatedCols, but unfortunately there's a\nlittle more to it than that.\n\nSince 3f7836ff65, ExecInitModifyTable() calls\nExecInitStoredGenerated() if the command is an INSERT or an UPDATE,\nbut not if it's a MERGE. This means that the generated column info\ndoesn't get built until later (when a merge action actually executes\nfor the first time). If the first merge action to execute is an\nUPDATE, and no updated columns require generated columns to be\nrecomputed, then ExecInitStoredGenerated() will skip those generated\ncolumns and not generate ri_GeneratedExprs / ri_extraUpdatedCols info\nfor them. That's a problem, however, since the MERGE might also\ncontain an INSERT that gets executed later, for which it isn't safe to\nskip any of the generated columns. Here's a simple reproducer:\n\nCREATE TABLE t (\n id int PRIMARY key,\n val int,\n str text,\n upper_str text GENERATED ALWAYS AS (upper(str)) STORED\n);\n\nINSERT INTO t VALUES (1, 10, 'orig');\n\nMERGE INTO t\n USING (VALUES (1, 100), (2, 200)) v(id, val) ON t.id = v.id\n WHEN MATCHED THEN UPDATE SET val = v.val\n WHEN NOT MATCHED THEN INSERT VALUES (v.id, v.val, 'new');\n\nSELECT * FROM t;\n\n id | val | str | upper_str\n----+-----+------+-----------\n 1 | 100 | orig | ORIG\n 2 | 200 | new |\n(2 rows)\n\n\nSo we need to ensure that ExecInitModifyTable() calls\nExecInitStoredGenerated() for MERGE. 
Passing CMD_MERGE to\nExecInitStoredGenerated() is good enough, since anything other than\nCMD_UPDATE causes it to not skip any generated columns. That could be\nimproved by examining the merge action list (it would be OK to skip\ngenerated columns as long as the MERGE didn't contain an INSERT\naction), but I don't think it's worth the extra effort / risk.\n\nSo I think we need the attached in HEAD and v15.\n\nRegards,\nDean", "msg_date": "Sun, 29 Jan 2023 09:57:39 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": true, "msg_subject": "Bug #17759: GENERATED columns not computed during MERGE" }, { "msg_contents": "Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> I spent a little time investigating bug #17759 [1] in more detail.\n> Initially, I thought that it had been fixed by 3f7836ff65, but it\n> turns out that's not the case.\n\nThanks for looking closer! I had felt a little unsure about that\ntoo, but hadn't gotten to poking into it.\n\n> So I think we need the attached in HEAD and v15.\n\nLooks good to me.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 29 Jan 2023 11:25:35 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bug #17759: GENERATED columns not computed during MERGE" } ]
[ { "msg_contents": "One open issue (IMO) with the meson build system is that it installs the \ntest modules under src/test/modules/ as part of a normal installation. \nThis is because there is no way to set up up the build system to install \nextra things only when told. I think we still need a way to disable \nthis somehow, so that building a production installation doesn't end up \nwith a bunch of extra files.\n\nThe attached simple patch is a starting point for discussion. It just \ndisables the subdirectory src/test/modules/ based on some Boolean \nsetting. This could be some new top-level option, or maybe part of \nPG_TEST_EXTRA, or something else? With this, I get an identical set of \ninstalled files from meson. I imagine this option would be false by \ndefault and developers would enable it.\n\nThoughts?", "msg_date": "Mon, 30 Jan 2023 08:37:42 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "meson: Optionally disable installation of test modules" }, { "msg_contents": "Hi,\n\nOn 2023-01-30 08:37:42 +0100, Peter Eisentraut wrote:\n> One open issue (IMO) with the meson build system is that it installs the\n> test modules under src/test/modules/ as part of a normal installation. This\n> is because there is no way to set up up the build system to install extra\n> things only when told. I think we still need a way to disable this somehow,\n> so that building a production installation doesn't end up with a bunch of\n> extra files.\n> \n> The attached simple patch is a starting point for discussion. It just\n> disables the subdirectory src/test/modules/ based on some Boolean setting.\n> This could be some new top-level option, or maybe part of PG_TEST_EXTRA, or\n> something else? With this, I get an identical set of installed files from\n> meson. 
I imagine this option would be false by default and developers would\n> enable it.\n\nBilal, with a bit of help by me, worked on an alternative approach to\nthis. It's a lot more verbose in the initial change, but wouldn't increase the\namount of work/lines for new test modules. The main advantage is that we\nwouldn't have disable the modules by default, which I think would be quite\nlikely to result in plenty people not running the tests.\n\nSending a link instead of attaching, in case you already registered a cfbot entry:\nhttps://github.com/anarazel/postgres/commit/d1d192a860da39af9aa63d7edf643eed0eeee7c4\n\nProbably worth adding an install-test-modules target for manual use.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 30 Jan 2023 09:42:14 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: meson: Optionally disable installation of test modules" }, { "msg_contents": "On 30.01.23 18:42, Andres Freund wrote:\n> On 2023-01-30 08:37:42 +0100, Peter Eisentraut wrote:\n>> One open issue (IMO) with the meson build system is that it installs the\n>> test modules under src/test/modules/ as part of a normal installation. This\n>> is because there is no way to set up up the build system to install extra\n>> things only when told. I think we still need a way to disable this somehow,\n>> so that building a production installation doesn't end up with a bunch of\n>> extra files.\n>>\n>> The attached simple patch is a starting point for discussion. It just\n>> disables the subdirectory src/test/modules/ based on some Boolean setting.\n>> This could be some new top-level option, or maybe part of PG_TEST_EXTRA, or\n>> something else? With this, I get an identical set of installed files from\n>> meson. I imagine this option would be false by default and developers would\n>> enable it.\n> \n> Bilal, with a bit of help by me, worked on an alternative approach to\n> this. 
It's a lot more verbose in the initial change, but wouldn't increase the\n> amount of work/lines for new test modules. The main advantage is that we\n> wouldn't have disable the modules by default, which I think would be quite\n> likely to result in plenty people not running the tests.\n> \n> Sending a link instead of attaching, in case you already registered a cfbot entry:\n> https://github.com/anarazel/postgres/commit/d1d192a860da39af9aa63d7edf643eed0eeee7c4\n> \n> Probably worth adding an install-test-modules target for manual use.\n\nLooks like a good idea. I'm happy to proceed along that line.\n\n\n\n", "msg_date": "Tue, 31 Jan 2023 09:44:03 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: meson: Optionally disable installation of test modules" }, { "msg_contents": "Hi,\n\nOn 1/31/23 11:44, Peter Eisentraut wrote:\n> On 30.01.23 18:42, Andres Freund wrote:\n>> Bilal, with a bit of help by me, worked on an alternative approach to\n>> this. It's a lot more verbose in the initial change, but wouldn't \n>> increase the\n>> amount of work/lines for new test modules. The main advantage is that we\n>> wouldn't have disable the modules by default, which I think would be \n>> quite\n>> likely to result in plenty people not running the tests.\n>>\n>> Sending a link instead of attaching, in case you already registered a \n>> cfbot entry:\n>> https://github.com/anarazel/postgres/commit/d1d192a860da39af9aa63d7edf643eed0eeee7c4 \n>>\n>>\n>> Probably worth adding an install-test-modules target for manual use.\n>\n> Looks like a good idea.  I'm happy to proceed along that line.\n\nI am adding a patch of an alternative approach. 
Also, I updated the \ncommit at the link by adding regress_module, autoinc_regress and \nrefint_regress to the test_install_libs.\n\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft", "msg_date": "Wed, 1 Feb 2023 15:41:21 +0300", "msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>", "msg_from_op": false, "msg_subject": "Re: meson: Optionally disable installation of test modules" }, { "msg_contents": "On 01.02.23 13:41, Nazir Bilal Yavuz wrote:\n> On 1/31/23 11:44, Peter Eisentraut wrote:\n>> On 30.01.23 18:42, Andres Freund wrote:\n>>> Bilal, with a bit of help by me, worked on an alternative approach to\n>>> this. It's a lot more verbose in the initial change, but wouldn't \n>>> increase the\n>>> amount of work/lines for new test modules. The main advantage is that we\n>>> wouldn't have disable the modules by default, which I think would be \n>>> quite\n>>> likely to result in plenty people not running the tests.\n>>>\n>>> Sending a link instead of attaching, in case you already registered a \n>>> cfbot entry:\n>>> https://github.com/anarazel/postgres/commit/d1d192a860da39af9aa63d7edf643eed0eeee7c4\n>>>\n>>> Probably worth adding an install-test-modules target for manual use.\n>>\n>> Looks like a good idea.  I'm happy to proceed along that line.\n> \n> I am adding a patch of an alternative approach. Also, I updated the \n> commit at the link by adding regress_module, autoinc_regress and \n> refint_regress to the test_install_libs.\n\nIf you feel that your patch is ready, please add it to the commit fest. \nI look forward to reviewing it.\n\n\n\n", "msg_date": "Wed, 8 Feb 2023 11:30:10 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: meson: Optionally disable installation of test modules" }, { "msg_contents": "Hi,\n\n\nOn 2/8/23 13:30, Peter Eisentraut wrote:\n>\n> If you feel that your patch is ready, please add it to the commit \n> fest. I look forward to reviewing it.\n\n\nThanks! 
Commit fest entry link: https://commitfest.postgresql.org/42/4173/\n\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n\n", "msg_date": "Thu, 9 Feb 2023 18:30:20 +0300", "msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>", "msg_from_op": false, "msg_subject": "Re: meson: Optionally disable installation of test modules" }, { "msg_contents": "On 09.02.23 16:30, Nazir Bilal Yavuz wrote:\n> On 2/8/23 13:30, Peter Eisentraut wrote:\n>>\n>> If you feel that your patch is ready, please add it to the commit \n>> fest. I look forward to reviewing it.\n> \n> \n> Thanks! Commit fest entry link: https://commitfest.postgresql.org/42/4173/\n\nI tested this a bit. It works fine. The approach makes sense to me.\n\nThe install_additional_files script could be simplified a bit. You \ncould use os.makedirs(dest, exist_ok=True) and avoid the error checking. \n I don't think any callers try to copy a directory source, so the \nshutil.copytree() stuff isn't necessary. Run pycodestyle over the \nscript. And let's put the script into src/tools/ like the other support \nscripts.\n\n\n\n", "msg_date": "Mon, 20 Feb 2023 19:43:56 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: meson: Optionally disable installation of test modules" }, { "msg_contents": "On 2023-02-20 19:43:56 +0100, Peter Eisentraut wrote:\n> I don't think any callers try to copy a directory source, so the\n> shutil.copytree() stuff isn't necessary.\n\nI'd like to use it for installing docs outside of the normal install\ntarget. 
Of course we could add the ability at a later point, but that seems a\nbit pointless back-forth to me.\n\n\n", "msg_date": "Mon, 20 Feb 2023 11:48:20 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: meson: Optionally disable installation of test modules" }, { "msg_contents": "On 20.02.23 20:48, Andres Freund wrote:\n> On 2023-02-20 19:43:56 +0100, Peter Eisentraut wrote:\n>> I don't think any callers try to copy a directory source, so the\n>> shutil.copytree() stuff isn't necessary.\n> \n> I'd like to use it for installing docs outside of the normal install\n> target. Of course we could add the ability at a later point, but that seems a\n> bit pointless back-forth to me.\n\nI figured it could be useful as a general installation tool, but the \ncurrent script has specific command-line options for this specific \npurpose, so I don't think it would work for your purpose anyway.\n\nFor the purpose here, we really just need something that does\n\n for src in sys.argv[1:-1]:\n shutil.copy2(src, sys.argv[-1])\n\nBut we need to call it twice for different sets of files and \ndestinations, and since we can't have more than one command per test, we \neither need to write two \"tests\" or write a wrapper script like the one \nwe have here.\n\nI don't know what the best way to slice this is, but it's not a lot of \ncode that we couldn't move around again in the future.\n\n\n\n", "msg_date": "Wed, 22 Feb 2023 10:09:10 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: meson: Optionally disable installation of test modules" }, { "msg_contents": "Hi,\n\nThanks for the review.\n\nOn Mon, 20 Feb 2023 at 21:44, Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> I tested this a bit. It works fine. The approach makes sense to me.\n>\n> The install_additional_files script could be simplified a bit. 
You\n> could use os.makedirs(dest, exist_ok=True) and avoid the error checking.\n> I don't think any callers try to copy a directory source, so the\n> shutil.copytree() stuff isn't necessary. Run pycodestyle over the\n> script. And let's put the script into src/tools/ like the other support\n> scripts.\n>\n\nI updated the patch in line with your comments.\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft", "msg_date": "Thu, 23 Feb 2023 21:06:26 +0300", "msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>", "msg_from_op": false, "msg_subject": "Re: meson: Optionally disable installation of test modules" }, { "msg_contents": "Hi,\n\nOn 2023-02-22 10:09:10 +0100, Peter Eisentraut wrote:\n> On 20.02.23 20:48, Andres Freund wrote:\n> > On 2023-02-20 19:43:56 +0100, Peter Eisentraut wrote:\n> > > I don't think any callers try to copy a directory source, so the\n> > > shutil.copytree() stuff isn't necessary.\n> > \n> > I'd like to use it for installing docs outside of the normal install\n> > target. Of course we could add the ability at a later point, but that seems a\n> > bit pointless back-forth to me.\n> \n> I figured it could be useful as a general installation tool, but the current\n> script has specific command-line options for this specific purpose, so I\n> don't think it would work for your purpose anyway.\n> \n> For the purpose here, we really just need something that does\n> \n> for src in sys.argv[1:-1]:\n> shutil.copy2(src, sys.argv[-1])\n> \n> But we need to call it twice for different sets of files and destinations,\n> and since we can't have more than one command per test, we either need to\n> write two \"tests\" or write a wrapper script like the one we have here.\n\nHow about making the arguments\n --install target-path list of files or directories\n --install another-path another set of files\n\n\n> I don't know what the best way to slice this is, but it's not a lot of code\n> that we couldn't move around again in the future.\n\nThat's true. 
The main work here is going through all the test modules, and\nthat won't be affected by changing the argument syntax.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 25 Feb 2023 10:36:12 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: meson: Optionally disable installation of test modules" }, { "msg_contents": "On 23.02.23 19:06, Nazir Bilal Yavuz wrote:\n> Hi,\n> \n> Thanks for the review.\n> \n> On Mon, 20 Feb 2023 at 21:44, Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n>>\n>> I tested this a bit. It works fine. The approach makes sense to me.\n>>\n>> The install_additional_files script could be simplified a bit. You\n>> could use os.makedirs(dest, exist_ok=True) and avoid the error checking.\n>> I don't think any callers try to copy a directory source, so the\n>> shutil.copytree() stuff isn't necessary. Run pycodestyle over the\n>> script. And let's put the script into src/tools/ like the other support\n>> scripts.\n>>\n> \n> I updated the patch in line with your comments.\n\nLooks good to me. I did a small pass over it to adjust some namings. \nFor example, I renamed test_install_files to test_install_data, so it's \nconsistent with the overall meson naming:\n\n-install_data(\n+test_install_data += files(\n\nLet me know if you have any concerns about this version. Otherwise, I'm \nhappy to commit it.", "msg_date": "Wed, 1 Mar 2023 20:20:58 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: meson: Optionally disable installation of test modules" }, { "msg_contents": "Hi,\n\nOn Wed, 1 Mar 2023 at 22:21, Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> Looks good to me. 
I did a small pass over it to adjust some namings.\n> For example, I renamed test_install_files to test_install_data, so it's\n> consistent with the overall meson naming:\n>\n> -install_data(\n> +test_install_data += files(\n>\n> Let me know if you have any concerns about this version. Otherwise, I'm\n> happy to commit it.\n\nThat makes sense, thanks!\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n", "msg_date": "Thu, 2 Mar 2023 10:09:48 +0300", "msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>", "msg_from_op": false, "msg_subject": "Re: meson: Optionally disable installation of test modules" }, { "msg_contents": "On 02.03.23 08:09, Nazir Bilal Yavuz wrote:\n> On Wed, 1 Mar 2023 at 22:21, Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n>>\n>> Looks good to me. I did a small pass over it to adjust some namings.\n>> For example, I renamed test_install_files to test_install_data, so it's\n>> consistent with the overall meson naming:\n>>\n>> -install_data(\n>> +test_install_data += files(\n>>\n>> Let me know if you have any concerns about this version. Otherwise, I'm\n>> happy to commit it.\n> \n> That makes sense, thanks!\n\ncommitted\n\n\n\n", "msg_date": "Fri, 3 Mar 2023 07:47:03 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: meson: Optionally disable installation of test modules" }, { "msg_contents": "On 2023-03-03 Fr 01:47, Peter Eisentraut wrote:\n> On 02.03.23 08:09, Nazir Bilal Yavuz wrote:\n>> On Wed, 1 Mar 2023 at 22:21, Peter Eisentraut\n>> <peter.eisentraut@enterprisedb.com> wrote:\n>>>\n>>> Looks good to me.  I did a small pass over it to adjust some namings.\n>>> For example, I renamed test_install_files to test_install_data, so it's\n>>> consistent with the overall meson naming:\n>>>\n>>> -install_data(\n>>> +test_install_data += files(\n>>>\n>>> Let me know if you have any concerns about this version. 
Otherwise, I'm\n>>> happy to commit it.\n>>\n>> That makes sense, thanks!\n>\n> committed\n>\n>\n>\n\nThese changes have broken the buildfarm adaptation work in different \nways on different platforms.\n\nOn Windows (but not Linux), the install_test_files are apparently \ngetting installed under runpython in the build directory rather than in \nthe tmp_install location, so those tests fail. Meanwhile, it's not clear \nto me how to install them in a standard installation, which means that \non Linux the corresponding -running tests are failing.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com", "msg_date": "Fri, 3 Mar 2023 14:43:25 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: meson: Optionally disable installation of test modules" }, { "msg_contents": "Hi,\n\nOn Fri, 3 Mar 2023 at 22:43, Andrew Dunstan <andrew@dunslane.net> wrote:\n> These changes have broken the buildfarm adaptation work in different ways on different platforms.\n>\n> On Windows (but not Linux), the install_test_files are apparently getting installed under runpython in the build directory rather than in the tmp_install location, so those tests fail. Meanwhile, it's not clear to me how to install them in a standard installation, which means that on Linux the corresponding -running tests are failing.\n\nIs there a way to see the 'meson-logs/testlog.txt' file under the\nbuild directory?\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n", "msg_date": "Mon, 6 Mar 2023 16:47:03 +0300", "msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>", "msg_from_op": false, "msg_subject": "Re: meson: Optionally disable installation of test modules" }, { "msg_contents": "On 2023-03-06 Mo 08:47, Nazir Bilal Yavuz wrote:\n> Hi,\n>\n> On Fri, 3 Mar 2023 at 22:43, Andrew Dunstan<andrew@dunslane.net> wrote:\n>> These changes have broken the buildfarm adaptation work in different ways on different platforms.\n>>\n>> On Windows (but not Linux), the install_test_files are apparently getting installed under runpython in the build directory rather than in the tmp_install location, so those tests fail. 
[1]\n\nAfter applying Andres's patch, you need to run:\n$ meson compile install-test-files -C $pgsql\nbefore running the 'running tests'.\n\nI tested on my local and\n......\n$ meson compile install-test-files -C $pgsql\n$ meson test -C $pgsql --setup running --print-errorlogs --no-rebuild\n--logbase installcheckworld --no-suite regress-running --no-suite\nisolation-running --no-suite ecpg-running\npassed successfully.\n\n[1] https://www.postgresql.org/message-id/20230308012940.edexipb3vqylcu6r%40awork3.anarazel.de\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n", "msg_date": "Wed, 8 Mar 2023 16:49:37 +0300", "msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>", "msg_from_op": false, "msg_subject": "Re: meson: Optionally disable installation of test modules" }, { "msg_contents": "On 2023-03-08 We 08:49, Nazir Bilal Yavuz wrote:\n> Hi,\n>\n> On Mon, 6 Mar 2023 at 18:30, Andrew Dunstan<andrew@dunslane.net> wrote:\n>> There are two separate issues here, but let's deal with the Windows issue. Attached is the log output and also a listing of the runpython directory in the build directory.\n> Thanks for the logs but I couldn't understand the problem. Is there a\n> way to reproduce this?\n>\n\nProblem now apparently resolved.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-03-08 We 08:49, Nazir Bilal\n Yavuz wrote:\n\n\nHi,\n\nOn Mon, 6 Mar 2023 at 18:30, Andrew Dunstan <andrew@dunslane.net> wrote:\n\n\nThere are two separate issues here, but let's deal with the Windows issue. Attached is the log output and also a listing of the runpython directory in the build directory.\n\n\n\nThanks for the logs but I couldn't understand the problem. 
Is there a\nway to reproduce this?\n\n\n\n\n\nProblem now apparently resolved.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Wed, 8 Mar 2023 17:25:52 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: meson: Optionally disable installation of test modules" } ]
[ { "msg_contents": "Hi hackers,\n\nI'm having some difficulties building the documentation on MacOS.\n\nI'm using ./full-build.sh script from [1] repository. It worked just\nfine for many years but since recently it started to fail like this:\n\n```\n/usr/bin/xsltproc --path . --stringparam pg.version '16devel'\n/Users/eax/projects/c/pgscripts/../postgresql/doc/src/sgml/stylesheet.xsl\npostgres-full.xml\nerror : Unknown IO error\nwarning: failed to load external entity\n\"http://docbook.sourceforge.net/release/xsl/current/xhtml/chunk.xsl\"\ncompilation error: file\n/Users/eax/projects/c/pgscripts/../postgresql/doc/src/sgml/stylesheet.xsl\nline 6 element import\nxsl:import : unable to load\nhttp://docbook.sourceforge.net/release/xsl/current/xhtml/chunk.xsl\nerror : Unknown IO error\n/Users/eax/projects/c/postgresql/doc/src/sgml/stylesheet-html-common.xsl:4:\nwarning: failed to load external entity\n\"http://docbook.sourceforge.net/release/xsl/current/common/entities.ent\"\n%common.entities;\n ^\nEntity: line 1:\n %common.entities;\n ^\n[...]\n\n```\n\nThis is not a network problem. I can download chunk.xsl with wget and\nalso build the documentation on my Linux laptop.\n\nI've tried `brew reinstall` and also:\n\n```\n./configure ... XMLLINT=\"xmllint --nonet\" XSLTPROC=\"xsltproc --nonet\"\n```\n\n... 
as suggested by the documentation [2] but it didn't change anything.\n\nI checked the archive of pgsql-hackers@ but was unable to find\nanything relevant.\n\nI'm using MacOS Monterey 12.6.2.\n\n```\n$ brew info docbook\n==> docbook: stable 5.1 (bottled)\n...\n$ brew info docbook-xsl\n==> docbook-xsl: stable 1.79.2 (bottled)\n...\n```\n\nAt this point I could use a friendly piece of advice from the community.\n\n[1]: https://github.com/afiskon/pgscripts/\n[2]: https://www.postgresql.org/docs/15/docguide-toolsets.html\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Mon, 30 Jan 2023 13:18:37 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "MacOS: xsltproc fails with \"warning: failed to load external entity\"" }, { "msg_contents": "Hi hackers,\n\n> At this point I could use a friendly piece of advice from the community.\n\nI've found a solution:\n\n```\nexport SGML_CATALOG_FILES=/usr/local/etc/xml/catalog\nexport XMLLINT=\"xmllint --catalogs\"\nexport XSLTPROC=\"xsltproc --catalogs\"\n```\n\nI will submit a patch for the documentation in a bit, after I'll check\nit properly.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Mon, 30 Jan 2023 14:13:22 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: MacOS: xsltproc fails with \"warning: failed to load external\n entity\"" }, { "msg_contents": "Hi hackers,\n\n> I've found a solution:\n>\n> ```\n> export SGML_CATALOG_FILES=/usr/local/etc/xml/catalog\n> export XMLLINT=\"xmllint --catalogs\"\n> export XSLTPROC=\"xsltproc --catalogs\"\n> ```\n>\n> I will submit a patch for the documentation in a bit, after I'll check\n> it properly.\n\nPFA the patch.\n\nI don't have a strong opinion regarding any particular wording and\nwould like to ask the committer to change it as he sees fit.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Mon, 30 Jan 2023 14:53:25 +0300", 
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: MacOS: xsltproc fails with \"warning: failed to load external\n entity\"" }, { "msg_contents": "Aleksander Alekseev <aleksander@timescale.com> writes:\n>> I've found a solution:\n>> \n>> ```\n>> export SGML_CATALOG_FILES=/usr/local/etc/xml/catalog\n>> export XMLLINT=\"xmllint --catalogs\"\n>> export XSLTPROC=\"xsltproc --catalogs\"\n>> ```\n\nHmm, there is no such directory on my Mac, and indeed this recipe\ndoes not work here. I tried to transpose it to MacPorts by\nsubstituting /opt/local/etc/xml/catalog, which does exist --- but\nthe recipe still doesn't work.\n\nI believe what is actually failing is that http://docbook.sourceforge.net\nnow redirects to https:, and the ancient xsltproc version provided by\nApple doesn't do https. What you need to do if you want to use their\nxsltproc is install a local copy of the SGML catalog files and\nstylesheets, preferably in the place that xsltproc would look by default\n(/etc/xml/catalog seems to be the standard one). It would be good to\ndocument how to do that, but this patch doesn't do so.\n\nWhat we do actually have already is a recommendation to install\nappropriate MacPorts or Homebrew packages:\n\nhttps://www.postgresql.org/docs/devel/docguide-toolsets.html#DOCGUIDE-TOOLSETS-INST-MACOS\n\nand it works okay for me as long as I use MacPorts' version of xsltproc.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 30 Jan 2023 11:20:48 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: MacOS: xsltproc fails with \"warning: failed to load external\n entity\"" }, { "msg_contents": "Hi Tom,\n\nThanks for the feedback.\n\n> Hmm, there is no such directory on my Mac, and indeed this recipe\n> does not work here. 
I tried to transpose it to MacPorts by\n> substituting /opt/local/etc/xml/catalog, which does exist --- but\n> the recipe still doesn't work.\n\nWell, that's a bummer.\n\n> What we do actually have already is a recommendation to install\n> appropriate MacPorts or Homebrew packages:\n>\n> https://www.postgresql.org/docs/devel/docguide-toolsets.html#DOCGUIDE-TOOLSETS-INST-MACOS\n>\n> and it works okay for me as long as I use MacPorts' version of xsltproc.\n\nUnfortunately it doesn't work for Homebrew anymore and there seems to\nbe only one xsltproc in the system.\n\n> I believe what is actually failing is that http://docbook.sourceforge.net\n> now redirects to https:, and the ancient xsltproc version provided by\n> Apple doesn't do https. What you need to do if you want to use their\n> xsltproc is install a local copy of the SGML catalog files and\n> stylesheets, preferably in the place that xsltproc would look by default\n> (/etc/xml/catalog seems to be the standard one). It would be good to\n> document how to do that, but this patch doesn't do so.\n\nFair enough.\n\nI would appreciate it if you could help figuring out how to do this\nfor MacPorts, since I'm not a MacPorts user. I'll figure out how to do\nthis for Homebrew.\n\nDoes something like:\n\n```\nln -s /opt/local/etc/xml/catalog /etc/xml/catalog\n```\n\n... work for you? Does your:\n\n```\nxsltproc --help\n```\n\n... 
also say that it uses /etc/xml/catalog path by default?\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Mon, 30 Jan 2023 22:04:10 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: MacOS: xsltproc fails with \"warning: failed to load external\n entity\"" }, { "msg_contents": "Aleksander Alekseev <aleksander@timescale.com> writes:\n>> What we do actually have already is a recommendation to install\n>> appropriate MacPorts or Homebrew packages:\n>> https://www.postgresql.org/docs/devel/docguide-toolsets.html#DOCGUIDE-TOOLSETS-INST-MACOS\n>> and it works okay for me as long as I use MacPorts' version of xsltproc.\n\n> Unfortunately it doesn't work for Homebrew anymore and there seems to\n> be only one xsltproc in the system.\n\nHmm. Seems unlikely that Homebrew would have dropped the package(s)\naltogether. But ... poking at this, I discovered that there are\ninaccuracies in our docs for MacPorts:\n\n* /opt/local/bin/xsltproc is provided by libxslt, and\n/opt/local/bin/xmllint is provided by libxml2, neither of which\nwill be installed by our recipe as given. You might have pulled\nthose ports in already to build Postgres with, but if you didn't, the\nrecipe will fail. I wonder if the Homebrew recipe has the same bug.\n\n* At some point MacPorts renamed docbook-xsl to docbook-xsl-nons.\nThis is harmless at the moment, because if you ask for docbook-xsl\nit will automatically install docbook-xsl-nons instead. I wonder\nif that'll be true indefinitely, though.\n\nI also wonder whether we shouldn't point at the meta-package docbook-xml\ninstead of naming a particular version here (and having to update\nthat from time to time). The extra disk space to install all the DTD\nversions is entirely insignificant (< 2MB).\n\n> Does your:\n> xsltproc --help\n> ... 
also say that it uses /etc/xml/catalog path by default?\n\nBoth /usr/bin/xsltproc and /opt/local/bin/xsltproc say\n\n --catalogs : use SGML catalogs from $SGML_CATALOG_FILES\n otherwise XML Catalogs starting from \n file:///etc/xml/catalog are activated by default\n\nHowever, this appears to be a lie for /opt/local/bin/xsltproc;\nwhat it's apparently *actually* using is /opt/local/etc/xml/catalog,\nwhich is what MacPorts provides.\n\nI repeated the test I did this morning, and this time using --catalogs\nwith SGML_CATALOG_FILES set to /opt/local/etc/xml/catalog worked for me,\nusing either copy of xsltproc. I must've fat-fingered it somehow before.\nNonetheless, I doubt that that recipe is worth recommending to MacPorts\nusers: if they pull in the DTD packages they might as well pull in libxml2\nand libxslt, and then they don't need to adjust anything.\n\nIn short, I think we need to update J.2.4 to say this for MacPorts:\n\nsudo port install libxml2 libxslt docbook-xml docbook-xsl-nons fop\n\nand I strongly suspect that the Homebrew recipe has a similar oversight.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 30 Jan 2023 16:01:51 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: MacOS: xsltproc fails with \"warning: failed to load external\n entity\"" }, { "msg_contents": "On 30.01.23 20:04, Aleksander Alekseev wrote:\n> I would appreciate it if you could help figuring out how to do this\n> for MacPorts, since I'm not a MacPorts user. I'll figure out how to do\n> this for Homebrew.\n\nI'm on macOS Monterey and Homebrew. 
I'm sure I have gone through many \nvariations of this setup, but checking what I happen to be using right \nnow, Makefile.global says\n\nXMLLINT = /usr/bin/xmllint\nXSLTPROC = /usr/bin/xsltproc\n\nand in the environment there is\n\nXML_CATALOG_FILES=/usr/local/etc/xml/catalog\n\nJust testing this right now, you can avoid having to set this \nenvironment variable by making the default catalog file /etc/xml/catalog \ninclude /usr/local/etc/xml/catalog.\n\nIt also works for me to use the Homebrew-provided versions of these tools:\n\nXMLLINT = /usr/local/opt/libxml2/bin/xmllint\nXSLTPROC = /usr/local/opt/libxslt/bin/xsltproc\n\nBut I can't determine right now what catalog file they look at by \ndefault. It appears that it's neither /etc/xml/catalog nor \n/usr/local/etc/xml/catalog. So in this case, setting XML_CATALOG_FILES \nis necessary.\n\nFor either set of tools, the automatic download option doesn't appear \nto work anymore. This probably has to do with either the https or the \nredirects that have been mentioned.\n\n\n\n", "msg_date": "Tue, 31 Jan 2023 08:43:56 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: MacOS: xsltproc fails with \"warning: failed to load external\n entity\"" }, { "msg_contents": "Hi hackers,\n\n> /opt/local/bin/xsltproc is provided by libxslt, and\n> /opt/local/bin/xmllint is provided by libxml2, neither of which\n> will be installed by our recipe as given. You might have pulled\n> those ports in already to build Postgres with, but if you didn't, the\n> recipe will fail. I wonder if the Homebrew recipe has the same bug.\n\nRight, I had libxml2 installed (which provides xmllint) but not\nlibxslt (which provides xsltproc). 
For this reason I could find only\nthe version of xsltproc shipped with macOS.\n\n> Both /usr/bin/xsltproc and /opt/local/bin/xsltproc say\n>\n> --catalogs : use SGML catalogs from $SGML_CATALOG_FILES\n> otherwise XML Catalogs starting from\n> file:///etc/xml/catalog are activated by default\n>\n> However, this appears to be a lie for /opt/local/bin/xsltproc;\n> what it's apparently *actually* using is /opt/local/etc/xml/catalog,\n> which is what MacPorts provides.\n\n> I repeated the test I did this morning, and this time using --catalogs\n> with SGML_CATALOG_FILES set to /opt/local/etc/xml/catalog worked for me,\n> using either copy of xsltproc. I must've fat-fingered it somehow before.\n> Nonetheless, I doubt that that recipe is worth recommending to MacPorts\n> users: if they pull in the DTD packages they might as well pull in libxml2\n> and libxslt, and then they don't need to adjust anything.\n\nGot it, thanks.\n\n> In short, I think we need to update J.2.4 to say this for MacPorts:\n>\n> sudo port install libxml2 libxslt docbook-xml docbook-xsl-nons fop\n\nAgree. I decided to include libxml2 and libxslt for Homebrew as well.\nThe documentation above explains what these packages are needed for\nand also says that some of the packages may be optional. E.g. fop is\nactually not strictly required but we recommend installing it anyway.\n\n> But I can't determine right now what catalog file they look at by\n> default. It appears that it's neither /etc/xml/catalog nor\n> /usr/local/etc/xml/catalog. So in this case, setting XML_CATALOG_FILES\n> is necessary.\n>\n> For either sets of tools, the automatic download option doesn't appear\n> to work anymore. This probably has to do with either the https or the\n> redirects that have been mentioned.\n\nPeter, thanks for reporting this. 
I got the same results: neither\ntools work without setting XML_CATALOG_FILES and setting this\nenvironment variable work for both Homebrew and macOS versions.\n\nHere is the summary of our findings. PFA the updated patch v2.\n\n\nWhile on it, I noticed that the documentation says \"On macOS, you can\nbuild the HTML and man documentation without installing anything\nextra.\" I strongly suspect this may not be true anymore. This is\nsomewhat difficult to check however. Some of the recommended packages\nwere installed as dependencies of other packages and I don't feel like\ntaking a risk of running:\n\n```\nbrew uninstall --ignore-dependencies libxml2 libxslt docbook docbook-xsl\n```\n\n... right now. However maybe we should rephrase this to make sure\nthere are fewer supported/recommended ways of building the\ndocumentation? The alternative ways may also work but if they don't\nthere will be no actions required from us.\n\nI included the corresponding path as well.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Tue, 31 Jan 2023 13:38:44 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: MacOS: xsltproc fails with \"warning: failed to load external\n entity\"" }, { "msg_contents": "Aleksander Alekseev <aleksander@timescale.com> writes:\n>> For either sets of tools, the automatic download option doesn't appear\n>> to work anymore. This probably has to do with either the https or the\n>> redirects that have been mentioned.\n\n> Peter, thanks for reporting this. I got the same results: neither\n> tools work without setting XML_CATALOG_FILES and setting this\n> environment variable work for both Homebrew and macOS versions.\n\n> Here is the summary of our findings. PFA the updated patch v2.\n\nIt's worse than that: I find that\n\n\texport XML_CATALOG_FILES=/dev/null\n\nbreaks the docs build on RHEL8 and Fedora 37 (latest) too, with the\nsame \"failed to load external entity\" symptom. 
I conclude from this\nthat there is no version of xsltproc anywhere that can still download\nthe required files automatically. So we need to take out the advice\nthat says you can rely on auto-download for everybody, not just macOS.\n\nIf this is indeed the case, perhaps we ought to start inserting --nonet\ninto the invocations. There's not much use in allowing these tools to\nperform internet access when the best-case scenario is that they fail.\n(Worst-case, you could end up getting hacked, perhaps?)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 31 Jan 2023 15:22:02 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: MacOS: xsltproc fails with \"warning: failed to load external\n entity\"" }, { "msg_contents": "I wrote:\n> It's worse than that: I find that\n> \texport XML_CATALOG_FILES=/dev/null\n> breaks the docs build on RHEL8 and Fedora 37 (latest) too, with the\n> same \"failed to load external entity\" symptom. I conclude from this\n> that there is no version of xsltproc anywhere that can still download\n> the required files automatically. So we need to take out the advice\n> that says you can rely on auto-download for everybody, not just macOS.\n\n> If this is indeed the case, perhaps we ought to start inserting --nonet\n> into the invocations. There's not much use in allowing these tools to\n> perform internet access when the best-case scenario is that they fail.\n\nConcretely, I'm thinking something like the attached. Notes:\n\n1. I have not tested the meson changes.\n\n2. As this is written, you can't override the --nonet options very\neasily in the Makefile build (you could do so at runtime by setting\nXSLTPROC, but not at configure time); and you can't override them at\nall in the meson build. Given the lack of evidence that it's still\nuseful to allow net access, I'm untroubled by that. I did intentionally\nskip using \"override\" in the Makefile, though, to allow that case.\n\n3. 
For consistency with the directions for other platforms, I made\nthe package lists for macOS just mention libxslt. That should\nbe enough to pull in libxml2 as well.\n\n4. Use of --nonet changes the error message you get if xsltproc\ncan't find the DTDs. I copied the error I get from MacPorts'\nversion of xsltproc, but can you confirm it's the same on Homebrew?\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 31 Jan 2023 18:54:31 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: MacOS: xsltproc fails with \"warning: failed to load external\n entity\"" }, { "msg_contents": "Hi,\n\nOn 2023-01-31 18:54:31 -0500, Tom Lane wrote:\n> 1. I have not tested the meson changes.\n\nWorks here.\n\n\n> 2. As this is written, you can't override the --nonet options very\n> easily in the Makefile build (you could do so at runtime by setting\n> XSLTPROC, but not at configure time); and you can't override them at\n> all in the meson build. Given the lack of evidence that it's still\n> useful to allow net access, I'm untroubled by that. I did intentionally\n> skip using \"override\" in the Makefile, though, to allow that case.\n\nI'm not troubled by this either.\n\n\nI wonder if we should provide a build target to download the stylesheets\nourselves. The number of packages our instructions download is quite\nsubstantial. We could perhaps trim them a bit, but we are intentionally\nincluding things to build PDFs etc. as well, which does make sense...\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 31 Jan 2023 16:22:06 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: MacOS: xsltproc fails with \"warning: failed to load external\n entity\"" }, { "msg_contents": "Hi hackers,\n\n> Concretely, I'm thinking something like the attached. Notes:\n\n> > 1. 
I have not tested the meson changes.\n> Works here.\n\nTook me a while to figure out how to build the documentation with Meson:\n\n```\nXML_CATALOG_FILES=/usr/local/etc/xml/catalog ninja -C build alldocs\n```\n\nIt works. Perhaps we should add:\n\n```\nninja -C build alldocs\n```\n\n... command to the installation.sgml file while we are at it, in the 17.4.1\nBuilding and Installation with Meson / Short Version section.\n\n> > 2. As this is written, you can't override the --nonet options very\n> > easily in the Makefile build (you could do so at runtime by setting\n> > XSLTPROC, but not at configure time); and you can't override them at\n> > all in the meson build. Given the lack of evidence that it's still\n> > useful to allow net access, I'm untroubled by that. I did intentionally\n> > skip using \"override\" in the Makefile, though, to allow that case.\n>\n> I'm not troubled by this either.\n\nNeither am I.\n\n> 3. For consistency with the directions for other platforms, I made\n> the package lists for macOS just mention libxslt. That should\n> be enough to pull in libxml2 as well.\n\nFair enough.\n\n> 4. Use of --nonet changes the error message you get if xsltproc\n> can't find the DTDs. I copied the error I get from MacPorts'\n> version of xsltproc, but can you confirm it's the same on Homebrew?\n\nYes, the message is the same.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Wed, 1 Feb 2023 13:05:32 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: MacOS: xsltproc fails with \"warning: failed to load external\n entity\"" }, { "msg_contents": "Hi,\n\nOn 2023-02-01 13:05:32 +0300, Aleksander Alekseev wrote:\n> Took me a while to figure out how to build the documentation with Meson:\n> \n> ```\n> XML_CATALOG_FILES=/usr/local/etc/xml/catalog ninja -C build alldocs\n> ```\n> \n> It works. 
Perhaps we should add:\n> \n> ```\n> ninja -C build alldocs\n> ```\n\nFWIW, just 'docs' would build just the multi-page html/man pages,\nalldocs takes a lot longer...\n\nAnd yes, adding that to the docs is a good idea.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 1 Feb 2023 02:32:04 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: MacOS: xsltproc fails with \"warning: failed to load external\n entity\"" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2023-02-01 13:05:32 +0300, Aleksander Alekseev wrote:\n>> It works. Perhaps we should add:\n>> ninja -C build alldocs\n\n> FWIW, just 'docs' would build just the multi-page html/man pages,\n> alldocs takes a lot longer...\n\nHmm ... why does 'docs' include the man pages, and not just the html?\nIt's unlike what \"make -C doc/src/sgml all\" does in the Makefile\nsystem, and I don't find it to be an improvement. I want the man\npages approximately never, so I don't care to wait around for them\nto be built.\n\nWhile I'm bitching ... section 17.1 doesn't mention that you need\nninja to use meson, much less mention the minimum version. And\nthe minimum version appears to be newer than RHEL8's 1.8.2,\nwhich I find pretty unfortunate. On RHEL8, it fails with\n\n$ ninja\nninja: error: build.ninja:6771: multiple outputs aren't (yet?) supported by depslog; bring this up on the mailing list if it affects you\n\nI did manage to test this stuff on bleeding-edge Fedora,\nbut ...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 01 Feb 2023 12:23:27 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: MacOS: xsltproc fails with \"warning: failed to load external\n entity\"" }, { "msg_contents": "Hi,\n\nOn 2023-02-01 12:23:27 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2023-02-01 13:05:32 +0300, Aleksander Alekseev wrote:\n> >> It works. 
Perhaps we should add:\n> >> ninja -C build alldocs\n> \n> > FWIW, just 'docs' would build just the multi-page html/man pages,\n> > alldocs takes a lot longer...\n> \n> Hmm ... why does 'docs' include the man pages, and not just the html?\n\nI think it's because the makefile is doing things a bit oddly, and I\ndidn't quite grok that at the right moment.\n\nI probably just saw:\nall: html man\n\nbut before that there's\n\n# Make \"html\" the default target, since that is what most people tend\n# to want to use.\nhtml:\n\n\n> It's unlike what \"make -C doc/src/sgml all\" does in the Makefile\n> system, and I don't find it to be an improvement.\n\nWell, that'd actually build the manpages too, afaics :). But I get the\npoint.\n\nI really have no opinion on what we should build under what\nname. Happy to change what's included in 'docs', add additional targets,\netc.\n\n\n> I want the man pages approximately never, so I don't care to wait\n> around for them to be built.\n> \n> While I'm bitching ... section 17.1 doesn't mention that you need\n> ninja to use meson, much less mention the minimum version.\n\nPeter rewrote the requirements (almost?) entirely while committing the\ndocs from Samay and hasn't responded to my concerns about the new\nform...\n\n\nNormally the ninja version that's pulled in by meson should suffice. I\nsuspect that the problem you found can be worked around.\n\n> And the minimum version appears to be newer than RHEL8's 1.8.2, which\n> I find pretty unfortunate. On RHEL8, it fails with\n> $ ninja\n> ninja: error: build.ninja:6771: multiple outputs aren't (yet?) supported by depslog; bring this up on the mailing list if it affects you\n\nWhat's in that line +- 2 lines? And/or what are the steps that got you\nto that point?\n\nI'll try building 1.8.2 and reproing.\n\n\n> I did manage to test this stuff on bleeding-edge Fedora,\n> but ...\n\nYea, I worked a fair bit to avoid requiring a too new version, I'll try\nto figure out what went wrong. 
I did build on rhel8 not long ago, so I\nsuspect it's a corner case somewhere.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 1 Feb 2023 09:49:00 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: MacOS: xsltproc fails with \"warning: failed to load external\n entity\"" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2023-02-01 12:23:27 -0500, Tom Lane wrote:\n>> It's unlike what \"make -C doc/src/sgml all\" does in the Makefile\n>> system, and I don't find it to be an improvement.\n\n> Well, that'd actually build the manpages too, afaics :). But I get the\n> point.\n\nAh, sorry, I too had forgotten that \"all\" isn't the default target\nthere. I actually just go into that directory and type \"make\".\n\n> I really have no opinion on what we should build under what\n> name. Happy to change what's included in 'docs', add additional targets,\n> etc.\n\nI think \"docs\" for just the html and \"alldocs\" for all supported\noutputs is probably reasonable. If we ever get to the point of\nbuilding distribution tarballs with meson, we might need another\ntarget for html+man, but I suppose that's a long way off.\n\n>> And the minimum version appears to be newer than RHEL8's 1.8.2, which\n>> I find pretty unfortunate. On RHEL8, it fails with\n>> $ ninja\n>> ninja: error: build.ninja:6771: multiple outputs aren't (yet?) supported by depslog; bring this up on the mailing list if it affects you\n\n> What's in that line +- 2 lines? And/or what are the steps that got you\n> to that point?\n\n\"meson setup build\" is sufficient to see it --- apparently ninja\ngets invoked at the end of that, and it's already unhappy. 
But\nit repeats after \"cd build; ninja\".\n\nIt seems to be unhappy about the stanza for building sql_help.c?\nLine 6771 is the blank line after \"description\" in this bit:\n\nbuild src/bin/psql/sql_help.c src/bin/psql/sql_help.h: CUSTOM_COMMAND_DEP | ../src/bin/psql/create_help.pl /usr/bin/perl\n DEPFILE = src/bin/psql/sql_help.dep\n DEPFILE_UNQUOTED = src/bin/psql/sql_help.dep\n COMMAND = /usr/bin/perl ../src/bin/psql/create_help.pl --docdir ../doc/src/sgml/ref --depfile src/bin/psql/sql_help.dep --outdir src/bin/psql --basename sql_help\n description = Generating$ psql_help$ with$ a$ custom$ command\n\nbuild src/bin/psql/psql.p/meson-generated_.._psqlscanslash.c.o: c_COMPILER src/bin/psql/psqlscanslash.c || src/bin/psql/sql_help.h src/include/catalog/pg_aggregate_d.h src/include/catalog/pg_am_d.h src/include/catalog/pg_amop_d.h src/include/catalog/pg_amproc_d.h src/include/catalog/pg_attrdef_d.h src/include/catalog/pg_attribute_d.h src/include/catalog/pg_auth_members_d.h src/include/catalog/pg_authid_d.h src/include/catalog/pg_cast_d.h src/include/catalog/pg_class_d.h src/include/catalog/pg_collation_d.h src/include/catalog/pg_constraint_d.h src/include/catalog/pg_conversion_d.h src/include/catalog/pg_database_d.h src/include/catalog/pg_db_role_setting_d.h src/include/catalog/pg_default_acl_d.h src/include/catalog/pg_depend_d.h src/include/catalog/pg_description_d.h src/include/catalog/pg_enum_d.h src/include/catalog/pg_event_trigger_d.h src/include/catalog/pg_extension_d.h src/include/catalog/pg_foreign_data_wrapper_d.h src/include/catalog/pg_foreign_server_d.h src/include/catalog/pg_foreign_table_d.h src/include/catalog/pg_index_d.h src/include/catalog/pg_inherits_d.h src/include/catalog/pg_init_privs_d.h src/include/catalog/pg_language_d.h src/include/catalog/pg_largeobject_d.h src/include/catalog/pg_largeobject_metadata_d.h src/include/catalog/pg_namespace_d.h src/include/catalog/pg_opclass_d.h src/include/catalog/pg_operator_d.h 
src/include/catalog/pg_opfamily_d.h src/include/catalog/pg_parameter_acl_d.h src/include/catalog/pg_partitioned_table_d.h src/include/catalog/pg_policy_d.h src/include/catalog/pg_proc_d.h src/include/catalog/pg_publication_d.h src/include/catalog/pg_publication_namespace_d.h src/include/catalog/pg_publication_rel_d.h src/include/catalog/pg_range_d.h src/include/catalog/pg_replication_origin_d.h src/include/catalog/pg_rewrite_d.h src/include/catalog/pg_seclabel_d.h src/include/catalog/pg_sequence_d.h src/include/catalog/pg_shdepend_d.h src/include/catalog/pg_shdescription_d.h src/include/catalog/pg_shseclabel_d.h src/include/catalog/pg_statistic_d.h src/include/catalog/pg_statistic_ext_d.h src/include/catalog/pg_statistic_ext_data_d.h src/include/catalog/pg_subscription_d.h src/include/catalog/pg_subscription_rel_d.h src/include/catalog/pg_tablespace_d.h src/include/catalog/pg_transform_d.h src/include/catalog/pg_trigger_d.h src/include/catalog/pg_ts_config_d.h src/include/catalog/pg_ts_config_map_d.h src/include/catalog/pg_ts_dict_d.h src/include/catalog/pg_ts_parser_d.h src/include/catalog/pg_ts_template_d.h src/include/catalog/pg_type_d.h src/include/catalog/pg_user_mapping_d.h src/include/catalog/postgres.bki src/include/catalog/schemapg.h src/include/catalog/system_constraints.sql src/include/catalog/system_fk_info.h src/include/nodes/nodetags.h src/include/utils/errcodes.h\n DEPFILE = src/bin/psql/psql.p/meson-generated_.._psqlscanslash.c.o.d\n DEPFILE_UNQUOTED = src/bin/psql/psql.p/meson-generated_.._psqlscanslash.c.o.d\n ARGS = -Isrc/bin/psql/psql.p -Isrc/bin/psql -I../src/bin/psql -Isrc/include -I../src/include -Isrc/interfaces/libpq -I../src/interfaces/libpq -fdiagnostics-color=always -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch -O3 -fno-strict-aliasing -fwrapv -fexcess-precision=standard -D_GNU_SOURCE -Wmissing-prototypes -Wpointer-arith -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wcast-function-type 
-Wshadow=compatible-local -Wformat-security -Wdeclaration-after-statement -Wno-format-truncation -Wno-stringop-truncation -pthread\n\n\n>> I did manage to test this stuff on bleeding-edge Fedora,\n>> but ...\n\n> Yea, I worked a fair bit to avoid requiring a too new version, I'll try\n> to figure out what went wrong. I did build on rhel8 not long ago, so I\n> suspect it's a corner case somewhere.\n\nOh, interesting. Let me know if you want me to test anything in\nparticular.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 01 Feb 2023 13:36:38 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: MacOS: xsltproc fails with \"warning: failed to load external\n entity\"" }, { "msg_contents": "Hi,\n\nOn 2023-02-01 09:49:00 -0800, Andres Freund wrote:\n> On 2023-02-01 12:23:27 -0500, Tom Lane wrote:\n> > And the minimum version appears to be newer than RHEL8's 1.8.2, which\n> > I find pretty unfortunate. On RHEL8, it fails with\n> > $ ninja\n> > ninja: error: build.ninja:6771: multiple outputs aren't (yet?) supported by depslog; bring this up on the mailing list if it affects you\n> \n> What's in that line +- 2 lines? And/or what are the steps that got you\n> to that point?\n> \n> I'll try building 1.8.2 and reproing.\n> \n> \n> > I did manage to test this stuff on bleeding-edge Fedora,\n> > but ...\n> \n> Yea, I worked a fair bit to avoid requiring a too new version, I'll try\n> to figure out what went wrong. I did build on rhel8 not long ago, so I\n> suspect it's a corner case somewhere.\n\nUnfortunately the test script accidentally pulled in ninja from epel,\nhence not noticing the issue.\n\n\nThere are three issues:\n\nOne is easy enough, albeit slightly annoying: 1.8.2 wants the\n\"depending\" file to be named only once in a dependency file. Slightly\nuglier code in snowball_create.pl, but whatever.\n\nThe second is one case of multiple outputs with a depfile:\ncreate_help.pl creates both sql_help.c and sql_help.h. 
Not immediately\nsure what a good solution here is. The brute force solution would be to\ninvoke it twice, but I don't like that at all.\n\nThe last case is the various man directories. That'd be easy enough to\navoid if we generated them inside a man/ directory.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 1 Feb 2023 11:04:28 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: MacOS: xsltproc fails with \"warning: failed to load external\n entity\"" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2023-02-01 09:49:00 -0800, Andres Freund wrote:\n>> On 2023-02-01 12:23:27 -0500, Tom Lane wrote:\n>>> And the minimum version appears to be newer than RHEL8's 1.8.2, which\n>>> I find pretty unfortunate.\n\n> Unfortunately the test script accidentally pulled in ninja from epel,\n> hence not noticing the issue.\n\nAh. For myself, pulling the newer version from epel would not be a big\nproblem. I think what we need to do is figure out what is the minimum\nninja version we want to support, and then see if we need to make any\nof these changes. I don't have hard data on which distros have which\nversions of ninja, but surely somebody checked that at some point?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 01 Feb 2023 14:20:19 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: MacOS: xsltproc fails with \"warning: failed to load external\n entity\"" }, { "msg_contents": "Hi,\n\nHere are my two cents.\n\n> the minimum version appears to be newer than RHEL8's 1.8.2,\n> which I find pretty unfortunate. On RHEL8, it fails with\n\n> $ ninja\n> ninja: error: build.ninja:6771: multiple outputs aren't (yet?) supported by depslog; bring this up on the mailing list if it affects you\n\n> [...] 
I don't have hard data on which distros have which\n> versions of ninja, but surely somebody checked that at some point?\n\nI'm using three different systems at the moment and the minimum\nversion of Ninja that is known to work is 1.10.1.\n\n> Normally the ninja version that's pulled in by meson should suffice.\n\nThere are several ways to install Meson, one of which, if you want the\nlatest version, is just using pip:\n\n```\npip3 install --user meson\n```\n\nNaturally, Ninja will not be pulled in in this case.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Thu, 2 Feb 2023 01:15:15 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: MacOS: xsltproc fails with \"warning: failed to load external\n entity\"" }, { "msg_contents": "Hi,\n\nOn 2023-02-01 14:20:19 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2023-02-01 09:49:00 -0800, Andres Freund wrote:\n> >> On 2023-02-01 12:23:27 -0500, Tom Lane wrote:\n> >>> And the minimum version appears to be newer than RHEL8's 1.8.2, which\n> >>> I find pretty unfortunate.\n> \n> > Unfortunately the test script accidentally pulled in ninja from epel,\n> > hence not noticing the issue.\n> \n> Ah. For myself, pulling the newer version from epel would not be a big\n> problem. I think what we need to do is figure out what is the minimum\n> ninja version we want to support, and then see if we need to make any\n> of these changes. I don't have hard data on which distros have which\n> versions of ninja, but surely somebody checked that at some point?\n\nI did survey available meson versions, and chose what features to\nuse. But not really ninja, since I didn't know about this specific issue\nand other than this the ninja version differences were handled by meson.\n\nAs all the issues are related to more precise dependencies, I somewhat\nwonder if it'd be good enough to use less accurate dependencies with\n1.8.2. 
But I don't like it.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 5 Feb 2023 16:52:07 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: MacOS: xsltproc fails with \"warning: failed to load external\n entity\"" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I did survey available meson versions, and chose what features to\n> use. But not really ninja, since I didn't know about this specific issue\n> and other than this the ninja version differences were handled by meson.\n\n> As all the issues are related to more precise dependencies, I somehwat\n> wonder if it'd be good enough to use less accurate dependencies with\n> 1.8.2. But I don't like it.\n\nNah, I don't like that either. I did a crude survey of ninja's version\nhistory by seeing which version is in each recent Fedora release:\n\nf20/ninja-build.spec:Version: 1.4.0\nf21/ninja-build.spec:Version: 1.5.1\nf22/ninja-build.spec:Version: 1.5.3\nf23/ninja-build.spec:Version: 1.7.1\nf24/ninja-build.spec:Version: 1.7.2\nf25/ninja-build.spec:Version: 1.8.2\nf26/ninja-build.spec:Version: 1.8.2\nf27/ninja-build.spec:Version: 1.8.2\nf28/ninja-build.spec:Version: 1.8.2\nf29/ninja-build.spec:Version: 1.8.2\nf30/ninja-build.spec:Version: 1.9.0\nf31/ninja-build.spec:Version: 1.10.1\nf32/ninja-build.spec:Version: 1.10.1\nf33/ninja-build.spec:Version: 1.10.2\nf34/ninja-build.spec:Version: 1.10.2\nf35/ninja-build.spec:Version: 1.10.2\nf36/ninja-build.spec:Version: 1.10.2\nf37/ninja-build.spec:Version: 1.10.2\nrawhide/ninja-build.spec:Version: 1.11.1\n\nRemembering that Fedora has a six-month release cycle, this shows that\n1.8.2 was around for awhile but 1.9.x was a real flash-in-the-pan.\nWe can probably get away with saying that you need 1.10 or newer.\nThat's already three-plus years old.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 05 Feb 2023 20:25:34 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, 
"msg_subject": "Re: MacOS: xsltproc fails with \"warning: failed to load external\n entity\"" }, { "msg_contents": "I pushed the discussed documentation improvements, and changed the\nbehavior of \"ninja docs\" to only build the HTML docs. However,\nI've not done anything about documenting what is the minimum\nninja version.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 08 Feb 2023 17:18:13 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: MacOS: xsltproc fails with \"warning: failed to load external\n entity\"" }, { "msg_contents": "Hi,\n\nOn 2023-02-08 17:18:13 -0500, Tom Lane wrote:\n> However, I've not done anything about documenting what is the minimum ninja\n> version.\n\nSorry, plan to tackle work around this tomorrow. Got stuck for much longer\nthan I had hoped to debug flapping tests (parts resolved, several others not).\n\nMy next step is to survey ninja versions across OSs / versions.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 8 Feb 2023 21:05:39 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: MacOS: xsltproc fails with \"warning: failed to load external\n entity\"" }, { "msg_contents": "On 08.02.23 23:18, Tom Lane wrote:\n> I pushed the discussed documentation improvements, and changed the\n> behavior of \"ninja docs\" to only build the HTML docs.\n\nI don't like this change. Now the default set of docs is different \nbetween the make builds and the meson builds. 
And people will be less \nlikely to make sure the man pages still build.\n\nWhat's wrong with just typing \"ninja html\"?\n\n\n\n", "msg_date": "Thu, 9 Feb 2023 15:29:58 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: MacOS: xsltproc fails with \"warning: failed to load external\n entity\"" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 08.02.23 23:18, Tom Lane wrote:\n>> I pushed the discussed documentation improvements, and changed the\n>> behavior of \"ninja docs\" to only build the HTML docs.\n\n> I don't like this change. Now the default set of docs is different \n> between the make builds and the meson builds. And people will be less \n> likely to make sure the man pages still build.\n\nWhat? The default behavior of \"make\" has been to build only the\nhtml docs for many years. And I've never ever seen a case where\nthe html docs build and the man pages don't.\n\n> What's wrong with just typing \"ninja html\"?\n\nDon't really care how the command is spelled, but there needs to\nbe a convenient way to get that behavior.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 09 Feb 2023 09:57:42 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: MacOS: xsltproc fails with \"warning: failed to load external\n entity\"" }, { "msg_contents": "Hi,\n\nOn 2023-02-09 09:57:42 -0500, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> > On 08.02.23 23:18, Tom Lane wrote:\n> >> I pushed the discussed documentation improvements, and changed the\n> >> behavior of \"ninja docs\" to only build the HTML docs.\n>\n> > I don't like this change. Now the default set of docs is different\n> > between the make builds and the meson builds. And people will be less\n> > likely to make sure the man pages still build.\n>\n> What? 
The default behavior of \"make\" has been to build only the\n> html docs for many years. And I've never ever seen a case where\n> the html docs build and the man pages don't.\n\nI think this misunderstanding is again due to the confusion between the 'all'\ntarget in doc/src/sgml and the default target, just like earlier in the thread\n/ why I ended up with the prior set of targets under 'docs'.\n\n # Make \"html\" the default target, since that is what most people tend\n # to want to use.\n html:\n ...\n all: html man\n\n\nGiven the repeated confusion from that, among fairly senior hackers, perhaps\nwe ought to at least put those lines next to each other? It's certainly not\nobvious as-is.\n\n\n\n> > What's wrong with just typing \"ninja html\"?\n>\n> Don't really care how the command is spelled, but there needs to\n> be a convenient way to get that behavior.\n\nPerhaps we should have doc-html, doc-man, doc-all or such?\n\nThe shell autocompletions for ninja work pretty well for me, a prefix like\nthat would make it easier to discover such \"sub\"-targets.\n\n\nI'm was pondering adding a 'help' target that shows important targets.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 9 Feb 2023 10:16:45 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: MacOS: xsltproc fails with \"warning: failed to load external\n entity\"" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I think this misunderstanding is again due to the confusion between the 'all'\n> target in doc/src/sgml and the default target, just like earlier in the thread\n> / why I ended up with the prior set of targets under 'docs'.\n\n> # Make \"html\" the default target, since that is what most people tend\n> # to want to use.\n> html:\n> ...\n> all: html man\n\n> Given the repeated confusion from that, among fairly senior hackers, perhaps\n> we ought to at least put those lines next to each other? 
It's certainly not\n> obvious as-is.\n\nI think there are ordering constraints between these and the\nMakefile.global inclusion. But we could add a comment beside the \"all:\"\nline pointing out that that's not the default target.\n\n> Perhaps we should have doc-html, doc-man, doc-all or such?\n\nNo objection here.\n\nIf we intend to someday build tarballs with meson, there'd need to be\na target that builds html+man, but that could perhaps be named\n\"distprep\".\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 09 Feb 2023 13:48:46 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: MacOS: xsltproc fails with \"warning: failed to load external\n entity\"" }, { "msg_contents": "Hi,\n\nOn 2023-02-09 13:48:46 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I think this misunderstanding is again due to the confusion between the 'all'\n> > target in doc/src/sgml and the default target, just like earlier in the thread\n> > / why I ended up with the prior set of targets under 'docs'.\n> \n> > # Make \"html\" the default target, since that is what most people tend\n> > # to want to use.\n> > html:\n> > ...\n> > all: html man\n> \n> > Given the repeated confusion from that, among fairly senior hackers, perhaps\n> > we ought to at least put those lines next to each other? It's certainly not\n> > obvious as-is.\n> \n> I think there are ordering constraints between these and the\n> Makefile.global inclusion. 
But we could add a comment beside the \"all:\"\n> line pointing out that that's not the default target.\n\nYes, html: has to happen before the inclusion of Makefile.global to become the\ndefault target, but afaics we can just move \"all: html man\" up?\n\n\n> If we intend to someday build tarballs with meson, there'd need to be\n> a target that builds html+man, but that could perhaps be named\n> \"distprep\".\n\nYea, a distprep target just depending on all the required targets seems to be\nthe way to go for that.\n\n\nNot really related: I think we should seriously consider removing most of the\nthings distprep includes in the tarball. I'd leave docs in though. But IMO all\nof the generated code doesn't make sense in this day and age. I guess that's a\ndiscussion for a different thread and a different day.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 9 Feb 2023 12:38:55 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: MacOS: xsltproc fails with \"warning: failed to load external\n entity\"" }, { "msg_contents": "Hi,\n\nOn 2023-02-05 16:52:07 -0800, Andres Freund wrote:\n> I did survey available meson versions, and chose what features to\n> use. But not really ninja, since I didn't know about this specific issue\n> and other than this the ninja version differences were handled by meson.\n\nRHEL world confuses me a fair bit. And it seems to have gotten painful to even\nget a realistic RHEL-like setup.\n\nRHEL7 epel has ninja 1.10\nRHEL8 code ready builder has ninja 1.8\nRHEL8 epel does not have ninja\nRHEL9 code ready builder has ninja 1.10\n\nSo actually RHEL8 doesn't suffice without something external, but RHEL7 + epel\ndoes. 
Huh.\n\n\nAs pointed out by Aleksander downthread, it's easy to build on RHEL8 if you're\nok using pip, it's just \"pip3.6 install meson ninja\".\n\n\nI tried to compile an OS matrix for some relevant OSs / OS versions:\n\n OS\t\t\t\t\t\t\t\t\t\tCurrently\n Supported OS Ver Ninja Ver Python Version Meson Version Sufficient\n\nDebian unoffical 10 1.8 3.7 0.49 n\nDebian y 11 1.10 3.9 0.56 y\nFedora n 32 1.10 3.8 0.55 y\nFreeBSD y 12 1.11 3.9 1.0 y\nNetBSD y 8.2 1.11 3.9 0.62 y\nOpenBSD y 7.1 1.10 3.9 0.62 y\nRHEL y 7 + epel 1.10 3.6 0.55 y\nRHEL y 8 + crb 1.8 3.6 0.58 n\nRHEL y 9 + crb 1.10 3.9 0.58 y\nUbuntu y 18.04 1.8 3.6 0.45 n\nUbuntu y 20.04 1.10 3.8 0.53 n\nUbuntu y 22.04 1.10 3.10 0.61 y\nopenSUSE Leap y 15.3 1.10 3.6 0.54 y\n\nThe only not sufficient ones that bother me to some degree are Ubuntu 20.04\nand RHEL 8. The issues are different, oddly enough. Ubuntu has a new enough\nninja, but meson is too old, RHEL has it the other way around.\n\nGreetings,\n\nAndres\n\n\n", "msg_date": "Thu, 9 Feb 2023 20:41:07 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: MacOS: xsltproc fails with \"warning: failed to load external\n entity\"" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> The only not sufficient ones that bother me to some degree are Ubuntu 20.04\n> and RHEL 8. The issues are different, oddly enough. Ubuntu has a new enough\n> ninja, but meson is too old, RHEL has it the other way around.\n\nYeah. Well, we were intending to maintain the autoconf build system for\nseveral years more anyway. 
Guess we have to plan on keeping it going\nuntil those platforms are EOL or nearly so.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 09 Feb 2023 23:45:20 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: MacOS: xsltproc fails with \"warning: failed to load external\n entity\"" }, { "msg_contents": "Hi, \n\nOn February 9, 2023 8:45:20 PM PST, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>Andres Freund <andres@anarazel.de> writes:\n>> The only not sufficient ones that bother me to some degree are Ubuntu 20.04\n>> and RHEL 8. The issues are different, oddly enough. Ubuntu has a new enough\n>> ninja, but meson is too old, RHEL has it the other way around.\n>\n>Yeah.  Well, we were intending to maintain the autoconf build system for\n>several years more anyway.  Guess we have to plan on keeping it going\n>until those platforms are EOL or nearly so.\n\nBoth could be supported with a bit of effort, fwiw. I don't know if it's worth doing so though... \n\nRegards,\n\nAndres\n\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n", "msg_date": "Thu, 09 Feb 2023 21:13:16 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: MacOS: xsltproc fails with \"warning: failed to load external\n entity\"" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On February 9, 2023 8:45:20 PM PST, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Yeah.  Well, we were intending to maintain the autoconf build system for\n>> several years more anyway.  Guess we have to plan on keeping it going\n>> until those platforms are EOL or nearly so.\n\n> Both could be supported with a bit of effort, fwiw. I don't know if it's worth doing so though... \n\nIt's probably not the highest-priority thing to be hacking on, on the\nwhole.
If we get to the point where we're itching to drop autoconf\nand old-platform compatibility is the last thing holding us back,\nmaybe have a go at it then.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 10 Feb 2023 00:18:45 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: MacOS: xsltproc fails with \"warning: failed to load external\n entity\"" }, { "msg_contents": "Hi,\n\nOn Wed, Feb 08, 2023 at 05:18:13PM -0500, Tom Lane wrote:\n> I pushed the discussed documentation improvements, and changed the\n> behavior of \"ninja docs\" to only build the HTML docs.  However,\n> I've not done anything about documenting what is the minimum\n> ninja version.\n\nFTR the documented XML_CATALOG_FILES environment variable is only valid for\nIntel based machines, as homebrew installs everything in a different location\nfor M1...\n\nI'm attaching a patch to make that distinction, hoping that no one else will\nhave to waste time trying to figure out how to get it working on such hardware.", "msg_date": "Mon, 27 Mar 2023 16:24:41 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MacOS: xsltproc fails with \"warning: failed to load external\n entity\"" }, { "msg_contents": "> On 27 Mar 2023, at 10:24, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> \n> Hi,\n> \n> On Wed, Feb 08, 2023 at 05:18:13PM -0500, Tom Lane wrote:\n>> I pushed the discussed documentation improvements, and changed the\n>> behavior of \"ninja docs\" to only build the HTML docs.
However,\n>> I've not done anything about documenting what is the minimum\n>> ninja version.\n> \n> FTR the documented XML_CATALOG_FILES environment variable is only valid for\n> Intel based machines, as homebrew installs everything in a different location\n> for M1...\n> \n> I'm attaching a patch to make that distinction, hoping that no one else will\n> have to waste time trying to figure out how to get it working on such hardware.\n\nLGTM apart from the double // in the export which is easy enough to fix before\npushing.\n\n+export XML_CATALOG_FILES=/opt/homebrew//etc/xml/catalog\n\nFor reference on why Homebrew use a different structure on Apple M1 the below\nissue has more details:\n\n\thttps://github.com/Homebrew/brew/issues/9177\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 27 Mar 2023 10:32:52 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: MacOS: xsltproc fails with \"warning: failed to load external\n entity\"" }, { "msg_contents": "On Mon, Mar 27, 2023 at 10:32:52AM +0200, Daniel Gustafsson wrote:\n> > On 27 Mar 2023, at 10:24, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > On Wed, Feb 08, 2023 at 05:18:13PM -0500, Tom Lane wrote:\n> >> I pushed the discussed documentation improvements, and changed the\n> >> behavior of \"ninja docs\" to only build the HTML docs. 
However,\n> >> I've not done anything about documenting what is the minimum\n> >> ninja version.\n> >\n> > FTR the documented XML_CATALOG_FILES environment variable is only valid for\n> > Intel based machines, as homebrew installs everything in a different location\n> > for M1...\n> >\n> > I'm attaching a patch to make that distinction, hoping that no one else will\n> > have to waste time trying to figure out how to get it working on such hardware.\n>\n> LGTM apart from the double // in the export which is easy enough to fix before\n> pushing.\n>\n> +export XML_CATALOG_FILES=/opt/homebrew//etc/xml/catalog\n\nOh, I didn't notice it. Apparently apple's find isn't smart enough to trim a /\nwhen fed with a directory with a trailing /\n\n> For reference on why Homebrew use a different structure on Apple M1 the below\n> issue has more details:\n>\n> \thttps://github.com/Homebrew/brew/issues/9177\n\nAh I was wondering why, thanks!\n\n\n", "msg_date": "Mon, 27 Mar 2023 16:41:33 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MacOS: xsltproc fails with \"warning: failed to load external\n entity\"" }, { "msg_contents": "> On 27 Mar 2023, at 10:41, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> On Mon, Mar 27, 2023 at 10:32:52AM +0200, Daniel Gustafsson wrote:\n>>> On 27 Mar 2023, at 10:24, Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n>>> FTR the documented XML_CATALOG_FILES environment variable is only valid for\n>>> Intel based machines, as homebrew installs everything in a different location\n>>> for M1...\n>>> \n>>> I'm attaching a patch to make that distinction, hoping that no one else will\n>>> have to waste time trying to figure out how to get it working on such hardware.\n>> \n>> LGTM apart from the double // in the export which is easy enough to fix before\n>> pushing.\n>> \n>> +export XML_CATALOG_FILES=/opt/homebrew//etc/xml/catalog\n> \n> Oh, I didn't notice it. 
Apparently apple's find isn't smart enough to trim a /\n> when fed with a directory with a trailing /\n\nApplied with a tiny but of changes to make it look like the rest of the\nparagraph more. Thanks!\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 27 Mar 2023 12:07:10 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: MacOS: xsltproc fails with \"warning: failed to load external\n entity\"" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n\n>> On 27 Mar 2023, at 10:41, Julien Rouhaud <rjuju123@gmail.com> wrote:\n>> On Mon, Mar 27, 2023 at 10:32:52AM +0200, Daniel Gustafsson wrote:\n>>>> On 27 Mar 2023, at 10:24, Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n>>>> FTR the documented XML_CATALOG_FILES environment variable is only valid for\n>>>> Intel based machines, as homebrew installs everything in a different location\n>>>> for M1...\n>>>> \n>>>> I'm attaching a patch to make that distinction, hoping that no one else will\n>>>> have to waste time trying to figure out how to get it working on such hardware.\n>>> \n>>> LGTM apart from the double // in the export which is easy enough to fix before\n>>> pushing.\n>>> \n>>> +export XML_CATALOG_FILES=/opt/homebrew//etc/xml/catalog\n>> \n>> Oh, I didn't notice it. Apparently apple's find isn't smart enough to trim a /\n>> when fed with a directory with a trailing /\n>\n> Applied with a tiny but of changes to make it look like the rest of the\n> paragraph more. Thanks!\n\nDoesn't this apply to Apple Silicon generally, not just M1? M2 already\nexists, and M3 etc. will presumably also appear at some point. 
The\nlinked Homebrew issue refers to Apple Silicon, not any specific models.\n\n- ilmari\n\n\n", "msg_date": "Mon, 27 Mar 2023 13:04:03 +0100", "msg_from": "Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>", "msg_from_op": false, "msg_subject": "Re: MacOS: xsltproc fails with \"warning: failed to load external\n entity\"" }, { "msg_contents": "> On 27 Mar 2023, at 14:04, Dagfinn Ilmari Mannsåker <ilmari@ilmari.org> wrote:\n> Daniel Gustafsson <daniel@yesql.se> writes:\n\n>> Applied with a tiny but of changes to make it look like the rest of the\n>> paragraph more. Thanks!\n> \n> Doesn't this apply to Apple Silicon generally, not just M1? M2 already\n> exists, and M3 etc. will presumably also appear at some point.  The\n> linked Homebrew issue refers to Apple Silicon, not any specific models.\n\nThats a good point, it should say Apple Silicon and not M1 specifically.\nThanks, I'll go fix.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 27 Mar 2023 14:06:34 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: MacOS: xsltproc fails with \"warning: failed to load external\n entity\"" }, { "msg_contents": "On Mon, Mar 27, 2023 at 02:06:34PM +0200, Daniel Gustafsson wrote:\n> > On 27 Mar 2023, at 14:04, Dagfinn Ilmari Mannsåker <ilmari@ilmari.org> wrote:\n> > Daniel Gustafsson <daniel@yesql.se> writes:\n>\n> >> Applied with a tiny but of changes to make it look like the rest of the\n> >> paragraph more. Thanks!\n> >\n> > Doesn't this apply to Apple Silicon generally, not just M1? M2 already\n> > exists, and M3 etc. will presumably also appear at some point.  The\n> > linked Homebrew issue refers to Apple Silicon, not any specific models.\n>\n> Thats a good point, it should say Apple Silicon and not M1 specifically.\n> Thanks, I'll go fix.\n\nAh indeed that's a good point.
Thanks for pushing and fixing!\n\n\n", "msg_date": "Mon, 27 Mar 2023 20:56:21 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MacOS: xsltproc fails with \"warning: failed to load external\n entity\"" }, { "msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Mon, Mar 27, 2023 at 02:06:34PM +0200, Daniel Gustafsson wrote:\n> On 27 Mar 2023, at 14:04, Dagfinn Ilmari Mannsåker <ilmari@ilmari.org> wrote:\n>>> Doesn't this apply to Apple Silicon generally, not just M1? M2 already\n>>> exists, and M3 etc. will presumably also appear at some point. The\n>>> linked Homebrew issue refers to Apple Silicon, not any specific models.\n\n>> Thats a good point, it should say Apple Silicon and not M1 specifically.\n>> Thanks, I'll go fix.\n\n> Ah indeed that's a good point. Thanks for pushing and fixing!\n\nAlso, this needs to be back-patched, as this same text appears in\nthe back branches.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 27 Mar 2023 10:33:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: MacOS: xsltproc fails with \"warning: failed to load external\n entity\"" }, { "msg_contents": "\n\n> On 27 Mar 2023, at 16:33, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Julien Rouhaud <rjuju123@gmail.com> writes:\n>> On Mon, Mar 27, 2023 at 02:06:34PM +0200, Daniel Gustafsson wrote:\n>> On 27 Mar 2023, at 14:04, Dagfinn Ilmari Mannsåker <ilmari@ilmari.org> wrote:\n>>>> Doesn't this apply to Apple Silicon generally, not just M1? M2 already\n>>>> exists, and M3 etc. will presumably also appear at some point. The\n>>>> linked Homebrew issue refers to Apple Silicon, not any specific models.\n> \n>>> Thats a good point, it should say Apple Silicon and not M1 specifically.\n>>> Thanks, I'll go fix.\n> \n>> Ah indeed that's a good point. 
Thanks for pushing and fixing!\n> \n> Also, this needs to be back-patched, as this same text appears in\n> the back branches.\n\nYeah, it’s on my TODO for tonight when I get back. Since I botched the first commit before I had prepped the backbranches I figured I’d give the second some time in case that needed an update as well.\n\n./daniel\n\n", "msg_date": "Mon, 27 Mar 2023 17:23:34 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: MacOS: xsltproc fails with \"warning: failed to load external\n entity\"" } ]
[ { "msg_contents": "Hi,\n\nPlenty tap tests require a background psql. But they're pretty annoying to\nuse.\n\nI think the biggest improvement would be an easy way to run a single query and\nget the result of that query. Manually having to pump_until() is awkward and\noften leads to hangs/timeouts, instead of test failures, because one needs to\nspecify a match pattern to pump_until(), which on mismatch leads to trying to\nkeep pumping forever.\n\nIt's annoyingly hard to wait for the result of a query in a generic way with\nbackground_psql(), and more generally for psql. background_psql() uses -XAtq,\nwhich means that we'll not get \"status\" output (like \"BEGIN\" or \"(1 row)\"),\nand that queries not returning anything are completely invisible.\n\nA second annoyance is that issuing a query requires a trailing newline,\notherwise psql won't process it.\n\n\nThe best way I can see is to have a helper that issues the query, followed by\na trailing newline, an \\echo with a recognizable separator, and then uses\npump_until() to wait for that separator.\n\n\nAnother area worthy of improvement is that background_psql() requires passing\nin many variables externally - without a recognizable benefit afaict. What's\nthe point in 'stdin', 'stdout', 'timer' being passed in? stdin/stdout need to\npoint to empty strings, so we know what's needed - in fact we'll even reset\nthem if they're passed in. The timer is always going to be\nPostgreSQL::Test::Utils::timeout_default, so again, what's the point?\n\nI think it'd be far more usable if we made background_psql() return a hash\nwith the relevant variables. 
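To make the separator idea from above concrete, here is a rough sketch of the framing a query() helper could use. All names (the banner text, frame_query, extract_result) are hypothetical and untested against a real psql; the pump_until() call is replaced by a plain string scan so the sketch is self-contained:

```perl
use strict;
use warnings;

# Hypothetical sketch of the \echo-separator framing. A real helper would
# write the framed query to psql's stdin and pump_until() the banner shows
# up on stdout; here the psql side is simulated so the sketch runs standalone.
my $banner = "background_psql: QUERY_SEPARATOR";

# Append the query, a newline so psql processes it, a lone ";" for good
# measure, then echo the banner so we can tell when the query is done.
sub frame_query {
    my ($query) = @_;
    return "$query\n;\n\\echo $banner\n";
}

# Everything accumulated on stdout before the banner is the query's result.
# Returns undef if the banner has not arrived yet (keep pumping in that case).
sub extract_result {
    my ($stdout) = @_;
    $stdout =~ /(.*?)^\Q$banner\E\r?\n/ms or return undef;
    my $result = $1;
    $result =~ s/\n$//;
    return $result;
}

# Simulate the output psql -XAtq would produce for "SELECT 1".
my $simulated_stdout = "1\n$banner\n";
print extract_result($simulated_stdout), "\n";    # prints "1"
```

A query() built on this could return extract_result()'s value directly, and a query_safe() variant could additionally die if anything showed up on stderr.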
The 031_recovery_conflict.pl test has:\n\nmy $psql_timeout = IPC::Run::timer($PostgreSQL::Test::Utils::timeout_default);\nmy %psql_standby = ('stdin' => '', 'stdout' => '');\n$psql_standby{run} =\n $node_standby->background_psql($test_db, \\$psql_standby{stdin},\n \\$psql_standby{stdout},\n $psql_timeout);\n$psql_standby{stdout} = '';\n\nHow about just returning a reference to a hash like that? Except that I'd also\nmake stderr available, which one can't currently access.\n\n\nThe $psql_standby{stdout} = ''; is needed because background_psql() leaves a\nbanner in the output, which it shouldn't, but we probably should just fix\nthat.\n\n\nBrought to you by: Trying to write a test for vacuum_defer_cleanup_age.\n\n- Andres\n\n\n", "msg_date": "Mon, 30 Jan 2023 11:43:50 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Making background psql nicer to use in tap tests" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> It's annoyingly hard to wait for the result of a query in a generic way with\n> background_psql(), and more generally for psql. background_psql() uses -XAtq,\n> which means that we'll not get \"status\" output (like \"BEGIN\" or \"(1 row)\"),\n> and that queries not returning anything are completely invisible.\n\nYeah, the empty-query-result problem was giving me fits recently.\n+1 for wrapping this into something more convenient to use.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 30 Jan 2023 15:06:46 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Making background psql nicer to use in tap tests" }, { "msg_contents": "Hi,\n\nOn 2023-01-30 15:06:46 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > It's annoyingly hard to wait for the result of a query in a generic way with\n> > background_psql(), and more generally for psql. 
background_psql() uses -XAtq,\n> > which means that we'll not get \"status\" output (like \"BEGIN\" or \"(1 row)\"),\n> > and that queries not returning anything are completely invisible.\n>\n> Yeah, the empty-query-result problem was giving me fits recently.\n> +1 for wrapping this into something more convenient to use.\n\nI've hacked some on this. I first tried to just introduce a few helper\nfunctions in Cluster.pm, but that ended up being awkward. So I bit the bullet\nand introduced a new class (in BackgroundPsql.pm), and made background_psql()\nand interactive_psql() return an instance of it.\n\nThis is just a rough prototype. Several function names don't seem great, it\nneed POD documentation, etc.\n\n\nThe main convenience things it has over the old interface:\n- $node->background_psql('dbname') is enough\n- $psql->query(), which returns the query results as a string, is a lot easier\n to use than having to pump, identify query boundaries via regex etc.\n- $psql->query_safe(), which dies if any query fails (detected via stderr)\n- $psql->query_until() is a helper that makes it a bit easier to start queries\n that won't finish until a later point\n\n\nI don't quite like the new interface yet:\n- It's somewhat common to want to know if there was a failure, but also get\n the query result, not sure what the best function signature for that is in\n perl.\n- query_until() sounds a bit too much like $node->poll_query_until(). Maybe\n query_wait_until() is better? OTOH, the other function has poll in the name,\n so maybe it's ok.\n- right now there's a bit too much logic in background_psql() /\n interactive_psql() for my taste\n\n\nThose points aside, I think it already makes the tests a good bit more\nreadable. 
My WIP vacuum_defer_cleanup_age patch shrunk by half with it.\n\nI think with a bit more polish it's easy enough to use that we could avoid a\ngood number of those one-off psql's that we do all over.\n\n\nI didn't really know what this, insrc/test/subscription/t/015_stream.pl, is\nabout:\n\n$h->finish; # errors make the next test fail, so ignore them here\n\nThere's no further test?\n\nI'm somewhat surprised it doesn't cause problems in another ->finish later on,\nwhere we then afterwards just use $h again. Apparently IPC::Run just\nautomagically restarts psql?\n\n\nGreetings,\n\nAndres Freund", "msg_date": "Mon, 30 Jan 2023 16:00:47 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Making background psql nicer to use in tap tests" }, { "msg_contents": "> On 31 Jan 2023, at 01:00, Andres Freund <andres@anarazel.de> wrote:\n\n> I've hacked some on this. I first tried to just introduce a few helper\n> functions in Cluster.pm, but that ended up being awkward. So I bit the bullet\n> and introduced a new class (in BackgroundPsql.pm), and made background_psql()\n> and interactive_psql() return an instance of it.\n\nThanks for working on this!\n\n> This is just a rough prototype. Several function names don't seem great, it\n> need POD documentation, etc.\n\nIt might be rough around the edges but I don't think it's too far off a state\nin which in can be committed, given that it's replacing something even rougher.\nWith documentation and some polish I think we can iterate on it in the tree.\n\nI've played around a lot with it and it seems fairly robust.\n\n> I don't quite like the new interface yet:\n> - It's somewhat common to want to know if there was a failure, but also get\n> the query result, not sure what the best function signature for that is in\n> perl.\n\nWhat if query() returns a list with the return value last? 
The caller will get\nthe return value when assigning a single var as the return, and can get both in\nthose cases when it's interesting. That would make for reasonably readable\ncode in most places?\n\n $ret_val = $h->query(\"SELECT 1;\");\n ($query_result, $ret_val) = $h->query(\"SELECT 1;\"); \n\nReturning a hash seems like a worse option since it will complicate callsites\nwhich only want to know success/failure.\n\n> - query_until() sounds a bit too much like $node->poll_query_until(). Maybe\n> query_wait_until() is better? OTOH, the other function has poll in the name,\n> so maybe it's ok.\n\nquery_until isn't great but query_wait_until is IMO worse since the function\nmay well be used for tests which aren't using longrunning waits. It's also\nvery useful for things which aren't queries at all, like psql backslash\ncommands. I don't have any better ideas though, so +1 for sticking with\nquery_until.\n\n> - right now there's a bit too much logic in background_psql() /\n> interactive_psql() for my taste\n\nNot sure what you mean, I don't think they're especially heavy on logic?\n\n> Those points aside, I think it already makes the tests a good bit more\n> readable. My WIP vacuum_defer_cleanup_age patch shrunk by half with it.\n\nThe test for \\password in the SCRAM iteration count patch shrunk to 1/3 of the\nprevious coding.\n\n> I think with a bit more polish it's easy enough to use that we could avoid a\n> good number of those one-off psql's that we do all over.\n\nAgreed, and ideally implement tests which were left unwritten due to the old\nAPI being clunky.\n\n+ # feed the query to psql's stdin, follwed by \\n (so psql processes the\n\ns/follwed/followed/\n\n+A default timeout of $PostgreSQL::Test::Utils::timeout_default is set up,\n+which can modified later.\n\nThis require a bit of knowledge about the internals which I think we should\nhide in this new API. 
How about providing a function for defining the timeout?\n\nRe timeouts: one thing I've done repeatedly is to use short timeouts and reset\nthem per query, and that turns pretty ugly fast. I hacked up your patch to\nprovide $h->reset_timer_before_query() which then injects a {timeout}->start\nbefore running each query without the caller having to do it. Not sure if I'm\nalone in doing that but if not I think it makes sense to add.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 14 Mar 2023 21:24:32 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Making background psql nicer to use in tap tests" }, { "msg_contents": "Hi,\n\nOn 2023-03-14 21:24:32 +0100, Daniel Gustafsson wrote:\n> > On 31 Jan 2023, at 01:00, Andres Freund <andres@anarazel.de> wrote:\n> \n> > I've hacked some on this. I first tried to just introduce a few helper\n> > functions in Cluster.pm, but that ended up being awkward. So I bit the bullet\n> > and introduced a new class (in BackgroundPsql.pm), and made background_psql()\n> > and interactive_psql() return an instance of it.\n> \n> Thanks for working on this!\n\nThanks for helping it move along :)\n\n\n> > This is just a rough prototype. Several function names don't seem great, it\n> > need POD documentation, etc.\n> \n> It might be rough around the edges but I don't think it's too far off a state\n> in which in can be committed, given that it's replacing something even rougher.\n> With documentation and some polish I think we can iterate on it in the tree.\n\nCool.\n\n\n> > I don't quite like the new interface yet:\n> > - It's somewhat common to want to know if there was a failure, but also get\n> > the query result, not sure what the best function signature for that is in\n> > perl.\n> \n> What if query() returns a list with the return value last? The caller will get\n> the return value when assigning a single var as the return, and can get both in\n> those cases when it's interesting. 
That would make for reasonably readable\n> code in most places?\n\n> $ret_val = $h->query(\"SELECT 1;\");\n> ($query_result, $ret_val) = $h->query(\"SELECT 1;\");\n\nI hate perl.\n\n\n> Returning a hash seems like a worse option since it will complicate callsites\n> which only want to know success/failure.\n\nYea. Perhaps it's worth having a separate function for this? ->query_rc() or such?\n\n\n\n> > - right now there's a bit too much logic in background_psql() /\n> > interactive_psql() for my taste\n> \n> Not sure what you mean, I don't think they're especially heavy on logic?\n\n-EMISSINGWORD on my part. A bit too much duplicated logic.\n\n\n> +A default timeout of $PostgreSQL::Test::Utils::timeout_default is set up,\n> +which can modified later.\n> \n> This require a bit of knowledge about the internals which I think we should\n> hide in this new API. How about providing a function for defining the timeout?\n\n\"definining\" in the sense of accessing it? Or passing one in?\n\n\n> Re timeouts: one thing I've done repeatedly is to use short timeouts and reset\n> them per query, and that turns pretty ugly fast. I hacked up your patch to\n> provide $h->reset_timer_before_query() which then injects a {timeout}->start\n> before running each query without the caller having to do it. Not sure if I'm\n> alone in doing that but if not I think it makes sense to add.\n\nI don't quite understand the use case, but I don't mind it as a functionality.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 14 Mar 2023 18:03:21 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Making background psql nicer to use in tap tests" }, { "msg_contents": "On 2023-01-30 Mo 19:00, Andres Freund wrote:\n> Hi,\n>\n> On 2023-01-30 15:06:46 -0500, Tom Lane wrote:\n>> Andres Freund<andres@anarazel.de> writes:\n>>> It's annoyingly hard to wait for the result of a query in a generic way with\n>>> background_psql(), and more generally for psql. 
background_psql() uses -XAtq,\n>>> which means that we'll not get \"status\" output (like \"BEGIN\" or \"(1 row)\"),\n>>> and that queries not returning anything are completely invisible.\n>> Yeah, the empty-query-result problem was giving me fits recently.\n>> +1 for wrapping this into something more convenient to use.\n> I've hacked some on this. I first tried to just introduce a few helper\n> functions in Cluster.pm, but that ended up being awkward. So I bit the bullet\n> and introduced a new class (in BackgroundPsql.pm), and made background_psql()\n> and interactive_psql() return an instance of it.\n>\n> This is just a rough prototype. Several function names don't seem great, it\n> need POD documentation, etc.\n\n\nSince this class is only intended to have instances created from \nCluster, I would be inclined just to put it at the end of Cluster.pm \ninstead of creating a new file. That makes it clearer that the new \npackage is not standalone. We already have instances of that.\n\nThe first param of the constructor is a bit opaque. If it were going to \nbe called from elsewhere I'd want something a bit more obvious, but I \nguess we can live with it here. An alternative might be \nmultiple_constructors (e.g. new_background, new_interactive) which use a \ncommon private routine.\n\nDon't have comments yet on the other things, will continue looking.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com", "msg_date": "Wed, 15 Mar 2023 10:10:20 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Making background psql nicer to use in tap tests" }, { "msg_contents": "> On 15 Mar 2023, at 02:03, Andres Freund <andres@anarazel.de> wrote:\n\n>> Returning a hash seems like a worse option since it will complicate callsites\n>> which only want to know success/failure.\n> \n> Yea. Perhaps it's worth having a separate function for this? 
->query_rc() or such?\n\nIf we are returning a hash then I agree it should be a separate function.\nMaybe Andrew has input on which is the most Perl way of doing this.\n\n>>> - right now there's a bit too much logic in background_psql() /\n>>> interactive_psql() for my taste\n>> \n>> Not sure what you mean, I don't think they're especially heavy on logic?\n> \n> -EMISSINGWORD on my part. A bit too much duplicated logic.\n\nThat makes more sense, and I can kind of agree. I don't think it's too bad but\nI agree there is room for improvement.\n\n>> +A default timeout of $PostgreSQL::Test::Utils::timeout_default is set up,\n>> +which can modified later.\n>> \n>> This require a bit of knowledge about the internals which I think we should\n>> hide in this new API. How about providing a function for defining the timeout?\n> \n> \"definining\" in the sense of accessing it? Or passing one in?\n\nI meant passing one in.\n\n>> Re timeouts: one thing I've done repeatedly is to use short timeouts and reset\n>> them per query, and that turns pretty ugly fast. I hacked up your patch to\n>> provide $h->reset_timer_before_query() which then injects a {timeout}->start\n>> before running each query without the caller having to do it. Not sure if I'm\n>> alone in doing that but if not I think it makes sense to add.\n> \n> I don't quite understand the use case, but I don't mind it as a functionality.\n\nI've used it a lot when I want to run n command which each should finish\nquickly or not at all. So one time budget per command rather than having a\nlonger timeout for a set of commands that comprise a test. It can be done\nalready today by calling ->start but it doesn't exactly make the code cleaner.\n\nAs mentioned off-list I did some small POD additions when reviewing, so I've\nattached them here in a v2 in the hopes that it might be helpful. 
I've also\nincluded the above POC for restarting the timeout per query to show what I\nmeant.\n\n--\nDaniel Gustafsson", "msg_date": "Fri, 17 Mar 2023 10:48:09 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Making background psql nicer to use in tap tests" }, { "msg_contents": "On 2023-03-17 Fr 05:48, Daniel Gustafsson wrote:\n>> On 15 Mar 2023, at 02:03, Andres Freund<andres@anarazel.de> wrote:\n>>> Returning a hash seems like a worse option since it will complicate callsites\n>>> which only want to know success/failure.\n>> Yea. Perhaps it's worth having a separate function for this? ->query_rc() or such?\n> If we are returning a hash then I agree it should be a separate function.\n> Maybe Andrew has input on which is the most Perl way of doing this.\n\n\nI think the perlish way is use the `wantarray` function. Perl knows if \nyou're expecting a scalar return value or a list (which includes a hash).\n\n\n    return wantarray ? $retval : (list or hash);\n\n\nA few more issues:\n\nA common perl idiom is to start private routine names with an \nunderscore. so I'd rename wait_connect to _wait_connect;\n\nWhy is $restart_before_query a package/class level value instead of an \ninstance value? And why can we only ever set it to 1 but not back again? \nMaybe we don't want to, but it looks odd.\n\nIf we are going to keep this as a separate package, then we should put \nsome code in the constructor to prevent it being called from elsewhere \nthan the Cluster package. 
e.g.\n\n     # this constructor should only be called from PostgreSQL::Test::Cluster\n     my ($package, $file, $line) = caller;\n\n     die \"Forbidden caller of constructor: package: $package, file: \n$file:$line\"\n       unless $package eq 'PostgreSQL::Test::Cluster';\n\nThis should refer to the full class name:\n\n+=item $node->background_psql($dbname, %params) => BackgroundPsql instance\n\n\nStill reviewing ...\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com", "msg_date": "Fri, 17 Mar 2023 09:48:31 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Making background psql nicer to use in tap tests" }, { "msg_contents": "> On 17 Mar 2023, at 14:48, Andrew Dunstan <andrew@dunslane.net> wrote:\n> On 2023-03-17 Fr 05:48, Daniel Gustafsson wrote:\n>>> On 15 Mar 2023, at 02:03, Andres Freund <andres@anarazel.de>\n>>> wrote:\n>>> \n>>>> Returning a hash seems like a worse option since it will complicate callsites\n>>>> which only want to know success/failure.\n>>>> \n>>> Yea. Perhaps it's worth having a separate function for this? ->query_rc() or such?\n>>> \n>> If we are returning a hash then I agree it should be a separate function.\n>> Maybe Andrew has input on which is the most Perl way of doing this.\n> \n> I think the perlish way is use the `wantarray` function. Perl knows if you're expecting a scalar return value or a list (which includes a hash).\n> \n> return wantarray ? $retval : (list or hash);\n\nAha, TIL. That seems like precisely what we want. \n\n> A common perl idiom is to start private routine names with an underscore. so I'd rename wait_connect to _wait_connect;\n\nThere are quite a few routines documented as internal in Cluster.pm which don't\nstart with an underscore. Should we change them as well? 
I'm happy to prepare\na separate patch to address that if we want that.\n\n> Why is $restart_before_query a package/class level value instead of an instance value? And why can we only ever set it to 1 but not back again? Maybe we don't want to, but it looks odd.\n\nIt was mostly a POC to show what I meant with the functionality. I think there\nshould be a way to turn it off (set it to zero) even though I doubt it will be\nused much.\n\n> If we are going to keep this as a separate package, then we should put some code in the constructor to prevent it being called from elsewhere than the Cluster package. e.g.\n> \n> # this constructor should only be called from PostgreSQL::Test::Cluster\n> my ($package, $file, $line) = caller;\n> \n> die \"Forbidden caller of constructor: package: $package, file: $file:$line\"\n> unless $package eq 'PostgreSQL::Test::Cluster';\n\nI don't have strong feelings about where to place this, but Cluster.pm is\nalready quite long so I see a small upside to keeping it separate to not make\nthat worse.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Fri, 17 Mar 2023 15:08:11 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Making background psql nicer to use in tap tests" }, { "msg_contents": "On 2023-03-17 Fr 10:08, Daniel Gustafsson wrote:\n>\n>> A common perl idiom is to start private routine names with an underscore. so I'd rename wait_connect to _wait_connect;\n> There are quite a few routines documented as internal in Cluster.pm which don't\n> start with an underscore. Should we change them as well? I'm happy to prepare\n> a separate patch to address that if we want that.\n\n\nPossibly. There are two concerns. First, make sure that they really are \nprivate. Last time I looked I think I noticed at least one thing that \nwas alleged to be private but was called from a TAP script. 
Second, \nunless we backpatch it there will be some drift between branches, which \ncan make backpatching things a bit harder. But by all means prep a patch \nso we can see the scope of the issue.\n\n\n>\n>> Why is $restart_before_query a package/class level value instead of an instance value? And why can we only ever set it to 1 but not back again? Maybe we don't want to, but it looks odd.\n> It was mostly a POC to show what I meant with the functionality. I think there\n> should be a way to turn it off (set it to zero) even though I doubt it will be\n> used much.\n\n\nA common idiom is to have a composite getter/setter method for object \nproperties something like this\n\n\n sub settingname\n\n {\n\n   my ($self, $arg) = @_;\n\n   $self->{settingname} = $arg if defined $arg;\n\n   return $self->{settingname};\n\n }\n\n\n>\n>> If we are going to keep this as a separate package, then we should put some code in the constructor to prevent it being called from elsewhere than the Cluster package. e.g.\n>>\n>> # this constructor should only be called from PostgreSQL::Test::Cluster\n>> my ($package, $file, $line) = caller;\n>> \n>> die \"Forbidden caller of constructor: package: $package, file: $file:$line\"\n>> unless $package eq 'PostgreSQL::Test::Cluster';\n> I don't have strong feelings about where to place this, but Cluster.pm is\n> already quite long so I see a small upside to keeping it separate to not make\n> that worse.\n>\n\nYeah, I can go along with that.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com", "msg_date": "Fri, 17 Mar 2023 12:25:14 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Making background psql nicer to use in tap tests" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n\n> On 2023-03-17 Fr 10:08, Daniel Gustafsson wrote:\n>>> Why is $restart_before_query a package/class level value instead of\n>>> an instance value? And why can we only ever set it to 1 but not back\n>>> again? Maybe we don't want to, but it looks odd.\n>> It was mostly a POC to show what I meant with the functionality. 
I think there\n>> should be a way to turn it off (set it to zero) even though I doubt it will be\n>> used much.\n>\n>\n> A common idiom is to have a composite getter/setter method for object\n> properties something like this\n>\n>\n> sub settingname\n> {\n> my ($self, $arg) = @_;\n> $self->{settingname} = $arg if defined $arg;\n> return $self->{settingname};\n> }\n\nOr, if undef is a valid value:\n\n\n sub settingname\n {\n my $self = shift;\n $self->[settingname} = shift if @_;\n return $self->{settingname};\n }\n\n- ilmari\n\n\n", "msg_date": "Fri, 17 Mar 2023 18:07:40 +0000", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": false, "msg_subject": "Re: Making background psql nicer to use in tap tests" }, { "msg_contents": "On 2023-03-17 Fr 14:07, Dagfinn Ilmari Mannsåker wrote:\n> Andrew Dunstan<andrew@dunslane.net> writes:\n>\n>> On 2023-03-17 Fr 10:08, Daniel Gustafsson wrote:\n>>>> Why is $restart_before_query a package/class level value instead of\n>>>> an instance value? And why can we only ever set it to 1 but not back\n>>>> again? Maybe we don't want to, but it looks odd.\n>>> It was mostly a POC to show what I meant with the functionality. 
I think there\n>>> should be a way to turn it off (set it to zero) even though I doubt it will be\n>>> used much.\n>>\n>> A common idiom is to have a composite getter/setter method for object\n>> properties something like this\n>>\n>>\n>> sub settingname\n>> {\n>> my ($self, $arg) = @_;\n>> $self->{settingname} = $arg if defined $arg;\n>> return $self->{settingname};\n>> }\n> Or, if undef is a valid value:\n>\n>\n> sub settingname\n> {\n> my $self = shift;\n> $self->[settingname} = shift if @_;\n> return $self->{settingname};\n> }\n>\n\n\nYes, I agree that's better (modulo the bracket typo)\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com", "msg_date": "Fri, 17 Mar 2023 18:12:58 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Making background psql nicer to use in tap tests" }, { "msg_contents": "Hi,\n\nOn 2023-03-17 12:25:14 -0400, Andrew Dunstan wrote:\n> On 2023-03-17 Fr 10:08, Daniel Gustafsson wrote:\n> > > If we are going to keep this as a separate package, then we should put some code in the constructor to prevent it being called from elsewhere than the Cluster package. e.g.\n> > > \n> > > # this constructor should only be called from PostgreSQL::Test::Cluster\n> > > my ($package, $file, $line) = caller;\n> > > die \"Forbidden caller of constructor: package: $package, file: $file:$line\"\n> > > unless $package eq 'PostgreSQL::Test::Cluster';\n> > I don't have strong feelings about where to place this, but Cluster.pm is\n> > already quite long so I see a small upside to keeping it separate to not make\n> > that worse.\n> > \n> \n> Yeah, I can go along with that.\n\nCool - I'd prefer a separate file. 
I do find Cluster.pm somewhat unwieldy at\nthis point, and I suspect that we'll end up with additional helpers around\nBackgroundPsql.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 17 Mar 2023 15:58:55 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Making background psql nicer to use in tap tests" }, { "msg_contents": "On 2023-03-17 Fr 18:58, Andres Freund wrote:\n> Hi,\n>\n> On 2023-03-17 12:25:14 -0400, Andrew Dunstan wrote:\n>> On 2023-03-17 Fr 10:08, Daniel Gustafsson wrote:\n>>>> If we are going to keep this as a separate package, then we should put some code in the constructor to prevent it being called from elsewhere than the Cluster package. e.g.\n>>>>\n>>>> # this constructor should only be called from PostgreSQL::Test::Cluster\n>>>> my ($package, $file, $line) = caller;\n>>>> die \"Forbidden caller of constructor: package: $package, file: $file:$line\"\n>>>> unless $package eq 'PostgreSQL::Test::Cluster';\n>>> I don't have strong feelings about where to place this, but Cluster.pm is\n>>> already quite long so I see a small upside to keeping it separate to not make\n>>> that worse.\n>>>\n>> Yeah, I can go along with that.\n> Cool - I'd prefer a separate file. I do find Cluster.pm somewhat unwieldy at\n> this point, and I suspect that we'll end up with additional helpers around\n> BackgroundPsql.\n>\n\nYeah. BTW, a better test than the one above would be\n\n\n    $package->isa(\"PostgreSQL::Test::Cluster\")\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com", "msg_date": "Sat, 18 Mar 2023 18:07:46 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Making background psql nicer to use in tap tests" }, { "msg_contents": "> On 18 Mar 2023, at 23:07, Andrew Dunstan <andrew@dunslane.net> wrote:\n\n> BTW, a better test than the one above would be\n> \n> $package->isa(\"PostgreSQL::Test::Cluster\")\n\nAttached is a quick updated v3 of the patch which, to the best of my Perl\nabilities, tries to address the comments raised here.\n\n--\nDaniel Gustafsson", "msg_date": "Thu, 23 Mar 2023 23:36:22 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Making background psql nicer to use in tap tests" }, { "msg_contents": "The attached v4 fixes some incorrect documentation (added by me in v3), and\nfixes that background_psql() didn't honor on_error_stop and extraparams passed\nby the user. 
I've also added a commit which implements the \\password test from\nthe SCRAM iteration count patchset as well as cleaned up a few IPC::Run\nincludes from test scripts.\n\n--\nDaniel Gustafsson", "msg_date": "Fri, 31 Mar 2023 22:33:23 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Making background psql nicer to use in tap tests" }, { "msg_contents": "> On 31 Mar 2023, at 22:33, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n> The attached v4 fixes some incorrect documentation (added by me in v3), and\n> fixes that background_psql() didn't honor on_error_stop and extraparams passed\n> by the user. I've also added a commit which implements the \\password test from\n> the SCRAM iteration count patchset as well as cleaned up a few IPC::Run\n> includes from test scripts.\n\nAnd a v5 to fix a test failure in recovery tests.\n\n--\nDaniel Gustafsson", "msg_date": "Sun, 2 Apr 2023 22:24:16 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Making background psql nicer to use in tap tests" }, { "msg_contents": "Hi,\n\nOn 2023-04-02 22:24:16 +0200, Daniel Gustafsson wrote:\n> And a v5 to fix a test failure in recovery tests.\n\nThanks for working on this!\n\n\nThere's this XXX that I added:\n\n> @@ -57,11 +51,10 @@ sub test_streaming\n> \tCOMMIT;\n> \t});\n> \n> -\t$in .= q{\n> -\tCOMMIT;\n> -\t\\q\n> -\t};\n> -\t$h->finish; # errors make the next test fail, so ignore them here\n> +\t$h->query_safe('COMMIT');\n> +\t$h->quit;\n> +\t# XXX: Not sure what this means\n> + # errors make the next test fail, so ignore them here\n> \n> \t$node_publisher->wait_for_catchup($appname);\n\nI still don't know what that comment is supposed to mean, unfortunately.\n\nGreetings,\n\nAndres Freund", "msg_date": "Sun, 2 Apr 2023 14:37:58 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Making background psql nicer to use in tap tests" 
}, { "msg_contents": "> On 2 Apr 2023, at 23:37, Andres Freund <andres@anarazel.de> wrote:\n\n> There's this XXX that I added:\n> \n>> @@ -57,11 +51,10 @@ sub test_streaming\n>> \tCOMMIT;\n>> \t});\n>> \n>> -\t$in .= q{\n>> -\tCOMMIT;\n>> -\t\\q\n>> -\t};\n>> -\t$h->finish; # errors make the next test fail, so ignore them here\n>> +\t$h->query_safe('COMMIT');\n>> +\t$h->quit;\n>> +\t# XXX: Not sure what this means\n>> + # errors make the next test fail, so ignore them here\n>> \n>> \t$node_publisher->wait_for_catchup($appname);\n> \n> I still don't know what that comment is supposed to mean, unfortunately.\n\nMy reading of it is that it's ignoring any croak errors which IPC::Run might\nthrow if ->finish() isn't able to reap the psql process which had the \\q.\n\nI've added Amit who committed it in 216a784829c on cc: to see if he remembers\nthe comment in question and can shed some light. Skimming the linked thread\nyields no immediate clues.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 3 Apr 2023 10:39:41 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Making background psql nicer to use in tap tests" }, { "msg_contents": "Unless there are objections I plan to get this in before the freeze, in order\nto have better interactive tests starting with 16. With a little bit of\ndocumentation polish I think it's ready.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Wed, 5 Apr 2023 23:44:31 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Making background psql nicer to use in tap tests" }, { "msg_contents": "> On 5 Apr 2023, at 23:44, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n> Unless there are objections I plan to get this in before the freeze, in order\n> to have better interactive tests starting with 16. 
With a little bit of\ndocumentation polish I think it's ready.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Wed, 5 Apr 2023 23:44:31 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Making background psql nicer to use in tap tests" }, { "msg_contents": "> On 5 Apr 2023, at 23:44, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n> Unless there are objections I plan to get this in before the freeze, in order\n> to have better interactive tests starting with 16. 
With a little bit of\n>> documentation polish I think it's ready.\n> When looking at the CFBot failure on Linux and Windows (not on macOS) I noticed\n> that it was down to the instance lacking IO::Pty.\n>\n> [19:59:12.609](1.606s) ok 1 - scram_iterations in server side ROLE\n> Can't locate IO/Pty.pm in @INC (you may need to install the IO::Pty module) (@INC contains: /tmp/cirrus-ci-build/src/test/perl /tmp/cirrus-ci-build/src/test/authentication /etc/perl /usr/local/lib/i386-linux-gnu/perl/5.32.1 /usr/local/share/perl/5.32.1 /usr/lib/i386-linux-gnu/perl5/5.32 /usr/share/perl5 /usr/lib/i386-linux-gnu/perl/5.32 /usr/share/perl/5.32 /usr/local/lib/site_perl) at /usr/share/perl5/IPC/Run.pm line 1828.\n>\n> Skimming the VM creation [0] it seems like it should be though? On macOS the\n> module is installed inside Cirrus and the test runs fine.\n>\n> I don't think we should go ahead with a patch that refactors interactive_psql\n> only to SKIP over it in CI (which is what the tab_completion test does now), so\n> let's wait until we have that sorted before going ahead.\n\n\nIt should probably be added to config/check_modules.pl if we're going to \nuse it, but it seems to be missing for Strawberry Perl and msys/ucrt64 \nperl and I'm not sure how easy it will be to add there. It would \ncertainly add an installation burden for test instances at the very least.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-04-07 Fr 09:32, Daniel\n Gustafsson wrote:\n\n\n\nOn 5 Apr 2023, at 23:44, Daniel Gustafsson <daniel@yesql.se> wrote:\n\nUnless there are objections I plan to get this in before the freeze, in order\nto have better interactive tests starting with 16. 
With a little bit of\n>> documentation polish I think it's ready.\n> When looking at the CFBot failure on Linux and Windows (not on macOS) I noticed\n> that it was down to the instance lacking IO::Pty.\n>\n> [19:59:12.609](1.606s) ok 1 - scram_iterations in server side ROLE\n> Can't locate IO/Pty.pm in @INC (you may need to install the IO::Pty module) (@INC contains: /tmp/cirrus-ci-build/src/test/perl /tmp/cirrus-ci-build/src/test/authentication /etc/perl /usr/local/lib/i386-linux-gnu/perl/5.32.1 /usr/local/share/perl/5.32.1 /usr/lib/i386-linux-gnu/perl5/5.32 /usr/share/perl5 /usr/lib/i386-linux-gnu/perl/5.32 /usr/share/perl/5.32 /usr/local/lib/site_perl) at /usr/share/perl5/IPC/Run.pm line 1828.\n>\n> Skimming the VM creation [0] it seems like it should be though? On macOS the\n> module is installed inside Cirrus and the test runs fine.\n>\n> I don't think we should go ahead with a patch that refactors interactive_psql\n> only to SKIP over it in CI (which is what the tab_completion test does now), so\n> let's wait until we have that sorted before going ahead.\n\n\nIt should probably be added to config/check_modules.pl if we're going to \nuse it, but it seems to be missing for Strawberry Perl and msys/ucrt64 \nperl and I'm not sure how easy it will be to add there. It would \ncertainly add an installation burden for test instances at the very least.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com", "msg_date": "Fri, 7 Apr 2023 10:55:19 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Making background psql nicer to use in tap tests" }, { "msg_contents": "Hi,\n\nOn 2023-04-07 15:32:12 +0200, Daniel Gustafsson wrote:\n> > On 5 Apr 2023, at 23:44, Daniel Gustafsson <daniel@yesql.se> wrote:\n> > \n> > Unless there are objections I plan to get this in before the freeze, in order\n> > to have better interactive tests starting with 16. 
Why\nwould we expect this patchset to change what dependencies use of\ninteractive_psql() has?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 7 Apr 2023 07:58:25 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Making background psql nicer to use in tap tests" }, { "msg_contents": "Hi,\n\nOn 2023-04-07 10:55:19 -0400, Andrew Dunstan wrote:\n> It should probably be added to config/check_modules.pl if we're going to use\n> it, but it seems to be missing for Strawberry Perl and msys/ucrt64 perl and\n> I'm not sure how easy it will be to add there. It would certainly add an\n> installation burden for test instances at the very least.\n\nThe last time I tried, it can't be installed on windows with cpan either, the\nmodule simply doesn't have the necessary windows bits - likely because\ntraditionally windows didn't really have ptys. I think some stuff has been\nadded, but it probably would still require a bunch of portability work.\n\nNote that we normally don't even build with readline support on windows - so\nthere's not really much point in using IO::Pty there. While I've gotten that\nto work manually not too long ago, it's still manual and not documented etc.\n\n\nAfaict the failures are purely about patch 2, not 1, right?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 7 Apr 2023 08:04:13 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Making background psql nicer to use in tap tests" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2023-04-07 15:32:12 +0200, Daniel Gustafsson wrote:\n>> I don't think we should go ahead with a patch that refactors interactive_psql\n>> only to SKIP over it in CI (which is what the tab_completion test does now), so\n>> let's wait until we have that sorted before going ahead.\n\n> Maybe I am a bit confused, but isn't that just an existing requirement? 
Why\n> would we expect this patchset to change what dependencies use of\n> interactive_psql() has?\n\nIt is an existing requirement, but only for a test that's not too\ncritical. If interactive_psql starts getting used for more interesting\nthings, we might be sad that the coverage is weak.\n\nHaving said that, weak coverage is better than no coverage. I don't\nthink this point should be a show-stopper for committing.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 07 Apr 2023 11:52:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Making background psql nicer to use in tap tests" }, { "msg_contents": "> On 7 Apr 2023, at 16:58, Andres Freund <andres@anarazel.de> wrote:\n\n> Note it just fails on the 32build, not the 64bit build. Unfortunately I don't\n> think debian's multiarch in bullseye support installing enough of perl in\n> 32bit and 64bit.\n\nI should probably avoid parsing logfiles with fever-induced brainfog, I\nconfused myself to think it was both =(\n\n> Maybe I am a bit confused, but isn't that just an existing requirement? Why\n> would we expect this patchset to change what dependencies use of\n> interactive_psql() has?\n\nCorrect, there is no change from the current implementation.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Fri, 7 Apr 2023 17:55:08 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Making background psql nicer to use in tap tests" }, { "msg_contents": "Hi,\n\nOn 2023-04-07 11:52:37 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2023-04-07 15:32:12 +0200, Daniel Gustafsson wrote:\n> >> I don't think we should go ahead with a patch that refactors interactive_psql\n> >> only to SKIP over it in CI (which is what the tab_completion test does now), so\n> >> let's wait until we have that sorted before going ahead.\n> \n> > Maybe I am a bit confused, but isn't that just an existing requirement? 
Why\n> > would we expect this patchset to change what dependencies use of\n> > interactive_psql() has?\n> \n> It is an existing requirement, but only for a test that's not too\n> critical. If interactive_psql starts getting used for more interesting\n> things, we might be sad that the coverage is weak.\n\nI don't really expect it to be used for non-critical things - after all,\ninteractive_psql() also depends on psql being built with readline support,\nwhich we traditionally don't have on windows... For most tasks background_psql\nshould suffice...\n\n\n> Having said that, weak coverage is better than no coverage. I don't\n> think this point should be a show-stopper for committing.\n\nYea.\n\nOne thing I wonder is whether we should have a central function for checking\nif interactive_psql() is available, instead of copying 010_tab_completion.pl's\nlogic for it into multiple tests. But that could come later too.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 7 Apr 2023 08:58:23 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Making background psql nicer to use in tap tests" }, { "msg_contents": "> On 7 Apr 2023, at 17:04, Andres Freund <andres@anarazel.de> wrote:\n\n> Afaict the failures are purely about patch 2, not 1, right?\n\nCorrect. The attached v6 wraps the interactive_psql test in a SKIP block with\na conditional on IO::Pty being available.\n\n--\nDaniel Gustafsson", "msg_date": "Fri, 7 Apr 2023 18:14:37 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Making background psql nicer to use in tap tests" }, { "msg_contents": "> On 7 Apr 2023, at 18:14, Daniel Gustafsson <daniel@yesql.se> wrote:\n>> On 7 Apr 2023, at 17:04, Andres Freund <andres@anarazel.de> wrote:\n\n>> Afaict the failures are purely about patch 2, not 1, right?\n> \n> Correct. 
The attached v6 wraps the interactive_psql test in a SKIP block with\na conditional on IO::Pty being available.\n\n--\nDaniel Gustafsson", "msg_date": "Fri, 7 Apr 2023 18:14:37 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Making background psql nicer to use in tap tests" }, { "msg_contents": "> On 7 Apr 2023, at 18:14, Daniel Gustafsson <daniel@yesql.se> wrote:\n>> On 7 Apr 2023, at 17:04, Andres Freund <andres@anarazel.de> wrote:\n\n>> Afaict the failures are purely about patch 2, not 1, right?\n> \n> Correct. 
Maybe the animal owner\n(on cc) have any insights?\n\nThe test has passed on several different platforms in the buildfarm, including\nLinux, Solaris, macOS, NetBSD, FreeBSD and other OpenBSD animals. It also\npassed in an OpenBSD VM running with our Cirrus framework.\n\nUnless there are objections raised I propose leaving it in for now, and I will\nreturn to it tomorrow after some sleep, and install OpenBSD 6.9 to see if it's\nreproducible.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Fri, 7 Apr 2023 23:59:44 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Making background psql nicer to use in tap tests" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 7 Apr 2023, at 23:01, Daniel Gustafsson <daniel@yesql.se> wrote:\n> Staring at this I've been unable to figure out if there an underlying problem\n> here or a flaky testrun, since I can't reproduce it. Maybe the animal owner\n> (on cc) have any insights?\n\n> The test has passed on several different platforms in the buildfarm, including\n> Linux, Solaris, macOS, NetBSD, FreeBSD and other OpenBSD animals. It also\n> passed in an OpenBSD VM running with our Cirrus framework.\n\nprion and mantid have now failed with the same symptom. I don't\nsee a pattern, but it's not OpenBSD-only. It will be interesting\nto see if the failure is intermittent or not on those animals.\n\n> Unless there are objections raised I propose leaving it in for now, and I will\n> return to it tomorrow after some sleep, and install OpenBSD 6.9 to see if it's\n> reproducible.\n\nAgreed, we don't need a hasty revert here. 
Better to gather data.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 07 Apr 2023 18:35:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Making background psql nicer to use in tap tests" }, { "msg_contents": "> On 8 Apr 2023, at 00:35, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Daniel Gustafsson <daniel@yesql.se> writes:\n>>> On 7 Apr 2023, at 23:01, Daniel Gustafsson <daniel@yesql.se> wrote:\n>> Staring at this I've been unable to figure out if there an underlying problem\n>> here or a flaky testrun, since I can't reproduce it. Maybe the animal owner\n>> (on cc) have any insights?\n> \n>> The test has passed on several different platforms in the buildfarm, including\n>> Linux, Solaris, macOS, NetBSD, FreeBSD and other OpenBSD animals. It also\n>> passed in an OpenBSD VM running with our Cirrus framework.\n> \n> prion and mantid have now failed with the same symptom. I don't\n> see a pattern, but it's not OpenBSD-only. It will be interesting\n> to see if the failure is intermittent or not on those animals.\n\nIt would be interesting to know how far in the pumped input they get, if they\ntime out on the first one with nothing going through? Will investigate further\ntomorrow to see.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Sat, 8 Apr 2023 00:59:18 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Making background psql nicer to use in tap tests" }, { "msg_contents": "> On 8 Apr 2023, at 00:59, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n>> On 8 Apr 2023, at 00:35, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> \n>> Daniel Gustafsson <daniel@yesql.se> writes:\n>>>> On 7 Apr 2023, at 23:01, Daniel Gustafsson <daniel@yesql.se> wrote:\n>>> Staring at this I've been unable to figure out if there an underlying problem\n>>> here or a flaky testrun, since I can't reproduce it. 
Maybe the animal owner\n>>> (on cc) have any insights?\n>> \n>>> The test has passed on several different platforms in the buildfarm, including\n>>> Linux, Solaris, macOS, NetBSD, FreeBSD and other OpenBSD animals. It also\n>>> passed in an OpenBSD VM running with our Cirrus framework.\n>> \n>> prion and mantid have now failed with the same symptom. I don't\n>> see a pattern, but it's not OpenBSD-only. It will be interesting\n>> to see if the failure is intermittent or not on those animals.\n\nmorepork has failed again, which is good, since intermittent failures are\nharder to track down.\n\n> It would be interesting to know how far in the pumped input they get, if they\n> time out on the first one with nothing going through? Will investigate further\n> tomorrow to see.\n\nActually, one quick datapoint. prion and mantid report running IPC::Run\nversion 0.92, and morepork 0.96. Animals that pass are running 20180523.0,\n20200505.0, 20220807.0 or similar versions. We don't print the IO::Pty version\nduring configure, but maybe this is related to older versions of the modules\nand this test (not all of them apparently) need to SKIP if IO::Pty is missing\nor too old? Somewhere to start looking at the very least.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Sat, 8 Apr 2023 01:14:26 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Making background psql nicer to use in tap tests" }, { "msg_contents": "On 2023-04-07 Fr 19:14, Daniel Gustafsson wrote:\n>> On 8 Apr 2023, at 00:59, Daniel Gustafsson<daniel@yesql.se> wrote:\n>>\n>>> On 8 Apr 2023, at 00:35, Tom Lane<tgl@sss.pgh.pa.us> wrote:\n>>>\n>>> Daniel Gustafsson<daniel@yesql.se> writes:\n>>>>> On 7 Apr 2023, at 23:01, Daniel Gustafsson<daniel@yesql.se> wrote:\n>>>> Staring at this I've been unable to figure out if there an underlying problem\n>>>> here or a flaky testrun, since I can't reproduce it. 
Maybe the animal owner\n>>>> (on cc) have any insights?\n>>>> The test has passed on several different platforms in the buildfarm, including\n>>>> Linux, Solaris, macOS, NetBSD, FreeBSD and other OpenBSD animals. It also\n>>>> passed in an OpenBSD VM running with our Cirrus framework.\n>>> prion and mantid have now failed with the same symptom. I don't\n>>> see a pattern, but it's not OpenBSD-only. It will be interesting\n>>> to see if the failure is intermittent or not on those animals.\n> morepork has failed again, which is good, since intermittent failures are\n> harder to track down.\n>\n>> It would be interesting to know how far in the pumped input they get, if they\n>> time out on the first one with nothing going through? Will investigate further\n>> tomorrow to see.\n> Actually, one quick datapoint. prion and mantid report running IPC::Run\n> version 0.92, and morepork 0.96. Animals that pass are running 20180523.0,\n> 20200505.0, 20220807.0 or similar versions. We don't print the IO::Pty version\n> during configure, but maybe this is related to older versions of the modules\n> and this test (not all of them apparently) need to SKIP if IO::Pty is missing\n> or too old? Somewhere to start looking at the very least.\n\n\nThose aren't CPAN version numbers. See <https://metacpan.org/pod/IO::Pty>\n\n\nprion was running 1.10 (dated to 2010). I have just updated it to 1.17 \n(the CPAN latest). 
We'll see if that makes a difference.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com", "msg_date": "Fri, 7 Apr 2023 20:02:18 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Making background psql nicer to use in tap tests" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n>> Actually, one quick datapoint. prion and mantid report running IPC::Run\n>> version 0.92, and morepork 0.96. Animals that pass are running 20180523.0,\n>> 20200505.0, 20220807.0 or similar versions. We don't print the IO::Pty version\n>> during configure, but maybe this is related to older versions of the modules\n>> and this test (not all of them apparently) need to SKIP if IO::Pty is missing\n>> or too old? Somewhere to start looking at the very least.\n\n> prion was running 1.10 (dated to 2010). I have just updated it to 1.17 \n> (the CPAN latest). We'll see if that makes a difference.\n\nI've been doing some checking with perlbrew locally. It appears to not\nbe about IO::Pty so much as IPC::Run: it works with IPC::Run 0.99 but\nnot 0.79. Still bisecting to identify exactly what's the minimum\nokay version.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 07 Apr 2023 20:38:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Making background psql nicer to use in tap tests" }, { "msg_contents": "I wrote:\n> I've been doing some checking with perlbrew locally. It appears to not\n> be about IO::Pty so much as IPC::Run: it works with IPC::Run 0.99 but\n> not 0.79. Still bisecting to identify exactly what's the minimum\n> okay version.\n\nThe answer is: it works with IPC::Run >= 0.98. The version of IO::Pty\ndoesn't appear significant; it works at least back to 1.00 from early\n2002.\n\nIPC::Run 0.98 is relatively new (2018), so I don't think it'd fly\nto make that our new minimum version across-the-board. 
I recommend\njust setting up this one test to SKIP if IPC::Run is too old.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 07 Apr 2023 20:49:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Making background psql nicer to use in tap tests" }, { "msg_contents": "Hi,\n\nOn 2023-04-07 20:49:39 -0400, Tom Lane wrote:\n> I wrote:\n> > I've been doing some checking with perlbrew locally. It appears to not\n> > be about IO::Pty so much as IPC::Run: it works with IPC::Run 0.99 but\n> > not 0.79. Still bisecting to identify exactly what's the minimum\n> > okay version.\n> \n> The answer is: it works with IPC::Run >= 0.98. The version of IO::Pty\n> doesn't appear significant; it works at least back to 1.00 from early\n> 2002.\n> \n> IPC::Run 0.98 is relatively new (2018), so I don't think it'd fly\n> to make that our new minimum version across-the-board. I recommend\n> just setting up this one test to SKIP if IPC::Run is too old.\n\nDoes the test actually take a while before it fails, or is it quick? It's\npossible the failure is caused by 001_password.pl's use of\nset_query_timer_restart(). I don't think other tests do something quite\ncomparable.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 7 Apr 2023 17:57:33 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Making background psql nicer to use in tap tests" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2023-04-07 20:49:39 -0400, Tom Lane wrote:\n>> IPC::Run 0.98 is relatively new (2018), so I don't think it'd fly\n>> to make that our new minimum version across-the-board. I recommend\n>> just setting up this one test to SKIP if IPC::Run is too old.\n\n> Does the test actually take a while before it fails, or is it quick?\n\nIt times out at whatever your PG_TEST_TIMEOUT_DEFAULT is. 
I waited\n3 minutes the first time, and then reduced that to 20sec for the\nrest of the tries ...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 07 Apr 2023 21:04:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Making background psql nicer to use in tap tests" }, { "msg_contents": "> On 8 Apr 2023, at 02:49, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> I wrote:\n>> I've been doing some checking with perlbrew locally. It appears to not\n>> be about IO::Pty so much as IPC::Run: it works with IPC::Run 0.99 but\n>> not 0.79. Still bisecting to identify exactly what's the minimum\n>> okay version.\n> \n> The answer is: it works with IPC::Run >= 0.98. The version of IO::Pty\n> doesn't appear significant; it works at least back to 1.00 from early\n> 2002.\n\nThanks for investigating this!\n\n> IPC::Run 0.98 is relatively new (2018), so I don't think it'd fly\n> to make that our new minimum version across-the-board. \n\nAbsolutely, that's not an option.\n\n> I recommend\n> just setting up this one test to SKIP if IPC::Run is too old.\n\nYes, will do that when I have a little more time at hand for monitoring the BF\nlater today.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Sat, 8 Apr 2023 09:53:58 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Making background psql nicer to use in tap tests" }, { "msg_contents": "\nOn 2023-04-08 09:53, Daniel Gustafsson wrote:\n\n>> I recommend\n>> just setting up this one test to SKIP if IPC::Run is too old.\n> \n> Yes, will do that when I have a little more time at hand for monitoring the BF\n> later today.\n\nSo what do you want me to do about grison and morepork? 
I guess I could \ntry to install a newer version of IPC::Run from CPAN or should I just \nleave it be?\n\n/Mikael\n\n\n\n", "msg_date": "Sat, 8 Apr 2023 10:00:50 +0200", "msg_from": "=?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Making background psql nicer to use in tap tests" }, { "msg_contents": "=?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@gmail.com> writes:\n> So what do you want me to do about grison and morepork? I guess I could \n> try to install a newer version of IPC::Run from CPAN or should I just \n> leave it be?\n\nI think \"leave it be\" is fine.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 08 Apr 2023 10:09:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Making background psql nicer to use in tap tests" }, { "msg_contents": "While fixing a recent bug on visibility on a standby [1], I wrote a \nregression test that uses BackgroundPsql to run some queries in a \nlong-running psql session. The problem is that that was refactored in \nv17, commit 664d757531. The test I wrote for v17 doesn't work as it is \non backbranches. Options:\n\n1. Write the new test differently on backbranches. Before 664d757531, \nthe test needs to work a lot harder to use the background psql session, \ncalling pump() etc. That's doable, but as noted in the discussion that \nled to 664d757531, it's laborious and error-prone.\n\n2. Backport commit 664d757531. This might break out-of-tree perl tests \nthat use the background_psql() function. I don't know if any such tests \nexist, and they would need to be changed for v17 anyway, so that seems \nacceptable. Anyone aware of any extensions using the perl test modules?\n\n3. Backport commit 664d757531, but keep the existing background_psql() \nfunction unchanged. 
Add a different constructor to get the v17-style \nBackgroundPsql session, something like \"$node->background_psql_new()\".\n\nI'm leaning towards 3. We might need to backport more perl tests that \nuse background_psql() in the future, backporting the test module will \nmake that easier.\n\nThoughts?\n\n[1] \nhttps://www.postgresql.org/message-id/6b852e98-2d49-4ca1-9e95-db419a2696e0@iki.fi\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Tue, 25 Jun 2024 13:26:23 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Backporting BackgroundPsql" }, { "msg_contents": "On 25/06/2024 13:26, Heikki Linnakangas wrote:\n> While fixing a recent bug on visibility on a standby [1], I wrote a \n> regression test that uses BackgroundPsql to run some queries in a \n> long-running psql session. The problem is that that was refactored in \n> v17, commit 664d757531. The test I wrote for v17 doesn't work as it is \n> on backbranches. Options:\n\nSorry, I meant v16. The BackgroundPsql refactorings went into v16. The \nbackporting question remains for v15 and below.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Tue, 25 Jun 2024 13:32:59 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Backporting BackgroundPsql" }, { "msg_contents": "Hi,\n\nOn 2024-06-25 13:26:23 +0300, Heikki Linnakangas wrote:\n> While fixing a recent bug on visibility on a standby [1], I wrote a\n> regression test that uses BackgroundPsql to run some queries in a\n> long-running psql session. The problem is that that was refactored in v17,\n> commit 664d757531. The test I wrote for v17 doesn't work as it is on\n> backbranches. Options:\n> \n> 1. Write the new test differently on backbranches. Before 664d757531, the\n> test needs to work a lot harder to use the background psql session, calling\n> pump() etc. 
That's doable, but as noted in the discussion that led to\n> 664d757531, it's laborious and error-prone.\n> \n> 2. Backport commit 664d757531. This might break out-of-tree perl tests that\n> use the background_psql() function. I don't know if any such tests exist,\n> and they would need to be changed for v17 anyway, so that seems acceptable.\n> Anyone aware of any extensions using the perl test modules?\n> \n> 3. Backport commit 664d757531, but keep the existing background_psql()\n> function unchanged. Add a different constructor to get the v17-style\n> BackgroundPsql session, something like \"$node->background_psql_new()\".\n> \n> I'm leaning towards 3. We might need to backport more perl tests that use\n> background_psql() in the future, backporting the test module will make that\n> easier.\n> \n> Thoughts?\n\nYes, I've wished for this a couple times. I think 2 or 3 would be reasonable.\nI think 1) often just leads to either tests not being written or being\nfragile...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 25 Jun 2024 04:40:38 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Backporting BackgroundPsql" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2024-06-25 13:26:23 +0300, Heikki Linnakangas wrote:\n>> 1. Write the new test differently on backbranches. Before 664d757531, the\n>> test needs to work a lot harder to use the background psql session, calling\n>> pump() etc. That's doable, but as noted in the discussion that led to\n>> 664d757531, it's laborious and error-prone.\n>> \n>> 2. Backport commit 664d757531. This might break out-of-tree perl tests that\n>> use the background_psql() function. I don't know if any such tests exist,\n>> and they would need to be changed for v17 anyway, so that seems acceptable.\n>> Anyone aware of any extensions using the perl test modules?\n>> \n>> 3. 
Backport commit 664d757531, but keep the existing background_psql()\n>> function unchanged. Add a different constructor to get the v17-style\n>> BackgroundPsql session, something like \"$node->background_psql_new()\".\n\n> Yes, I've wished for this a couple times. I think 2 or 3 would be reasonable.\n> I think 1) often just leads to either tests not being written or being\n> fragile...\n\nI'd vote for (2). (3) is just leaving a foot-gun for people to\nhurt themselves with.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 25 Jun 2024 10:26:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Backporting BackgroundPsql" }, { "msg_contents": "On Tue, Jun 25, 2024 at 7:40 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2024-06-25 13:26:23 +0300, Heikki Linnakangas wrote:\n> > While fixing a recent bug on visibility on a standby [1], I wrote a\n> > regression test that uses BackgroundPsql to run some queries in a\n> > long-running psql session. The problem is that that was refactored in v17,\n> > commit 664d757531. The test I wrote for v17 doesn't work as it is on\n> > backbranches. Options:\n> >\n> > 1. Write the new test differently on backbranches. Before 664d757531, the\n> > test needs to work a lot harder to use the background psql session, calling\n> > pump() etc. That's doable, but as noted in the discussion that led to\n> > 664d757531, it's laborious and error-prone.\n> >\n> > 2. Backport commit 664d757531. This might break out-of-tree perl tests that\n> > use the background_psql() function. I don't know if any such tests exist,\n> > and they would need to be changed for v17 anyway, so that seems acceptable.\n> > Anyone aware of any extensions using the perl test modules?\n> >\n> > 3. Backport commit 664d757531, but keep the existing background_psql()\n> > function unchanged. 
Add a different constructor to get the v17-style\n> > BackgroundPsql session, something like \"$node->background_psql_new()\".\n> >\n> > I'm leaning towards 3. We might need to backport more perl tests that use\n> > background_psql() in the future, backporting the test module will make that\n> > easier.\n> >\n> > Thoughts?\n>\n> Yes, I've wished for this a couple times. I think 2 or 3 would be reasonable.\n> I think 1) often just leads to either tests not being written or being\n> fragile...\n\n+1 to backporting background psql!\n\nI'm also okay with 2 or 3. But note that for 2, there are several\ntests with skip_all and at least one of them uses background_psql\n(031_recovery_conflict.pl), so we'll just have to remember to update\nthose. I assume that is easy enough to do if you grep for\nbackground_psql -- but, just in case you were going to be 100%\ntest-driven :)\n\n- Melanie\n\n\n", "msg_date": "Tue, 25 Jun 2024 11:34:22 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Backporting BackgroundPsql" }, { "msg_contents": "On 2024-06-25 Tu 10:26 AM, Tom Lane wrote:\n> Andres Freund<andres@anarazel.de> writes:\n>> On 2024-06-25 13:26:23 +0300, Heikki Linnakangas wrote:\n>>> 1. Write the new test differently on backbranches. Before 664d757531, the\n>>> test needs to work a lot harder to use the background psql session, calling\n>>> pump() etc. That's doable, but as noted in the discussion that led to\n>>> 664d757531, it's laborious and error-prone.\n>>>\n>>> 2. Backport commit 664d757531. This might break out-of-tree perl tests that\n>>> use the background_psql() function. I don't know if any such tests exist,\n>>> and they would need to be changed for v17 anyway, so that seems acceptable.\n>>> Anyone aware of any extensions using the perl test modules?\n>>>\n>>> 3. Backport commit 664d757531, but keep the existing background_psql()\n>>> function unchanged. 
Add a different constructor to get the v17-style\n>>> BackgroundPsql session, something like \"$node->background_psql_new()\".\n>> Yes, I've wished for this a couple times. I think 2 or 3 would be reasonable.\n>> I think 1) often just leads to either tests not being written or being\n>> fragile...\n> I'd vote for (2). (3) is just leaving a foot-gun for people to\n> hurt themselves with.\n>\n> \t\t\t\n\n\n+1\n\n\nI'd like to get rid of it in its current form at least. Just about all \nthe uses I'm aware of could be transformed to use the Session object \nI've been working on, based either on FFI or a small XS wrapper for some \nof libpq.\n\ncheers\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-06-25 Tu 10:26 AM, Tom Lane\n wrote:\n\n\nAndres Freund <andres@anarazel.de> writes:\n\n\nOn 2024-06-25 13:26:23 +0300, Heikki Linnakangas wrote:\n\n\n1. Write the new test differently on backbranches. Before 664d757531, the\ntest needs to work a lot harder to use the background psql session, calling\npump() etc. That's doable, but as noted in the discussion that led to\n664d757531, it's laborious and error-prone.\n\n2. Backport commit 664d757531. This might break out-of-tree perl tests that\nuse the background_psql() function. I don't know if any such tests exist,\nand they would need to be changed for v17 anyway, so that seems acceptable.\nAnyone aware of any extensions using the perl test modules?\n\n3. Backport commit 664d757531, but keep the existing background_psql()\nfunction unchanged. Add a different constructor to get the v17-style\nBackgroundPsql session, something like \"$node->background_psql_new()\".\n\n\n\n\n\n\nYes, I've wished for this a couple times. I think 2 or 3 would be reasonable.\nI think 1) often just leads to either tests not being written or being\nfragile...\n\n\n\nI'd vote for (2). 
(3) is just leaving a foot-gun for people to\nhurt themselves with.\n\n\t\t\t\n\n\n\n+1\n\n\nI'd like to get rid of it in its current form at least. Just about all the uses I'm aware of could be transformed to use the Session object I've been working on, based either on FFI or a small XS wrapper for some of libpq.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Tue, 25 Jun 2024 11:59:01 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Backporting BackgroundPsql" }, { "msg_contents": "> On 25 Jun 2024, at 16:26, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Andres Freund <andres@anarazel.de> writes:\n>> On 2024-06-25 13:26:23 +0300, Heikki Linnakangas wrote:\n>>> 1. Write the new test differently on backbranches. Before 664d757531, the\n>>> test needs to work a lot harder to use the background psql session, calling\n>>> pump() etc. That's doable, but as noted in the discussion that led to\n>>> 664d757531, it's laborious and error-prone.\n>>> \n>>> 2. Backport commit 664d757531. This might break out-of-tree perl tests that\n>>> use the background_psql() function. I don't know if any such tests exist,\n>>> and they would need to be changed for v17 anyway, so that seems acceptable.\n>>> Anyone aware of any extensions using the perl test modules?\n>>> \n>>> 3. Backport commit 664d757531, but keep the existing background_psql()\n>>> function unchanged. Add a different constructor to get the v17-style\n>>> BackgroundPsql session, something like \"$node->background_psql_new()\".\n> \n>> Yes, I've wished for this a couple times. I think 2 or 3 would be reasonable.\n>> I think 1) often just leads to either tests not being written or being\n>> fragile...\n> \n> I'd vote for (2). (3) is just leaving a foot-gun for people to\n> hurt themselves with.\n\nI agree with this, if we're backporting we should opt for 2. 
The only out of\ntree user of background_psql() that I could find was check_pgactivity but they\nseem to have vendored an old copy of our lib rather than use anything in a our\ntree so we should be fine there.\n\nBefore pulling any triggers I think https://commitfest.postgresql.org/48/4959/\nshould be considered, since Tom found some flaws in the current code around how\ntimers and timeouts are used.\n\nHowever, since Andrew is actively aiming to replace all of this shortly, should\nwe wait a see where that lands to avoid having to backport another library\nchange?\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 25 Jun 2024 22:47:11 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Backporting BackgroundPsql" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> Before pulling any triggers I think https://commitfest.postgresql.org/48/4959/\n> should be considered, since Tom found some flaws in the current code around how\n> timers and timeouts are used.\n\nThat's certainly another issue to consider, but is it really a blocker\nfor this one?\n\n> However, since Andrew is actively aiming to replace all of this shortly, should\n> we wait a see where that lands to avoid having to backport another library\n> change?\n\nI would like to see what he comes up with ... 
but is it likely to\nbe something we'd risk back-patching?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 25 Jun 2024 16:57:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Backporting BackgroundPsql" }, { "msg_contents": "> On 25 Jun 2024, at 22:57, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Daniel Gustafsson <daniel@yesql.se> writes:\n>> Before pulling any triggers I think https://commitfest.postgresql.org/48/4959/\n>> should be considered, since Tom found some flaws in the current code around how\n>> timers and timeouts are used.\n> \n> That's certainly another issue to consider, but is it really a blocker\n> for this one?\n\nIt's not a blocker, but when poking at the code it seems useful to consider the\nopen items around it.\n\n>> However, since Andrew is actively aiming to replace all of this shortly, should\n>> we wait a see where that lands to avoid having to backport another library\n>> change?\n> \n> I would like to see what he comes up with ... but is it likely to\n> be something we'd risk back-patching?\n\nMaybe it'll be a stretch given that it's likely to introduce new dependencies.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 25 Jun 2024 23:10:14 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Backporting BackgroundPsql" }, { "msg_contents": "On 2024-Jun-25, Tom Lane wrote:\n\n> Daniel Gustafsson <daniel@yesql.se> writes:\n\n> > However, since Andrew is actively aiming to replace all of this shortly, should\n> > we wait a see where that lands to avoid having to backport another library\n> > change?\n> \n> I would like to see what he comes up with ... but is it likely to\n> be something we'd risk back-patching?\n\nFWIW I successfully used the preliminary PqFFI stuff Andrew posted to\nwrite a test program for bug #18377, which I think ended up being better\nthan with BackgroundPsql, so I think it's a good way forward. 
As for\nback-patching it, I suspect we're going to end up backpatching the\nframework anyway just because we'll want to have it available for\nbackpatching future tests, even if we keep a backpatch minimal by doing\nonly the framework and not existing tests.\n\nI also backpatched the PqFFI and PostgreSQL::Session modules to older PG\nbranches, to run my test program there. This required only removing\nsome lines from PqFFI.pm that were about importing libpq functions that\nolder libpq didn't have.\n\nOf course, the PostgreSQL::Session stuff is not ready yet, so if we want\nthis test in the tree soon, I don't think we should wait.\n\n\nI'll note, though, that Test::More doesn't work terribly nicely with\nperl threads, but that only relates to my test program and not to PqFFI.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"La gente vulgar sólo piensa en pasar el tiempo;\nel que tiene talento, en aprovecharlo\"\n\n\n", "msg_date": "Wed, 26 Jun 2024 02:12:42 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Backporting BackgroundPsql" }, { "msg_contents": "On Wed, Jun 26, 2024 at 02:12:42AM +0200, Alvaro Herrera wrote:\n> FWIW I successfully used the preliminary PqFFI stuff Andrew posted to\n> write a test program for bug #18377, which I think ended up being better\n> than with BackgroundPsql, so I think it's a good way forward. As for\n> back-patching it, I suspect we're going to end up backpatching the\n> framework anyway just because we'll want to have it available for\n> backpatching future tests, even if we keep a backpatch minimal by doing\n> only the framework and not existing tests.\n> \n> I also backpatched the PqFFI and PostgreSQL::Session modules to older PG\n> branches, to run my test program there. This required only removing\n> some lines from PqFFI.pm that were about importing libpq functions that\n> older libpq didn't have.\n\nNice! 
I definitely +1 the backpatching of the testing bits. This\nstuff can make validating bugs so much easier, particularly when there\nare conflicting parts in the backend after a cherry-pick.\n--\nMichael", "msg_date": "Wed, 26 Jun 2024 09:25:13 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Backporting BackgroundPsql" }, { "msg_contents": "On 26/06/2024 03:25, Michael Paquier wrote:\n> On Wed, Jun 26, 2024 at 02:12:42AM +0200, Alvaro Herrera wrote:\n>> FWIW I successfully used the preliminary PqFFI stuff Andrew posted to\n>> write a test program for bug #18377, which I think ended up being better\n>> than with BackgroundPsql, so I think it's a good way forward. As for\n>> back-patching it, I suspect we're going to end up backpatching the\n>> framework anyway just because we'll want to have it available for\n>> backpatching future tests, even if we keep a backpatch minimal by doing\n>> only the framework and not existing tests.\n>>\n>> I also backpatched the PqFFI and PostgreSQL::Session modules to older PG\n>> branches, to run my test program there. This required only removing\n>> some lines from PqFFI.pm that were about importing libpq functions that\n>> older libpq didn't have.\n> \n> Nice! I definitely +1 the backpatching of the testing bits. This\n> stuff can make validating bugs so much easier, particularly when there\n> are conflicting parts in the backend after a cherry-pick.\n\nI haven't looked closely at the new PgFFI stuff but +1 on that in \ngeneral, and it makes sense to backport that once it lands on master. 
In \nthe meanwhile, I think we should backport BackgroundPsql as it is, to \nmake it possible to backport tests using it right now, even if it is \nshort-lived.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Wed, 26 Jun 2024 10:34:31 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Backporting BackgroundPsql" }, { "msg_contents": "On Wed, Jun 26, 2024 at 3:34 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> I haven't looked closely at the new PgFFI stuff but +1 on that in\n> general, and it makes sense to backport that once it lands on master. In\n> the meanwhile, I think we should backport BackgroundPsql as it is, to\n> make it possible to backport tests using it right now, even if it is\n> short-lived.\n\n+1. The fact that PgFFI may be coming isn't a reason to not back-patch\nthis. The risk of back-patching testing infrastructure is also very\nlow as compared with code; in fact, there's a lot of risk from NOT\nback-patching popular testing infrastructure.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 26 Jun 2024 07:54:42 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Backporting BackgroundPsql" }, { "msg_contents": "On 26/06/2024 14:54, Robert Haas wrote:\n> On Wed, Jun 26, 2024 at 3:34 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>> I haven't looked closely at the new PgFFI stuff but +1 on that in\n>> general, and it makes sense to backport that once it lands on master. In\n>> the meanwhile, I think we should backport BackgroundPsql as it is, to\n>> make it possible to backport tests using it right now, even if it is\n>> short-lived.\n> \n> +1. The fact that PgFFI may be coming isn't a reason to not back-patch\n> this. 
The risk of back-patching testing infrastructure is also very\n> low as compared with code; in fact, there's a lot of risk from NOT\n> back-patching popular testing infrastructure.\n\nOk, I pushed commits to backport BackgroundPsql down to v12. I used \n\"option 2\", i.e. I changed background_psql() to return the new \nBackgroundPsql object.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Thu, 27 Jun 2024 19:35:05 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Backporting BackgroundPsql" }, { "msg_contents": "On Thu, Jun 27, 2024 at 10:05 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n\n>\n> Ok, I pushed commits to backport BackgroundPsql down to v12. I used\n> \"option 2\", i.e. I changed background_psql() to return the new\n> BackgroundPsql object.\n>\n>\nDon't we need to add install and uninstall rules for the new module, like\nwe did in\nhttps://git.postgresql.org/pg/commitdiff/a4c17c86176cfa712f541b81b2a026ae054b275e\nand\nhttps://git.postgresql.org/pg/commitdiff/7039c7cff6736780c3bbb41a90a6dfea0f581ad2\n?\n\nThanks,\nPavan", "msg_date": "Sat, 29 Jun 2024 10:08:19 +0530", "msg_from": "Pavan Deolasee <pavan.deolasee@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Backporting BackgroundPsql" }, { "msg_contents": "> On 29 Jun 2024, at 06:38, Pavan Deolasee <pavan.deolasee@gmail.com> wrote:\n\n> Don't we need to add install and uninstall rules for the new module, like we did in https://git.postgresql.org/pg/commitdiff/a4c17c86176cfa712f541b81b2a026ae054b275e and https://git.postgresql.org/pg/commitdiff/7039c7cff6736780c3bbb41a90a6dfea0f581ad2 ?\n\nThats correct, we should backport those as well.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 1 Jul 2024 09:39:25 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Backporting BackgroundPsql" }, { "msg_contents": "H i Daniel,\n\nOn Mon, Jul 1, 2024 at 1:09 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> > On 29 Jun 2024, at 06:38, Pavan Deolasee <pavan.deolasee@gmail.com>\n> wrote:\n>\n> > Don't we need to add install and uninstall rules for the new module,\n> like we did in\n> https://git.postgresql.org/pg/commitdiff/a4c17c86176cfa712f541b81b2a026ae054b275e\n> and\n> https://git.postgresql.org/pg/commitdiff/7039c7cff6736780c3bbb41a90a6dfea0f581ad2\n> ?\n>\n> Thats correct, we should backport those as well.\n>\n\nThanks for confirming. 
Attaching patches for PG15 and PG14, but this will\nneed backporting all the way up to PG12.\n\nThanks,\nPavan", "msg_date": "Mon, 1 Jul 2024 19:41:35 +0530", "msg_from": "Pavan Deolasee <pavan.deolasee@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Backporting BackgroundPsql" }, { "msg_contents": "On 01/07/2024 17:11, Pavan Deolasee wrote:\n> H i Daniel,\n> \n> On Mon, Jul 1, 2024 at 1:09 PM Daniel Gustafsson <daniel@yesql.se \n> <mailto:daniel@yesql.se>> wrote:\n> \n> > On 29 Jun 2024, at 06:38, Pavan Deolasee\n> <pavan.deolasee@gmail.com <mailto:pavan.deolasee@gmail.com>> wrote:\n> \n> > Don't we need to add install and uninstall rules for the new\n> module, like we did in\n> https://git.postgresql.org/pg/commitdiff/a4c17c86176cfa712f541b81b2a026ae054b275e <https://git.postgresql.org/pg/commitdiff/a4c17c86176cfa712f541b81b2a026ae054b275e> and https://git.postgresql.org/pg/commitdiff/7039c7cff6736780c3bbb41a90a6dfea0f581ad2 <https://git.postgresql.org/pg/commitdiff/7039c7cff6736780c3bbb41a90a6dfea0f581ad2> ?\n> \n> Thats correct, we should backport those as well.\n> \n> Thanks for confirming. Attaching patches for PG15 and PG14, but this \n> will need backporting all the way up to PG12.\n\nThanks! Pushed to v12-v15.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Mon, 1 Jul 2024 19:33:47 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Backporting BackgroundPsql" } ]
[ { "msg_contents": "Hi hackers,\n\nI propose to add a new option \"updates_without_script\" to extension's\ncontrol file which a list of updates that do not need update script. \nThis enables to update an extension by ALTER EXTENSION even if the\nextension module doesn't provide the update script.\n\nCurrently, even when we don't need to execute any command to update an\nextension from one version to the next, we need to provide an update\nscript that doesn't contain any command. Preparing such meaningless\nfiles are sometimes annoying.\n\nThe attached patch introduces a new option \"updates_without_script\"\ninto extension control file. This specifies a list of such updates\nfollowing the pattern 'old_version--target_version'. \n\nFor example, \n\n updates_without_script = '1.1--1.2, 1.3--1.4'\n \nmeans that updates from version 1.1 to version 1.2 and from version 1.3\nto version 1.4 don't need an update script. In this case, users don't\nneed to prepare update scripts extension--1.1--1.2.sql and\nextension--1.3--1.4.sql if it is not necessary to execute any commands.\n\nThe updated path of an extension is determined based on both the filenames\nof update scripts and the list of updates specified in updates_without_script.\nPresence of update script has higher priority than the option. 
Therefore,\nif an update script is provided, the script will be executed even if this\nupdate is specified in updates_without_script.\n\nWhat do you think of this feature?\nAny feedback would be appreciated.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>", "msg_date": "Tue, 31 Jan 2023 05:25:02 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Allow an extention to be updated without a script" }, { "msg_contents": "Yugo NAGATA <nagata@sraoss.co.jp> writes:\n> Currently, even when we don't need to execute any command to update an\n> extension from one version to the next, we need to provide an update\n> script that doesn't contain any command. Preparing such meaningless\n> files are sometimes annoying.\n\nIf you have no update script, why call it a new version? The point\nof extension versions is to distinguish different states of the\nextension's SQL objects. We do not consider mods in underlying C code\nto justify a new version.\n\n> The attached patch introduces a new option \"updates_without_script\"\n> into extension control file. This specifies a list of such updates\n> following the pattern 'old_version--target_version'.\n\nThis seems completely unnecessary and confusing.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 30 Jan 2023 16:05:52 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allow an extention to be updated without a script" }, { "msg_contents": "Hi,\n\nThank you for your comment.\n\nOn Mon, 30 Jan 2023 16:05:52 -0500\nTom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Yugo NAGATA <nagata@sraoss.co.jp> writes:\n> > Currently, even when we don't need to execute any command to update an\n> > extension from one version to the next, we need to provide an update\n> > script that doesn't contain any command. Preparing such meaningless\n> > files are sometimes annoying.\n> \n> If you have no update script, why call it a new version? 
The point\n> of extension versions is to distinguish different states of the\n> extension's SQL objects. We do not consider mods in underlying C code\n> to justify a new version.\n\nAlthough, as you say, the extension manager doesn't track changes in C code\nfunctions, a new version could be released with changes in only in C\nfunctions that implement a new feature or fix some bugs because it looks\na new version from user's view. Actually, there are several extensions\nthat provide empty update scripts in order to allow user to install such\nnew versions, for example;\n\n- pglogical\n (https://github.com/2ndQuadrant/pglogical/blob/REL2_x_STABLE/pglogical--2.4.1--2.4.2.sql)\n- hll\n (https://github.com/citusdata/postgresql-hll/blob/master/update/hll--2.16--2.17.sql)\n- orafce\n (https://github.com/orafce/orafce/blob/master/orafce--3.12--3.13.sql)\n- hypopg\n (https://github.com/HypoPG/hypopg/blob/REL1_STABLE/hypopg--1.3.1--1.3.2.sql)\n- timescaledb\n (https://github.com/timescale/timescaledb/blob/main/sql/updates/2.9.2--2.9.1.sql)\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Tue, 31 Jan 2023 11:17:22 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: Allow an extention to be updated without a script" }, { "msg_contents": "Hi,\n\nOn Tue, Jan 31, 2023 at 11:17:22AM +0900, Yugo NAGATA wrote:\n>\n> On Mon, 30 Jan 2023 16:05:52 -0500\n> Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> > If you have no update script, why call it a new version? The point\n> > of extension versions is to distinguish different states of the\n> > extension's SQL objects. We do not consider mods in underlying C code\n> > to justify a new version.\n>\n> Although, as you say, the extension manager doesn't track changes in C code\n> functions, a new version could be released with changes in only in C\n> functions that implement a new feature or fix some bugs because it looks\n> a new version from user's view. 
Actually, there are several extensions\n> that provide empty update scripts in order to allow user to install such\n> new versions, for example;\n>\n> [...]\n> - hypopg\n> (https://github.com/HypoPG/hypopg/blob/REL1_STABLE/hypopg--1.3.1--1.3.2.sql)\n> [...]\n\nIndeed, almost all users don't really understand the difference between the\nmodule / C code and the extension, and that gets worse when\nshared_preload_libraries gets in the way.\n\nI personally choose to ship \"empty\" extension versions so that people can\nupgrade it if they want to have e.g. the OS level version match the SQL level\nversion. I know some extensions that chose a different approach: keep the\nfirst 2 digits for anything that involves extension changes and have a 3rd\ndigit for C level bugfix only. But they get very frequent bug reports about\nversion mismatch any time a C bugfix is released, so I will again personally\nkeep shipping those useless versions. That being said, I agree with Tom here\nand we shouldn't add hacks for that.\n\n\n", "msg_date": "Wed, 1 Feb 2023 14:37:27 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow an extention to be updated without a script" }, { "msg_contents": "On Wed, 1 Feb 2023 14:37:27 +0800\nJulien Rouhaud <rjuju123@gmail.com> wrote:\n\n> Hi,\n> \n> On Tue, Jan 31, 2023 at 11:17:22AM +0900, Yugo NAGATA wrote:\n> >\n> > On Mon, 30 Jan 2023 16:05:52 -0500\n> > Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > > If you have no update script, why call it a new version? The point\n> > > of extension versions is to distinguish different states of the\n> > > extension's SQL objects. 
We do not consider mods in underlying C code\n> > > to justify a new version.\n> >\n> > Although, as you say, the extension manager doesn't track changes in C code\n> > functions, a new version could be released with changes in only in C\n> > functions that implement a new feature or fix some bugs because it looks\n> > a new version from user's view. Actually, there are several extensions\n> > that provide empty update scripts in order to allow user to install such\n> > new versions, for example;\n> >\n> > [...]\n> > - hypopg\n> > (https://github.com/HypoPG/hypopg/blob/REL1_STABLE/hypopg--1.3.1--1.3.2.sql)\n> > [...]\n> \n> Indeed, almost all users don't really understand the difference between the\n> module / C code and the extension, and that gets worse when\n> shared_preload_libraries gets in the way.\n> \n> I personally choose to ship \"empty\" extension versions so that people can\n> upgrade it if they want to have e.g. the OS level version match the SQL level\n> version. I know some extensions that chose a different approach: keep the\n> first 2 digits for anything that involves extension changes and have a 3rd\n> digit for C level bugfix only. But they get very frequent bug reports about\n> version mismatch any time a C bugfix is released, so I will again personally\n> keep shipping those useless versions. That being said, I agree with Tom here\n> and we shouldn't add hacks for that.\n\nThank you for your comment and explanation. That helped me understand extension\nrelease approaches.\n\nI will withdraw the proposal since just providing empty update scripts can\nresolve the problem and it wouldn't be worth the confusing. \n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Wed, 1 Feb 2023 17:32:52 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: Allow an extention to be updated without a script" } ]
[ { "msg_contents": "My colleague Jeremy Schneider (CC'd) was recently looking into usage count\ndistributions for various workloads, and he mentioned that it would be nice\nto have an easy way to do $SUBJECT. I've attached a patch that adds a\npg_buffercache_usage_counts() function. This function returns a row per\npossible usage count with some basic information about the corresponding\nbuffers.\n\n postgres=# SELECT * FROM pg_buffercache_usage_counts();\n usage_count | buffers | dirty | pinned\n -------------+---------+-------+--------\n 0 | 0 | 0 | 0\n 1 | 1436 | 671 | 0\n 2 | 102 | 88 | 0\n 3 | 23 | 21 | 0\n 4 | 9 | 7 | 0\n 5 | 164 | 106 | 0\n (6 rows)\n\nThis new function provides essentially the same information as\npg_buffercache_summary(), but pg_buffercache_summary() only shows the\naverage usage count for the buffers in use. If there is interest in this\nidea, another approach to consider could be to alter\npg_buffercache_summary() instead.\n\nThoughts?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 30 Jan 2023 15:30:40 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "monitoring usage count distribution" }, { "msg_contents": "On Mon, 30 Jan 2023 at 18:31, Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> My colleague Jeremy Schneider (CC'd) was recently looking into usage count\n> distributions for various workloads, and he mentioned that it would be nice\n> to have an easy way to do $SUBJECT. I've attached a patch that adds a\n> pg_buffercache_usage_counts() function. 
This function returns a row per\n> possible usage count with some basic information about the corresponding\n> buffers.\n>\n> postgres=# SELECT * FROM pg_buffercache_usage_counts();\n> usage_count | buffers | dirty | pinned\n> -------------+---------+-------+--------\n> 0 | 0 | 0 | 0\n> 1 | 1436 | 671 | 0\n> 2 | 102 | 88 | 0\n> 3 | 23 | 21 | 0\n> 4 | 9 | 7 | 0\n> 5 | 164 | 106 | 0\n> (6 rows)\n>\n> This new function provides essentially the same information as\n> pg_buffercache_summary(), but pg_buffercache_summary() only shows the\n> average usage count for the buffers in use. If there is interest in this\n> idea, another approach to consider could be to alter\n> pg_buffercache_summary() instead.\n\nTom expressed skepticism that there's wide interest here. It seems as\nmuch from the lack of response. But perhaps that's just because people\ndon't understand what the importance of this info is -- I certainly\ndon't :)\n\nI feel like the original sin here is having the function return an\naggregate data. If it returned the raw data then people could slice,\ndice, and aggregate the data in any ways they want using SQL. 
And\nperhaps people would come up with queries that have more readily\ninterpretable important information?\n\nObviously there are performance questions in that but I suspect they\nmight be solvable given how small the data for each buffer are.\n\nJust as a warning though -- if nobody was interested in this patch\nplease don't take my comments as a recommendation that you spend a lot\nof time developing a more complex version in the same direction\nwithout seeing if anyone agrees with my suggestion :)\n\n-- \ngreg\n\n\n", "msg_date": "Tue, 4 Apr 2023 14:14:36 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: monitoring usage count distribution" }, { "msg_contents": "On Mon, Jan 30, 2023 at 6:30 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> My colleague Jeremy Schneider (CC'd) was recently looking into usage count\n> distributions for various workloads, and he mentioned that it would be nice\n> to have an easy way to do $SUBJECT. I've attached a patch that adds a\n> pg_buffercache_usage_counts() function. This function returns a row per\n> possible usage count with some basic information about the corresponding\n> buffers.\n>\n> postgres=# SELECT * FROM pg_buffercache_usage_counts();\n> usage_count | buffers | dirty | pinned\n> -------------+---------+-------+--------\n> 0 | 0 | 0 | 0\n> 1 | 1436 | 671 | 0\n> 2 | 102 | 88 | 0\n> 3 | 23 | 21 | 0\n> 4 | 9 | 7 | 0\n> 5 | 164 | 106 | 0\n> (6 rows)\n>\n> This new function provides essentially the same information as\n> pg_buffercache_summary(), but pg_buffercache_summary() only shows the\n> average usage count for the buffers in use. If there is interest in this\n> idea, another approach to consider could be to alter\n> pg_buffercache_summary() instead.\n\nI'm skeptical that pg_buffercache_summary() is a good idea at all, but\nhaving it display the average usage count seems like a particularly\npoor idea. That information is almost meaningless. 
Replacing that with\na six-element integer array would be a clear improvement and, IMHO,\nbetter than adding yet another function to the extension.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 4 Apr 2023 14:31:36 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: monitoring usage count distribution" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Jan 30, 2023 at 6:30 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> My colleague Jeremy Schneider (CC'd) was recently looking into usage count\n>> distributions for various workloads, and he mentioned that it would be nice\n>> to have an easy way to do $SUBJECT.\n\n> I'm skeptical that pg_buffercache_summary() is a good idea at all, but\n> having it display the average usage count seems like a particularly\n> poor idea. That information is almost meaningless. Replacing that with\n> a six-element integer array would be a clear improvement and, IMHO,\n> better than adding yet another function to the extension.\n\nI had not realized that pg_buffercache_summary() is new in v16,\nbut since it is, we still have time to rethink its definition.\n+1 for de-aggregating --- I agree that the overall average is\nunlikely to have much value.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 04 Apr 2023 14:40:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: monitoring usage count distribution" }, { "msg_contents": "On Tue, Apr 4, 2023 at 2:40 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Mon, Jan 30, 2023 at 6:30 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> >> My colleague Jeremy Schneider (CC'd) was recently looking into usage count\n> >> distributions for various workloads, and he mentioned that it would be nice\n> >> to have an easy way to do $SUBJECT.\n>\n> > I'm skeptical that pg_buffercache_summary() 
is a good idea at all, but\n> > having it display the average usage count seems like a particularly\n> > poor idea. That information is almost meaningless. Replacing that with\n> > a six-element integer array would be a clear improvement and, IMHO,\n> > better than adding yet another function to the extension.\n>\n> I had not realized that pg_buffercache_summary() is new in v16,\n> but since it is, we still have time to rethink its definition.\n> +1 for de-aggregating --- I agree that the overall average is\n> unlikely to have much value.\n\nSo, I have used pg_buffercache_summary() to give me a high-level idea of\nthe usage count when I am benchmarking a particular workload -- and I\nwould have found it harder to look at 6 rows instead of 1. That being\nsaid, having six rows is more versatile as you could aggregate it\nyourself easily.\n\n- Melanie\n\n\n", "msg_date": "Tue, 4 Apr 2023 19:10:24 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: monitoring usage count distribution" }, { "msg_contents": "Hi,\n\nOn 2023-04-04 14:14:36 -0400, Greg Stark wrote:\n> Tom expressed skepticism that there's wide interest here. It seems as\n> much from the lack of response. But perhaps that's just because people\n> don't understand what the importance of this info is -- I certainly\n> don't :)\n\npg_buffercache has exposed the raw data for a long time. 
The problem is that\nit's way too slow to look at that way.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 4 Apr 2023 16:25:27 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: monitoring usage count distribution" }, { "msg_contents": "Hi,\n\nOn 2023-04-04 14:31:36 -0400, Robert Haas wrote:\n> On Mon, Jan 30, 2023 at 6:30 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> > My colleague Jeremy Schneider (CC'd) was recently looking into usage count\n> > distributions for various workloads, and he mentioned that it would be nice\n> > to have an easy way to do $SUBJECT. I've attached a patch that adds a\n> > pg_buffercache_usage_counts() function. This function returns a row per\n> > possible usage count with some basic information about the corresponding\n> > buffers.\n> >\n> > postgres=# SELECT * FROM pg_buffercache_usage_counts();\n> > usage_count | buffers | dirty | pinned\n> > -------------+---------+-------+--------\n> > 0 | 0 | 0 | 0\n> > 1 | 1436 | 671 | 0\n> > 2 | 102 | 88 | 0\n> > 3 | 23 | 21 | 0\n> > 4 | 9 | 7 | 0\n> > 5 | 164 | 106 | 0\n> > (6 rows)\n> >\n> > This new function provides essentially the same information as\n> > pg_buffercache_summary(), but pg_buffercache_summary() only shows the\n> > average usage count for the buffers in use. If there is interest in this\n> > idea, another approach to consider could be to alter\n> > pg_buffercache_summary() instead.\n> \n> I'm skeptical that pg_buffercache_summary() is a good idea at all\n\nWhy? It's about two orders of magnitude faster than querying the equivalent\ndata by aggregating in SQL. And knowing how many free and dirty buffers are\nover time is something quite useful to monitor / correlate with performance\nissues.\n\n\n> but having it display the average usage count seems like a particularly poor\n> idea. 
That information is almost meaningless.\n\nI agree there are more meaningful ways to represent the data, but I don't\nagree that it's almost meaningless. It can give you a rough estimate of\nwhether data in s_b is referenced or not.\n\n\n> Replacing that with a six-element integer array would be a clear improvement\n> and, IMHO, better than adding yet another function to the extension.\n\nI'd have no issue with that.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 4 Apr 2023 16:29:19 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: monitoring usage count distribution" }, { "msg_contents": "On Tue, Apr 4, 2023 at 7:29 PM Andres Freund <andres@anarazel.de> wrote:\n> > I'm skeptical that pg_buffercache_summary() is a good idea at all\n>\n> Why? It's about two orders of magnitude faster than querying the equivalent\n> data by aggregating in SQL. And knowing how many free and dirty buffers are\n> over time is something quite useful to monitor / correlate with performance\n> issues.\n\nWell, OK, fair point.\n\n> > but having it display the average usage count seems like a particularly poor\n> > idea. That information is almost meaningless.\n>\n> I agree there are more meaningful ways to represent the data, but I don't\n> agree that it's almost meaningless. 
It can give you a rough estimate of\n> whether data in s_b is referenced or not.\n\nI might have overstated my case.\n\n> > Replacing that with a six-element integer array would be a clear improvement\n> > and, IMHO, better than adding yet another function to the extension.\n>\n> I'd have no issue with that.\n\nCool.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 5 Apr 2023 09:44:58 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: monitoring usage count distribution" }, { "msg_contents": "On Wed, Apr 05, 2023 at 09:44:58AM -0400, Robert Haas wrote:\n> On Tue, Apr 4, 2023 at 7:29 PM Andres Freund <andres@anarazel.de> wrote:\n>> > Replacing that with a six-element integer array would be a clear improvement\n>> > and, IMHO, better than adding yet another function to the extension.\n>>\n>> I'd have no issue with that.\n> \n> Cool.\n\nThe six-element array approach won't show the number of dirty and pinned\nbuffers for each usage count, but I'm not sure that's a deal-breaker.\nBarring objections, I'll post an updated patch shortly with that approach.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 5 Apr 2023 10:51:24 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: monitoring usage count distribution" }, { "msg_contents": "On Wed, Apr 5, 2023 at 1:51 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> On Wed, Apr 05, 2023 at 09:44:58AM -0400, Robert Haas wrote:\n> > On Tue, Apr 4, 2023 at 7:29 PM Andres Freund <andres@anarazel.de> wrote:\n> >> > Replacing that with a six-element integer array would be a clear improvement\n> >> > and, IMHO, better than adding yet another function to the extension.\n> >>\n> >> I'd have no issue with that.\n> >\n> > Cool.\n>\n> The six-element array approach won't show the number of dirty and pinned\n> buffers for each usage count, but I'm not sure 
that's a deal-breaker.\n> Barring objections, I'll post an updated patch shortly with that approach.\n\nRight, well, I would personally be OK with 6 rows too, but I don't\nknow what other people want. I think either this or that is better\nthan average.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 5 Apr 2023 15:00:20 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: monitoring usage count distribution" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Apr 5, 2023 at 1:51 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> The six-element array approach won't show the number of dirty and pinned\n>> buffers for each usage count, but I'm not sure that's a deal-breaker.\n>> Barring objections, I'll post an updated patch shortly with that approach.\n\n> Right, well, I would personally be OK with 6 rows too, but I don't\n> know what other people want. I think either this or that is better\n> than average.\n\nSeems to me that six rows would be easier to aggregate manually.\nAn array column seems less SQL-ish and harder to manipulate.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 05 Apr 2023 15:07:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: monitoring usage count distribution" }, { "msg_contents": "Hi,\n\nOn 2023-04-05 15:00:20 -0400, Robert Haas wrote:\n> On Wed, Apr 5, 2023 at 1:51 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> > On Wed, Apr 05, 2023 at 09:44:58AM -0400, Robert Haas wrote:\n> > > On Tue, Apr 4, 2023 at 7:29 PM Andres Freund <andres@anarazel.de> wrote:\n> > >> > Replacing that with a six-element integer array would be a clear improvement\n> > >> > and, IMHO, better than adding yet another function to the extension.\n> > >>\n> > >> I'd have no issue with that.\n> > >\n> > > Cool.\n> >\n> > The six-element array approach won't show the number of dirty and pinned\n> > buffers 
for each usage count, but I'm not sure that's a deal-breaker.\n> > Barring objections, I'll post an updated patch shortly with that approach.\n> \n> Right, well, I would personally be OK with 6 rows too, but I don't\n> know what other people want. I think either this or that is better\n> than average.\n\nI would not mind having a separate function returning 6 rows, if we really\nwant that, but making pg_buffercache_summary() return 6 rows would imo make it\nless useful for getting a quick overview. At least I am not that quick summing\nup multple rows, just to get a quick overview over the number of dirty rows.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 5 Apr 2023 12:09:21 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: monitoring usage count distribution" }, { "msg_contents": "On Wed, Apr 05, 2023 at 03:07:10PM -0400, Tom Lane wrote:\n> Seems to me that six rows would be easier to aggregate manually.\n> An array column seems less SQL-ish and harder to manipulate.\n\n+1\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 5 Apr 2023 12:35:05 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: monitoring usage count distribution" }, { "msg_contents": "On Wed, Apr 05, 2023 at 12:09:21PM -0700, Andres Freund wrote:\n> I would not mind having a separate function returning 6 rows, if we really\n> want that, but making pg_buffercache_summary() return 6 rows would imo make it\n> less useful for getting a quick overview. At least I am not that quick summing\n> up multple rows, just to get a quick overview over the number of dirty rows.\n\nThis is what v1-0001 does. 
We could probably make pg_buffercache_summary a\nview on pg_buffercache_usage_counts, too, but that doesn't strike me as\ntremendously important.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 5 Apr 2023 12:41:01 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: monitoring usage count distribution" }, { "msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> On Wed, Apr 05, 2023 at 12:09:21PM -0700, Andres Freund wrote:\n>> I would not mind having a separate function returning 6 rows, if we really\n>> want that, but making pg_buffercache_summary() return 6 rows would imo make it\n>> less useful for getting a quick overview. At least I am not that quick summing\n>> up multple rows, just to get a quick overview over the number of dirty rows.\n\n> This is what v1-0001 does. We could probably make pg_buffercache_summary a\n> view on pg_buffercache_usage_counts, too, but that doesn't strike me as\n> tremendously important.\n\nHaving two functions doesn't seem unreasonable to me either.\nRobert spoke against it to start with, does he still want to\nadvocate for that?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 05 Apr 2023 16:16:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: monitoring usage count distribution" }, { "msg_contents": "On Wed, Apr 5, 2023 at 4:16 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Nathan Bossart <nathandbossart@gmail.com> writes:\n> > On Wed, Apr 05, 2023 at 12:09:21PM -0700, Andres Freund wrote:\n> >> I would not mind having a separate function returning 6 rows, if we really\n> >> want that, but making pg_buffercache_summary() return 6 rows would imo make it\n> >> less useful for getting a quick overview. At least I am not that quick summing\n> >> up multple rows, just to get a quick overview over the number of dirty rows.\n>\n> > This is what v1-0001 does. 
We could probably make pg_buffercache_summary a\n> > view on pg_buffercache_usage_counts, too, but that doesn't strike me as\n> > tremendously important.\n>\n> Having two functions doesn't seem unreasonable to me either.\n> Robert spoke against it to start with, does he still want to\n> advocate for that?\n\nMy position is that if we replace the average usage count with\nsomething that gives a count for each usage count, that's a win. I\ndon't have a strong opinion on an array vs. a result set vs. some\nother way of doing that. If we leave the average usage count in there\nand add yet another function to give the detail, I tend to think\nthat's not a great plan, but I'll desist if everyone else thinks\notherwise.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 6 Apr 2023 13:20:53 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: monitoring usage count distribution" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Apr 5, 2023 at 4:16 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Having two functions doesn't seem unreasonable to me either.\n>> Robert spoke against it to start with, does he still want to\n>> advocate for that?\n\n> My position is that if we replace the average usage count with\n> something that gives a count for each usage count, that's a win. I\n> don't have a strong opinion on an array vs. a result set vs. some\n> other way of doing that. If we leave the average usage count in there\n> and add yet another function to give the detail, I tend to think\n> that's not a great plan, but I'll desist if everyone else thinks\n> otherwise.\n\nThere seems to be enough support for the existing summary function\ndefinition to leave it as-is; Andres likes it for one, and I'm not\nexcited about trying to persuade him he's wrong. 
But a second\nslightly-less-aggregated summary function is clearly useful as well.\nSo I'm now thinking that we do want the patch as-submitted.\n(Caveat: I've not read the patch, just the description.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 06 Apr 2023 13:32:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: monitoring usage count distribution" }, { "msg_contents": "On Thu, Apr 06, 2023 at 01:32:35PM -0400, Tom Lane wrote:\n> There seems to be enough support for the existing summary function\n> definition to leave it as-is; Andres likes it for one, and I'm not\n> excited about trying to persuade him he's wrong. But a second\n> slightly-less-aggregated summary function is clearly useful as well.\n> So I'm now thinking that we do want the patch as-submitted.\n> (Caveat: I've not read the patch, just the description.)\n\nIn case we want to do both, here's a 0002 that changes usagecount_avg to an\narray of usage counts.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 6 Apr 2023 11:06:08 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: monitoring usage count distribution" }, { "msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> On Thu, Apr 06, 2023 at 01:32:35PM -0400, Tom Lane wrote:\n>> There seems to be enough support for the existing summary function\n>> definition to leave it as-is; Andres likes it for one, and I'm not\n>> excited about trying to persuade him he's wrong. But a second\n>> slightly-less-aggregated summary function is clearly useful as well.\n>> So I'm now thinking that we do want the patch as-submitted.\n>> (Caveat: I've not read the patch, just the description.)\n\n> In case we want to do both, here's a 0002 that changes usagecount_avg to an\n> array of usage counts.\n\nI'm not sure if there is consensus for 0002, but I reviewed and pushed\n0001. 
I made one non-cosmetic change: it no longer skips invalid\nbuffers. Otherwise, the row for usage count 0 would be pretty useless.\nAlso it seemed to me that sum(buffers) ought to agree with the\nshared_buffers setting.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 07 Apr 2023 14:29:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: monitoring usage count distribution" }, { "msg_contents": "On Fri, Apr 07, 2023 at 02:29:31PM -0400, Tom Lane wrote:\n> I'm not sure if there is consensus for 0002, but I reviewed and pushed\n> 0001. I made one non-cosmetic change: it no longer skips invalid\n> buffers. Otherwise, the row for usage count 0 would be pretty useless.\n> Also it seemed to me that sum(buffers) ought to agree with the\n> shared_buffers setting.\n\nMakes sense. Thanks!\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 7 Apr 2023 17:18:16 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: monitoring usage count distribution" } ]
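As a point of comparison for the discussion above, the same per-usage-count breakdown can be assembled in plain SQL from the raw `pg_buffercache` view — the "slice, dice, and aggregate" route, which works on existing releases but, per Andres's numbers, is about two orders of magnitude slower than the dedicated C function. A sketch (assumes the `pg_buffercache` extension is installed; invalid buffers show up here with a NULL `usagecount`):

```sql
-- Per-usage-count distribution from the raw view, one row per count,
-- with dirty and pinned buffer tallies.
SELECT usagecount AS usage_count,
       count(*) AS buffers,
       count(*) FILTER (WHERE isdirty) AS dirty,
       count(*) FILTER (WHERE pinning_backends > 0) AS pinned
FROM pg_buffercache
GROUP BY usagecount
ORDER BY usagecount;
```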
[ { "msg_contents": "Hi,\n\nI am looking for function calls to scan the buffer cache for a table and\nfind the cached pages. I want to find out which pages are cached and which\nof them are dirty. Having the relation id, how can I do that? I have gone\nthrough bufmgr.c and relcache.c, but could not find a way to get\nrelation-specific pages from the buffer cache.\n\nThank you!", "msg_date": "Mon, 30 Jan 2023 18:01:08 -0800", "msg_from": "Amin <amin.fallahi@gmail.com>", "msg_from_op": true, "msg_subject": "Scan buffercache for a table" },
{ "msg_contents": "On Mon, Jan 30, 2023 at 06:01:08PM -0800, Amin wrote:\n> Hi,\n> \n> I am looking for function calls to scan the buffer cache for a table and\n> find the cached pages. I want to find out which pages are cached and which\n> of them are dirty. Having the relation id, how can I do that? I have gone\n> through bufmgr.c and relcache.c, but could not find a way to get\n> relation-specific pages from the buffer cache.\n\nThis looks like a re-post of the question you asked on Jan 13:\nCAF-KA8_axSMpQW1scOTnAQx8NFHgmJc6L87QzAo3JezLiBU1HQ@mail.gmail.com\nIt'd be better not to start a new thread (or if you do that, it'd be\nbetter to mention the old one and include its participants).\n\nOn Fri, Jan 13, 2023 at 05:28:31PM -0800, Amin wrote:\n> Hi,\n> \n> Before scanning a relation, in the planner stage, I want to make a\n> call to\n> retrieve information about how many pages will be a hit for a specific\n> relation. The module pg_buffercache seems to be doing a similar thing.\n\nThe planner is a *model* which (among other things) tries to guess how\nmany pages will be read/hit.
It's not expected to be anywhere near\naccurate.\n\npg_buffercache only checks for pages within postgres' own buffer cache.\nIt doesn't look for pages which are in the OS page cache, which require\na system call to access (but don't require device I/O).\n\nRead about pgfincore for introinspection of the OS page cache.\n\n> Also, pg_statio_all_tables seems to be having that information, but it\n> is updated after execution. However, I want the information before\n> execution. Also not sure how pg_statio_all_tables is created and how\n> I can access it in the code.\n\nBut the view isn't omnicient. When you execute a plan, you don't know\nhow it's going to end. If you did, you wouldn't need to run it - you\ncould just print the answer.\n\nNote that planning and execution are separate and independant. It's\npossible to plan a query without ever running it, or to plan it once and\nrun it multiple times. The view reflects I/O requested by postgres; the\nI/O normally comes primarily from execution.\n\nYou can look at how the view is defined:\n\\sv pg_statio_all_tables\n\nAnd then you can look at how the functions that it calls are implemented\n(\\df+). Same for pg_buffercache. It seems like you'll want to learn\nhow to navigate the source code to find how things are connected.\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 30 Jan 2023 20:43:20 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Scan buffercache for a table" }, { "msg_contents": "Thank you Justin. I started a new thread because the context is a little\nbit different. I am no longer interested in statistics anymore. I want to\nfind exact individual pages of a table which are cached and are/aren't\ndirty. pg_buffercache implements the loop, but it goes over all the\nbuffers. 
However, I want to scan a specific table cache pages.\n\nOn Mon, Jan 30, 2023 at 6:43 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Mon, Jan 30, 2023 at 06:01:08PM -0800, Amin wrote:\n> > Hi,\n> >\n> > I am looking for function calls to scan the buffer cache for a table and\n> > find the cached pages. I want to find out which pages are cached and\n> which\n> > of them are dirty. Having the relation id, how can I do that? I have gone\n> > through bufmgr.c and relcache.c, but could not find a way to get\n> > relation-specific pages from the buffer cache.\n>\n> This looks like a re-post of the question you asked on Jan 13:\n> CAF-KA8_axSMpQW1scOTnAQx8NFHgmJc6L87QzAo3JezLiBU1HQ@mail.gmail.com\n> It'd be better not to start a new thread (or if you do that, it'd be\n> better to mention the old one and include its participants).\n>\n> On Fri, Jan 13, 2023 at 05:28:31PM -0800, Amin wrote:\n> > Hi,\n> >\n> > Before scanning a relation, in the planner stage, I want to make a\n> > call to\n> > retrieve information about how many pages will be a hit for a specific\n> > relation. The module pg_buffercache seems to be doing a similar thing.\n>\n> The planner is a *model* which (among other things) tries to guess how\n> many pages will be read/hit. It's not expected to be anywhere near\n> accurate.\n>\n> pg_buffercache only checks for pages within postgres' own buffer cache.\n> It doesn't look for pages which are in the OS page cache, which require\n> a system call to access (but don't require device I/O).\n>\n> Read about pgfincore for introinspection of the OS page cache.\n>\n> > Also, pg_statio_all_tables seems to be having that information, but it\n> > is updated after execution. However, I want the information before\n> > execution. Also not sure how pg_statio_all_tables is created and how\n> > I can access it in the code.\n>\n> But the view isn't omnicient. When you execute a plan, you don't know\n> how it's going to end. 
If you did, you wouldn't need to run it - you\n> could just print the answer.\n>\n> Note that planning and execution are separate and independant. It's\n> possible to plan a query without ever running it, or to plan it once and\n> run it multiple times. The view reflects I/O requested by postgres; the\n> I/O normally comes primarily from execution.\n>\n> You can look at how the view is defined:\n> \\sv pg_statio_all_tables\n>\n> And then you can look at how the functions that it calls are implemented\n> (\\df+). Same for pg_buffercache. It seems like you'll want to learn\n> how to navigate the source code to find how things are connected.\n>\n> --\n> Justin\n>\n", "msg_date": "Mon, 30 Jan 2023 20:11:30 -0800", "msg_from": "Amin <amin.fallahi@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Scan buffercache for a table" },
{ "msg_contents": "On Mon, Jan 30, 2023 at 08:11:30PM -0800, Amin wrote:\n> Thank you Justin. I started a new thread because the context is a little\n> bit different. I am no longer interested in statistics anymore. I want to\n> find exact individual pages of a table which are cached and are/aren't\n> dirty. pg_buffercache implements the loop, but it goes over all the\n> buffers. However, I want to scan a specific table cache pages.\n\nCheck ReadBuffer*(), BufTableLookup() or loops around it like\nFindAndDropRelationBuffers(), which is in the file you referenced.\n\n> > On Mon, Jan 30, 2023 at 06:01:08PM -0800, Amin wrote:\n> > > Hi,\n> > >\n> > > I am looking for function calls to scan the buffer cache for a table and\n> > > find the cached pages. I want to find out which pages are cached and which\n> > > of them are dirty. Having the relation id, how can I do that? I have gone\n> > > through bufmgr.c and relcache.c, but could not find a way to get\n> > > relation-specific pages from the buffer cache.\n\n\n", "msg_date": "Tue, 31 Jan 2023 07:11:51 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Scan buffercache for a table" } ]
[ { "msg_contents": "Per:\n\n```\nselect ts_debug('english', 'you''re a star');\n ts_debug\n-----------------------------------------------------------------------\n (asciiword,\"Word, all ASCII\",you,{english_stem},english_stem,{})\n (blank,\"Space symbols\",',{},,)\n (asciiword,\"Word, all ASCII\",re,{english_stem},english_stem,{re})\n (blank,\"Space symbols\",\" \",{},,)\n (asciiword,\"Word, all ASCII\",a,{english_stem},english_stem,{})\n (blank,\"Space symbols\",\" \",{},,)\n (asciiword,\"Word, all ASCII\",star,{english_stem},english_stem,{star})\n(7 rows)\n```\n\nAnd:\n\nhttps://snowballstem.org/demo.html\nhttps://snowballstem.org/texts/apostrophe.html\n\nSnowball stemmer has special handling for contraction built in, but\nout-of-the-box due to the order of filters it never gets access to the\ndata.\n\nThat means that a word such as `you're` stems incorrectly down to\n`re`. Prefix matches end up hitting lots of surprising words.\n\nI know this is a big can of worms... and unlikely easy to resolve ...\nthe latest changes to `to_tsquery` (replacing & with <=>) are already\na bitter enough pill for lots to swallow and another breaking change\nis not something many desire. However, it feels like an oversight (at\nleast documentation wise). Perhaps a good starting point might be to\nclearly document the issue and workaround?\n\n\n", "msg_date": "Tue, 31 Jan 2023 17:27:40 +1100", "msg_from": "Sam Saffron <sam.saffron@gmail.com>", "msg_from_op": true, "msg_subject": "Contractions in full text search result in very surprising stemming" } ]
[ { "msg_contents": "On Mon, Jan 30, 2023 at 11:50 PM Gurjeet Singh <gurjeet@singh.im> wrote:\n> It was the classical case of out-of-bounds access.\n\n> This mistake would've been caught early if there were assertions\n> preventing access beyond the number of arguments passed to the\n> function. I'll send the assert_enough_args.patch, that adds these\n> checks, in a separate thread to avoid potentially confusing cfbot.\n\nPlease see attached the patch that ensures we don't accidentally\naccess more parameters than are passed to a SQL callable\nfunction.\n\nBest regards,\nGurjeet\nhttp://Gurje.et", "msg_date": "Mon, 30 Jan 2023 23:58:28 -0800", "msg_from": "Gurjeet Singh <gurjeet@singh.im>", "msg_from_op": true, "msg_subject": "Assert fcinfo has enough args before allowing parameter access (was:\n Re: generate_series for timestamptz and time zone problem)" }, { "msg_contents": "Gurjeet Singh <gurjeet@singh.im> writes:\n> Please see attached the patch that ensures we don't accidentally\n> access more parameters than are passed to a SQL callable\n> function.\n\nI'm unexcited by that. It'd add a pretty substantial amount\nof code to catch an error that hardly anyone ever makes.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 31 Jan 2023 09:45:56 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Assert fcinfo has enough args before allowing parameter access\n (was: Re: generate_series for timestamptz and time zone problem)" } ]
[ { "msg_contents": "We use Valgrind --together with the suppression file provided in Postgres repo-- to test Citus extension against memory errors.\nWe replace /bin/postgres executable with a simple bash script that executes the original postgres executable under Valgrind and then we run our usual regression tests.\nHowever, it is quite hard to understand which query caused a memory error in the stack traces that has been dumped into valgrind logfile.\n\nFor this reason, we propose the attached patch to allow Valgrind to report the query string that caused a memory error right after the relevant stack trace.\nI belive this would not only be useful for Citus but also for Postgres and other extensions in their valgrind-testing process.\n\nAn example piece of valgrind test output for a memory error found in Citus is as follows:\n\n==67222== VALGRINDERROR-BEGIN\n==67222== Invalid write of size 8\n==67222== at 0x7A6F040: dlist_delete (home/pguser/postgres-installation/include/postgresql/server/lib/ilist.h:360)\n==67222== by 0x7A6F040: ResetRemoteTransaction (home/pguser/citus/src/backend/distributed/transaction/remote_transaction.c:872)\n==67222== by 0x79CF606: AfterXactHostConnectionHandling (home/pguser/citus/src/backend/distributed/connection/connection_management.c:1468)\n==67222== by 0x79CF65E: AfterXactConnectionHandling (home/pguser/citus/src/backend/distributed/connection/connection_management.c:175)\n==67222== by 0x7A6FEDA: CoordinatedTransactionCallback (home/pguser/citus/src/backend/distributed/transaction/transaction_management.c:309)\n==67222== by 0x544F30: CallXactCallbacks (home/pguser/postgres-source/postgresql-15.1/src/backend/access/transam/xact.c:3661)\n==67222== by 0x548E12: CommitTransaction (home/pguser/postgres-source/postgresql-15.1/src/backend/access/transam/xact.c:2298)\n==67222== by 0x549BBC: CommitTransactionCommand (home/pguser/postgres-source/postgresql-15.1/src/backend/access/transam/xact.c:3048)\n==67222== by 0x832C30: 
finish_xact_command (home/pguser/postgres-source/postgresql-15.1/src/backend/tcop/postgres.c:2750)\n==67222== by 0x8352AF: exec_simple_query (home/pguser/postgres-source/postgresql-15.1/src/backend/tcop/postgres.c:1279)\n==67222== by 0x837312: PostgresMain (home/pguser/postgres-source/postgresql-15.1/src/backend/tcop/postgres.c:4595)\n==67222== by 0x79F7B5: BackendRun (home/pguser/postgres-source/postgresql-15.1/src/backend/postmaster/postmaster.c:4504)\n==67222== by 0x7A24E6: BackendStartup (home/pguser/postgres-source/postgresql-15.1/src/backend/postmaster/postmaster.c:4232)\n==67222== Address 0x7486378 is 3,512 bytes inside a recently re-allocated block of size 8,192 alloc'd\n==67222== at 0x484486F: malloc (builddir/build/BUILD/valgrind-3.19.0/coregrind/m_replacemalloc/vg_replace_malloc.c:381)\n==67222== by 0x98B6EB: AllocSetContextCreateInternal (home/pguser/postgres-source/postgresql-15.1/src/backend/utils/mmgr/aset.c:469)\n==67222== by 0x79CEABA: InitializeConnectionManagement (home/pguser/citus/src/backend/distributed/connection/connection_management.c:107)\n==67222== by 0x799FE9F: _PG_init (home/pguser/citus/src/backend/distributed/shared_library_init.c:464)\n==67222== by 0x96AE6B: internal_load_library (home/pguser/postgres-source/postgresql-15.1/src/backend/utils/fmgr/dfmgr.c:289)\n==67222== by 0x96B09A: load_file (home/pguser/postgres-source/postgresql-15.1/src/backend/utils/fmgr/dfmgr.c:156)\n==67222== by 0x973122: load_libraries (home/pguser/postgres-source/postgresql-15.1/src/backend/utils/init/miscinit.c:1668)\n==67222== by 0x974680: process_shared_preload_libraries (home/pguser/postgres-source/postgresql-15.1/src/backend/utils/init/miscinit.c:1686)\n==67222== by 0x7A336A: PostmasterMain (home/pguser/postgres-source/postgresql-15.1/src/backend/postmaster/postmaster.c:1026)\n==67222== by 0x6F303C: main (home/pguser/postgres-source/postgresql-15.1/src/backend/main/main.c:202)\n==67222==\n==67222== VALGRINDERROR-END\n**67222** The query for which 
valgrind reported a memory error was: REFRESH MATERIALIZED VIEW other_schema.mat_view;", "msg_date": "Tue, 31 Jan 2023 14:00:05 +0000", "msg_from": "Onur Tirtir <Onur.Tirtir@microsoft.com>", "msg_from_op": true, "msg_subject": "[PATCH] Report the query string that caused a memory error under\n Valgrind" }, { "msg_contents": "On 31.01.23 15:00, Onur Tirtir wrote:\n> We use Valgrind --together with the suppression file provided in \n> Postgres repo-- to test Citus extension against memory errors.\n> \n> We replace /bin/postgres executable with a simple bash script that \n> executes the original postgres executable under Valgrind and then we run \n> our usual regression tests.\n> \n> However, it is quite hard to understand which query caused a memory \n> error in the stack traces that has been dumped into valgrind logfile.\n> \n> For this reason, we propose the attached patch to allow Valgrind to \n> report the query string that caused a memory error right after the \n> relevant stack trace.\n> \n> I belive this would not only be useful for Citus but also for Postgres \n> and other extensions in their valgrind-testing process.\n\nI can see how this could be useful. But this only handles queries using \nthe simple protocol. At least the extended protocol should be handled \nas well. Maybe it would be better to move this up to PostgresMain() and \nhandle all protocol messages?\n\n\n\n", "msg_date": "Wed, 22 Mar 2023 17:00:16 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Report the query string that caused a memory error under\n Valgrind" }, { "msg_contents": "Hey Peter,\n\nThank you for reviewing the patch and for your feedback. 
I believe the v2 patch should be able to handle other protocol messages too.\n\n-----Original Message-----\nFrom: Peter Eisentraut <peter.eisentraut@enterprisedb.com> \nSent: Wednesday, March 22, 2023 7:00 PM\nTo: Onur Tirtir <Onur.Tirtir@microsoft.com>; pgsql-hackers@lists.postgresql.org\nSubject: [EXTERNAL] Re: [PATCH] Report the query string that caused a memory error under Valgrind\n\n[You don't often get email from peter.eisentraut@enterprisedb.com. Learn why this is important at https://aka.ms/LearnAboutSenderIdentification ]\n\nOn 31.01.23 15:00, Onur Tirtir wrote:\n> We use Valgrind --together with the suppression file provided in \n> Postgres repo-- to test Citus extension against memory errors.\n>\n> We replace /bin/postgres executable with a simple bash script that \n> executes the original postgres executable under Valgrind and then we \n> run our usual regression tests.\n>\n> However, it is quite hard to understand which query caused a memory \n> error in the stack traces that has been dumped into valgrind logfile.\n>\n> For this reason, we propose the attached patch to allow Valgrind to \n> report the query string that caused a memory error right after the \n> relevant stack trace.\n>\n> I belive this would not only be useful for Citus but also for Postgres \n> and other extensions in their valgrind-testing process.\n\nI can see how this could be useful. But this only handles queries using the simple protocol. At least the extended protocol should be handled as well. Maybe it would be better to move this up to PostgresMain() and handle all protocol messages?", "msg_date": "Thu, 23 Mar 2023 17:11:32 +0000", "msg_from": "Onur Tirtir <Onur.Tirtir@microsoft.com>", "msg_from_op": true, "msg_subject": "RE: [EXTERNAL] Re: [PATCH] Report the query string that caused a\n memory error under Valgrind" }, { "msg_contents": "Onur Tirtir <Onur.Tirtir@microsoft.com> writes:\n> Thank you for reviewing the patch and for your feedback. 
I believe the v2 patch should be able to handle other protocol messages too.\n\nI like the concept here, but the reporting that the v2 patch provides\nwould be seriously horrid: it's trying to print stuff that isn't\nnecessarily text, and for bind and execute messages it's substantially\ndumber than the existing debug_query_string infrastructure. Another\nthing that is not great is that if Postgres itself throws an error\nlater in the query, nothing will be reported since we don't reach the\nbottom of the processing loop.\n\nI suggest that we need something closer to the attached. Some\nbikeshedding is possible on the specific printouts, but I'm not\nsure it's worth working harder than this.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 02 Apr 2023 16:13:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [EXTERNAL] Re: [PATCH] Report the query string that caused a\n memory error under Valgrind" }, { "msg_contents": "Hey Tom,\n\nThank you for sharing your proposal as a patch. 
It looks much nicer and useful than mine.\nI've also tested it for a few cases --by injecting a memory error on purpose-- and seen that it helps reporting the problematic query.\nShould we move forward with v3 then?\n\n==13210== VALGRINDERROR-BEGIN\n==13210== Conditional jump or move depends on uninitialised value(s)\n==13210== at 0x75B88C: exec_simple_query (home/onurctirtir/postgres/src/backend/tcop/postgres.c:1070)\n==13210== by 0x760ABB: PostgresMain (home/onurctirtir/postgres/src/backend/tcop/postgres.c:4624)\n==13210== by 0x688F1A: BackendRun (home/onurctirtir/postgres/src/backend/postmaster/postmaster.c:4461)\n==13210== by 0x688801: BackendStartup (home/onurctirtir/postgres/src/backend/postmaster/postmaster.c:4189)\n==13210== by 0x684D21: ServerLoop (home/onurctirtir/postgres/src/backend/postmaster/postmaster.c:1779)\n==13210== by 0x6845F6: PostmasterMain (home/onurctirtir/postgres/src/backend/postmaster/postmaster.c:1463)\n==13210== by 0x540351: main (home/onurctirtir/postgres/src/backend/main/main.c:200)\n==13210== Uninitialised value was created by a heap allocation\n==13210== at 0x483B7F3: malloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)\n==13210== by 0x75B812: exec_simple_query (home/onurctirtir/postgres/src/backend/tcop/postgres.c:1023)\n==13210== by 0x760ABB: PostgresMain (home/onurctirtir/postgres/src/backend/tcop/postgres.c:4624)\n==13210== by 0x688F1A: BackendRun (home/onurctirtir/postgres/src/backend/postmaster/postmaster.c:4461)\n==13210== by 0x688801: BackendStartup (home/onurctirtir/postgres/src/backend/postmaster/postmaster.c:4189)\n==13210== by 0x684D21: ServerLoop (home/onurctirtir/postgres/src/backend/postmaster/postmaster.c:1779)\n==13210== by 0x6845F6: PostmasterMain (home/onurctirtir/postgres/src/backend/postmaster/postmaster.c:1463)\n==13210== by 0x540351: main (home/onurctirtir/postgres/src/backend/main/main.c:200)\n==13210==\n==13210== VALGRINDERROR-END\n**13210** Valgrind detected 1 error(s) during 
execution of \"select 1;\"\n**13210** Valgrind detected 1 error(s) during execution of \"select 1;\"\n\nBest, Onur\n\n-----Original Message-----\nFrom: Tom Lane <tgl@sss.pgh.pa.us> \nSent: Sunday, April 2, 2023 11:14 PM\nTo: Onur Tirtir <Onur.Tirtir@microsoft.com>\nCc: peter.eisentraut@enterprisedb.com; pgsql-hackers@lists.postgresql.org\nSubject: Re: [EXTERNAL] Re: [PATCH] Report the query string that caused a memory error under Valgrind\n\n[You don't often get email from tgl@sss.pgh.pa.us. Learn why this is important at https://aka.ms/LearnAboutSenderIdentification ]\n\nOnur Tirtir <Onur.Tirtir@microsoft.com> writes:\n> Thank you for reviewing the patch and for your feedback. I believe the v2 patch should be able to handle other protocol messages too.\n\nI like the concept here, but the reporting that the v2 patch provides would be seriously horrid: it's trying to print stuff that isn't necessarily text, and for bind and execute messages it's substantially dumber than the existing debug_query_string infrastructure. Another thing that is not great is that if Postgres itself throws an error later in the query, nothing will be reported since we don't reach the bottom of the processing loop.\n\nI suggest that we need something closer to the attached. Some bikeshedding is possible on the specific printouts, but I'm not sure it's worth working harder than this.\n\n regards, tom lane\n\n\n\n", "msg_date": "Mon, 3 Apr 2023 11:09:50 +0000", "msg_from": "Onur Tirtir <Onur.Tirtir@microsoft.com>", "msg_from_op": true, "msg_subject": "RE: [EXTERNAL] Re: [PATCH] Report the query string that caused a\n memory error under Valgrind" }, { "msg_contents": "Onur Tirtir <Onur.Tirtir@microsoft.com> writes:\n> Thank you for sharing your proposal as a patch. 
It looks much nicer and useful than mine.\n> I've also tested it for a few cases --by injecting a memory error on purpose-- and seen that it helps reporting the problematic query.\n> Should we move forward with v3 then?\n\nOK, I pushed v3 as-is. We can refine it later if anyone has suggestions.\n\nThanks for the contribution!\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 03 Apr 2023 10:19:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [EXTERNAL] Re: [PATCH] Report the query string that caused a\n memory error under Valgrind" } ]
[ { "msg_contents": "Remove over-optimistic Assert.\n\nIn commit 2489d76c4, I'd thought it'd be safe to assert that a\nPlaceHolderVar appearing in a scan-level expression has empty\nnullingrels. However this is not so, as when we determine that a\njoin relation is certainly empty we'll put its targetlist into a\nResult-with-constant-false-qual node, and nothing is done to adjust\nthe nullingrels of the Vars or PHVs therein. (Arguably, a Result\nused in this way isn't really a scan-level node, but it certainly\nisn't an upper node either ...)\n\nIt's not clear this is worth any close analysis, so let's just\ntake out the faulty Assert.\n\nPer report from Robins Tharakan. I added a test case based on\nhis example, just in case somebody tries to tighten this up.\n\nDiscussion: https://postgr.es/m/CAEP4nAz7Enq3+DEthGG7j27DpuwSRZnW0Nh6jtNh75yErQ_nbA@mail.gmail.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/eae0e20deffb0a73f7cb0e94746f94a1347e71b1\n\nModified Files\n--------------\nsrc/backend/optimizer/plan/setrefs.c | 2 +-\nsrc/test/regress/expected/join.out | 14 ++++++++++++++\nsrc/test/regress/sql/join.sql | 8 ++++++++\n3 files changed, 23 insertions(+), 1 deletion(-)", "msg_date": "Tue, 31 Jan 2023 16:58:10 +0000", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "pgsql: Remove over-optimistic Assert." }, { "msg_contents": "On Thu, Feb 2, 2023 at 8:40 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Remove over-optimistic Assert.\n>\n> In commit 2489d76c4, I'd thought it'd be safe to assert that a\n> PlaceHolderVar appearing in a scan-level expression has empty\n> nullingrels. However this is not so, as when we determine that a\n> join relation is certainly empty we'll put its targetlist into a\n> Result-with-constant-false-qual node, and nothing is done to adjust\n> the nullingrels of the Vars or PHVs therein. 
(Arguably, a Result\n> used in this way isn't really a scan-level node, but it certainly\n> isn't an upper node either ...)\n\n\nIt seems this is the only case we can have PlaceHolderVar with non-empty\nnullingrels at scan level. So I wonder if we can manually adjust the\nnullingrels of PHVs in this special case, and keep the assertion about\nphnullingrels being NULL in fix_scan_expr. I think that assertion is\nasserting the right thing in most cases. It's a pity to lose it.\n\nCurrently for the tlist of a childless Result, we special-case ROWID_VAR\nVars in set_plan_refs and thus keep assertions about varno != ROWID_VAR\nin fix_scan_expr. Do you think we can special-case PHVs at the same\nplace by setting its phnullingrels to NULL? I'm imagining something\nlike attached.\n\nThanks\nRichard", "msg_date": "Thu, 2 Feb 2023 09:51:58 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Remove over-optimistic Assert." }, { "msg_contents": "Resend this email to -hackers. Sorry for the noise.\n\nThanks\nRichard\n\n---------- Forwarded message ---------\nFrom: Richard Guo <guofenglinux@gmail.com>\nDate: Thu, Feb 2, 2023 at 9:51 AM\nSubject: Re: pgsql: Remove over-optimistic Assert.\nTo: Tom Lane <tgl@sss.pgh.pa.us>\nCc: <pgsql-committers@lists.postgresql.org>, PostgreSQL-development <\npgsql-hackers@postgresql.org>\n\n\n\nOn Thu, Feb 2, 2023 at 8:40 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Remove over-optimistic Assert.\n>\n> In commit 2489d76c4, I'd thought it'd be safe to assert that a\n> PlaceHolderVar appearing in a scan-level expression has empty\n> nullingrels. However this is not so, as when we determine that a\n> join relation is certainly empty we'll put its targetlist into a\n> Result-with-constant-false-qual node, and nothing is done to adjust\n> the nullingrels of the Vars or PHVs therein. 
(Arguably, a Result\n> used in this way isn't really a scan-level node, but it certainly\n> isn't an upper node either ...)\n\n\nIt seems this is the only case we can have PlaceHolderVar with non-empty\nnullingrels at scan level. So I wonder if we can manually adjust the\nnullingrels of PHVs in this special case, and keep the assertion about\nphnullingrels being NULL in fix_scan_expr. I think that assertion is\nasserting the right thing in most cases. It's a pity to lose it.\n\nCurrently for the tlist of a childless Result, we special-case ROWID_VAR\nVars in set_plan_refs and thus keep assertions about varno != ROWID_VAR\nin fix_scan_expr. Do you think we can special-case PHVs at the same\nplace by setting its phnullingrels to NULL? I'm imagining something\nlike attached.\n\nThanks\nRichard", "msg_date": "Thu, 2 Feb 2023 10:01:49 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Fwd: pgsql: Remove over-optimistic Assert." }, { "msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> On Thu, Feb 2, 2023 at 8:40 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> In commit 2489d76c4, I'd thought it'd be safe to assert that a\n>> PlaceHolderVar appearing in a scan-level expression has empty\n>> nullingrels. However this is not so, as when we determine that a\n>> join relation is certainly empty we'll put its targetlist into a\n>> Result-with-constant-false-qual node, and nothing is done to adjust\n>> the nullingrels of the Vars or PHVs therein. (Arguably, a Result\n>> used in this way isn't really a scan-level node, but it certainly\n>> isn't an upper node either ...)\n\n> It seems this is the only case we can have PlaceHolderVar with non-empty\n> nullingrels at scan level. So I wonder if we can manually adjust the\n> nullingrels of PHVs in this special case, and keep the assertion about\n> phnullingrels being NULL in fix_scan_expr. I think that assertion is\n> asserting the right thing in most cases. 
It's a pity to lose it.\n\nWell, if we change the nullingrels of the PHV in the Result, then we\nwill likely have to loosen the nullingrels cross-check in the next\nplan level up. That doesn't seem like much of an improvement.\nKeeping the Result's tlist the same as what we would have generated for\na non-dummy join node seems right to me.\n\nWe could perhaps use a weaker assert like \"phv->phnullingrels == NULL ||\nwe-are-at-a-dummy-Result\", but I didn't think it was worth passing down\nthe extra flag needed to make that happen. (Also, it's fair to wonder\nwhether setrefs.c actually knows whether a Result arose this way.)\n\nAlso, there are other places in setrefs.c that are punting on checking\nphnullingrels. If we don't tighten all of them, I doubt we've moved\nthe ball very far.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 01 Feb 2023 21:04:01 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pgsql: Remove over-optimistic Assert." } ]
[ { "msg_contents": "Hi Hackers,\n\nA user on IRC was confused about how to delete a security label using\nthe `SECURITY LABEL ON … IS …` command, and looking at the docs I can\nsee why.\n\nThe synopsis just says `IS 'label'`, which implies that it can only be a\nstring. It's not until you read the description for `label` that you\nsee \"or `NULL` to drop the security label.\" I propose making the\nsynopsis say `IS { 'label' | NULL }` to make it clear that it can be\nNULL as well. The same applies to `COMMENT ON … IS …`, which I've also\nchanged similarly in the attached.\n\n- ilmari", "msg_date": "Tue, 31 Jan 2023 17:07:06 +0000", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": true, "msg_subject": "Clarify deleting comments and security labels in synopsis" }, { "msg_contents": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org> writes:\n> A user on IRC was confused about how to delete a security label using\n> the `SECURITY LABEL ON … IS …` command, and looking at the docs I can\n> see why.\n\n> The synopsis just says `IS 'label'`, which implies that it can only be a\n> string. It's not until you read the description for `label` that you\n> see \"or `NULL` to drop the security label.\" I propose making the\n> synopsis say `IS { 'label' | NULL }` to make it clear that it can be\n> NULL as well. 
The same applies to `COMMENT ON … IS …`, which I've also\n> changed similarly in the attached.\n\nAgreed; as-is, the syntax summary is not just confusing but outright\nwrong.\n\nI think we could go further and split the entry under Parameters\nto match:\n\n\t'text'\n\t\tThe new comment (must be a simple string literal,\n\t\tnot an expression).\n\n\tNULL\n\t\tWrite NULL to drop the comment.\n\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 31 Jan 2023 12:35:26 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Clarify deleting comments and security labels in synopsis" }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> =?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org> writes:\n>> A user on IRC was confused about how to delete a security label using\n>> the `SECURITY LABEL ON … IS …` command, and looking at the docs I can\n>> see why.\n>\n>> The synopsis just says `IS 'label'`, which implies that it can only be a\n>> string. It's not until you read the description for `label` that you\n>> see \"or `NULL` to drop the security label.\" I propose making the\n>> synopsis say `IS { 'label' | NULL }` to make it clear that it can be\n>> NULL as well. The same applies to `COMMENT ON … IS …`, which I've also\n>> changed similarly in the attached.\n>\n> Agreed; as-is, the syntax summary is not just confusing but outright\n> wrong.\n>\n> I think we could go further and split the entry under Parameters\n> to match:\n>\n> \t'text'\n> \t\tThe new comment (must be a simple string literal,\n> \t\tnot an expression).\n>\n> \tNULL\n> \t\tWrite NULL to drop the comment.\n\nMakes sense. 
Something like the attached v2?\n\n> \t\t\tregards, tom lane\n\n- ilmari", "msg_date": "Tue, 31 Jan 2023 18:07:15 +0000", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": true, "msg_subject": "Re: Clarify deleting comments and security labels in synopsis" }, { "msg_contents": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org> writes:\n> Tom Lane <tgl@sss.pgh.pa.us> writes:\n>> Agreed; as-is, the syntax summary is not just confusing but outright\n>> wrong.\n>> \n>> I think we could go further and split the entry under Parameters\n>> to match:\n\n> Makes sense. Something like the attached v2?\n\nWFM, will push.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 31 Jan 2023 13:52:21 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Clarify deleting comments and security labels in synopsis" }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> =?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org> writes:\n>> Tom Lane <tgl@sss.pgh.pa.us> writes:\n>>> Agreed; as-is, the syntax summary is not just confusing but outright\n>>> wrong.\n>>> \n>>> I think we could go further and split the entry under Parameters\n>>> to match:\n>\n>> Makes sense. Something like the attached v2?\n>\n> WFM, will push.\n\nThanks!\n\n> \t\t\tregards, tom lane\n\n- ilmari\n\n\n", "msg_date": "Wed, 01 Feb 2023 10:50:42 +0000", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": true, "msg_subject": "Re: Clarify deleting comments and security labels in synopsis" } ]
[ { "msg_contents": "Hi all,\n\nWhile browsing the buildfarm, I have noticed this failure on curculio:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=curculio&dt=2023-02-01%2001%3A05%3A17\n\nThe test that has reported a failure is the check on the archive\nmodule callback:\n# Failed test 'check shutdown callback of shell archive module'\n# at t/020_archive_status.pl line 248.\n# Looks like you failed 1 test of 17.\n[02:28:06] t/020_archive_status.pl .............. \nDubious, test returned 1 (wstat 256, 0x100)\nFailed 1/17 subtests \n\nLooking closer, this is a result of an assertion failure in the latch\ncode:\n2023-02-01 02:28:05.615 CET [6961:8] LOG: received fast shutdown request\n2023-02-01 02:28:05.615 CET [6961:9] LOG: aborting any active transactions\n2023-02-01 02:28:05.616 CET [30681:9] LOG: process 30681 releasing ProcSignal slot 33, but it contains 0\nTRAP: FailedAssertion(\"latch->owner_pid == MyProcPid\", File: \"latch.c\", Line: 451, PID: 30681)\n\nThe information available in standby2.log shows that 30681 is the\nstartup process. I am not sure what all that means, yet.\n\nThoughts or comments welcome.\n--\nMichael", "msg_date": "Wed, 1 Feb 2023 10:53:17 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Weird failure with latches in curculio on v15" }, { "msg_contents": "Hi,\n\nOn 2023-02-01 10:53:17 +0900, Michael Paquier wrote:\n> While browsing the buildfarm, I have noticed this failure on curculio:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=curculio&dt=2023-02-01%2001%3A05%3A17\n> \n> The test that has reported a failure is the check on the archive\n> module callback:\n> # Failed test 'check shutdown callback of shell archive module'\n> # at t/020_archive_status.pl line 248.\n> # Looks like you failed 1 test of 17.\n> [02:28:06] t/020_archive_status.pl .............. 
\n> Dubious, test returned 1 (wstat 256, 0x100)\n> Failed 1/17 subtests \n> \n> Looking closer, this is a result of an assertion failure in the latch\n> code:\n> 2023-02-01 02:28:05.615 CET [6961:8] LOG: received fast shutdown request\n> 2023-02-01 02:28:05.615 CET [6961:9] LOG: aborting any active transactions\n> 2023-02-01 02:28:05.616 CET [30681:9] LOG: process 30681 releasing ProcSignal slot 33, but it contains 0\n> TRAP: FailedAssertion(\"latch->owner_pid == MyProcPid\", File: \"latch.c\", Line: 451, PID: 30681)\n\nGiven the ProcSignal LOG message before it, I don't think this is about\nlatches.\n\n\n> The information available in standby2.log shows that 30681 is the\n> startup process. I am not sure what all that means, yet.\n>\n> Thoughts or comments welcome.\n\nPerhaps a wild write overwriting shared memory state?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 31 Jan 2023 18:12:06 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "My database of assertion failures has four like that, all 15 and master:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=curculio&dt=2023-02-01%2001:05:17\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=curculio&dt=2023-01-11%2011:16:40\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=curculio&dt=2022-11-22%2012:19:21\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=conchuela&dt=2022-11-17%2021:47:02\n\nIt's always in proc_exit() in StartupProcShutdownHandler(), a SIGTERM\nhandler which is allowed to call that while in_restore_command is\ntrue.\n\nHere's a different one, some kind of latch corruption in the WAL\nwriter under 017_shm:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=culicidae&dt=2022-01-20%2016:26:54\n\n\n", "msg_date": "Wed, 1 Feb 2023 16:21:16 +1300", 
"msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "Hi,\n\nOn 2023-02-01 16:21:16 +1300, Thomas Munro wrote:\n> My database off assertion failures has four like that, all 15 and master:\n> \n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=curculio&dt=2023-02-01%2001:05:17\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=curculio&dt=2023-01-11%2011:16:40\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=curculio&dt=2022-11-22%2012:19:21\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=conchuela&dt=2022-11-17%2021:47:02\n> \n> It's always in proc_exit() in StartupProcShutdownHandler(), a SIGTERM\n> handler which is allowed to call that while in_restore_command is\n> true.\n\nUgh, no wonder we're getting crashes. This whole business seems bogus as\nhell.\n\n\nRestoreArchivedFile():\n...\n\t/*\n\t * Check signals before restore command and reset afterwards.\n\t */\n\tPreRestoreCommand();\n\n\t/*\n\t * Copy xlog from archival storage to XLOGDIR\n\t */\n\tret = shell_restore(xlogfname, xlogpath, lastRestartPointFname);\n\n\tPostRestoreCommand();\n\n\n/* SIGTERM: set flag to abort redo and exit */\nstatic void\nStartupProcShutdownHandler(SIGNAL_ARGS)\n{\n\tint\t\t\tsave_errno = errno;\n\n\tif (in_restore_command)\n\t\tproc_exit(1);\n\telse\n\t\tshutdown_requested = true;\n\tWakeupRecovery();\n\n\terrno = save_errno;\n}\n\nWhere PreRestoreCommand()/PostRestoreCommand() set in_restore_command.\n\n\n\nThere's *a lot* of stuff happening inside shell_restore() that's not\ncompatible with doing proc_exit() inside a signal handler. We're\nallocating memory! Interact with stdout.\n\nThere's also the fact that system() isn't signal safe, but that's a much\nless likely problematic issue.\n\n\nThis appears to have gotten worse over a sequence of commits. 
The\nfollowing commits each added something between PreRestoreCommand() and\nPostRestoreCommand().\n\n\ncommit 1b06d7bac901e5fd20bba597188bae2882bf954b\nAuthor: Fujii Masao <fujii@postgresql.org>\nDate: 2021-11-22 10:28:21 +0900\n \n Report wait events for local shell commands like archive_command.\n\nadded pgstat_report_wait_start/end. Unlikely to cause big issues, but\nnot good.\n\n\ncommit 7fed801135bae14d63b11ee4a10f6083767046d8\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: 2022-08-29 13:55:38 -0400\n\n Clean up inconsistent use of fflush().\n\nMade it a bit worse by adding an fflush(). That certainly seems like it\ncould cause hangs.\n\n\ncommit 9a740f81eb02e04179d78f3df2ce671276c27b07\nAuthor: Michael Paquier <michael@paquier.xyz>\nDate: 2023-01-16 16:31:43 +0900\n\n Refactor code in charge of running shell-based recovery commands\n\nwhich completely broke the mechanism. We suddenly run the entirety of\nshell_restore(), which does pallocs etc to build the string passed to\nsystem, and raises errors, all within a section in which a signal\nhandler can invoke proc_exit(). That's just completely broken.\n\n\nSorry, but particularly in this area, you got to be a heck of a lot more\ncareful.\n\nI don't see a choice but to revert the recent changes. They need a\nfairly large rewrite.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 1 Feb 2023 02:55:14 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2023-02-01 16:21:16 +1300, Thomas Munro wrote:\n>> It's always in proc_exit() in StartupProcShutdownHandler(), a SIGTERM\n>> handler which is allowed to call that while in_restore_command is\n>> true.\n\n> Ugh, no wonder we're getting crashes. This whole business seems bogus as\n> hell.\n\nIndeed :-(\n\n> I don't see a choice but to revert the recent changes.
They need a\n> fairly large rewrite.\n\n9a740f81e clearly made things a lot worse, but it wasn't great\nbefore. Can we see a way forward to removing the problem entirely?\n\nThe fundamental issue is that we have no good way to break out\nof system(), and I think the original idea was that\nin_restore_command would be set *only* for the duration of the\nsystem() call. That's clearly been lost sight of completely,\nbut maybe as a stopgap we could try to get back to that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 01 Feb 2023 10:12:26 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "On Wed, Feb 01, 2023 at 10:12:26AM -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> On 2023-02-01 16:21:16 +1300, Thomas Munro wrote:\n>>> It's always in proc_exit() in StartupProcShutdownHandler(), a SIGTERM\n>>> handler which is allowed to call that while in_restore_command is\n>>> true.\n> \n>> Ugh, no wonder we're getting crashes. This whole business seems bogus as\n>> hell.\n> \n> Indeed :-(\n\nUgh. My bad.\n\n> The fundamental issue is that we have no good way to break out\n> of system(), and I think the original idea was that\n> in_restore_command would be set *only* for the duration of the\n> system() call. That's clearly been lost sight of completely,\n> but maybe as a stopgap we could try to get back to that.\n\n+1. 
I'll produce some patches.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 1 Feb 2023 08:06:09 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "Hi,\n\nOn 2023-02-01 10:12:26 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2023-02-01 16:21:16 +1300, Thomas Munro wrote:\n> >> It's always in proc_exit() in StartupProcShutdownHandler(), a SIGTERM\n> >> handler which is allowed to call that while in_restore_command is\n> >> true.\n>\n> > Ugh, no wonder we're getting crashes. This whole business seems bogus as\n> > hell.\n>\n> Indeed :-(\n>\n> > I don't see a choice but to revert the recent changes. They need a\n> > fairly large rewrite.\n>\n> 9a740f81e clearly made things a lot worse, but it wasn't great\n> before. Can we see a way forward to removing the problem entirely?\n\nYea, I think we can - we should stop relying on system(). If we instead\nrun the command properly as a subprocess, we don't need to do bad things\nin the signal handler anymore.\n\n\n> The fundamental issue is that we have no good way to break out\n> of system(), and I think the original idea was that\n> in_restore_command would be set *only* for the duration of the\n> system() call. That's clearly been lost sight of completely,\n> but maybe as a stopgap we could try to get back to that.\n\nWe could push the functions setting in_restore_command down into\nExecuteRecoveryCommand(). 
But I don't think that'd end up necessarily\nbeing right either - we'd now use the mechanism in places we previously\ndidn't (cleanup/end commands).\n\nAnd there's just plenty other stuff in the 14bdb3f13de 9a740f81eb0 that\ndoesn't look right:\n- We now have two places open-coding what BuildRestoreCommand did\n\n- I'm doubtful that the new shell_* functions are the base for a good\n API to abstract restoring files\n\n- the error message for a failed restore command seems to have gotten\n worse:\n could not restore file \\\"%s\\\" from archive: %s\"\n ->\n \"%s \\\"%s\\\": %s\", commandName, command\n\n- shell_* imo is not a good namespace for something called from xlog.c,\n xlogarchive.c. I realize the intention is that shell_archive.c is\n going to be its own \"restore module\", but for now it imo looks odd\n\n- The comment moved out of RestoreArchivedFile() seems less\n useful at its new location\n\n- explanation of why we use GetOldestRestartPoint() is halfway lost\n\n\nMy name is listed as the first Reviewed-by, but I certainly haven't done\nany meaningful review of these patches. I just replied to top-level\nemail proposing \"recovery modules\".\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 1 Feb 2023 08:58:01 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "On Wed, Feb 1, 2023 at 11:58 AM Andres Freund <andres@anarazel.de> wrote:\n> > 9a740f81e clearly made things a lot worse, but it wasn't great\n> > before. Can we see a way forward to removing the problem entirely?\n>\n> Yea, I think we can - we should stop relying on system(). If we instead\n> run the command properly as a subprocess, we don't need to do bad things\n> in the signal handler anymore.\n\nI like the idea of not relying on system(). In most respects, doing\nfork() + exec() ourselves seems superior.
We can control where the\noutput goes, what we do while waiting, etc. But system() runs the\ncommand through the shell, so that for example you don't have to\ninvent your own way of splitting a string into words to be passed to\nexec[whatever](). I've never understood how you're supposed to get\nthat behavior other than by calling system().\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 1 Feb 2023 12:08:24 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "Hi,\n\nOn 2023-02-01 12:08:24 -0500, Robert Haas wrote:\n> On Wed, Feb 1, 2023 at 11:58 AM Andres Freund <andres@anarazel.de> wrote:\n> > > 9a740f81e clearly made things a lot worse, but it wasn't great\n> > > before. Can we see a way forward to removing the problem entirely?\n> >\n> > Yea, I think we can - we should stop relying on system(). If we instead\n> > run the command properly as a subprocess, we don't need to do bad things\n> > in the signal handler anymore.\n> \n> I like the idea of not relying on system(). In most respects, doing\n> fork() + exec() ourselves seems superior. We can control where the\n> output goes, what we do while waiting, etc. But system() runs the\n> command through the shell, so that for example you don't have to\n> invent your own way of splitting a string into words to be passed to\n> exec[whatever](). I've never understood how you're supposed to get\n> that behavior other than by calling system().\n\nWe could just exec the shell in the forked process, using -c to invoke\nthe command. That should give us pretty much the same efficiency as\nsystem(), with a lot more control.\n\nI think we already do that somewhere. <dig>. Ah, yes, spawn_process() in\npg_regress.c. 
I suspect we couldn't use exec for restore_command etc,\nas I think it's not uncommon to use && in the command.\n\n\nPerhaps we should abstract the relevant pieces of spawn_process()\ninto something more general? The OS specifics are sufficiently\ncomplicated that I don't think it'd be good to have multiple copies.\n\n\nIt's too bad that we have the history of passing things to shell,\notherwise we could define a common argument handling of the GUC and just\nexecve ourselves, but that ship has sailed.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 1 Feb 2023 09:20:16 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2023-02-01 12:08:24 -0500, Robert Haas wrote:\n>> I like the idea of not relying on system(). In most respects, doing\n>> fork() + exec() ourselves seems superior. We can control where the\n>> output goes, what we do while waiting, etc. But system() runs the\n>> command through the shell, so that for example you don't have to\n>> invent your own way of splitting a string into words to be passed to\n>> exec[whatever](). I've never understood how you're supposed to get\n>> that behavior other than by calling system().\n\n> We could just exec the shell in the forked process, using -c to invoke\n> the command. That should give us pretty much the same efficiency as\n> system(), with a lot more control.\n\nThe main thing that system() brings to the table is platform-specific\nknowledge of where the shell is.
I'm not very sure that we want to\nwire in \"/bin/sh\".\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 01 Feb 2023 12:27:19 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "On Wed, Feb 01, 2023 at 08:58:01AM -0800, Andres Freund wrote:\n> On 2023-02-01 10:12:26 -0500, Tom Lane wrote:\n>> The fundamental issue is that we have no good way to break out\n>> of system(), and I think the original idea was that\n>> in_restore_command would be set *only* for the duration of the\n>> system() call. That's clearly been lost sight of completely,\n>> but maybe as a stopgap we could try to get back to that.\n> \n> We could push the functions setting in_restore_command down into\n> ExecuteRecoveryCommand(). But I don't think that'd end up necessarily\n> being right either - we'd now use the mechanism in places we previously\n> didn't (cleanup/end commands).\n\nRight, we'd only want to set it for restore_command. I think that's\ndoable.\n\n> And there's just plenty other stuff in the 14bdb3f13de 9a740f81eb0 that\n> doesn't look right:\n> - We now have two places open-coding what BuildRestoreCommand did\n\nThis was done because BuildRestoreCommand() had become a thin wrapper\naround replace_percent_placeholders(). I can add it back if you don't\nthink this was the right decision.\n\n> - I'm doubtful that the new shell_* functions are the base for a good\n> API to abstract restoring files\n\nWhy?\n\n> - the error message for a failed restore command seems to have gotten\n> worse:\n> could not restore file \\\"%s\\\" from archive: %s\"\n> ->\n> \"%s \\\"%s\\\": %s\", commandName, command\n\nOkay, I'll work on improving this message.\n\n> - shell_* imo is not a good namespace for something called from xlog.c,\n> xlogarchive.c. 
I realize the intention is that shell_archive.c is\n> going to be its own \"restore module\", but for now it imo looks odd\n\nWhat do you propose instead? FWIW this should go away with recovery\nmodules. This is just an intermediate state to simplify those patches.\n\n> - The comment moved out of RestoreArchivedFile() doesn't seems less\n> useful at its new location\n\nWhere do you think it should go?\n\n> - explanation of why we use GetOldestRestartPoint() is halfway lost\n\nOkay, I'll work on adding more context here.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 1 Feb 2023 09:58:06 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "Hi,\n\nOn 2023-02-01 12:27:19 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2023-02-01 12:08:24 -0500, Robert Haas wrote:\n> >> I like the idea of not relying on system(). In most respects, doing\n> >> fork() + exec() ourselves seems superior. We can control where the\n> >> output goes, what we do while waiting, etc. But system() runs the\n> >> command through the shell, so that for example you don't have to\n> >> invent your own way of splitting a string into words to be passed to\n> >> exec[whatever](). I've never understood how you're supposed to get\n> >> that behavior other than by calling system().\n> \n> > We could just exec the shell in the forked process, using -c to invoke\n> > the command. That should give us pretty much the same efficiency as\n> > system(), with a lot more control.\n> \n> The main thing that system() brings to the table is platform-specific\n> knowledge of where the shell is. 
I'm not very sure that we want to\n> wire in \"/bin/sh\".\n\nWe seem to be doing OK with using SHELLPROG in pg_regress, which just\nseems to be using $SHELL from the build environment.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 1 Feb 2023 10:18:27 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "On Wed, Feb 01, 2023 at 09:58:06AM -0800, Nathan Bossart wrote:\n> On Wed, Feb 01, 2023 at 08:58:01AM -0800, Andres Freund wrote:\n>> On 2023-02-01 10:12:26 -0500, Tom Lane wrote:\n>>> The fundamental issue is that we have no good way to break out\n>>> of system(), and I think the original idea was that\n>>> in_restore_command would be set *only* for the duration of the\n>>> system() call. That's clearly been lost sight of completely,\n>>> but maybe as a stopgap we could try to get back to that.\n>> \n>> We could push the functions setting in_restore_command down into\n>> ExecuteRecoveryCommand(). But I don't think that'd end up necessarily\n>> being right either - we'd now use the mechanism in places we previously\n>> didn't (cleanup/end commands).\n> \n> Right, we'd only want to set it for restore_command. I think that's\n> doable.\n\nHere is a first draft for the proposed stopgap fix. If we want to proceed\nwith this, I can provide patches for the back branches.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 1 Feb 2023 14:35:55 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "On Wed, Feb 01, 2023 at 10:18:27AM -0800, Andres Freund wrote:\n> On 2023-02-01 12:27:19 -0500, Tom Lane wrote:\n>> Andres Freund <andres@anarazel.de> writes:\n>> The main thing that system() brings to the table is platform-specific\n>> knowledge of where the shell is. 
I'm not very sure that we want to\n>> wire in \"/bin/sh\".\n> \n> We seem to be doing OK with using SHELLPROG in pg_regress, which just\n> seems to be using $SHELL from the build environment.\n\nIt looks like this had better centralize a bit more of the logic from\npg_regress.c if that were to happen, to avoid more fuzzy logic with\nWIN32. That becomes invasive for a back-patch.\n\nBy the way, there is something that's itching me a bit here. 9a740f8\nhas enlarged by a lot the window between PreRestoreCommand() and\nPostRestoreCommand(), however curculio has reported a failure on\nREL_15_STABLE, where we only manipulate my_wait_event_info while the\nflag is on. Or am I getting that right that there is no way out of it\nunless we remove the dependency to system() even in the back-branches?\nCould there be an extra missing piece here?\n--\nMichael", "msg_date": "Thu, 2 Feb 2023 10:23:21 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "On Wed, Feb 01, 2023 at 02:35:55PM -0800, Nathan Bossart wrote:\n> Here is a first draft for the proposed stopgap fix. If we want to proceed\n> with this, I can provide patches for the back branches.\n\n> +\t/*\n> +\t * PreRestoreCommand() is used to tell the SIGTERM handler for the startup\n> +\t * process that it is okay to proc_exit() right away on SIGTERM. This is\n> +\t * done for the duration of the system() call because there isn't a good\n> +\t * way to break out while it is executing. Since we might call proc_exit()\n> +\t * in a signal handler here, it is extremely important that nothing but the\n> +\t * system() call happens between the calls to PreRestoreCommand() and\n> +\t * PostRestoreCommand().
Any additional code must go before or after this\n> +\t * section.\n> +\t */\n> +\tif (exitOnSigterm)\n> +\t\tPreRestoreCommand();\n> +\n> \trc = system(command);\n> +\n> +\tif (exitOnSigterm)\n> +\t\tPostRestoreCommand();\n> +\n> \tpgstat_report_wait_end();\n\nHmm. Isn't that something that we should also document in startup.c\nwhere both routines are defined? If we begin to use\nPreRestoreCommand() and PostRestoreCommand() in more code paths in the\nfuture, that could be again an issue. That looks enough to me to\nreduce the window back to what it was before 9a740f8, as exitOnSigterm\nis only used for restore_command. There is a different approach\npossible here: rely more on wait_event_info rather than failOnSignal\nand exitOnSigterm to decide which code path should do what.\n\nAndres Freund wrote:\n> - the error message for a failed restore command seems to have gotten\n> worse:\n> could not restore file \\\"%s\\\" from archive: %s\"\n> ->\n> \"%s \\\"%s\\\": %s\", commandName, command\n\nIMO, we don't lose any context with this method: the command type and\nthe command string itself are the bits actually relevant. Perhaps\nsomething like that would be more intuitive? One idea:\n\"could not execute command for %s: %s\", commandName, command\n\n> - shell_* imo is not a good namespace for something called from xlog.c,\n> xlogarchive.c. I realize the intention is that shell_archive.c is\n> going to be its own \"restore module\", but for now it imo looks odd\n\nshell_restore.c does not sound that bad to me, FWIW. The parallel\nwith the archive counterparts is here. 
My recent history is not that\ngood when it comes to naming, based on the feedback I received,\nthough.\n\n> And there's just plenty other stuff in the 14bdb3f13de 9a740f81eb0 that\n> doesn't look right:\n> - We now have two places open-coding what BuildRestoreCommand did\n\nYeah, BuildRestoreCommand() was just a small wrapper on top of the new\npercentrepl.c, making it rather irrelevant at this stage, IMO. For\nthe two code paths where it was called.\n\n> - The comment moved out of RestoreArchivedFile() doesn't seems less\n> useful at its new location\n\nWe are talking about that:\n- /*\n- * Remember, we rollforward UNTIL the restore fails so failure here is\n- * just part of the process... that makes it difficult to determine\n- * whether the restore failed because there isn't an archive to restore,\n- * or because the administrator has specified the restore program\n- * incorrectly. We have to assume the former.\n- *\n- * However, if the failure was due to any sort of signal, it's best to\n- * punt and abort recovery. (If we \"return false\" here, upper levels will\n- * assume that recovery is complete and start up the database!) It's\n- * essential to abort on child SIGINT and SIGQUIT, because per spec\n- * system() ignores SIGINT and SIGQUIT while waiting; if we see one of\n- * those it's a good bet we should have gotten it too.\n- *\n- * On SIGTERM, assume we have received a fast shutdown request, and exit\n- * cleanly. It's pure chance whether we receive the SIGTERM first, or the\n- * child process. If we receive it first, the signal handler will call\n- * proc_exit, otherwise we do it here. 
If we or the child process received\n- * SIGTERM for any other reason than a fast shutdown request, postmaster\n- * will perform an immediate shutdown when it sees us exiting\n- * unexpectedly.\n- *\n- * We treat hard shell errors such as \"command not found\" as fatal, too.\n- */\n\nThe failure processing is stuck within the way we build and handle the\ncommand given down to system(), so keeping this in shell_restore.c (or\nwhatever name you think would be a better fit) makes sense to me.\nNow, thinking a bit more of this, we could just push the description\ndown to ExecuteRecoveryCommand(), that actually does the work,\nadapting the comment based on the refactored internals of the\nroutine.\n--\nMichael", "msg_date": "Thu, 2 Feb 2023 11:06:19 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> Hmm. Isn't that something that we should also document in startup.c\n> where both routines are defined? If we begin to use\n> PreRestoreCommand() and PostRestoreCommand() in more code paths in the\n> future, that could be again an issue.\n\nI was vaguely wondering about removing both of those functions\nin favor of an integrated function that does a system() call\nwith those things before and after it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 01 Feb 2023 21:34:44 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "On Wed, Feb 01, 2023 at 09:34:44PM -0500, Tom Lane wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n>> Hmm. Isn't that something that we should also document in startup.c\n>> where both routines are defined?
If we begin to use\n>> PreRestoreCommand() and PostRestoreCommand() in more code paths in the\n>> future, that could be again an issue.\n> \n> I was vaguely wondering about removing both of those functions\n> in favor of an integrated function that does a system() call\n> with those things before and after it.\n\nIt seems to me that this is pretty much the same as storing\nin_restore_command in shell_restore.c, and that for recovery modules\nthis comes down to the addition of an extra callback called in\nstartup.c to check if the flag is up or not. Now the patch is doing \nthings the opposite way: like on HEAD, store the flag in startup.c but\nswitch it at will with the routines in startup.c. I find the approach\nof the patch a bit more intuitive, TBH, as that makes the interface\nsimpler for other recovery modules that may want to switch the flag\nback-and-forth, and I suspect that there may be cases in recovery\nmodules where we'd still want to switch the flag, but not necessarily\nlink it to system().\n--\nMichael", "msg_date": "Thu, 2 Feb 2023 13:24:15 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "On Thu, Feb 02, 2023 at 01:24:15PM +0900, Michael Paquier wrote:\n> On Wed, Feb 01, 2023 at 09:34:44PM -0500, Tom Lane wrote:\n>> I was vaguely wondering about removing both of those functions\n>> in favor of an integrated function that does a system() call\n>> with those things before and after it.\n> \n> It seems to me that this is pretty much the same as storing\n> in_restore_command in shell_restore.c, and that for recovery modules\n> this comes down to the addition of an extra callback called in\n> startup.c to check if the flag is up or not. Now the patch is doing \n> things the opposite way: like on HEAD, store the flag in startup.c but\n> switch it at will with the routines in startup.c. 
I find the approach\n> of the patch a bit more intuitive, TBH, as that makes the interface\n> simpler for other recovery modules that may want to switch the flag\n> back-and-forth, and I suspect that there may be cases in recovery\n> modules where we'd still want to switch the flag, but not necessarily\n> link it to system().\n\nHm. I don't know if we want to encourage further use of\nin_restore_command since it seems to be prone to misuse. Here's a v2 that\ndemonstrates Tom's idea (bikeshedding on names and comments is welcome). I\npersonally like this approach a bit more.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 2 Feb 2023 12:09:57 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "On Thu, Feb 2, 2023 at 3:10 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> Hm. I don't know if we want to encourage further use of\n> in_restore_command since it seems to be prone to misuse. Here's a v2 that\n> demonstrates Tom's idea (bikeshedding on names and comments is welcome). I\n> personally like this approach a bit more.\n\n+ /*\n+ * When exitOnSigterm is set and we are in the startup process, use the\n+ * special wrapper for system() that enables exiting immediately upon\n+ * receiving SIGTERM. This ensures we can break out of system() if\n+ * required.\n+ */\n\nThis comment, for me, raises more questions than it answers. Why do we\nonly do this in the startup process?\n\nAlso, and this part is not the fault of this patch but a defect of the\npre-existing comments, under what circumstances do we not want to exit\nwhen we get a SIGTERM? It's standard behavior for PostgreSQL backends\nto exit when they receive SIGTERM, so the question isn't why we\nsometimes exit immediately but why we ever don't.
The existing code\ncalls ExecuteRecoveryCommand with exitOnSigterm true in some cases and\nfalse in other cases, and AFAICS there are zero words of comments\nexplaining the reasoning.\n\n+ if (exitOnSigterm && MyBackendType == B_STARTUP)\n+ rc = RunInterruptibleShellCommand(command);\n+ else\n+ rc = system(command);\n\nAnd this looks like pure magic. I'm all in favor of not relying on\nsystem(), but using it under some opaque set of conditions and\notherwise doing something else is not the way. At the very least this\nneeds to be explained a whole lot better.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 2 Feb 2023 16:14:54 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "On Thu, Feb 02, 2023 at 04:14:54PM -0500, Robert Haas wrote:\n> + /*\n> + * When exitOnSigterm is set and we are in the startup process, use the\n> + * special wrapper for system() that enables exiting immediately upon\n> + * receiving SIGTERM. This ensures we can break out of system() if\n> + * required.\n> + */\n> \n> This comment, for me, raises more questions than it answers. Why do we\n> only do this in the startup process?\n\nCurrently, this functionality only exists in the startup process because it\nis only used for restore_command. More below...\n\n> Also, and this part is not the fault of this patch but a defect of the\n> pre-existing comments, under what circumstances do we not want to exit\n> when we get a SIGTERM? It's standard behavior for PostgreSQL backends\n> to exit when they receive SIGTERM, so the question isn't why we\n> sometimes exit immediately but why we ever don't. The existing code\n> calls ExecuteRecoveryCommand with exitOnSigterm true in some cases and\n> false in other cases, and AFAICS there are zero words of comments\n> explaining the reasoning.\n\nI've been digging into the history here. 
This e-mail seems to have the\nmost context [0]. IIUC this was intended to prevent \"fast\" shutdowns from\nescalating to \"immediate\" shutdowns because the restore command died\nunexpectedly. This doesn't apply to archive_cleanup_command because we\ndon't FATAL if it dies unexpectedly. It seems like this idea should apply\nto recovery_end_command, too, but AFAICT it doesn't use the same approach.\nMy guess is that this hasn't come up because it's less likely that both 1)\nrecovery_end_command is used and 2) someone initiates shutdown while it is\nrunning.\n\nBTW the relevant commits are cdd46c7 (added SIGTERM handling for\nrestore_command), 9e403c2 (added recovery_end_command), and c21ac0b (added\nwhat is today called archive_cleanup_command).\n\n> + if (exitOnSigterm && MyBackendType == B_STARTUP)\n> + rc = RunInterruptibleShellCommand(command);\n> + else\n> + rc = system(command);\n> \n> And this looks like pure magic. I'm all in favor of not relying on\n> system(), but using it under some opaque set of conditions and\n> otherwise doing something else is not the way. At the very least this\n> needs to be explained a whole lot better.\n\nIf we applied this exit-on-SIGTERM behavior to recovery_end_command, I\nthink we could combine failOnSignal and exitOnSigterm into one flag, and\nthen it might be a little easier to explain what is going on. In any case,\nI agree that this deserves a lengthy explanation, which I'll continue to\nwork on.\n\n[0] https://postgr.es/m/499047FE.9090407%40enterprisedb.com\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 2 Feb 2023 14:01:13 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "On Thu, Feb 02, 2023 at 02:01:13PM -0800, Nathan Bossart wrote:\n> I've been digging into the history here. This e-mail seems to have the\n> most context [0]. 
IIUC this was intended to prevent \"fast\" shutdowns from\n> escalating to \"immediate\" shutdowns because the restore command died\n> unexpectedly. This doesn't apply to archive_cleanup_command because we\n> don't FATAL if it dies unexpectedly. It seems like this idea should apply\n> to recovery_end_command, too, but AFAICT it doesn't use the same approach.\n> My guess is that this hasn't come up because it's less likely that both 1)\n> recovery_end_command is used and 2) someone initiates shutdown while it is\n> running.\n\nActually, this still doesn't really explain why we need to exit immediately\nin the SIGTERM handler for restore_command. We already have handling for\nwhen the command indicates it exited due to SIGTERM, so it should be no\nproblem if the command receives it before the startup process. And\nHandleStartupProcInterrupts() should exit at an appropriate time after the\nstartup process receives SIGTERM.\n\nMy guess was that this is meant to allow breaking out of the system() call,\nbut I don't understand why that's important here. Maybe we could just\nremove this exit-in-SIGTERM-handler business...\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 2 Feb 2023 14:39:19 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "On Thu, Feb 02, 2023 at 02:39:19PM -0800, Nathan Bossart wrote:\n> Maybe we could just\n> remove this exit-in-SIGTERM-handler business...\n\nI've spent some time testing this. It seems to work pretty well, but only\nif I keep the exit-on-SIGTERM logic in shell_restore(). Without that, I'm\nseeing delayed shutdowns, which I assume means\nHandleStartupProcInterrupts() isn't getting called (I'm still investigating\nthis). 
In any case, the fact that shell_restore() exits if the command\nfails due to SIGTERM seems like an implementation detail that we won't\nnecessarily want to rely on once recovery modules are available. In short,\nwe seem to depend on the SIGTERM handling in RestoreArchivedFile() in order\nto be responsive to shutdown requests.\n\nOne idea I have is to approximate the current behavior by simply checking\nfor the shutdown_requested flag before and after executing\nrestore_command. This seems to work as desired even if the exit-on-SIGTERM\nlogic is removed from shell_restore(). Unless there is some reason to\nbreak out of system() (versus just waiting for the command to fail after it\nreceives SIGTERM), I think this approach should suffice.\n\nI've attached a draft patch.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 2 Feb 2023 21:35:48 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "Hi,\n\nOn 2023-02-02 10:23:21 +0900, Michael Paquier wrote:\n> On Wed, Feb 01, 2023 at 10:18:27AM -0800, Andres Freund wrote:\n> > On 2023-02-01 12:27:19 -0500, Tom Lane wrote:\n> >> Andres Freund <andres@anarazel.de> writes:\n> >> The main thing that system() brings to the table is platform-specific\n> >> knowledge of where the shell is. I'm not very sure that we want to\n> >> wire in \"/bin/sh\".\n> > \n> > We seem to be doing OK with using SHELLPROG in pg_regress, which just\n> > seems to be using $SHELL from the build environment.\n> \n> It looks like this had better centralize a bit more of the logic from\n> pg_regress.c if that were to happen, to avoid more fuzzy logic with\n> WIN32. That becomes invasive for a back-patch.\n\nI don't think we should consider backpatching such a change. 
There's\nenough subtlety that I'd want to see it bake for some time.\n\n\n> By the way, there is something that's itching me a bit here. 9a740f8\n> has enlarged by a lot the window between PreRestoreCommand() and\n> PostRestoreCommand(), however curculio has reported a failure on\n> REL_15_STABLE, where we only manipulate my_wait_event_info while the\n> flag is on. Or I am getting that right that there is no way out of it\n> unless we remove the dependency to system() even in the back-branches?\n> Could there be an extra missing piece here?\n\nYea, that's indeed odd.\n\n\nUgh, I think I might understand what's happening:\n\nThe signal arrives just after the fork() (within system()). Because we\nhave all our processes configure themselves as process group leaders,\nand we signal the entire process group (c.f. signal_child()), both the\nchild process and the parent will process the signal. So we'll end up\ndoing a proc_exit() in both. As both are trying to remove themselves\nfrom the same PGPROC etc entry, that doesn't end well.\n\nI don't see how we can solve that properly as long as we use system().\n\nA workaround for the back branches could be to have a test in\nStartupProcShutdownHandler() that tests if MyProcPid == getpid(), and\nnot do the proc_exit() if they don't match. We probably should just do\nan _exit() in that case.\n\n\nI doubt the idea to signal the entire process group in signal_child()\nis good. I regularly see core dumps of archive commands because we sent\nSIGQUIT during an immediate shutdown, and of course cp etc don't have a\nSIGQUIT handler, and the default action is to core dump.\n\nBut a replacement for it is not a small amount of work. 
While a\nsubprocess is running, we can't just handle SIGQUIT with _exit(); we need\nto first signal the child with something appropriate (SIGKILL?).\n\nOTOH, the current approach only works on systems with setsid(2) support,\nso we probably shouldn't rely so hard on it anyway.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 2 Feb 2023 23:15:57 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Ugh, I think I might understand what's happening:\n\n> The signal arrives just after the fork() (within system()). Because we\n> have all our processes configure themselves as process group leaders,\n> and we signal the entire process group (c.f. signal_child()), both the\n> child process and the parent will process the signal. So we'll end up\n> doing a proc_exit() in both. As both are trying to remove themselves\n> from the same PGPROC etc entry, that doesn't end well.\n\nUgh ...\n\n> I don't see how we can solve that properly as long as we use system().\n\n... but I don't see how that's system()'s fault? Doing the fork()\nourselves wouldn't change anything about that.\n\n> A workaround for the back branches could be to have a test in\n> StartupProcShutdownHandler() that tests if MyProcPid == getpid(), and\n> not do the proc_exit() if they don't match. We probably should just do\n> an _exit() in that case.\n\nMight work.\n\n> OTOH, the current approach only works on systems with setsid(2) support,\n> so we probably shouldn't rely so hard on it anyway.\n\nsetsid(2) is required since SUSv2, so I'm not sure which systems\nare of concern here ... 
other than Redmond's of course.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 03 Feb 2023 02:24:03 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "On Fri, Feb 3, 2023 at 8:24 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Ugh, I think I might understand what's happening:\n>\n> > The signal arrives just after the fork() (within system()). Because we\n> > have all our processes configure themselves as process group leaders,\n> > and we signal the entire process group (c.f. signal_child()), both the\n> > child process and the parent will process the signal. So we'll end up\n> > doing a proc_exit() in both. As both are trying to remove themselves\n> > from the same PGPROC etc entry, that doesn't end well.\n>\n> Ugh ...\n\nYuck, but yeah that makes sense.\n\n> > I don't see how we can solve that properly as long as we use system().\n>\n> ... but I don't see how that's system()'s fault? Doing the fork()\n> ourselves wouldn't change anything about that.\n\nWhat if we block signals, fork, then in the child, install the default\nSIGTERM handler, then unblock, and then exec the shell? If SIGTERM is\ndelivered either before or after exec (but before whatever is loaded\ninstalls a new handler) then the child is terminated, but without\nrunning the handler. Isn't that what we want here?\n\n\n", "msg_date": "Fri, 3 Feb 2023 20:34:36 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "Hi,\n\nOn 2023-02-02 14:39:19 -0800, Nathan Bossart wrote:\n> Actually, this still doesn't really explain why we need to exit immediately\n> in the SIGTERM handler for restore_command. 
We already have handling for\n> when the command indicates it exited due to SIGTERM, so it should be no\n> problem if the command receives it before the startup process. And\n> HandleStartupProcInterrupts() should exit at an appropriate time after the\n> startup process receives SIGTERM.\n\n> My guess was that this is meant to allow breaking out of the system() call,\n> but I don't understand why that's important here. Maybe we could just\n> remove this exit-in-SIGTERM-handler business...\n\nI don't think you can, at least not easily. For one, we have no\nguarantee that the child process got a signal at all - we don't have a\nhard dependency on setsid(). And even if we have setsid(), there's no\nguarantee that the executed process reacts to SIGTERM and that the child\ndidn't create its own process group (and thus isn't reached by the\nsignal to the process group, sent in signal_child()).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 2 Feb 2023 23:35:29 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "On Fri, Feb 3, 2023 at 8:35 PM Andres Freund <andres@anarazel.de> wrote:\n> we don't have a\n> hard dependency on setsid()\n\nFTR There are no Unixes without setsid()... 
HAVE_SETSID was only left\nin the tree because we were discussing whether to replace it with\n!defined(WIN32) or whether that somehow made things more confusing,\nbut then while trying to figure out what to do about that, I noticed\nthat Windows *does* have a near-equivalent thing, or IIRC several\nthings like that, and that kinda stopped me in my tracks.\n\n\n", "msg_date": "Fri, 3 Feb 2023 20:42:07 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "Hi,\n\nOn 2023-02-03 02:24:03 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Ugh, I think I might understand what's happening:\n> \n> > The signal arrives just after the fork() (within system()). Because we\n> > have all our processes configure themselves as process group leaders,\n> > and we signal the entire process group (c.f. signal_child()), both the\n> > child process and the parent will process the signal. So we'll end up\n> > doing a proc_exit() in both. As both are trying to remove themselves\n> > from the same PGPROC etc entry, that doesn't end well.\n> \n> Ugh ...\n> \n> > I don't see how we can solve that properly as long as we use system().\n> \n> ... but I don't see how that's system()'s fault? Doing the fork()\n> ourselves wouldn't change anything about that.\n\nIf we did the fork ourselves, we'd temporarily change the signal mask\nbefore the fork() and reset it immediately in the parent, but not in the\nchild. We can't do that with system(), because we don't get control back\nearly enough - we'd just block signals for the entire duration of\nsystem().\n\nI wonder if this shows a problem with the change in 14 to make pgarch.c\nbe attached to shared memory. Before that it didn't have to worry about\nproblems like the above in the archiver, but now we do. 
It's less severe\nthan the startup process issue, because we don't have a comparable\nsignal handler in pgarch, but still.\n\nI'm e.g. not sure that there aren't issues with\nprocsignal_sigusr1_handler() or such executing in a forked process.\n\n\n> > A workaround for the back branches could be to have a test in\n> > StartupProcShutdownHandler() that tests if MyProcPid == getpid(), and\n> > not do the proc_exit() if they don't match. We probably should just do\n> > an _exit() in that case.\n> \n> Might work.\n\nI wonder if we should add code complaining loudly about such a mismatch\nto proc_exit(), in addition to handling it more silently in\nStartupProcShutdownHandler(). Also, an assertion in\n[Auxiliary]ProcKill that proc->pid == MyProcPid == getpid() seems like a\ngood idea.\n\n\n> > OTOH, the current approach only works on systems with setsid(2) support,\n> > so we probably shouldn't rely so hard on it anyway.\n> \n> setsid(2) is required since SUSv2, so I'm not sure which systems\n> are of concern here ... other than Redmond's of course.\n\nI was thinking of windows, yes.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 2 Feb 2023 23:44:22 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Fri, Feb 3, 2023 at 8:35 PM Andres Freund <andres@anarazel.de> wrote:\n>> we don't have a hard dependency on setsid()\n\n> FTR There are no Unixes without setsid()...\n\nYeah. What I just got done reading in SUSv2 (1997) is\n\"Derived from the POSIX.1-1988 standard\". 
We need not\nconcern ourselves with any systems not having it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 03 Feb 2023 02:46:36 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2023-02-03 02:24:03 -0500, Tom Lane wrote:\n>> setsid(2) is required since SUSv2, so I'm not sure which systems\n>> are of concern here ... other than Redmond's of course.\n\n> I was thinking of windows, yes.\n\nBut given the lack of fork(2), Windows requires a completely\ndifferent solution anyway, no?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 03 Feb 2023 02:50:38 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "Hi,\n\nOn 2023-02-03 20:34:36 +1300, Thomas Munro wrote:\n> What if we block signals, fork, then in the child, install the default\n> SIGTERM handler, then unblock, and then exec the shell?\n\nYep. I was momentarily wondering why we'd even need to unblock signals,\nbut while exec (et al) reset the signal handler, they don't reset the\nmask...\n\nWe could, for good measure, do PGSharedMemoryDetach() etc. But I don't\nthink it's quite worth it if we're careful with signals. However\nClosePostmasterPorts() might be a good idea? I think not doing it might\ncause issues like keeping the listen sockets alive after we shut down\npostmaster, preventing us from starting up again?\n\nLooks like PR_SET_PDEATHSIG isn't reset across an execve(). But that\nactually seems good?\n\n\n> If SIGTERM is delivered either before or after exec (but before\n> whatever is loaded installs a new handler) then the child is\n> terminated, but without running the handler. 
Isn't that what we want\n> here?\n\nYep, I think so.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 2 Feb 2023 23:58:06 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "Hi,\n\nOn 2023-02-03 02:50:38 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2023-02-03 02:24:03 -0500, Tom Lane wrote:\n> >> setsid(2) is required since SUSv2, so I'm not sure which systems\n> >> are of concern here ... other than Redmond's of course.\n>\n> > I was thinking of windows, yes.\n>\n> But given the lack of fork(2), Windows requires a completely\n> different solution anyway, no?\n\nNot sure it needs to be that different. I think what we basically want\nis:\n\n1) Something vaguely popen() shaped that starts a subprocess, while\n being careful about signal handlers, returning the pid of the child\n process. Not sure if we want to redirect stdout/stderr or\n not. Probably not?\n\n2) A blocking wrapper around 1) that takes care to forward fatal signals\n to the subprocess, including in the SIGQUIT case and probably being\n interruptible with query cancels etc in the relevant process types.\n\n\nThinking about popen() suggests that we have a similar problem with COPY\nFROM PROGRAM as we have in pgarch (i.e. 
not as bad as the startup\nprocess issue, but still not great, due to\nprocsignal_sigusr1_handler()).\n\n\nWhat's worse, the problem exists for untrusted PLs as well, and\nobviously we can't ensure that signals are correctly masked there.\n\nThis seems to suggest that we ought to install a MyProcPid != getpid()\nlike defense in all our signal handlers...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 3 Feb 2023 00:09:13 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "On Fri, Feb 3, 2023 at 9:09 PM Andres Freund <andres@anarazel.de> wrote:\n> Thinking about popen() suggests that we have a similar problem with COPY\n> FROM PROGRAM as we have in pgarch (i.e. not as bad as the startup\n> process issue, but still not great, due to\n> procsignal_sigusr1_handler()).\n\nA small mercy: while we promote some kinds of fatal-ish signals to\ngroup level with kill(-PID, ...), we don't do that for SIGUSR1 for\nlatches or procsignals.\n\n\n", "msg_date": "Fri, 3 Feb 2023 21:19:23 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "Hi, \n\nOn February 3, 2023 9:19:23 AM GMT+01:00, Thomas Munro <thomas.munro@gmail.com> wrote:\n>On Fri, Feb 3, 2023 at 9:09 PM Andres Freund <andres@anarazel.de> wrote:\n>> Thinking about popen() suggests that we have a similar problem with COPY\n>> FROM PROGRAM as we have in pgarch (i.e. not as bad as the startup\n>> process issue, but still not great, due to\n>> procsignal_sigusr1_handler()).\n>\n>A small mercy: while we promote some kinds of fatal-ish signals to\n>group level with kill(-PID, ...), we don't do that for SIGUSR1 for\n>latches or procsignals.\n\nNot as bad, but we still do SetLatch() from a bunch of places that would be reached... \n\nAndres \n-- \nSent from my Android device with K-9 Mail. 
Please excuse my brevity.\n\n\n", "msg_date": "Fri, 03 Feb 2023 10:24:18 +0100", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "On Thu, Feb 02, 2023 at 11:44:22PM -0800, Andres Freund wrote:\n> On 2023-02-03 02:24:03 -0500, Tom Lane wrote:\n>> Andres Freund <andres@anarazel.de> writes:\n>> > A workaround for the back branches could be to have a test in\n>> > StartupProcShutdownHandler() that tests if MyProcPid == getpid(), and\n>> > not do the proc_exit() if they don't match. We probably should just do\n>> > an _exit() in that case.\n>> \n>> Might work.\n> \n> I wonder if we should add code complaining loudly about such a mismatch\n> to proc_exit(), in addition to handling it more silently in\n> StartupProcShutdownHandler(). Also, an assertion in\n> [Auxiliary]ProcKill that proc->xid == MyProcPid == getpid() seems like a\n> good idea.\n\n From the discussion, it sounds like we don't want to depend on the child\nprocess receiving/handling the signal, so we can't get rid of the\nbreak-out-of-system() behavior (at least not in back-branches). I've put\ntogether some work-in-progress patches for the stopgap/back-branch fix.\n\n0001 is just v1-0001 from upthread. This moves Pre/PostRestoreCommand to\nsurround only the call to system(). I think this should get us closer to\npre-v15 behavior.\n\n0002 adds the getpid() check mentioned above to\nStartupProcShutdownHandler(), and it adds assertions to proc_exit() and\n[Auxiliary]ProcKill().\n\n0003 adds checks for shutdown requests before and after the call to\nshell_restore(). IMO the Pre/PostRestoreCommand stuff is an implementation\ndetail for restore_command, so I think it behooves us to have some\nadditional shutdown checks that apply even for recovery modules. 
This\npatch could probably be moved to the recovery modules thread.\n\nIs this somewhat close to what folks had in mind?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 3 Feb 2023 10:54:17 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "On Fri, Feb 03, 2023 at 10:54:17AM -0800, Nathan Bossart wrote:\n> 0001 is just v1-0001 from upthread. This moves Pre/PostRestoreCommand to\n> surround only the call to system(). I think this should get us closer to\n> pre-v15 behavior.\n\n+ if (exitOnSigterm)\n+ PreRestoreCommand();\n+\n rc = system(command);\n+\n+ if (exitOnSigterm)\n+ PostRestoreCommand();\n\nI don't really want to let that hanging around on HEAD much longer, so\nI'm OK to do that for HEAD, then figure out what needs to be done for\nthe older issue at hand.\n\n+ /*\n+ * PreRestoreCommand() is used to tell the SIGTERM handler for the startup\n+ * process that it is okay to proc_exit() right away on SIGTERM. This is\n+ * done for the duration of the system() call because there isn't a good\n+ * way to break out while it is executing. Since we might call proc_exit()\n+ * in a signal handler here, it is extremely important that nothing but the\n+ * system() call happens between the calls to PreRestoreCommand() and\n+ * PostRestoreCommand(). Any additional code must go before or after this\n+ * section.\n+ */\n\nStill, it seems to me that the large comment block in shell_restore()\nought to be moved to ExecuteRecoveryCommand(), no? 
The assumptions\nunder which one can use exitOnSigterm and failOnSignal could be\ncompleted in the header of the function based on that.\n--\nMichael", "msg_date": "Sat, 4 Feb 2023 11:47:16 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "Hi,\n\nOn 2023-02-03 10:54:17 -0800, Nathan Bossart wrote:\n> @@ -146,7 +146,25 @@ ExecuteRecoveryCommand(const char *command, const char *commandName,\n> \t */\n> \tfflush(NULL);\n> \tpgstat_report_wait_start(wait_event_info);\n> +\n> +\t/*\n> +\t * PreRestoreCommand() is used to tell the SIGTERM handler for the startup\n> +\t * process that it is okay to proc_exit() right away on SIGTERM. This is\n> +\t * done for the duration of the system() call because there isn't a good\n> +\t * way to break out while it is executing. Since we might call proc_exit()\n> +\t * in a signal handler here, it is extremely important that nothing but the\n> +\t * system() call happens between the calls to PreRestoreCommand() and\n> +\t * PostRestoreCommand(). Any additional code must go before or after this\n> +\t * section.\n> +\t */\n> +\tif (exitOnSigterm)\n> +\t\tPreRestoreCommand();\n> +\n> \trc = system(command);\n> +\n> +\tif (exitOnSigterm)\n> +\t\tPostRestoreCommand();\n> +\n> \tpgstat_report_wait_end();\n>\n> \tif (rc != 0)\n\nIt's somewhat weird that we now call the startup-process specific\nPreRestoreCommand/PostRestoreCommand() in other processes than the\nstartup process. 
Gated on a variable that's not immediately obviously\ntied to being in the startup process.\n\nI think at least we ought to add comments to PreRestoreCommand /\nPostRestoreCommand that they need to be robust against being called\noutside of the startup process, and similarly a comment in\nExecuteRecoveryCommand(), explaining that all this stuff just works in\nthe startup process.\n\n\n> diff --git a/src/backend/postmaster/startup.c b/src/backend/postmaster/startup.c\n> index bcd23542f1..503eb1a5a6 100644\n> --- a/src/backend/postmaster/startup.c\n> +++ b/src/backend/postmaster/startup.c\n> @@ -19,6 +19,8 @@\n> */\n> #include \"postgres.h\"\n> \n> +#include <unistd.h>\n> +\n> #include \"access/xlog.h\"\n> #include \"access/xlogrecovery.h\"\n> #include \"access/xlogutils.h\"\n> @@ -121,7 +123,17 @@ StartupProcShutdownHandler(SIGNAL_ARGS)\n> \tint\t\t\tsave_errno = errno;\n> \n> \tif (in_restore_command)\n> -\t\tproc_exit(1);\n> +\t{\n> +\t\t/*\n> +\t\t * If we are in a child process (e.g., forked by system() in\n> +\t\t * shell_restore()), we don't want to call any exit callbacks. The\n> +\t\t * parent will take care of that.\n> +\t\t */\n> +\t\tif (MyProcPid == (int) getpid())\n> +\t\t\tproc_exit(1);\n> +\t\telse\n> +\t\t\t_exit(1);\n\nI think it might be worth adding something like\n const char msg[] = \"StartupProcShutdownHandler() called in child process\";\n write(STDERR_FILENO, msg, sizeof(msg));\nto this path. 
Otherwise it might end up being a very hard to debug path.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 4 Feb 2023 03:20:34 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "Hi,\n\nOn 2023-02-02 11:06:19 +0900, Michael Paquier wrote:\n> > - the error message for a failed restore command seems to have gotten\n> > worse:\n> > could not restore file \\\"%s\\\" from archive: %s\"\n> > ->\n> > \"%s \\\"%s\\\": %s\", commandName, command\n> \n> IMO, we don't lose any context with this method: the command type and\n> the command string itself are the bits actually relevant. Perhaps\n> something like that would be more intuitive? One idea:\n> \"could not execute command for %s: %s\", commandName, command\n\nWe do - you now can't identify the filename that failed without parsing\nthe command, which obviously isn't easily possible generically, because,\nwell, it's user-configurable. I like that the command is logged now,\nbut I think we need the filename be at a predictable position in addition.\n\n\n> > - shell_* imo is not a good namespace for something called from xlog.c,\n> > xlogarchive.c. I realize the intention is that shell_archive.c is\n> > going to be its own \"restore module\", but for now it imo looks odd\n> \n> shell_restore.c does not sound that bad to me, FWIW. The parallel\n> with the archive counterparts is here. My recent history is not that\n> good when it comes to naming, based on the feedback I received,\n> though.\n\nI don't mind shell_restore.c much - after all, the filename is\nnamespaced by the directory. However, I do mind function names like\nshell_restore(), that could also be about restoring what SHELL is set\nto, or whatever. 
And functions aren't namespaced in C.\n\n\n> > And there's just plenty other stuff in the 14bdb3f13de 9a740f81eb0 that\n> > doesn't look right:\n> > - We now have two places open-coding what BuildRestoreCommand did\n> \n> Yeah, BuildRestoreCommand() was just a small wrapper on top of the new\n> percentrepl.c, making it rather irrelevant at this stage, IMO. For\n> the two code paths where it was called.\n\nI don't at all agree. Particularly because you didn't even leave a\npointer in each of the places that if you update one, you also need to\nupdate the other. I don't mind the amount of code it adds, I do mind\nthat it, without any recognizable reason, implements policy in multiple\nplaces.\n\n\n> > - The comment moved out of RestoreArchivedFile() doesn't seems less\n> > useful at its new location\n> \n> We are talking about that:\n> - /*\n> - * Remember, we rollforward UNTIL the restore fails so failure here is\n> - * just part of the process... that makes it difficult to determine\n> - * whether the restore failed because there isn't an archive to restore,\n> - * or because the administrator has specified the restore program\n> - * incorrectly. We have to assume the former.\n> - *\n> - * However, if the failure was due to any sort of signal, it's best to\n> - * punt and abort recovery. (If we \"return false\" here, upper levels will\n> - * assume that recovery is complete and start up the database!) It's\n> - * essential to abort on child SIGINT and SIGQUIT, because per spec\n> - * system() ignores SIGINT and SIGQUIT while waiting; if we see one of\n> - * those it's a good bet we should have gotten it too.\n> - *\n> - * On SIGTERM, assume we have received a fast shutdown request, and exit\n> - * cleanly. It's pure chance whether we receive the SIGTERM first, or the\n> - * child process. If we receive it first, the signal handler will call\n> - * proc_exit, otherwise we do it here. 
If we or the child process received\n> - * SIGTERM for any other reason than a fast shutdown request, postmaster\n> - * will perform an immediate shutdown when it sees us exiting\n> - * unexpectedly.\n> - *\n> - * We treat hard shell errors such as \"command not found\" as fatal, too.\n> - */\n> \n> The failure processing is stuck within the way we build and handle the\n> command given down to system(), so keeping this in shell_restore.c (or\n> whatever name you think would be a better fit) makes sense to me.\n> Now, thinking a bit more of this, we could just push the description\n> down to ExecuteRecoveryCommand(), that actually does the work,\n> adaptinh the comment based on the refactored internals of the\n> routine.\n\nNothing in the shell_restore() comments explains / asserts that it is\nonly ever to be called in the startup process. And outside of the\nstartup process none of the above actually makes sense.\n\nThat's kind of my problem with these changes. They try to introduce new\nabstraction layers, but don't provide real abstraction, because they're\nvery tightly bound to the way the functions were called before the\nrefactoring. And none of these restrictions are actually documented.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 4 Feb 2023 03:30:29 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "On Sat, Feb 04, 2023 at 03:30:29AM -0800, Andres Freund wrote:\n> That's kind of my problem with these changes. They try to introduce new\n> abstraction layers, but don't provide real abstraction, because they're\n> very tightly bound to the way the functions were called before the\n> refactoring. And none of these restrictions are actually documented.\n\nOkay. Michael, why don't we revert the shell_restore stuff for now? 
Once\nthe archive modules interface changes and the fix for this\nSIGTERM-during-system() problem are in, I will work through this feedback\nand give recovery modules another try. I'm still hoping to have recovery\nmodules ready in time for the v16 feature freeze.\n\nMy intent was to improve this code by refactoring and reducing code\nduplication, but I seem to have missed the mark. I am sorry.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sat, 4 Feb 2023 10:03:54 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "On Sat, Feb 04, 2023 at 10:03:54AM -0800, Nathan Bossart wrote:\n> Okay. Michael, why don't we revert the shell_restore stuff for now? Once\n> the archive modules interface changes and the fix for this\n> SIGTERM-during-system() problem are in, I will work through this feedback\n> and give recovery modules another try. I'm still hoping to have recovery\n> modules ready in time for the v16 feature freeze.\n\nYes, at this stage a revert of the refactoring with shell_restore.c is\nthe best path forward.\n\nFrom the discussion, I got the following things on top of my mind, for\nreference:\n- Should we include archive_cleanup_command into the recovery modules\nat all? We've discussed offloading that from the checkpointer, and it\nmakes the failure handling trickier when it comes to unexpected GUC\nconfigurations, for one. The same may actually apply to\nrestore_end_command. Though it is done in the startup process now,\nthere may be an argument to offload that somewhere else based on the\ntiming of the end-of-recovery checkpoint. 
My opinion on this stuff is\nthat only including restore_command in the modules would make most\nusers I know of happy enough as it removes the overhead of the command\ninvocation from the startup process, if able to replay things fast\nenough so as the restore command is the bottleneck.\nrestore_end_command would be simple enough, but if there is a wish to\nredesign the startup process to offload it somewhere else, then the\nrecovery module makes backward-compatibility concerns harder to think\nabout in the long-term.\n- Do we need to reconsider the assumptions of the startup\nprocess where SIGTERM enforces an immediate shutdown while running\nsystem() for the restore command? For example, the difference of\nbehavior when a restore_command uses a system sleep() that does not\nreact on signals from what I recall?\n- Fixing the original issue of this thread may finish by impacting\nwhat you are trying to do in this area, so fixing the original issue\nfirst sounds like a pre-requirement to me at the end because it may\nimpact the final design of the modules and their callbacks. (I have\nnot looked at all the arguments raised about what to do with ~15,\nstill it does not look like we have a clear picture here yet.)\n--\nMichael", "msg_date": "Sun, 5 Feb 2023 09:49:57 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "Hi,\n\nOn 2023-02-04 10:03:54 -0800, Nathan Bossart wrote:\n> On Sat, Feb 04, 2023 at 03:30:29AM -0800, Andres Freund wrote:\n> > That's kind of my problem with these changes. They try to introduce new\n> > abstraction layers, but don't provide real abstraction, because they're\n> > very tightly bound to the way the functions were called before the\n> > refactoring. And none of these restrictions are actually documented.\n> \n> Okay. Michael, why don't we revert the shell_restore stuff for now? 
Once\n> the archive modules interface changes and the fix for this\n> SIGTERM-during-system() problem are in, I will work through this feedback\n> and give recovery modules another try. I'm still hoping to have recovery\n> modules ready in time for the v16 feature freeze.\n> \n> My intent was to improve this code by refactoring and reducing code\n> duplication, but I seem to have missed the mark. I am sorry.\n\nFWIW, I think the patches were going roughly the right direction, they\njust need a bit more work.\n\nI don't think we should expose the proc_exit() hack, and its supporting\ninfrastructure, to the pluggable *_command logic. It's bad enough as-is,\nbut having to do this stuff within an extension module seems likely to\nend badly. There's just way too much action-at-a-distance.\n\nI think Thomas has been hacking on an interruptible system()\nreplacement. With that, a lot of this ugliness would be resolved.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 5 Feb 2023 08:08:38 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "On Sun, Feb 05, 2023 at 09:49:57AM +0900, Michael Paquier wrote:\n> - Should we include archive_cleanup_command into the recovery modules\n> at all? We've discussed offloading that from the checkpointer, and it\n> makes the failure handling trickier when it comes to unexpected GUC\n> configurations, for one. The same may actually apply to\n> restore_end_command. Though it is done in the startup process now,\n> there may be an argument to offload that somewhere else based on the\n> timing of the end-of-recovery checkpoint.
My opinion on this stuff is\n> that only including restore_command in the modules would make most\n> users I know of happy enough as it removes the overhead of the command\n> invocation from the startup process, if able to replay things fast\n> enough so as the restore command is the bottleneck.\n> restore_end_command would be simple enough, but if there is a wish to\n> redesign the startup process to offload it somewhere else, then the\n> recovery module makes backward-compatibility concerns harder to think\n> about in the long-term.\n\nI agree. I think we ought to first focus on getting the recovery modules\ninterface and restore_command functionality in place before we take on more\ndifficult things like archive_cleanup_command. But I still think the\narchive_cleanup_command/recovery_end_command functionality should\neventually be added to recovery modules.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sun, 5 Feb 2023 14:19:38 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "Hi,\n\nOn 2023-02-05 14:19:38 -0800, Nathan Bossart wrote:\n> On Sun, Feb 05, 2023 at 09:49:57AM +0900, Michael Paquier wrote:\n> > - Should we include archive_cleanup_command into the recovery modules\n> > at all? We've discussed offloading that from the checkpointer, and it\n> > makes the failure handling trickier when it comes to unexpected GUC\n> > configurations, for one. The same may actually apply to\n> > restore_end_command. Though it is done in the startup process now,\n> > there may be an argument to offload that somewhere else based on the\n> > timing of the end-of-recovery checkpoint. 
My opinion on this stuff is\n> > that only including restore_command in the modules would make most\n> > users I know of happy enough as it removes the overhead of the command\n> > invocation from the startup process, if able to replay things fast\n> > enough so as the restore command is the bottleneck.\n> > restore_end_command would be simple enough, but if there is a wish to\n> > redesign the startup process to offload it somewhere else, then the\n> > recovery module makes backward-compatibility concerns harder to think\n> > about in the long-term.\n> \n> I agree. I think we ought to first focus on getting the recovery modules\n> interface and restore_command functionality in place before we take on more\n> difficult things like archive_cleanup_command. But I still think the\n> archive_cleanup_command/recovery_end_command functionality should\n> eventually be added to recovery modules.\n\nI tend not to agree. If you make the API that small, you're IME likely\nto end up with something that looks somewhat incoherent once extended.\n\n\nThe more I think about it, the less I am convinced that\none-callback-per-segment, invoked just before needing the file, is the\nright approach to address the performance issues of restore_commmand.\n\nThe main performance issue isn't the shell invocation overhead, it's\nsynchronously needing to restore the archive, before replay can\ncontinue. It's also gonna be slow if a restore module copies the segment\nfrom a remote system - the latency is the problem.\n\nThe only way the restore module approach can do better, is to\nasynchronously restore ahead of the current segment. But for that the\nAPI really isn't suited well. The signature of the relevant callback is:\n\n> +typedef bool (*RecoveryRestoreCB) (const char *file, const char *path,\n> +\t\t\t\t\t\t\t\t const char *lastRestartPointFileName);\n\n\nThat's not very suited to restoring \"ahead of time\". 
You need to parse\nfile to figure out whether a segment or something else is restored, turn\n\"file\" back into an LSN, figure out where to store further segments,\nsomehow hand off to some background worker, etc.\n\nThat doesn't strike me as something we want to happen inside multiple\nrestore libraries.\n\nI think at the very least you'd want to have a separate callback for\nrestoring segments than for restoring other files. But more likely a\nseparate callback for each type of file to be restored.\n\nFor the timeline history case a parameter indicating that we don't want\nto restore the file, just to see if there's a conflict, would make\nsense.\n\nFor the segment files, we'd likely need a parameter to indicate whether\nthe restore is random or not.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 5 Feb 2023 15:01:57 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "On Sun, Feb 05, 2023 at 09:49:57AM +0900, Michael Paquier wrote:\n> Yes, at this stage a revert of the refactoring with shell_restore.c is\n> the best path forward.\n\nDone that now, as of 2f6e15a.\n--\nMichael", "msg_date": "Mon, 6 Feb 2023 08:36:36 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "On Sun, Feb 05, 2023 at 03:01:57PM -0800, Andres Freund wrote:\n> I think at the very least you'd want to have a separate callback for\n> restoring segments than for restoring other files.
But more likely a\n> separate callback for each type of file to be restored.\n> \n> For the timeline history case an parameter indicating that we don't want\n> to restore the file, just to see if there's a conflict, would make\n> sense.\n\nThat seems reasonable.\n\n> For the segment files, we'd likely need a parameter to indicate whether\n> the restore is random or not.\n\nWouldn't this approach still require each module to handle restoring ahead\nof time? I agree that the shell overhead isn't the main performance issue,\nbut it's unclear to me how much of this should be baked into PostgreSQL. I\nmean, we could introduce a GUC that tells us how far ahead to restore and\nhave a background worker (or multiple background workers) asynchronously\npull files into a staging directory via the callbacks. Is that the sort of\nscope you are envisioning?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sun, 5 Feb 2023 15:57:47 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "Hi,\n\nOn 2023-02-05 15:57:47 -0800, Nathan Bossart wrote:\n> > For the segment files, we'd likely need a parameter to indicate whether\n> > the restore is random or not.\n> \n> Wouldn't this approach still require each module to handle restoring ahead\n> of time?\n\nYes, to some degree at least. I was just describing a few pretty obvious\nimprovements.\n\nThe core code can make that a lot easier though. The problem of where to\nstore such files can be provided by core code (presumably a separate\ndirectory). A GUC for aggressiveness can be provided. Etc.\n\n\n> I agree that the shell overhead isn't the main performance issue,\n> but it's unclear to me how much of this should be baked into\n> PostgreSQL.\n\nI don't know fully either. But just reimplementing all of it in\ndifferent modules doesn't seem like a sane approach either. 
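To make that concrete, here is a rough, self-contained sketch of the per-file-type callback split being discussed. Every name in it is hypothetical (none of these types or functions exist in PostgreSQL); the dummy implementations exist only so the sketch compiles and runs on its own:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/*
 * Hypothetical sketch: instead of one catch-all restore callback, a recovery
 * module could register one callback per kind of file, with the extra
 * parameters suggested above (random vs. sequential for segments, a
 * probe-only mode for timeline history files).
 */
typedef bool (*RestoreSegmentCB) (const char *file, const char *path,
								  bool random_access);
typedef bool (*RestoreHistoryCB) (const char *file, const char *path,
								  bool probe_only);
typedef bool (*RestoreOtherCB) (const char *file, const char *path);

typedef struct RecoveryModuleCallbacks
{
	RestoreSegmentCB restore_segment_cb;	/* WAL segments */
	RestoreHistoryCB restore_history_cb;	/* timeline history files */
	RestoreOtherCB restore_other_cb;		/* anything else */
} RecoveryModuleCallbacks;

/* Dummy implementations so the sketch stands on its own. */
static bool
demo_restore_segment(const char *file, const char *path, bool random_access)
{
	printf("segment %s -> %s (%s)\n", file, path,
		   random_access ? "random" : "sequential");
	return true;
}

static bool
demo_restore_history(const char *file, const char *path, bool probe_only)
{
	(void) file;
	(void) path;
	/* pretend the file does not exist when recovery is only probing */
	return probe_only ? false : true;
}

static bool
demo_restore_other(const char *file, const char *path)
{
	(void) file;
	(void) path;
	return true;
}

static const RecoveryModuleCallbacks demo_callbacks = {
	demo_restore_segment,
	demo_restore_history,
	demo_restore_other,
};
```

The point of the split is only that core, not each module, decides which callback applies to which file, so modules never have to parse file names themselves.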
A lot of it\nis policy that we need to solve once, centrally.\n\n\n> I mean, we could introduce a GUC that tells us how far ahead to\n> restore and have a background worker (or multiple background workers)\n> asynchronously pull files into a staging directory via the callbacks.\n> Is that the sort of scope you are envisioning?\n\nCloser, at least.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 5 Feb 2023 16:07:50 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "On Sun, Feb 05, 2023 at 04:07:50PM -0800, Andres Freund wrote:\n> On 2023-02-05 15:57:47 -0800, Nathan Bossart wrote:\n>> I agree that the shell overhead isn't the main performance issue,\n>> but it's unclear to me how much of this should be baked into\n>> PostgreSQL.\n> \n> I don't know fully either. But just reimplementing all of it in\n> different modules doesn't seem like a sane approach either. A lot of it\n> is policy that we need to solve once, centrally.\n> \n>> I mean, we could introduce a GUC that tells us how far ahead to\n>> restore and have a background worker (or multiple background workers)\n>> asynchronously pull files into a staging directory via the callbacks.\n>> Is that the sort of scope you are envisioning?\n> \n> Closer, at least.\n\nGot it. I suspect we'll want to do something similar for archive modules\neventually, too.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sun, 5 Feb 2023 16:46:31 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "On Sun, Feb 5, 2023 at 7:46 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> Got it. 
I suspect we'll want to do something similar for archive modules\n> eventually, too.\n\n+1.\n\nI felt like the archive modules work was a step forward when we did,\nbecause basic_archive does some things that you're not likely to get\nright if you do it on your own. And a similar approach to\nrestore_command might also be valuable, at least in my opinion.\nHowever, the gains that we can get out of the archive module facility\nin its present form do seem to be somewhat limited, for exactly the\nkinds of reasons being discussed here.\n\nI kind of wonder whether we ought to try to flip the model around. At\npresent, the idea is that the archiver is doing its thing and it makes\ncallbacks into the archive module. But what if we got rid of the\narchiver main loop altogether and put the main loop inside of the\narchive module, and have it call back to some functions that we\nprovide? One function could be like char\n*pgarch_next_file_to_be_archived_if_there_is_one_ready(void) and the\nother could be like void\npgarch_some_file_that_you_gave_me_previously_is_now_fully_archived(char\n*which_one). That way, we'd break the tight coupling where you have to\nget a unit of work and perform it in full before you can get the next\nunit of work. Some variant of this could work on the restore side,\ntoo, I think, although we have less certainty about how much sense it makes\nto prefetch for restore than we do about what needs to be archived.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 8 Feb 2023 10:22:24 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "On Wed, Feb 08, 2023 at 10:22:24AM -0500, Robert Haas wrote:\n> I felt like the archive modules work was a step forward when we did,\n> because basic_archive does some things that you're not likely to get\n> right if you do it on your own.
And a similar approach to\n> restore_command might also be valuable, at least in my opinion.\n> However, the gains that we can get out of the archive module facility\n> in its present form do seem to be somewhat limited, for exactly the\n> kinds of reasons being discussed here.\n\nI'm glad to hear that there is interest in taking this stuff to the next\nlevel. I'm currently planning to first get the basic API in place for\nrecovery modules like we have for archive modules, but I'm hoping to\nposition it so that it leads naturally to asynchronous, parallel, and/or\nbatching approaches down the road (v17?).\n\n> I kind of wonder whether we ought to try to flip the model around. At\n> present, the idea is that the archiver is doing its thing and it makes\n> callbacks into the archive module. But what if we got rid of the\n> archiver main loop altogether and put the main loop inside of the\n> archive module, and have it call back to some functions that we\n> provide? One function could be like char\n> *pgarch_next_file_to_be_archived_if_there_is_one_ready(void) and the\n> other could be like void\n> pgarch_some_file_that_you_gave_me_previously_is_now_fully_archived(char\n> *which_one). That way, we'd break the tight coupling where you have to\n> get a unit of work and perform it in full before you can get the next\n> unit of work. Some variant of this could work on the restore side,\n> too, I think, although we have less certainty about how much sense it makes\n> to prefetch for restore than we do about what needs to be archived.\n\nI think this could be a good approach if we decide not to bake too much\ninto PostgreSQL itself (e.g., such as creating multiple archive workers\nthat each call out to the module). Archive module authors would\neffectively need to write their own archiver processes.
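For concreteness, a minimal, self-contained sketch of that flipped model. The two pgarch_* names are the ones proposed above; everything else (the in-memory queue standing in for the archive_status scan, the demo loop) is invented purely for illustration:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Sketch of the "flipped" archiver model: the module owns the main loop and
 * calls back into core for work items.  The static array below merely stands
 * in for core's scan of the archive_status directory so that the example
 * runs on its own.
 */
static const char *pending[] = {
	"000000010000000000000001",
	"000000010000000000000002",
	NULL
};
static int	next_idx = 0;
static int	archived = 0;

/* Core would return the oldest .ready file, or NULL once caught up. */
static const char *
pgarch_next_file_to_be_archived_if_there_is_one_ready(void)
{
	return pending[next_idx] ? pending[next_idx++] : NULL;
}

/* Core would rename <file>.ready to <file>.done here. */
static void
pgarch_some_file_that_you_gave_me_previously_is_now_fully_archived(const char *file)
{
	(void) file;
	archived++;
}

/*
 * A module-provided "main loop".  Returning once fully caught up lets core
 * handle hibernation decisions centrally, as suggested above.  A real module
 * would multiplex many files in flight (e.g. over HTTP connections) instead
 * of finishing each one synchronously like this demo does.
 */
static void
demo_module_archive_loop(void)
{
	const char *file;

	while ((file = pgarch_next_file_to_be_archived_if_there_is_one_ready()) != NULL)
		pgarch_some_file_that_you_gave_me_previously_is_now_fully_archived(file);
}
```

Because fetching work and reporting completion are separate calls, the module is free to have several files in flight at once before marking any of them done.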
That sounds super\nflexible, but it also sounds like it might be harder to get right.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 8 Feb 2023 09:43:50 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "On Wed, Feb 8, 2023 at 12:43 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> I think this could be a good approach if we decide not to bake too much\n> into PostgreSQL itself (e.g., such as creating multiple archive workers\n> that each call out to the module). Archive module authors would\n> effectively need to write their own archiver processes. That sounds super\n> flexible, but it also sounds like it might be harder to get right.\n\nYep. That's a problem, and I'm certainly open to better ideas.\n\nHowever, if we assume that the archive module is likely to be doing\nsomething like juggling a bunch of file descriptors over which it is\nspeaking HTTP, what other model works, really? It might be juggling\nthose file descriptors indirectly, or it might be relying on an\nintermediate library like curl or something from Amazon that talks to\nS3 or whatever, but only it knows what resources it's juggling, or\nwhat functions it needs to call to manage them. On the other hand, we\ndon't really need a lot from it. We need it to CHECK_FOR_INTERRUPTS()\nand handle that without leaking resources or breaking the world in\nsome way, and we sort of need it to, you know, actually archive stuff,\nbut apart from that I guess it can do what it likes (unless I'm\nmissing some other important function of the archiver?).\n\nIt's probably a good idea if the archiver function returns when it's\nfully caught up and there's no more work to do. Then we could handle\ndecisions about hibernation in the core code, rather than having every\narchive module invent its own way of doing that. 
But when there's work\nhappening, as far as I can see, the archive module needs to have\ncontrol pretty nearly all the time, or it's not going to be able to do\nanything clever.\n\nAlways happy to hear if you see it differently....\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 8 Feb 2023 16:24:15 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "On Wed, Feb 08, 2023 at 04:24:15PM -0500, Robert Haas wrote:\n> On Wed, Feb 8, 2023 at 12:43 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> I think this could be a good approach if we decide not to bake too much\n>> into PostgreSQL itself (e.g., such as creating multiple archive workers\n>> that each call out to the module). Archive module authors would\n>> effectively need to write their own archiver processes. That sounds super\n>> flexible, but it also sounds like it might be harder to get right.\n> \n> Yep. That's a problem, and I'm certainly open to better ideas.\n> \n> However, if we assume that the archive module is likely to be doing\n> something like juggling a bunch of file descriptors over which it is\n> speaking HTTP, what other model works, really? It might be juggling\n> those file descriptors indirectly, or it might be relying on an\n> intermediate library like curl or something from Amazon that talks to\n> S3 or whatever, but only it knows what resources it's juggling, or\n> what functions it needs to call to manage them. On the other hand, we\n> don't really need a lot from it. 
We need it to CHECK_FOR_INTERRUPTS()\n> and handle that without leaking resources or breaking the world in\n> some way, and we sort of need it to, you know, actually archive stuff,\n> but apart from that I guess it can do what it likes (unless I'm\n> missing some other important function of the archiver?).\n> \n> It's probably a good idea if the archiver function returns when it's\n> fully caught up and there's no more work to do. Then we could handle\n> decisions about hibernation in the core code, rather than having every\n> archive module invent its own way of doing that. But when there's work\n> happening, as far as I can see, the archive module needs to have\n> control pretty nearly all the time, or it's not going to be able to do\n> anything clever.\n> \n> Always happy to hear if you see it differently....\n\nThese are all good points. Perhaps there could be a base archiver\nimplementation that shell_archive uses (and that other modules could use if\ndesired, which might be important for backward compatibility with the\nexisting callbacks). But if you want to do something fancier than\narchiving sequentially, you could write your own.\n\nIn any case, I'm not really wedded to any particular approach at the\nmoment, so I am likewise open to better ideas.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 8 Feb 2023 14:25:54 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "On Wed, Feb 08, 2023 at 02:25:54PM -0800, Nathan Bossart wrote:\n> These are all good points. Perhaps there could be a base archiver\n> implementation that shell_archive uses (and that other modules could use if\n> desired, which might be important for backward compatibility with the\n> existing callbacks). 
But if you want to do something fancier than\n> archiving sequentially, you could write your own.\n\nWhich is basically the kind of things you can already achieve with a\nbackground worker and a module of your own?\n\n> In any case, I'm not really wedded to any particular approach at the\n> moment, so I am likewise open to better ideas.\n\nI am not sure how much we should try to move from core into the\nmodules when it comes to the current archiver process, with how much\ncontrol you'd like to give to users. It also looks like to me that\nthis is the kind of problem where we would not have the correct\ncallback design until someone comes in and develops a solution that\nwould shape around it. On top of that, this is the kind of things\nachievable with just a bgworker, and perhaps simpler as the parallel\nstate can just be maintained in it, which is what the archiver process\nis about at the end? Or were there some restrictions in the bgworker\nAPIs that would not fit with what an archiver process should do?\nPerhaps these restrictions, if any, are what we'd better try to lift\nfirst?\n--\nMichael", "msg_date": "Thu, 9 Feb 2023 08:56:24 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "On Thu, Feb 09, 2023 at 08:56:24AM +0900, Michael Paquier wrote:\n> On Wed, Feb 08, 2023 at 02:25:54PM -0800, Nathan Bossart wrote:\n>> These are all good points. Perhaps there could be a base archiver\n>> implementation that shell_archive uses (and that other modules could use if\n>> desired, which might be important for backward compatibility with the\n>> existing callbacks). 
But if you want to do something fancier than\n>> archiving sequentially, you could write your own.\n> \n> Which is basically the kind of things you can already achieve with a\n> background worker and a module of your own?\n\nIMO one of the big pieces that's missing is a way to get the next N files\nto archive. Right now, you'd have to trawl through archive_status on your\nown if you wanted to batch/parallelize. I think one advantage of what\nRobert is suggesting is that we could easily provide a supported way to get\nthe next set of files to archive, and we can asynchronously mark them\n\"done\". Otherwise, each module has to implement this.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 8 Feb 2023 16:24:13 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "On Wed, Feb 8, 2023 at 7:24 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> On Thu, Feb 09, 2023 at 08:56:24AM +0900, Michael Paquier wrote:\n> > On Wed, Feb 08, 2023 at 02:25:54PM -0800, Nathan Bossart wrote:\n> >> These are all good points. Perhaps there could be a base archiver\n> >> implementation that shell_archive uses (and that other modules could use if\n> >> desired, which might be important for backward compatibility with the\n> >> existing callbacks). But if you want to do something fancier than\n> >> archiving sequentially, you could write your own.\n> >\n> > Which is basically the kind of things you can already achieve with a\n> > background worker and a module of your own?\n>\n> IMO one of the big pieces that's missing is a way to get the next N files\n> to archive. Right now, you'd have to trawl through archive_status on your\n> own if you wanted to batch/parallelize. 
I think one advantage of what\n> Robert is suggesting is that we could easily provide a supported way to get\n> the next set of files to archive, and we can asynchronously mark them\n> \"done\". Otherwise, each module has to implement this.\n\nRight.\n\nI think that we could certainly, as Michael suggests, have people\nprovide their own background worker rather than having the archiver\ninvoke the user-supplied code directly. As long as the functions that\nyou need in order to get the necessary information can be called from\nsome other process, that's fine. The only difficulty I see is that if\nthe archiving is happening from a separate background worker rather\nthan from the archiver, then what is the archiver doing? We could\nsomehow arrange to not run the archiver process at all, or I guess to\njust sit there and have it do nothing. Or, we can decide not to have a\nseparate background worker and just have the archiver call the\nuser-supplied core directly. I kind of like that approach at the\nmoment; it seems more elegant to me.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 9 Feb 2023 09:22:31 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I think that we could certainly, as Michael suggests, have people\n> provide their own background worker rather than having the archiver\n> invoke the user-supplied code directly. As long as the functions that\n> you need in order to get the necessary information can be called from\n> some other process, that's fine. The only difficulty I see is that if\n> the archiving is happening from a separate background worker rather\n> than from the archiver, then what is the archiver doing? We could\n> somehow arrange to not run the archiver process at all, or I guess to\n> just sit there and have it do nothing. 
Or, we can decide not to have a\n> separate background worker and just have the archiver call the\n> user-supplied core directly. I kind of like that approach at the\n> moment; it seems more elegant to me.\n\nI'm fairly concerned about the idea of making it common for people\nto write their own main loop for the archiver. That means that, if\nwe have a bug fix that requires the archiver to do X, we will not\njust be patching our own code but trying to get an indeterminate\nset of third parties to add the fix to their code.\n\nIf we think we need primitives to let the archiver hooks get all\nthe pending files, or whatever, by all means add those. But don't\ncede fundamental control of the archiver. The hooks need to be\ndecoration on a framework we provide, not the framework themselves.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 09 Feb 2023 10:51:29 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "On Thu, Feb 9, 2023 at 10:51 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I'm fairly concerned about the idea of making it common for people\n> to write their own main loop for the archiver. That means that, if\n> we have a bug fix that requires the archiver to do X, we will not\n> just be patching our own code but trying to get an indeterminate\n> set of third parties to add the fix to their code.\n\nI don't know what kind of bug we could really have in the main loop\nthat would be common to every implementation. They're probably all\ngoing to check for interrupts, do some work, and then wait for I/O on\nsome things by calling select() or some equivalent. But the work, and\nthe wait for the I/O, would be different for every implementation. I\nwould anticipate that the amount of common code would be nearly zero.\n\nImagine two archive modules, one of which archives files via HTTP and\nthe other of which archives them via SSH. 
They need to do a lot of the\nsame things, but the code is going to be totally different. When the\nHTTP archiver module needs to open a new connection, it's going to\ncall some libcurl function. When the SSH archiver module needs to do\nthe same thing, it's going to call some libssh function. It seems\nquite likely that the HTTP implementation would want to juggle\nmultiple connections in parallel, but the SSH implementation might not\nwant to do that, or its logic for determining how many connections to\nopen might be completely different based on the behavior of that\nprotocol vs. the other protocol. Once either implementation has sent\nas much data as it can over the connections it has open, it needs to wait\nfor those sockets to become write-ready or, possibly, read-ready.\nThere again, each one will be calling into a different library to do\nthat. It could be that in this particular case, both would be waiting\nfor a set of file descriptors, and we could provide some framework for\nwaiting on a set of file descriptors provided by the module. But you\ncould also have some other archiver implementation that is, say,\nwaiting for a process to terminate rather than for a file descriptor\nto become ready for I/O.\n\n> If we think we need primitives to let the archiver hooks get all\n> the pending files, or whatever, by all means add those. But don't\n> cede fundamental control of the archiver. The hooks need to be\n> decoration on a framework we provide, not the framework themselves.\n\nI don't quite see how you can make asynchronous and parallel archiving\nwork if the archiver process only calls into the archive module at\ntimes that it chooses. That would mean that the module has to return\ncontrol to the archiver when it's in the middle of archiving one or\nmore files -- and then I don't see how it can get control back at the\nappropriate time.
Do you have a thought about that?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 9 Feb 2023 11:12:21 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "On Thu, Feb 09, 2023 at 11:12:21AM -0500, Robert Haas wrote:\n> On Thu, Feb 9, 2023 at 10:51 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> If we think we need primitives to let the archiver hooks get all\n>> the pending files, or whatever, by all means add those. But don't\n>> cede fundamental control of the archiver. The hooks need to be\n>> decoration on a framework we provide, not the framework themselves.\n> \n> I don't quite see how you can make asynchronous and parallel archiving\n> work if the archiver process only calls into the archive module at\n> times that it chooses. That would mean that the module has to return\n> control to the archiver when it's in the middle of archiving one or\n> more files -- and then I don't see how it can get control back at the\n> appropriate time. Do you have a thought about that?\n\nI've been thinking about this, actually. I'm wondering if we could provide\na list of files to the archiving callback (configurable via a variable in\nArchiveModuleState), and then have the callback return a list of files that\nare archived. (Or maybe we just put the list of files that need archiving\nin ArchiveModuleState.) The returned list could include files that were\nsent to the callback previously. 
The archive module would be responsible\nfor creating background worker(s) (if desired), dispatching files\nto-be-archived to its background worker(s), and gathering the list of\narchived files to return.\n\nThis is admittedly half-formed, but I'm tempted to hack something together\nquickly to see whether it might be viable.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 9 Feb 2023 09:23:08 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "Hi,\n\nOn 2023-02-09 11:12:21 -0500, Robert Haas wrote:\n> On Thu, Feb 9, 2023 at 10:51 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I'm fairly concerned about the idea of making it common for people\n> > to write their own main loop for the archiver. That means that, if\n> > we have a bug fix that requires the archiver to do X, we will not\n> > just be patching our own code but trying to get an indeterminate\n> > set of third parties to add the fix to their code.\n\nI'm somewhat concerned about that too, but perhaps from a different\nangle. First, I think we don't do our users a service by defaulting the\nin-core implementation to something that doesn't scale to even a moderately\nbusy server. Second, I doubt we'll get the API for any of this right, without\nan acutual user that does something more complicated than restoring one-by-one\nin a blocking manner.\n\n\n> I don't know what kind of bug we could really have in the main loop\n> that would be common to every implementation. They're probably all\n> going to check for interrupts, do some work, and then wait for I/O on\n> some things by calling select() or some equivalent. But the work, and\n> the wait for the I/O, would be different for every implementation. I\n> would anticipate that the amount of common code would be nearly zero.\n\nI don't think it's that hard to imagine problems. 
To be reasonably fast, a
decent restore implementation will have to 'restore ahead'. Which also
provides ample opportunities for things to go wrong. E.g.

- WAL source is switched, restore module needs to react to that, but doesn't,
 we end up with lots of wasted work, or worse, filename conflicts
- recovery follows a timeline, restore module doesn't catch on quickly enough
- end of recovery happens, restore just continues on


> > If we think we need primitives to let the archiver hooks get all
> > the pending files, or whatever, by all means add those. But don't
> > cede fundamental control of the archiver. The hooks need to be
> > decoration on a framework we provide, not the framework themselves.
>
> I don't quite see how you can make asynchronous and parallel archiving
> work if the archiver process only calls into the archive module at
> times that it chooses. That would mean that the module has to return
> control to the archiver when it's in the middle of archiving one or
> more files -- and then I don't see how it can get control back at the
> appropriate time. Do you have a thought about that?

I don't think archiver is the hard part, that already has a dedicated
process, and it also has something of a queuing system already.
The startup\nprocess imo is the complicated one...\n\nIf we had a 'restorer' process, startup fed some sort of a queue with things\nto restore in the near future, it might be more realistic to do something you\ndescribe?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 9 Feb 2023 11:29:52 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "Here is a new version of the stopgap/back-branch fix for restore_command.\nThis is more or less a rebased version of v4 with an added stderr message\nas Andres suggested upthread.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 14 Feb 2023 09:47:55 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "On Thu, Feb 9, 2023 at 10:53 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> I've been thinking about this, actually. I'm wondering if we could provide\n> a list of files to the archiving callback (configurable via a variable in\n> ArchiveModuleState), and then have the callback return a list of files that\n> are archived. (Or maybe we just put the list of files that need archiving\n> in ArchiveModuleState.) The returned list could include files that were\n> sent to the callback previously. The archive module would be responsible\n> for creating background worker(s) (if desired), dispatching files\n> to-be-archived to its background worker(s), and gathering the list of\n> archived files to return.\n\nHmm. So in this design, the archiver doesn't really do the archiving\nany more, because the interface makes that impossible. It has to use a\nseparate background worker process for that, full stop.\n\nI don't think that's a good design. 
It's fine if some people want to\nimplement it that way, but it shouldn't be forced by the interface.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 16 Feb 2023 15:08:14 +0530", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "On Fri, Feb 10, 2023 at 12:59 AM Andres Freund <andres@anarazel.de> wrote:\n> I'm somewhat concerned about that too, but perhaps from a different\n> angle. First, I think we don't do our users a service by defaulting the\n> in-core implementation to something that doesn't scale to even a moderately\n> busy server.\n\n+1.\n\n> Second, I doubt we'll get the API for any of this right, without\n> an acutual user that does something more complicated than restoring one-by-one\n> in a blocking manner.\n\nFair.\n\n> I don't think it's that hard to imagine problems. To be reasonably fast, a\n> decent restore implementation will have to 'restore ahead'. Which also\n> provides ample things to go wrong. E.g.\n>\n> - WAL source is switched, restore module needs to react to that, but doesn't,\n> we end up lots of wasted work, or worse, filename conflicts\n> - recovery follows a timeline, restore module doesn't catch on quickly enough\n> - end of recovery happens, restore just continues on\n\nI don't see how you can prevent those things from happening. If the\nrestore process is working in some way that requires an event loop,\nand I think that will be typical for any kind of remote archiving,\nthen either it has control most of the time, so the event loop can be\nrun inside the restore process, or, as Nathan proposes, we don't let\nthe archiver have control and it needs to run that restore process in\na separate background worker. The hazards that you mention here exist\neither way. 
If the event loop is running inside the restore process,\nit can decide not to call the functions that we provide in a timely\nfashion and thus fail to react as it should. If the event loop runs\ninside a separate background worker, then that process can fail to be\nresponsive in precisely the same way. Fundamentally, if the author of\na restore module writes code to have multiple I/Os in flight at the\nsame time and does not write code to cancel those I/Os if something\nchanges, then such cancellation will not occur. That remains true no\nmatter which process is performing the I/O.\n\n> > I don't quite see how you can make asynchronous and parallel archiving\n> > work if the archiver process only calls into the archive module at\n> > times that it chooses. That would mean that the module has to return\n> > control to the archiver when it's in the middle of archiving one or\n> > more files -- and then I don't see how it can get control back at the\n> > appropriate time. Do you have a thought about that?\n>\n> I don't think archiver is the hard part, that already has a dedicated\n> process, and it also has something of a queuing system already. The startup\n> process imo is the complicated one...\n>\n> If we had a 'restorer' process, startup fed some sort of a queue with things\n> to restore in the near future, it might be more realistic to do something you\n> describe?\n\nSome kind of queueing system might be a useful part of the interface,\nand a dedicated restorer process does sound like a good idea. But the\narchiver doesn't have this solved, precisely because you have to\narchive a single file, return control, and wait to be invoked again\nfor the next file. 
That does not scale.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 16 Feb 2023 15:18:57 +0530", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "Hi,\n\nOn 2023-02-16 15:18:57 +0530, Robert Haas wrote:\n> On Fri, Feb 10, 2023 at 12:59 AM Andres Freund <andres@anarazel.de> wrote:\n> > I don't think it's that hard to imagine problems. To be reasonably fast, a\n> > decent restore implementation will have to 'restore ahead'. Which also\n> > provides ample things to go wrong. E.g.\n> >\n> > - WAL source is switched, restore module needs to react to that, but doesn't,\n> > we end up lots of wasted work, or worse, filename conflicts\n> > - recovery follows a timeline, restore module doesn't catch on quickly enough\n> > - end of recovery happens, restore just continues on\n> \n> I don't see how you can prevent those things from happening. If the\n> restore process is working in some way that requires an event loop,\n> and I think that will be typical for any kind of remote archiving,\n> then either it has control most of the time, so the event loop can be\n> run inside the restore process, or, as Nathan proposes, we don't let\n> the archiver have control and it needs to run that restore process in\n> a separate background worker. The hazards that you mention here exist\n> either way. If the event loop is running inside the restore process,\n> it can decide not to call the functions that we provide in a timely\n> fashion and thus fail to react as it should. If the event loop runs\n> inside a separate background worker, then that process can fail to be\n> responsive in precisely the same way. Fundamentally, if the author of\n> a restore module writes code to have multiple I/Os in flight at the\n> same time and does not write code to cancel those I/Os if something\n> changes, then such cancellation will not occur. 
That remains true no
> matter which process is performing the I/O.

IDK. I think we can make that easier or harder. Right now the proposed API
doesn't provide anything to allow addressing this.


> > > I don't quite see how you can make asynchronous and parallel archiving
> > > work if the archiver process only calls into the archive module at
> > > times that it chooses. That would mean that the module has to return
> > > control to the archiver when it's in the middle of archiving one or
> > > more files -- and then I don't see how it can get control back at the
> > > appropriate time. Do you have a thought about that?
> >
> > I don't think archiver is the hard part, that already has a dedicated
> > process, and it also has something of a queuing system already. The startup
> > process imo is the complicated one...
> >
> > If we had a 'restorer' process, startup fed some sort of a queue with things
> > to restore in the near future, it might be more realistic to do something you
> > describe?
> 
> Some kind of queueing system might be a useful part of the interface,
> and a dedicated restorer process does sound like a good idea. But the
> archiver doesn't have this solved, precisely because you have to
> archive a single file, return control, and wait to be invoked again
> for the next file. That does not scale.

But there's nothing inherent in that. We know for certain which files we're
going to archive. And we don't need to work one-by-one. The archiver could
just start multiple subprocesses at the same time. All the blocking it does
right now is artificially imposed by the use of system().
We could instead
just use something popen()-like and have a configurable number of processes
running at the same time.

What I was trying to point out was that the work a \"restorer\" process has to
do is more speculative, because we don't know when we'll promote, whether
we'll follow a timeline increase, whether the to-be-restored WAL already
exists. That's solvable, but a bunch of the relevant work ought to be solved
in core code, instead of just in archive modules.

Greetings,

Andres Freund


", "msg_date": "Thu, 16 Feb 2023 08:32:54 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "On Thu, Feb 16, 2023 at 03:08:14PM +0530, Robert Haas wrote:
> On Thu, Feb 9, 2023 at 10:53 PM Nathan Bossart <nathandbossart@gmail.com> wrote:
>> I've been thinking about this, actually. I'm wondering if we could provide
>> a list of files to the archiving callback (configurable via a variable in
>> ArchiveModuleState), and then have the callback return a list of files that
>> are archived. (Or maybe we just put the list of files that need archiving
>> in ArchiveModuleState.) The returned list could include files that were
>> sent to the callback previously. The archive module would be responsible
>> for creating background worker(s) (if desired), dispatching files
>> to-be-archived to its background worker(s), and gathering the list of
>> archived files to return.
> 
> Hmm. So in this design, the archiver doesn't really do the archiving
> any more, because the interface makes that impossible. It has to use a
> separate background worker process for that, full stop.
> 
> I don't think that's a good design. It's fine if some people want to
> implement it that way, but it shouldn't be forced by the interface.

I don't think it would force you to use a background worker, but if you
wanted to, the tools would be available.
At least, that is the intent.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 16 Feb 2023 08:57:59 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "On Thu, Feb 16, 2023 at 10:28 PM Nathan Bossart\n<nathandbossart@gmail.com> wrote:\n> > Hmm. So in this design, the archiver doesn't really do the archiving\n> > any more, because the interface makes that impossible. It has to use a\n> > separate background worker process for that, full stop.\n> >\n> > I don't think that's a good design. It's fine if some people want to\n> > implement it that way, but it shouldn't be forced by the interface.\n>\n> I don't think it would force you to use a background worker, but if you\n> wanted to, the tools would be available. At least, that is the intent.\n\nI'm 100% amenable to somebody demonstrating how that is super easy,\nbarely an inconvenience. But I think we would need to see some code\nshowing at least what the API is going to look like, and ideally a\nsample implementation, in order for me to be convinced of that. What I\nsuspect is that if somebody tries to do that they are going to find\nthat the core API has to be quite opinionated about how the archive\nmodule has to do things, which I think is not what we want. But if\nthat turns out to be false, cool!\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 18 Feb 2023 14:19:53 +0530", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "On Thu, Feb 16, 2023 at 10:02 PM Andres Freund <andres@anarazel.de> wrote:\n> But there's nothing inherent in that. We know for certain which files we're\n> going to archive. And we don't need to work one-by-one. 
The archiver could
> just start multiple subprocesses at the same time.

But what if it doesn't want to start multiple processes, just
multiplex within a single process?

> What I was trying to point out was that the work a \"restorer\" process has to
> do is more speculative, because we don't know when we'll promote, whether
> we'll follow a timeline increase, whether the to-be-restored WAL already
> exists. That's solvable, but a bunch of the relevant work ought to be solved
> in core code, instead of just in archive modules.

Yep, I can see that there are some things to figure out there, and I
agree that they should be figured out in the core code.

-- 
Robert Haas
EDB: http://www.enterprisedb.com


", "msg_date": "Sat, 18 Feb 2023 15:51:06 +0530", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "Hi,

On 2023-02-18 15:51:06 +0530, Robert Haas wrote:
> On Thu, Feb 16, 2023 at 10:02 PM Andres Freund <andres@anarazel.de> wrote:
> > But there's nothing inherent in that. We know for certain which files we're
> > going to archive. And we don't need to work one-by-one. The archiver could
> > just start multiple subprocesses at the same time.
> 
> But what if it doesn't want to start multiple processes, just
> multiplex within a single process?

To me that seems even simpler? Nothing but the archiver is supposed to create
.done files and nothing is supposed to remove .ready files without the archiver
having created the .done files. So the archiver process can scan
archive_status until it's done or until N archives have been collected, and
then process them at once?
Only the creation of the .done files would be\nserial, but I don't think that's commonly a problem (and could be optimized as\nwell, by creating multiple files and then fsyncing them in a second pass,\navoiding N filesystem journal flushes).\n\nMaybe I am misunderstanding what you see as the problem?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 18 Feb 2023 13:15:22 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "On Sun, Feb 19, 2023 at 2:45 AM Andres Freund <andres@anarazel.de> wrote:\n> To me that seems even simpler? Nothing but the archiver is supposed to create\n> .done files and nothing is supposed to remove .ready files without archiver\n> having created the .done files. So the archiver process can scan\n> archive_status until its done or until N archives have been collected, and\n> then process them at once? Only the creation of the .done files would be\n> serial, but I don't think that's commonly a problem (and could be optimized as\n> well, by creating multiple files and then fsyncing them in a second pass,\n> avoiding N filesystem journal flushes).\n>\n> Maybe I am misunderstanding what you see as the problem?\n\nWell right now the archiver process calls ArchiveFileCB when there's a\nfile ready for archiving, and that process is supposed to archive the\nwhole thing before returning. That pretty obviously seems to preclude\nhaving more than one file being archived at the same time. What\ncallback structure do you have in mind to allow for that?\n\nI mean, my idea was to basically just have one big callback:\nArchiverModuleMainLoopCB(). Which wouldn't return, or perhaps, would\nonly return when archiving was totally caught up and there was nothing\nmore to do right now. And then that callback could call functions like\nAreThereAnyMoreFilesIShouldBeArchivingAndIfYesWhatIsTheNextOne(). 
So\nit would call that function and it would find out about a file and\nstart an HTTP session or whatever and then call that function again\nand start another HTTP session for the second file and so on until it\nhad as much concurrency as it wanted. And then when it hit the\nconcurrency limit, it would wait until at least one HTTP request\nfinished. At that point it would call\nHeyEverybodyISuccessfullyArchivedAWalFile(), after which it could\nagain ask for the next file and start a request for that one and so on\nand so forth.\n\nI don't really understand what the other possible model is here,\nhonestly. Right now, control remains within the archive module for the\nentire time that a file is being archived. If we generalize the model\nto allow multiple files to be in the process of being archived at the\nsame time, the archive module is going to need to have control as long\nas >= 1 of them are in progress, at least AFAICS. If you have some\nother idea how it would work, please explain it to me...\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 19 Feb 2023 20:06:24 +0530", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "On Sun, Feb 19, 2023 at 08:06:24PM +0530, Robert Haas wrote:\n> I mean, my idea was to basically just have one big callback:\n> ArchiverModuleMainLoopCB(). Which wouldn't return, or perhaps, would\n> only return when archiving was totally caught up and there was nothing\n> more to do right now. And then that callback could call functions like\n> AreThereAnyMoreFilesIShouldBeArchivingAndIfYesWhatIsTheNextOne(). So\n> it would call that function and it would find out about a file and\n> start an HTTP session or whatever and then call that function again\n> and start another HTTP session for the second file and so on until it\n> had as much concurrency as it wanted. 
And then when it hit the
> concurrency limit, it would wait until at least one HTTP request
> finished. At that point it would call
> HeyEverybodyISuccessfullyArchivedAWalFile(), after which it could
> again ask for the next file and start a request for that one and so on
> and so forth.

This archiving implementation is not completely impossible with the
current API infrastructure, either? If you consider the archiving as
a two-step process where segments are first copied into a cheap,
reliable area, then these could be pushed in bulk to a more remote
area like an S3 bucket. Of course this depends on other things like
the cluster structure, but redundancy can be added with standby
archiving, as well.

I am not sure exactly how many requirements we want to push into a
callback, to be honest, and surely more requirements pushed to the
callback increase the odds of implementation mistakes, like a full
loop. There are already many ways to get it wrong with archiving, like
missing a flush of the archived segment before the callback returns to
ensure its durability..
--
Michael", "msg_date": "Tue, 21 Feb 2023 07:38:17 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Weird failure with latches in curculio on v15[" }, { "msg_contents": "Greetings,

* Michael Paquier (michael@paquier.xyz) wrote:
> On Sun, Feb 19, 2023 at 08:06:24PM +0530, Robert Haas wrote:
> > I mean, my idea was to basically just have one big callback:
> > ArchiverModuleMainLoopCB(). Which wouldn't return, or perhaps, would
> > only return when archiving was totally caught up and there was nothing
> > more to do right now. And then that callback could call functions like
> > AreThereAnyMoreFilesIShouldBeArchivingAndIfYesWhatIsTheNextOne().
So\n> > it would call that function and it would find out about a file and\n> > start an HTTP session or whatever and then call that function again\n> > and start another HTTP session for the second file and so on until it\n> > had as much concurrency as it wanted. And then when it hit the\n> > concurrency limit, it would wait until at least one HTTP request\n> > finished. At that point it would call\n> > HeyEverybodyISuccessfullyArchivedAWalFile(), after which it could\n> > again ask for the next file and start a request for that one and so on\n> > and so forth.\n> \n> This archiving implementation is not completely impossible with the\n> current API infrastructure, either? If you consider the archiving as\n> a two-step process where segments are first copied into a cheap,\n> reliable area. Then these could be pushed in block in a more remote\n> area like a S3 bucket? Of course this depends on other things like\n> the cluster structure, but redundancy can be added with standby\n> archiving, as well.\n\nSurely it can't be too cheap as it needs to be reliable.. We have\nlooked at this before (copying to a queue area before copying with a\nseparate process off-system) and it simply isn't great and requires more\nwork than you really want to do if you can help it and for no real\nbenefit.\n\n> I am not sure exactly how many requirements we want to push into a\n> callback, to be honest, and surely more requirements pushed to the\n> callback increases the odds of implementation mistakes, like a full\n> loop. There already many ways to get it wrong with archiving, like\n> missing a flush of the archived segment before the callback returns to\n> ensure its durability..\n\nWithout any actual user of any of this it's surprising to me how much\neffort has been put into it. 
Have I missed the part where someone has\nsaid they're actually implementing an archive library that we can look\nat and see how it works and how the archive library and the core system\ncould work better together..?\n\nWe (pgbackrest) are generally interested in the idea to reduce the\nstartup time, but that's not really a big issue for us currently and so\nit hasn't really risen up to the level of being something we're working\non, not to mention that if it keeps changing each release then it's just\ngoing to end up being more work for us for a feature that doesn't gain\nus all that much.\n\nNow, all that said, at least in initial discussions, we expect the\npgbackrest archive_library to look very similar to how we handle\narchive_command and async archiving today- when called if there's\nmultiple WAL files to process then we fork an async process off and it\ngoes and spawns multiple processes and does its work to move the WAL\nfiles to the off-system storage and when we are called via\narchive_command we just check a status flag to see if that WAL has been\narchived yet by the async process or not. 
If not and there's no async\nprocess running then we'll start a new one (starting a new async process\nperiodically actually makes things a lot easier to test for us too,\nwhich is why we don't just have an async process running around forever-\nthe startup time typically isn't that big of a deal), if there is a\nstatus flag then we return whatever it says, and if the async process is\nrunning and no status flag yet then we wait.\n\nOnce we have that going then perhaps there could be some interesting\niteration between pgbackrest and the core code to improve things, but\nall this discussion and churn feels more likely to put folks off of\ntrying to implement something using this approach than the opposite,\nunless someone in this discussion is actually working on an archive\nlibrary, but that isn't the impression I've gotten, at least (though if\nthere is such a work in progress out there, I'd love to see it!).\n\nThanks,\n\nStephen", "msg_date": "Mon, 20 Feb 2023 18:17:21 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15[" }, { "msg_contents": "On Tue, Feb 14, 2023 at 09:47:55AM -0800, Nathan Bossart wrote:\n> Here is a new version of the stopgap/back-branch fix for restore_command.\n> This is more or less a rebased version of v4 with an added stderr message\n> as Andres suggested upthread.\n\nSo, this thread has moved around many subjects, still we did not get\nto the core of the issue which is what we should try to do to avoid\nsporadic failures like what the top of the thread is mentioning.\n\nPerhaps beginning a new thread with a patch and a summary would be\nbetter at this stage? Another thing I am wondering is if it could be\npossible to test that rather reliably. I have been playing with a few\nscenarios like holding the system() call for a bit with hardcoded\nsleep()s, without much success. I'll try harder on that part.. 
It's\nbeen mentioned as well that we could just move away from system() in\nthe long-term.\n--\nMichael", "msg_date": "Tue, 21 Feb 2023 09:03:27 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "On Tue, Feb 21, 2023 at 1:03 PM Michael Paquier <michael@paquier.xyz> wrote:\n> Perhaps beginning a new thread with a patch and a summary would be\n> better at this stage? Another thing I am wondering is if it could be\n> possible to test that rather reliably. I have been playing with a few\n> scenarios like holding the system() call for a bit with hardcoded\n> sleep()s, without much success. I'll try harder on that part.. It's\n> been mentioned as well that we could just move away from system() in\n> the long-term.\n\nI've been experimenting with ideas for master, which I'll start a new\nthread about. Actually I was already thinking about this before this\nbroken signal handler stuff came up, because it was already\nunacceptable that all these places that are connected to shared memory\nignore interrupts for unbounded time while a shell script/whatever\nruns. At first I thought it would be relatively simple to replace\nsystem() with something that has a latch wait loop (though there are\ndetails to argue about, like whether you want to allow interrupts that\nthrow, and if so, how you clean up the subprocess, which have several\nplausible solutions). But once I started looking at the related\npopen-based stuff where you want to communicate with the subprocess\n(for example COPY FROM PROGRAM), I realised that it needs more\nanalysis and work: that stuff is currently entirely based on stdio\nFILE (that is, fread() and fwrite()), but it's not really possible (at\nleast portably) to make that nonblocking, and in fact it's a pretty\nterrible interface in terms of error reporting in general. 
I've been\nsketching/thinking about a new module called 'subprocess', with a\ncouple of ways to start processes, and interact with them via\nWaitEventSet and direct pipe I/O; or if buffering is needed, it'd be\nour own, not <stdio.h>'s. But don't let me stop anyone else proposing\nideas.\n\n\n", "msg_date": "Tue, 21 Feb 2023 13:32:10 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "On Tue, Feb 21, 2023 at 09:03:27AM +0900, Michael Paquier wrote:\n> On Tue, Feb 14, 2023 at 09:47:55AM -0800, Nathan Bossart wrote:\n>> Here is a new version of the stopgap/back-branch fix for restore_command.\n>> This is more or less a rebased version of v4 with an added stderr message\n>> as Andres suggested upthread.\n> \n> So, this thread has moved around many subjects, still we did not get\n> to the core of the issue which is what we should try to do to avoid\n> sporadic failures like what the top of the thread is mentioning.\n> \n> Perhaps beginning a new thread with a patch and a summary would be\n> better at this stage? Another thing I am wondering is if it could be\n> possible to test that rather reliably. I have been playing with a few\n> scenarios like holding the system() call for a bit with hardcoded\n> sleep()s, without much success. I'll try harder on that part.. It's\n> been mentioned as well that we could just move away from system() in\n> the long-term.\n\nI'm happy to create a new thread if needed, but I can't tell if there is\nany interest in this stopgap/back-branch fix. 
Perhaps we should just jump\nstraight to the long-term fix that Thomas is looking into.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 20 Feb 2023 20:50:39 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "On Tue, Feb 21, 2023 at 5:50 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> On Tue, Feb 21, 2023 at 09:03:27AM +0900, Michael Paquier wrote:\n> > Perhaps beginning a new thread with a patch and a summary would be\n> > better at this stage? Another thing I am wondering is if it could be\n> > possible to test that rather reliably. I have been playing with a few\n> > scenarios like holding the system() call for a bit with hardcoded\n> > sleep()s, without much success. I'll try harder on that part.. It's\n> > been mentioned as well that we could just move away from system() in\n> > the long-term.\n>\n> I'm happy to create a new thread if needed, but I can't tell if there is\n> any interest in this stopgap/back-branch fix. Perhaps we should just jump\n> straight to the long-term fix that Thomas is looking into.\n\nUnfortunately the latch-friendly subprocess module proposal I was\ntalking about would be for 17. I may post a thread fairly soon with\ndesign ideas + list of problems and decision points as I see them, and\nhopefully some sketch code, but it won't be a proposal for [/me checks\ncalendar] next week's commitfest and probably wouldn't be appropriate\nin a final commitfest anyway, and I also have some other existing\nstuff to clear first. So please do continue with the stopgap ideas.\n\nBTW Here's an idea (untested) about how to reproduce the problem. You\ncould copy the source from a system() implementation, call it\ndoomed_system(), and insert kill(-getppid(), SIGQUIT) in between\nsigprocmask(SIG_SETMASK, &omask, NULL) and exec*().
Parent and self\nwill handle the signal and both reach the proc_exit().\n\nThe systems that failed are running code like this:\n\nhttps://github.com/openbsd/src/blob/master/lib/libc/stdlib/system.c\nhttps://github.com/DragonFlyBSD/DragonFlyBSD/blob/master/lib/libc/stdlib/system.c\n\nI'm pretty sure these other implementations could fail in just the\nsame way (they restore the handler before unblocking, so can run it\njust before exec() replaces the image):\n\nhttps://github.com/freebsd/freebsd-src/blob/main/lib/libc/stdlib/system.c\nhttps://github.com/lattera/glibc/blob/master/sysdeps/posix/system.c\n\nThe glibc one is a bit busier and, huh, has a lock (I guess maybe\ndeadlockable if proc_exit() also calls system(), but hopefully it\ndoesn't), and uses fork() instead of vfork() but I don't think that's\na material difference here (with fork(), parent and child run\nconcurrently, while with vfork() the parent is suspended until the\nchild exists or execs, and then processes its pending signals).\n\n\n", "msg_date": "Wed, 22 Feb 2023 21:48:10 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "On Wed, Feb 22, 2023 at 09:48:10PM +1300, Thomas Munro wrote:\n> On Tue, Feb 21, 2023 at 5:50 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> On Tue, Feb 21, 2023 at 09:03:27AM +0900, Michael Paquier wrote:\n>> > Perhaps beginning a new thread with a patch and a summary would be\n>> > better at this stage? Another thing I am wondering is if it could be\n>> > possible to test that rather reliably. I have been playing with a few\n>> > scenarios like holding the system() call for a bit with hardcoded\n>> > sleep()s, without much success. I'll try harder on that part.. 
It's\n>> > been mentioned as well that we could just move away from system() in\n>> > the long-term.\n>>\n>> I'm happy to create a new thread if needed, but I can't tell if there is\n>> any interest in this stopgap/back-branch fix. Perhaps we should just jump\n>> straight to the long-term fix that Thomas is looking into.\n> \n> Unfortunately the latch-friendly subprocess module proposal I was\n> talking about would be for 17. I may post a thread fairly soon with\n> design ideas + list of problems and decision points as I see them, and\n> hopefully some sketch code, but it won't be a proposal for [/me checks\n> calendar] next week's commitfest and probably wouldn't be appropriate\n> in a final commitfest anyway, and I also have some other existing\n> stuff to clear first. So please do continue with the stopgap ideas.\n\nI've created a new thread for the stopgap fix [0].\n\n[0] https://postgr.es/m/20230223231503.GA743455%40nathanxps13\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 23 Feb 2023 15:16:50 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "Hi,\n\nOn 2023-02-19 20:06:24 +0530, Robert Haas wrote:\n> On Sun, Feb 19, 2023 at 2:45 AM Andres Freund <andres@anarazel.de> wrote:\n> > To me that seems even simpler? Nothing but the archiver is supposed to create\n> > .done files and nothing is supposed to remove .ready files without archiver\n> > having created the .done files. So the archiver process can scan\n> > archive_status until its done or until N archives have been collected, and\n> > then process them at once? 
Only the creation of the .done files would be\n> > serial, but I don't think that's commonly a problem (and could be optimized as\n> > well, by creating multiple files and then fsyncing them in a second pass,\n> > avoiding N filesystem journal flushes).\n> >\n> > Maybe I am misunderstanding what you see as the problem?\n> \n> Well right now the archiver process calls ArchiveFileCB when there's a\n> file ready for archiving, and that process is supposed to archive the\n> whole thing before returning. That pretty obviously seems to preclude\n> having more than one file being archived at the same time. What\n> callback structure do you have in mind to allow for that?\n\nTBH, I think the current archive and restore module APIs aren't useful. I\nthink it was a mistake to add archive modules without having demonstrated that\none can do something useful with them that the restore_command didn't already\ndo. If anything, archive modules have made it harder to improve archiving\nperformance via concurrency.\n\nMy point was that it's easy to have multiple archive commands in process at\nthe same time, because we already have a queuing system, and that\narchive_command is entire compatible with doing that, because running multiple\nsubprocesses is pretty trivial. It wasn't that the archive API is suitable for\nthat.\n\n\n> I mean, my idea was to basically just have one big callback:\n> ArchiverModuleMainLoopCB(). Which wouldn't return, or perhaps, would\n> only return when archiving was totally caught up and there was nothing\n> more to do right now. And then that callback could call functions like\n> AreThereAnyMoreFilesIShouldBeArchivingAndIfYesWhatIsTheNextOne(). So\n> it would call that function and it would find out about a file and\n> start an HTTP session or whatever and then call that function again\n> and start another HTTP session for the second file and so on until it\n> had as much concurrency as it wanted. 
And then when it hit the\n> concurrency limit, it would wait until at least one HTTP request\n> finished. At that point it would call\n> HeyEverybodyISuccessfullyArchivedAWalFile(), after which it could\n> again ask for the next file and start a request for that one and so on\n> and so forth.\n\n> I don't really understand what the other possible model is here,\n> honestly. Right now, control remains within the archive module for the\n> entire time that a file is being archived. If we generalize the model\n> to allow multiple files to be in the process of being archived at the\n> same time, the archive module is going to need to have control as long\n> as >= 1 of them are in progress, at least AFAICS. If you have some\n> other idea how it would work, please explain it to me...\n\nI don't think that a main loop approach is the only viable one. It might be\nthe most likely to succeed one though. As an alternative, consider something\nlike\n\nstruct ArchiveFileState {\n int fd;\n enum WaitFor { READ, WRITE, CONNECT };\n void *file_private;\n}\n\ntypedef bool (*ArchiveFileStartCB)(ArchiveModuleState *state,\n ArchiveFileState *file_state,\n const char *file, const char *path);\n\ntypedef bool (*ArchiveFileContinueCB)(ArchiveModuleState *state,\n ArchiveFileState *file_state);\n\nAn archive module could open an HTTP connection, do IO until it's blocked, put\nthe fd in file_state, return. The main loop could do big event loop around all\nof the file descriptors and whenever any of FDs signal IO is ready, call\nArchiveFileContinueCB() for that file.\n\nI don't know if that's better than ArchiverModuleMainLoopCB(). 
I can see both\nadvantages and disadvantages.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 25 Feb 2023 11:00:31 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "On Sat, Feb 25, 2023 at 11:00:31AM -0800, Andres Freund wrote:\n> TBH, I think the current archive and restore module APIs aren't useful. I\n> think it was a mistake to add archive modules without having demonstrated that\n> one can do something useful with them that the restore_command didn't already\n> do. If anything, archive modules have made it harder to improve archiving\n> performance via concurrency.\n\nI must respectfully disagree that this work is useless. Besides the\nperformance and security benefits of not shelling out for every WAL file,\nI've found it very useful to be able to use the standard module framework\nto develop archive modules. It's relatively easy to make use of GUCs,\nbackground workers, compression, etc. Of course, there is room for\nimprovement in areas like concurrency support as you rightly point out, but\nI don't think that makes the current state worthless.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 28 Feb 2023 20:29:19 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" }, { "msg_contents": "Greetings,\n\n* Nathan Bossart (nathandbossart@gmail.com) wrote:\n> On Sat, Feb 25, 2023 at 11:00:31AM -0800, Andres Freund wrote:\n> > TBH, I think the current archive and restore module APIs aren't useful. I\n> > think it was a mistake to add archive modules without having demonstrated that\n> > one can do something useful with them that the restore_command didn't already\n> > do. 
If anything, archive modules have made it harder to improve archiving\n> > performance via concurrency.\n> \n> I must respectfully disagree that this work is useless. Besides the\n> performance and security benefits of not shelling out for every WAL file,\n> I've found it very useful to be able to use the standard module framework\n> to develop archive modules. It's relatively easy to make use of GUCs,\n> background workers, compression, etc.
Of course, there is room for\n> improvement in areas like concurrency support as you rightly point out, but\n> I don't think that makes the current state worthless.\n\nI also disagree with Andres. The status quo ante was that we did not\nprovide any way of doing archiving correctly even to a directory on\nthe local machine. We could only recommend silly things like 'cp' that\nare incorrect in multiple ways. basic_archive isn't the most wonderful\nthing ever, and its deficiencies are more obvious to me now than they\nwere when I committed it. But it's better than recommending a shell\ncommand that doesn't even work.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 1 Mar 2023 12:44:51 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Weird failure with latches in curculio on v15" } ]
[ { "msg_contents": "Our document states that EXPLAIN can generate \"Subplan Removed\":\n\n\thttps://www.postgresql.org/docs/current/ddl-partitioning.html#DDL-PARTITION-PRUNING\n\n\tIt is possible to determine the number of partitions which were removed\n\tduring this phase by observing the “Subplans Removed” property in the\n\tEXPLAIN output.\n\nHowever, I can't figure out how to generate that string in EXPLAIN\noutput. I tried many examples and searched the web for examples but I\ncan't generate it in queries using git master.\n\nFor example, this website:\n\n\thttps://gist.github.com/amitlan/cd13271142bb2d26ae46b69afb675a31\n\nhas several EXPLAIN examples that show \"Subplan Removed\" but running the\nqueries in git master doesn't generate it for me.\n\nDoes anyone know how to generate this? Thanks.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\nEmbrace your flaws. They make you human, rather than perfect,\nwhich you will never be.\n\n\n", "msg_date": "Tue, 31 Jan 2023 20:59:57 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Generating \"Subplan Removed\" in EXPLAIN" }, { "msg_contents": "On Tue, Jan 31, 2023 at 08:59:57PM -0500, Bruce Momjian wrote:\n> Our document states that EXPLAIN can generate \"Subplan Removed\":\n> \n> \thttps://www.postgresql.org/docs/current/ddl-partitioning.html#DDL-PARTITION-PRUNING\n> \n> \tIt is possible to determine the number of partitions which were removed\n> \tduring this phase by observing the “Subplans Removed” property in the\n> \tEXPLAIN output.\n\nSorry, here is the full paragraph:\n\n\tDuring initialization of the query plan. Partition pruning can\n\tbe performed here for parameter values which are known during\n\tthe initialization phase of execution. Partitions which are\n\tpruned during this stage will not show up in the query's EXPLAIN\n\tor EXPLAIN ANALYZE. 
It is possible to determine the number of\n\tpartitions which were removed during this phase by observing the\n\t“Subplans Removed” property in the EXPLAIN output.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\nEmbrace your flaws. They make you human, rather than perfect,\nwhich you will never be.\n\n\n", "msg_date": "Tue, 31 Jan 2023 21:28:40 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: Generating \"Subplan Removed\" in EXPLAIN" }, { "msg_contents": "On Tue, Jan 31, 2023 at 08:59:57PM -0500, Bruce Momjian wrote:\n> Does anyone know how to generate this?
Thanks.\n> \n> The regression tests know:\n> \n> $ git grep -c 'Subplans Removed' ./src/test/regress/\n> src/test/regr\n\nMaybe, you missed to set plan_cache_mode to force_generic_plan.\n\"Subplan Removed\" doesn't appear when using a custom plan.\n\npostgres=# set enable_indexonlyscan = off; \nSET\npostgres=# prepare ab_q1 (int, int, int) as\nselect * from ab where a between $1 and $2 and b <= $3;\nPREPARE\npostgres=# explain (analyze, costs off, summary off, timing off) execute ab_q1 (2, 2, 3);\n QUERY PLAN \n---------------------------------------------------------\n Append (actual rows=0 loops=1)\n -> Seq Scan on ab_a2_b1 ab_1 (actual rows=0 loops=1)\n Filter: ((a >= 2) AND (a <= 2) AND (b <= 3))\n -> Seq Scan on ab_a2_b2 ab_2 (actual rows=0 loops=1)\n Filter: ((a >= 2) AND (a <= 2) AND (b <= 3))\n -> Seq Scan on ab_a2_b3 ab_3 (actual rows=0 loops=1)\n Filter: ((a >= 2) AND (a <= 2) AND (b <= 3))\n(7 rows)\n\npostgres=# show plan_cache_mode ;\n plan_cache_mode \n-----------------\n auto\n(1 row)\n\npostgres=# set plan_cache_mode to force_generic_plan;\nSET\npostgres=# explain (analyze, costs off, summary off, timing off) execute ab_q1 (2, 2, 3);\n QUERY PLAN \n---------------------------------------------------------\n Append (actual rows=0 loops=1)\n Subplans Removed: 6\n -> Seq Scan on ab_a2_b1 ab_1 (actual rows=0 loops=1)\n Filter: ((a >= $1) AND (a <= $2) AND (b <= $3))\n -> Seq Scan on ab_a2_b2 ab_2 (actual rows=0 loops=1)\n Filter: ((a >= $1) AND (a <= $2) AND (b <= $3))\n -> Seq Scan on ab_a2_b3 ab_3 (actual rows=0 loops=1)\n Filter: ((a >= $1) AND (a <= $2) AND (b <= $3))\n(8 rows)\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Wed, 1 Feb 2023 11:53:34 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: Generating \"Subplan Removed\" in EXPLAIN" }, { "msg_contents": "On Wed, Feb 1, 2023 at 11:53:34AM +0900, Yugo NAGATA wrote:\n> On Tue, 31 Jan 2023 20:38:21 -0600\n> 
Justin Pryzby <pryzby@telsasoft.com> wrote:\n> \n> > \n> > To: Bruce Momjian <bruce@momjian.us>\n> > Cc: pgsql-hackers@postgresql.org\n> > Subject: Re: Generating \"Subplan Removed\" in EXPLAIN\n> > Date: Tue, 31 Jan 2023 20:38:21 -0600\n> > User-Agent: Mutt/1.9.4 (2018-02-28)\n> > \n> > On Tue, Jan 31, 2023 at 08:59:57PM -0500, Bruce Momjian wrote:\n> > > Does anyone know how to generate this? Thanks.\n> > \n> > The regression tests know:\n> > \n> > $ git grep -c 'Subplans Removed' ./src/test/regress/\n> > src/test/regr\n> \n> Maybe, you missed to set plan_cache_mode to force_generic_plan.\n> \"Subplan Removed\" doesn't appear when using a custom plan.\n\nYes, that is exactly what I as missing. Thank you!\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\nEmbrace your flaws. They make you human, rather than perfect,\nwhich you will never be.\n\n\n", "msg_date": "Tue, 31 Jan 2023 22:05:08 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: Generating \"Subplan Removed\" in EXPLAIN" }, { "msg_contents": "On Wed, 1 Feb 2023 at 15:53, Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> Maybe, you missed to set plan_cache_mode to force_generic_plan.\n> \"Subplan Removed\" doesn't appear when using a custom plan.\n\nI wouldn't say that's 100% true. The planner is only able to prune\nusing values which are known during planning. Constant folding is\ngoing to evaluate any immutable functions during planning, but nothing\nmore.\n\nPartition pruning might be delayed until execution time if some\nexpression that's being compared to the partition key is stable. 
e.g:\n\ncreate table rp (t timestamp not null) partition by range(t);\ncreate table rp2022 partition of rp for values from ('2022-01-01') to\n('2023-01-01');\ncreate table rp2023 partition of rp for values from ('2023-01-01') to\n('2024-01-01');\n\nexplain select * from rp where t >= now();\n\n Append (cost=0.00..95.33 rows=1506 width=8)\n Subplans Removed: 1\n -> Seq Scan on rp2023 rp_1 (cost=0.00..43.90 rows=753 width=8)\n Filter: (t >= now())\n\nDavid\n\n\n", "msg_date": "Wed, 1 Feb 2023 16:52:07 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Generating \"Subplan Removed\" in EXPLAIN" }, { "msg_contents": "On Wed, 1 Feb 2023 16:52:07 +1300\nDavid Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Wed, 1 Feb 2023 at 15:53, Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> > Maybe, you missed to set plan_cache_mode to force_generic_plan.\n> > \"Subplan Removed\" doesn't appear when using a custom plan.\n> \n> I wouldn't say that's 100% true. The planner is only able to prune\n> using values which are known during planning. Constant folding is\n> going to evaluate any immutable functions during planning, but nothing\n> more.\n> \n> Partition pruning might be delayed until execution time if some\n> expression that's being compared to the partition key is stable. e.g:\n> \n> create table rp (t timestamp not null) partition by range(t);\n> create table rp2022 partition of rp for values from ('2022-01-01') to\n> ('2023-01-01');\n> create table rp2023 partition of rp for values from ('2023-01-01') to\n> ('2024-01-01');\n> \n> explain select * from rp where t >= now();\n> \n> Append (cost=0.00..95.33 rows=1506 width=8)\n> Subplans Removed: 1\n> -> Seq Scan on rp2023 rp_1 (cost=0.00..43.90 rows=753 width=8)\n> Filter: (t >= now())\n> \n\nI am sorry for my explanation was not completely correct. 
Thank you for\nyour clarification.\n\nRegards,\nYugo Nagata\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Wed, 1 Feb 2023 19:05:34 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: Generating \"Subplan Removed\" in EXPLAIN" } ]
[ { "msg_contents": "I think there is a tiny typo in src/interfaces/ecpg/ecpglib/meson.build:\n\ndiff --git a/src/interfaces/ecpg/ecpglib/meson.build \nb/src/interfaces/ecpg/ecpglib/meson.build\nindex dba9e3c3d9..da8d304f54 100644\n--- a/src/interfaces/ecpg/ecpglib/meson.build\n+++ b/src/interfaces/ecpg/ecpglib/meson.build\n@@ -57,7 +57,7 @@ pkgconfig.generate(\n description: 'PostgreSQL libecpg library',\n url: pg_url,\n libraries: ecpglib_so,\n- libraries_private: [frontend_shlib_code, thread_dep],\n+ libraries_private: [frontend_stlib_code, thread_dep],\n requires_private: ['libpgtypes', 'libpq'],\n )\n\nThis makes it match the other libraries.\n\nWithout this change, we get\n\nLibs.private: ... -lpgport_shlib -lpgcommon_shlib\n\nwhich seems wrong.\n\n\n", "msg_date": "Wed, 1 Feb 2023 08:40:52 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "meson: pkgconfig difference" }, { "msg_contents": "Hi, \n\nOn January 31, 2023 11:40:52 PM PST, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n>I think there is a tiny typo in src/interfaces/ecpg/ecpglib/meson.build:\n>\n>diff --git a/src/interfaces/ecpg/ecpglib/meson.build b/src/interfaces/ecpg/ecpglib/meson.build\n>index dba9e3c3d9..da8d304f54 100644\n>--- a/src/interfaces/ecpg/ecpglib/meson.build\n>+++ b/src/interfaces/ecpg/ecpglib/meson.build\n>@@ -57,7 +57,7 @@ pkgconfig.generate(\n> description: 'PostgreSQL libecpg library',\n> url: pg_url,\n> libraries: ecpglib_so,\n>- libraries_private: [frontend_shlib_code, thread_dep],\n>+ libraries_private: [frontend_stlib_code, thread_dep],\n> requires_private: ['libpgtypes', 'libpq'],\n> )\n>\n>This makes it match the other libraries.\n>\n>Without this change, we get\n>\n>Libs.private: ... -lpgport_shlib -lpgcommon_shlib\n>\n>which seems wrong.\n\nUgh, yes, that's wrong. Do you want me to apply the fix?\n\nRegards,\n\nAndres\n\n-- \nSent from my Android device with K-9 Mail. 
Please excuse my brevity.\n\n\n", "msg_date": "Tue, 31 Jan 2023 23:55:46 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: meson: pkgconfig difference" }, { "msg_contents": "On 01.02.23 08:55, Andres Freund wrote:\n> Hi,\n> \n> On January 31, 2023 11:40:52 PM PST, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n>> I think there is a tiny typo in src/interfaces/ecpg/ecpglib/meson.build:\n>>\n>> diff --git a/src/interfaces/ecpg/ecpglib/meson.build b/src/interfaces/ecpg/ecpglib/meson.build\n>> index dba9e3c3d9..da8d304f54 100644\n>> --- a/src/interfaces/ecpg/ecpglib/meson.build\n>> +++ b/src/interfaces/ecpg/ecpglib/meson.build\n>> @@ -57,7 +57,7 @@ pkgconfig.generate(\n>> description: 'PostgreSQL libecpg library',\n>> url: pg_url,\n>> libraries: ecpglib_so,\n>> - libraries_private: [frontend_shlib_code, thread_dep],\n>> + libraries_private: [frontend_stlib_code, thread_dep],\n>> requires_private: ['libpgtypes', 'libpq'],\n>> )\n>>\n>> This makes it match the other libraries.\n>>\n>> Without this change, we get\n>>\n>> Libs.private: ... -lpgport_shlib -lpgcommon_shlib\n>>\n>> which seems wrong.\n> \n> Ugh, yes, that's wrong. Do you want me to apply the fix?\n\nI've done it now.\n\n\n\n", "msg_date": "Wed, 1 Feb 2023 18:15:22 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: meson: pkgconfig difference" } ]
[ { "msg_contents": "Earlier today I gave a talk about MERGE and wanted to provide an example\nwith FOR EACH STATEMENT triggers using transition tables. However, I\ncan't find a non-ugly way to obtain the NEW row that corresponds to each\nOLD row ... I had to resort to an ugly trick with OFFSET n LIMIT 1.\nCan anyone suggest anything better? I couldn't find any guidance in the\ndocs.\n\nThis is the example function I wrote:\n\nCREATE FUNCTION wine_audit() RETURNS trigger LANGUAGE plpgsql AS $$\n BEGIN\n IF (TG_OP = 'DELETE') THEN\n INSERT INTO wine_audit\n SELECT 'D', now(), row_to_json(o), NULL FROM old_table o;\n ELSIF (TG_OP = 'INSERT') THEN\n INSERT INTO wine_audit\n SELECT 'I', now(), NULL, row_to_json(n) FROM new_table n;\n ELSIF (TG_OP = 'UPDATE') THEN\n DECLARE\n oldrec record;\n\tnewrec jsonb;\n\ti integer := 0;\n BEGIN\n FOR oldrec IN SELECT * FROM old_table LOOP\n newrec := row_to_json(n) FROM new_table n OFFSET i LIMIT 1;\n i := i + 1;\n INSERT INTO wine_audit\n SELECT 'U', now(), row_to_json(oldrec), newrec;\n END LOOP;\n END;\n\n END IF;\n RETURN NULL;\n END;\n$$;\n\nCREATE TABLE wines (winery text, brand text, variety text, year int, bottles int);\nCREATE TABLE shipment (LIKE wines);\nCREATE TABLE wine_audit (op varchar(1), datetime timestamptz,\n\t\t\t oldrow jsonb, newrow jsonb);\n\nCREATE TRIGGER wine_update\n AFTER UPDATE ON wines\n REFERENCING OLD TABLE AS old_table NEW TABLE AS new_table\n FOR EACH STATEMENT EXECUTE FUNCTION wine_audit();\n-- I omit triggers on insert and update because the trigger code for those is trivial\n\nINSERT INTO wines VALUES ('Concha y Toro', 'Sunrise', 'Chardonnay', 2021, 12),\n('Concha y Toro', 'Sunrise', 'Merlot', 2022, 12);\n\nINSERT INTO shipment VALUES ('Concha y Toro', 'Sunrise', 'Chardonnay', 2021, 96),\n('Concha y Toro', 'Sunrise', 'Merlot', 2022, 120),\n('Concha y Toro', 'Marqués de Casa y Concha', 'Carmenere', 2021, 48),\n('Concha y Toro', 'Casillero del Diablo', 'Cabernet Sauvignon', 2019, 240);\n\nALTER 
TABLE shipment ADD COLUMN marked timestamp with time zone;\n\nWITH unmarked_shipment AS\n (UPDATE shipment SET marked = now() WHERE marked IS NULL\n RETURNING winery, brand, variety, year, bottles)\nMERGE INTO wines AS w\n USING (SELECT winery, brand, variety, year,\n sum(bottles) as bottles\n FROM unmarked_shipment\n GROUP BY winery, brand, variety, year) AS s\n ON (w.winery, w.brand, w.variety, w.year) =\n (s.winery, s.brand, s.variety, s.year)\nWHEN MATCHED THEN\n UPDATE SET bottles = w.bottles + s.bottles\nWHEN NOT MATCHED THEN\n INSERT (winery, brand, variety, year, bottles)\n VALUES (s.winery, s.brand, s.variety, s.year, s.bottles)\n;\n\n\nIf you examine table wine_audit after pasting all of the above, you'll\nsee this, which is correct:\n\n─[ RECORD 1 ]────────────────────────────────────────────────────────────────────────────────────────────────────\nop │ U\ndatetime │ 2023-02-01 01:16:44.704036+01\noldrow │ {\"year\": 2021, \"brand\": \"Sunrise\", \"winery\": \"Concha y Toro\", \"bottles\": 12, \"variety\": \"Chardonnay\"}\nnewrow │ {\"year\": 2021, \"brand\": \"Sunrise\", \"winery\": \"Concha y Toro\", \"bottles\": 108, \"variety\": \"Chardonnay\"}\n─[ RECORD 2 ]────────────────────────────────────────────────────────────────────────────────────────────────────\nop │ U\ndatetime │ 2023-02-01 01:16:44.704036+01\noldrow │ {\"year\": 2022, \"brand\": \"Sunrise\", \"winery\": \"Concha y Toro\", \"bottles\": 12, \"variety\": \"Merlot\"}\nnewrow │ {\"year\": 2022, \"brand\": \"Sunrise\", \"winery\": \"Concha y Toro\", \"bottles\": 132, \"variety\": \"Merlot\"}\n\nMy question is how to obtain the same rows without the LIMIT/OFFSET line\nin the trigger function.\n\n\nAlso: how can we \"subtract\" both JSON blobs so that the 'newrow' only\ncontains the members that differ? 
I would like to have this:\n\n─[ RECORD 1 ]────────────────────────────────────────────────────────────────────────────────────────────────────\nop │ U\ndatetime │ 2023-02-01 01:16:44.704036+01\noldrow │ {\"year\": 2021, \"brand\": \"Sunrise\", \"winery\": \"Concha y Toro\", \"bottles\": 12, \"variety\": \"Chardonnay\"}\nnewrow │ {\"bottles\": 108}\n─[ RECORD 2 ]────────────────────────────────────────────────────────────────────────────────────────────────────\nop │ U\ndatetime │ 2023-02-01 01:16:44.704036+01\noldrow │ {\"year\": 2022, \"brand\": \"Sunrise\", \"winery\": \"Concha y Toro\", \"bottles\": 12, \"variety\": \"Merlot\"}\nnewrow │ {\"bottles\": 132}\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"La gente vulgar sólo piensa en pasar el tiempo;\nel que tiene talento, en aprovecharlo\"\n\n\n", "msg_date": "Wed, 1 Feb 2023 10:03:26 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "transition tables and UPDATE" }, { "msg_contents": "On Wed, Feb 1, 2023 at 10:18 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> Earlier today I gave a talk about MERGE and wanted to provide an example\n> with FOR EACH STATEMENT triggers using transition tables. However, I\n> can't find a non-ugly way to obtain the NEW row that corresponds to each\n> OLD row ... I had to resort to an ugly trick with OFFSET n LIMIT 1.\n> Can anyone suggest anything better? I couldn't find any guidance in the\n> docs.\n\nI don't know the answer, either in PostgreSQL or the SQL spec. 
I\nwondered if there *should* be a way here:\n\nhttps://www.postgresql.org/message-id/CAEepm=1ncxBNna-pXGr2hnMHRyYi_6_AwG_352-Jn=mwdFdAGw@mail.gmail.com\n\n\n", "msg_date": "Wed, 1 Feb 2023 22:29:59 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: transition tables and UPDATE" }, { "msg_contents": "On 2023-Feb-01, Thomas Munro wrote:\n\n> On Wed, Feb 1, 2023 at 10:18 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > Earlier today I gave a talk about MERGE and wanted to provide an example\n> > with FOR EACH STATEMENT triggers using transition tables. However, I\n> > can't find a non-ugly way to obtain the NEW row that corresponds to each\n> > OLD row ... I had to resort to an ugly trick with OFFSET n LIMIT 1.\n> > Can anyone suggest anything better? I couldn't find any guidance in the\n> > docs.\n> \n> I don't know the answer, either in PostgreSQL or the SQL spec. I\n> wondered if there *should* be a way here:\n> \n> https://www.postgresql.org/message-id/CAEepm=1ncxBNna-pXGr2hnMHRyYi_6_AwG_352-Jn=mwdFdAGw@mail.gmail.com\n\nI had tried to tie these relations using WITH ORDINALITY, but the only\nway I could think of (array_agg to then unnest() WITH ORDINALITY) was\neven uglier than what I already had. So yeah, I think it might be\nuseful if we had a way to inject a counter or something in there.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"The important things in the world are problems with society that we don't\nunderstand at all. 
The machines will become more complicated but they won't\nbe more complicated than the societies that run them.\" (Freeman Dyson)\n\n\n", "msg_date": "Wed, 1 Feb 2023 12:57:24 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: transition tables and UPDATE" }, { "msg_contents": "On Wed, 1 Feb 2023 at 12:12, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> I had tried to tie these relations using WITH ORDINALITY, but the only\n> way I could think of (array_agg to then unnest() WITH ORDINALITY) was\n> even uglier than what I already had. So yeah, I think it might be\n> useful if we had a way to inject a counter or something in there.\n>\n\nYou could use a pair of cursors like this:\n\nCREATE OR REPLACE FUNCTION wine_audit() RETURNS trigger LANGUAGE plpgsql AS $$\n BEGIN\n IF (TG_OP = 'DELETE') THEN\n INSERT INTO wine_audit\n SELECT 'D', now(), row_to_json(o), NULL FROM old_table o;\n ELSIF (TG_OP = 'INSERT') THEN\n INSERT INTO wine_audit\n SELECT 'I', now(), NULL, row_to_json(n) FROM new_table n;\n ELSIF (TG_OP = 'UPDATE') THEN\n DECLARE\n oldcur CURSOR FOR SELECT row_to_json(o) FROM old_table o;\n newcur CURSOR FOR SELECT row_to_json(n) FROM new_table n;\n oldrec jsonb;\n newrec jsonb;\n BEGIN\n OPEN oldcur;\n OPEN newcur;\n\n LOOP\n FETCH oldcur INTO oldrec;\n EXIT WHEN NOT FOUND;\n\n FETCH newcur INTO newrec;\n EXIT WHEN NOT FOUND;\n\n INSERT INTO wine_audit VALUES('U', now(), oldrec, newrec);\n END LOOP;\n\n CLOSE oldcur;\n CLOSE newcur;\n END;\n\n END IF;\n RETURN NULL;\n END;\n$$;\n\nthough it would be nicer if there was a way to do it in a single SQL statement.\n\nRegards,\nDean\n\n\n", "msg_date": "Wed, 1 Feb 2023 12:38:15 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: transition tables and UPDATE" }, { "msg_contents": "On Wed, 1 Feb 2023 10:03:26 +0100\nAlvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> Earlier today I gave a talk about MERGE 
and wanted to provide an example\n> with FOR EACH STATEMENT triggers using transition tables. However, I\n> can't find a non-ugly way to obtain the NEW row that corresponds to each\n> OLD row ... I had to resort to an ugly trick with OFFSET n LIMIT 1.\n> Can anyone suggest anything better? I couldn't find any guidance in the\n> docs.\n\nWhat I could come up with is to join old_table and new_table using keys\nof the wine table (winery, brand, variety, year), or join them using\nvalues from row_number(), like:\n\n INSERT INTO wine_audit\n SELECT 'U', now(), row_o, row_n\n FROM (SELECT row_number() OVER() i, row_to_json(o) row_o FROM old_table o) AS o2 \n JOIN (SELECT row_number() OVER() i, row_to_json(n) row_n FROM new_table n) AS n2 \n USING (i);\n\n> \n> This is the example function I wrote:\n> \n> CREATE FUNCTION wine_audit() RETURNS trigger LANGUAGE plpgsql AS $$\n> BEGIN\n> IF (TG_OP = 'DELETE') THEN\n> INSERT INTO wine_audit\n> SELECT 'D', now(), row_to_json(o), NULL FROM old_table o;\n> ELSIF (TG_OP = 'INSERT') THEN\n> INSERT INTO wine_audit\n> SELECT 'I', now(), NULL, row_to_json(n) FROM new_table n;\n> ELSIF (TG_OP = 'UPDATE') THEN\n> DECLARE\n> oldrec record;\n> \tnewrec jsonb;\n> \ti integer := 0;\n> BEGIN\n> FOR oldrec IN SELECT * FROM old_table LOOP\n> newrec := row_to_json(n) FROM new_table n OFFSET i LIMIT 1;\n> i := i + 1;\n> INSERT INTO wine_audit\n> SELECT 'U', now(), row_to_json(oldrec), newrec;\n> END LOOP;\n> END;\n> \n> END IF;\n> RETURN NULL;\n> END;\n> $$;\n> \n> CREATE TABLE wines (winery text, brand text, variety text, year int, bottles int);\n> CREATE TABLE shipment (LIKE wines);\n> CREATE TABLE wine_audit (op varchar(1), datetime timestamptz,\n> \t\t\t oldrow jsonb, newrow jsonb);\n> \n> CREATE TRIGGER wine_update\n> AFTER UPDATE ON wines\n> REFERENCING OLD TABLE AS old_table NEW TABLE AS new_table\n> FOR EACH STATEMENT EXECUTE FUNCTION wine_audit();\n> -- I omit triggers on insert and delete because the trigger code for those is trivial\n> \n> INSERT 

INTO wines VALUES ('Concha y Toro', 'Sunrise', 'Chardonnay', 2021, 12),\n> ('Concha y Toro', 'Sunrise', 'Merlot', 2022, 12);\n> \n> INSERT INTO shipment VALUES ('Concha y Toro', 'Sunrise', 'Chardonnay', 2021, 96),\n> ('Concha y Toro', 'Sunrise', 'Merlot', 2022, 120),\n> ('Concha y Toro', 'Marqués de Casa y Concha', 'Carmenere', 2021, 48),\n> ('Concha y Toro', 'Casillero del Diablo', 'Cabernet Sauvignon', 2019, 240);\n> \n> ALTER TABLE shipment ADD COLUMN marked timestamp with time zone;\n> \n> WITH unmarked_shipment AS\n> (UPDATE shipment SET marked = now() WHERE marked IS NULL\n> RETURNING winery, brand, variety, year, bottles)\n> MERGE INTO wines AS w\n> USING (SELECT winery, brand, variety, year,\n> sum(bottles) as bottles\n> FROM unmarked_shipment\n> GROUP BY winery, brand, variety, year) AS s\n> ON (w.winery, w.brand, w.variety, w.year) =\n> (s.winery, s.brand, s.variety, s.year)\n> WHEN MATCHED THEN\n> UPDATE SET bottles = w.bottles + s.bottles\n> WHEN NOT MATCHED THEN\n> INSERT (winery, brand, variety, year, bottles)\n> VALUES (s.winery, s.brand, s.variety, s.year, s.bottles)\n> ;\n> \n> \n> If you examine table wine_audit after pasting all of the above, you'll\n> see this, which is correct:\n> \n> ─[ RECORD 1 ]────────────────────────────────────────────────────────────────────────────────────────────────────\n> op │ U\n> datetime │ 2023-02-01 01:16:44.704036+01\n> oldrow │ {\"year\": 2021, \"brand\": \"Sunrise\", \"winery\": \"Concha y Toro\", \"bottles\": 12, \"variety\": \"Chardonnay\"}\n> newrow │ {\"year\": 2021, \"brand\": \"Sunrise\", \"winery\": \"Concha y Toro\", \"bottles\": 108, \"variety\": \"Chardonnay\"}\n> ─[ RECORD 2 ]────────────────────────────────────────────────────────────────────────────────────────────────────\n> op │ U\n> datetime │ 2023-02-01 01:16:44.704036+01\n> oldrow │ {\"year\": 2022, \"brand\": \"Sunrise\", \"winery\": \"Concha y Toro\", \"bottles\": 12, \"variety\": \"Merlot\"}\n> newrow │ {\"year\": 2022, \"brand\": 
\"Sunrise\", \"winery\": \"Concha y Toro\", \"bottles\": 132, \"variety\": \"Merlot\"}\n> \n> My question is how to obtain the same rows without the LIMIT/OFFSET line\n> in the trigger function.\n> \n> \n> Also: how can we \"subtract\" both JSON blobs so that the 'newrow' only\n> contains the members that differ? I would like to have this:\n> \n> ─[ RECORD 1 ]────────────────────────────────────────────────────────────────────────────────────────────────────\n> op │ U\n> datetime │ 2023-02-01 01:16:44.704036+01\n> oldrow │ {\"year\": 2021, \"brand\": \"Sunrise\", \"winery\": \"Concha y Toro\", \"bottles\": 12, \"variety\": \"Chardonnay\"}\n> newrow │ {\"bottles\": 108}\n> ─[ RECORD 2 ]────────────────────────────────────────────────────────────────────────────────────────────────────\n> op │ U\n> datetime │ 2023-02-01 01:16:44.704036+01\n> oldrow │ {\"year\": 2022, \"brand\": \"Sunrise\", \"winery\": \"Concha y Toro\", \"bottles\": 12, \"variety\": \"Merlot\"}\n> newrow │ {\"bottles\": 132}\n> \n> -- \n> Álvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n> \"La gente vulgar sólo piensa en pasar el tiempo;\n> el que tiene talento, en aprovecharlo\"\n> \n> \n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Wed, 1 Feb 2023 21:55:54 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: transition tables and UPDATE" }, { "msg_contents": ">\n>\n> even uglier than what I already had. So yeah, I think it might be\n> useful if we had a way to inject a counter or something in there.\n>\n>\nThis came up for me when I was experimenting with making the referential\nintegrity triggers fire on statements rather than rows. Doing so has the\npotential to take a lot of the sting out of big deletes where the\nreferencing column isn't indexed (thus resulting in N sequentials scans of\nthe referencing table). 
If that were 1 statement then we'd get a single\n(still ugly) hash join, but it's an improvement.\n\nIt has been suggested that the overhead of forming the tuplestores of\naffected rows and reconstituting them into EphemeralNamedRelations could\nbe made better by instead storing an array of old ctids and new ctids,\nwhich obviously would be in the same order, if we had a means of\nreconstituting those with just the columns needed for the check (and\ngenerating a fake ordinality column for your needs), that would be\nconsiderably lighter weight than the tuplestores, and it might make\nstatement level triggers more useful all around.", "msg_date": "Thu, 2 Feb 2023 15:28:57 -0500", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: transition tables and UPDATE" } ]
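Alvaro's follow-up question in this thread — producing a newrow that contains only the members that differ from oldrow — is left unanswered above. A minimal sketch of one possible approach uses jsonb_each() and jsonb_object_agg(); the function name jsonb_diff is illustrative, not an existing built-in:

```sql
-- Hypothetical helper: keep only the members of new_row whose values
-- differ from (or are absent in) old_row.
CREATE FUNCTION jsonb_diff(old_row jsonb, new_row jsonb) RETURNS jsonb
LANGUAGE sql IMMUTABLE AS $$
    SELECT coalesce(jsonb_object_agg(key, value), '{}'::jsonb)
    FROM jsonb_each(new_row)              -- iterate members of the new row
    WHERE old_row -> key IS DISTINCT FROM value;  -- keep changed/added keys
$$;

SELECT jsonb_diff('{"year": 2021, "bottles": 12}'::jsonb,
                  '{"year": 2021, "bottles": 108}'::jsonb);
-- {"bottles": 108}
```

In the trigger sketched earlier this could be applied as `jsonb_diff(row_o::jsonb, row_n::jsonb)` when inserting into wine_audit; note it does not report keys that were removed, only those added or changed.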
[ { "msg_contents": "While working on [1] I noticed that if RLS gets enabled, the COPY TO command\nincludes the contents of child table into the result, although the\ndocumentation says it should not:\n\n\t\"COPY TO can be used only with plain tables, not views, and does not\n\tcopy rows from child tables or child partitions. For example, COPY\n\ttable TO copies the same rows as SELECT * FROM ONLY table. The syntax\n\tCOPY (SELECT * FROM table) TO ... can be used to dump all of the rows\n\tin an inheritance hierarchy, partitioned table, or view.\"\n\nA test case is attached (rls.sql) as well as fix proposal\n(copy_rls_no_inh.diff).\n\n[1] https://commitfest.postgresql.org/41/3641/\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\ncreate table a(i int);\ninsert into a values (1);\n\ncreate table a1() inherits(a);\ninsert into a1 values (1);\n\n-- Only the parent table is copied, as the documentation claims.\ncopy a to stdout;\n\nalter table a enable row level security;\ncreate role u;\ncreate policy pol_perm on a as permissive for select to u using (true);\ngrant select on table a to u;\nset role u;\n\n-- Both \"a\" and \"a1\" appears in the output.\ncopy a to stdout;", "msg_date": "Wed, 01 Feb 2023 12:45:57 +0100", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": true, "msg_subject": "RLS makes COPY TO process child tables" }, { "msg_contents": "On Wed, 01 Feb 2023 12:45:57 +0100\nAntonin Houska <ah@cybertec.at> wrote:\n\n> While working on [1] I noticed that if RLS gets enabled, the COPY TO command\n> includes the contents of child table into the result, although the\n> documentation says it should not:\n> \n> \t\"COPY TO can be used only with plain tables, not views, and does not\n> \tcopy rows from child tables or child partitions. For example, COPY\n> \ttable TO copies the same rows as SELECT * FROM ONLY table. The syntax\n> \tCOPY (SELECT * FROM table) TO ... 
can be used to dump all of the rows\n> \tin an inheritance hierarchy, partitioned table, or view.\"\n> \n> A test case is attached (rls.sql) as well as fix proposal\n> (copy_rls_no_inh.diff).\n\nI think this is a bug because the current behaviour is different from\nthe documentation. \n\nWhen RLS is enabled on a table in `COPY ... TO ...`, the query is converted\nto `COPY (SELECT * FROM ...) TO ...` to allow the rewriter to add in RLS\nclauses. This causes to dump the rows of child tables.\n\nThe patch fixes this by setting \"inh\" of the table in the converted query\nto false. This seems reasonable and actually fixes the problem.\n\nHowever, I think we would want a comment on the added line. Also, the\nattached test should be placed in the regression test.\n\nRegards,\nYugo Nagata\n\n> \n> [1] https://commitfest.postgresql.org/41/3641/\n> \n> -- \n> Antonin Houska\n> Web: https://www.cybertec-postgresql.com\n> \n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Thu, 2 Feb 2023 01:15:25 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: RLS makes COPY TO process child tables" }, { "msg_contents": "Yugo NAGATA <nagata@sraoss.co.jp> writes:\n> Antonin Houska <ah@cybertec.at> wrote:\n>> While working on [1] I noticed that if RLS gets enabled, the COPY TO command\n>> includes the contents of child table into the result, although the\n>> documentation says it should not:\n\n> I think this is a bug because the current behaviour is different from\n> the documentation. \n\nI agree, it shouldn't do that.\n\n> When RLS is enabled on a table in `COPY ... TO ...`, the query is converted\n> to `COPY (SELECT * FROM ...) TO ...` to allow the rewriter to add in RLS\n> clauses. This causes to dump the rows of child tables.\n\nDo we actually say that in so many words, either in the code or docs?\nIf so, it ought to read `COPY (SELECT * FROM ONLY ...) TO ...`\ninstead. 
(If we say that in the docs, then arguably the code *does*\nconform to the docs. But I don't see it in the COPY ref page at least.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 01 Feb 2023 11:47:23 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: RLS makes COPY TO process child tables" }, { "msg_contents": "On Wed, 01 Feb 2023 11:47:23 -0500\nTom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Yugo NAGATA <nagata@sraoss.co.jp> writes:\n> > Antonin Houska <ah@cybertec.at> wrote:\n> >> While working on [1] I noticed that if RLS gets enabled, the COPY TO command\n> >> includes the contents of child table into the result, although the\n> >> documentation says it should not:\n> \n> > I think this is a bug because the current behaviour is different from\n> > the documentation. \n> \n> I agree, it shouldn't do that.\n> \n> > When RLS is enabled on a table in `COPY ... TO ...`, the query is converted\n> > to `COPY (SELECT * FROM ...) TO ...` to allow the rewriter to add in RLS\n> > clauses. This causes to dump the rows of child tables.\n> \n> Do we actually say that in so many words, either in the code or docs?\n> If so, it ought to read `COPY (SELECT * FROM ONLY ...) TO ...`\n> instead. (If we say that in the docs, then arguably the code *does*\n> conform to the docs. 
But I don't see it in the COPY ref page at least.)\n\nThe documentation does not say that, but the current code actually does that.\nAlso, there is the following comment in BeginCopyTo().\n\n * With row-level security and a user using \"COPY relation TO\", we\n * have to convert the \"COPY relation TO\" to a query-based COPY (eg:\n * \"COPY (SELECT * FROM relation) TO\"), to allow the rewriter to add\n * in any RLS clauses.\n\nMaybe it would be better to change the description in the comment to\n\"COPY (SELECT * FROM ONLY relation) TO\" when fixing the bug.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Thu, 2 Feb 2023 16:00:31 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: RLS makes COPY TO process child tables" }, { "msg_contents": "Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n\n> On Wed, 01 Feb 2023 12:45:57 +0100\n> Antonin Houska <ah@cybertec.at> wrote:\n> \n> > While working on [1] I noticed that if RLS gets enabled, the COPY TO command\n> > includes the contents of child table into the result, although the\n> > documentation says it should not:\n> > \n> > \t\"COPY TO can be used only with plain tables, not views, and does not\n> > \tcopy rows from child tables or child partitions. For example, COPY\n> > \ttable TO copies the same rows as SELECT * FROM ONLY table. The syntax\n> > \tCOPY (SELECT * FROM table) TO ... can be used to dump all of the rows\n> > \tin an inheritance hierarchy, partitioned table, or view.\"\n> > \n> > A test case is attached (rls.sql) as well as fix proposal\n> > (copy_rls_no_inh.diff).\n> \n> I think this is a bug because the current behaviour is different from\n> the documentation. \n> \n> When RLS is enabled on a table in `COPY ... TO ...`, the query is converted\n> to `COPY (SELECT * FROM ...) TO ...` to allow the rewriter to add in RLS\n> clauses. 

This causes to dump the rows of child tables.\n> \n> The patch fixes this by setting \"inh\" of the table in the converted query\n> to false. This seems reasonable and actually fixes the problem.\n> \n> However, I think we would want a comment on the added line.\n\nA short comment added, see the new patch version.\n\n> Also, the attached test should be placed in the regression test.\n\nHm, I'm not sure it's necessary. It would effectively test whether the 'inh'\nfield works, but if it didn't, many other tests would fail. I discovered the\nbug by reading the code, so I wanted to demonstrate (also to myself) that it\ncauses incorrect behavior from user perspective. That was the purpose of the\ntest.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com", "msg_date": "Thu, 02 Feb 2023 08:01:54 +0100", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: RLS makes COPY TO process child tables" }, { "msg_contents": "Greetings,\n\n* Yugo NAGATA (nagata@sraoss.co.jp) wrote:\n> On Wed, 01 Feb 2023 11:47:23 -0500\n> Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> > Yugo NAGATA <nagata@sraoss.co.jp> writes:\n> > > Antonin Houska <ah@cybertec.at> wrote:\n> > >> While working on [1] I noticed that if RLS gets enabled, the COPY TO command\n> > >> includes the contents of child table into the result, although the\n> > >> documentation says it should not:\n> > \n> > > I think this is a bug because the current behaviour is different from\n> > > the documentation. \n> > \n> > I agree, it shouldn't do that.\n\nYeah, I agree based on what the COPY table TO docs say should be\nhappening.\n\n> > > When RLS is enabled on a table in `COPY ... TO ...`, the query is converted\n> > > to `COPY (SELECT * FROM ...) TO ...` to allow the rewriter to add in RLS\n> > > clauses. 
This causes to dump the rows of child tables.\n> > \n> > Do we actually say that in so many words, either in the code or docs?\n> > If so, it ought to read `COPY (SELECT * FROM ONLY ...) TO ...`\n> > instead. (If we say that in the docs, then arguably the code *does*\n> > conform to the docs. But I don't see it in the COPY ref page at least.)\n> \n> The documentation do not say that, but the current code actually do that.\n> Also, there is the following comment in BeginCopyTo().\n> \n> * With row-level security and a user using \"COPY relation TO\", we\n> * have to convert the \"COPY relation TO\" to a query-based COPY (eg:\n> * \"COPY (SELECT * FROM relation) TO\"), to allow the rewriter to add\n> * in any RLS clauses.\n> \n> Maybe, it is be better to change the description in the comment to\n> \"COPY (SELECT * FROM ONLY relation) TO\" when fixing the bug.\n\nYeah, that should also be updated. Perhaps you'd send an updated patch\nwhich includes fixing that too and maybe adds clarifying documentation\nto COPY which mentions what happens when RLS is enabled on the relation?\n\nI'm not sure if this makes good sense to back-patch.\n\nThanks,\n\nStephen", "msg_date": "Tue, 7 Feb 2023 20:02:05 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: RLS makes COPY TO process child tables" }, { "msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n>> Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> Yugo NAGATA <nagata@sraoss.co.jp> writes:\n>>>> I think this is a bug because the current behaviour is different from\n>>>> the documentation. \n\n>>> I agree, it shouldn't do that.\n\n> Yeah, I agree based on what the COPY table TO docs say should be\n> happening.\n\nYeah, the documentation is quite clear that child data is not included.\n\n> I'm not sure if this makes good sense to back-patch.\n\nI think we have to. 
The alternative is to back-patch some very confusing\ndocumentation changes saying \"never mind all that if RLS is on\".\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 10 Mar 2023 11:34:02 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: RLS makes COPY TO process child tables" }, { "msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> Yeah, that should also be updated. Perhaps you'd send an updated patch\n> which includes fixing that too and maybe adds clarifying documentation\n> to COPY which mentions what happens when RLS is enabled on the relation?\n\nI couldn't find anything in copy.sgml that seemed to need adjustment.\nIt already says\n\n If row-level security is enabled for the table, the relevant SELECT\n policies will apply to COPY table TO statements.\n\nTogether with the already-mentioned\n\n COPY table TO copies the same rows as SELECT * FROM ONLY table.\n\nI'd say that the behavior is very clearly specified already. We just\nneed to make the code do what the docs say. So I pushed the patch\nwithout any docs changes. I did add a test case, because I don't\nlike back-patching without something that proves that the issue is\nrelevant to, and corrected in, each branch.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 10 Mar 2023 13:57:55 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: RLS makes COPY TO process child tables" } ]
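For reference, the documented equivalence that this fix restores can be sketched without RLS at all; the table names follow Antonin's attached rls.sql, and the expected-output comments describe the behaviour the COPY documentation promises:

```sql
-- With RLS enabled, "COPY a TO ..." should keep behaving like
-- SELECT * FROM ONLY a, i.e. child rows must not appear.
CREATE TABLE a (i int);
INSERT INTO a VALUES (1);
CREATE TABLE a1 () INHERITS (a);
INSERT INTO a1 VALUES (2);

COPY a TO STDOUT;                         -- parent row only
COPY (SELECT * FROM ONLY a) TO STDOUT;    -- same rows as the line above
COPY (SELECT * FROM a) TO STDOUT;         -- parent and child rows
```

The bug was that enabling RLS silently turned the first form into the third; setting inh to false in the converted query (equivalently, `SELECT * FROM ONLY relation`) makes it match the second form again.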
[ { "msg_contents": "Hi hackers:\r\n I came across a blog that I was very impressed with, especially the views mentioned in it about PostgreSQL Core Team,Especially tom_lane in PostgreSQL Core Team,The new features submitted by some developers are often tainted with personal preferences, but fortunately not some new features do not lose the opportunity to be merged because of their personal preferences, This is blog url:https://www.percona.com/blog/why-postgresql-needs-transparent-database-encryption-tde/\r\n[https://www.percona.com/blog/wp-content/uploads/2023/01/PostgreSQL-Needs-Transparent-Database-Encryption.jpg]<https://www.percona.com/blog/why-postgresql-needs-transparent-database-encryption-tde/>\r\nWhy PostgreSQL Needs Transparent Database Encryption (TDE)<https://www.percona.com/blog/why-postgresql-needs-transparent-database-encryption-tde/>\r\nAs Ibrar Ahmed noted in his blog post on Transparent Database Encryption (TDE). PostgreSQL is a surprising outlier when it comes to offering Transparent Database Encryption. Instead, it seems PostgreSQL Developers are of the opinion that encryption is a storage-level problem and is better solved on the filesystem or block device level.\r\nwww.percona.com\r\n\r\n\r\n```\r\n\r\nWhile I believe Transparent Database Encryption in PostgreSQL is important, I think it is just an illustration of a bigger question. Is technical governance in PostgreSQL designed to maximize its success in the future, or is it more about sticking to the approaches that helped PostgreSQL reach current success levels? For a project of such scale and influence, there seems to be surprisingly little user impact on PostgreSQL Governance. The PostgreSQL Core Team<https://www.postgresql.org/developer/core/> consists of “seven long-time community members with various specializations” rather than having clear electable positions, as many other open source organizations do. 
The development process in PostgreSQL is based around a mailing list<https://www.postgresql.org/list/pgsql-hackers/> rather than more modern and organized issue tracking and pull-request-based development workflows. Interested in PostgreSQL Bugs<https://www.postgresql.org/docs/current/bug-reporting.html>? There is no bugs database that allows you to easily see which bug is confirmed and what version it was fixed in a user-friendly way. Instead, you need to dig through the bugs mailing list.\r\n\r\n```\r\n\r\n[cid:02faaf41-94ca-4bda-a940-dd8192724467]", "msg_date": "Wed, 1 Feb 2023 12:24:11 +0000", "msg_from": "adherent postgres <adherent_postgres@hotmail.com>", "msg_from_op": true, "msg_subject": "About PostgreSQL Core Team" }, { "msg_contents": "Hi, Adherent!\n\nIMO \"not liking\" that you quote in the picture is just other words for\nexpressing caution for the patch or for the general direction of some\nchange. At least I never felt personal or arbitrary presumptions in\nrelation to my patches. So if you can join a discussion with your proposals\nto address the sources of caution, and improve or review the patches, it\nwould be really helpful. And these are really the things that move patches\nforward, not just complaining about words.\n\nRegards,\nPavel Borisov,\nSupabase\n\nHi, Adherent!IMO \"not liking\" that you quote in the picture is just other words for expressing caution for the patch or for the general direction of some change. At least I never felt personal or arbitrary presumptions in relation to my patches. So if you can join a discussion with your proposals to address the sources of caution, and improve or review the patches, it would be really helpful. 
And these are really the things that move patches forward, not just complaining about words.Regards,Pavel Borisov,Supabase", "msg_date": "Wed, 1 Feb 2023 20:57:45 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: About PostgreSQL Core Team" }, { "msg_contents": "On 2023-02-01 We 07:24, adherent postgres wrote:\n> Hi hackers:\n>     I came across a blog that I was very impressed with, especially \n> the views mentioned in it about PostgreSQL Core Team\n>\n\n\nPerhaps people might take more notice if you didn't hide behind an \nanonymous hotmail account.\n\n\nI've heard these sort of criticisms before, in one case very recently, \nbut almost always from people who aren't contributors or potential \ncontributors. I've never had someone say to me \"Well I would contribute \nlots of code to Postgres but I won't as you don't do PRs.\"\n\n\n\ncheers\n\n\n\nandrew (Not a core team member)\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n", "msg_date": "Fri, 3 Feb 2023 17:17:00 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: About PostgreSQL Core Team" } ]
[ { "msg_contents": "Hi hackers,\n\nWhile migrating from PostgreSQL 14 to 15, we encountered the following \nperformance degradation caused by commit 46846433a03dff: \"shm_mq: Update \nmq_bytes_written less often\", discussion in [1].\n\nThe batching can make queries with a LIMIT clause run significantly \nslower compared to PostgreSQL 14, because neither the ring buffer write \nposition is updated, nor the latch to inform the leader that there's \ndata available is set, before a worker's queue is 1/4th full. This can \nbe seen in the number of rows produced by a parallel worker. Worst-case, \nthe data set is large and all rows to answer the query appear early, but \nare not big enough to fill the queue to 1/4th (e.g. when the LIMIT and \nthe tuple sizes are small). Here is an example to reproduce the problem.\n\nCREATE TABLE t(id1 INT, id2 INT, id3 INT, id4 INT, id5 INT);\nINSERT INTO t(id1, id2, id3, id4, id5) SELECT i%1000, i, i, i, i FROM \ngenerate_series(1, 10000000) AS i;\nANALYZE t;\nSET parallel_tuple_cost = 0;\nSET parallel_setup_cost = 0;\nSET min_parallel_table_scan_size = 0;\nSET max_parallel_workers_per_gather = 8;\nEXPLAIN ANALYZE VERBOSE SELECT id2 FROM t WHERE id1 = 100 LIMIT 100;\n\nPostgreSQL 15:\n\n  Limit  (cost=0.00..797.43 rows=100 width=4) (actual \ntime=65.083..69.207 rows=100 loops=1)\n    Output: id2\n    ->  Gather  (cost=0.00..79320.18 rows=9947 width=4) (actual \ntime=65.073..68.417 rows=100 loops=1)\n          Output: id2\n          Workers Planned: 8\n          Workers Launched: 7\n          ->  Parallel Seq Scan on public.t (cost=0.00..79320.18 \nrows=1243 width=4) (actual time=0.204..33.049 rows=100 loops=7)\n                Output: id2\n                Filter: (t.id1 = 100)\n                Rows Removed by Filter: 99345\n                Worker 0:  actual time=0.334..32.284 rows=100 loops=1\n                Worker 1:  actual time=0.060..32.680 rows=100 loops=1\n                Worker 2:  actual time=0.637..33.954 rows=98 loops=1\n  
              Worker 3:  actual time=0.136..33.301 rows=100 loops=1\n                Worker 4:  actual time=0.140..31.942 rows=100 loops=1\n                Worker 5:  actual time=0.062..33.673 rows=100 loops=1\n                Worker 6:  actual time=0.062..33.512 rows=100 loops=1\n  Planning Time: 0.113 ms\n  Execution Time: 69.772 ms\n\nPostgreSQL 14:\n\n  Limit  (cost=0.00..797.75 rows=100 width=4) (actual \ntime=30.602..38.459 rows=100 loops=1)\n    Output: id2\n    ->  Gather  (cost=0.00..79320.18 rows=9943 width=4) (actual \ntime=30.592..37.669 rows=100 loops=1)\n          Output: id2\n          Workers Planned: 8\n          Workers Launched: 7\n          ->  Parallel Seq Scan on public.t (cost=0.00..79320.18 \nrows=1243 width=4) (actual time=0.221..5.181 rows=15 loops=7)\n                Output: id2\n                Filter: (t.id1 = 100)\n                Rows Removed by Filter: 15241\n                Worker 0:  actual time=0.129..4.840 rows=15 loops=1\n                Worker 1:  actual time=0.125..4.924 rows=15 loops=1\n                Worker 2:  actual time=0.314..5.249 rows=17 loops=1\n                Worker 3:  actual time=0.252..5.341 rows=15 loops=1\n                Worker 4:  actual time=0.163..5.179 rows=15 loops=1\n                Worker 5:  actual time=0.422..5.248 rows=15 loops=1\n                Worker 6:  actual time=0.139..5.489 rows=16 loops=1\n  Planning Time: 0.084 ms\n  Execution Time: 38.880 ms\n\nI had a quick look at the code and I started wondering if we can't \nachieve the same performance improvement without batching by e.g.:\n\n- Only set the latch if new data is written to an empty queue. \nOtherwise, the leader should anyways keep try reading from the queues \nwithout waiting for the latch, so no need to set the latch again.\n\n- Reorganize struct shm_mq. There seems to be false sharing happening \nbetween at least mq_ring_size and the atomics and potentially also \nbetween the atomics. 
I'm wondering if that's not the root cause of \nthe \"slow atomics\" observed in [1]? I'm happy to do some profiling.\n\nAlternatively, we could always set the latch if numberTuples in \nExecutePlan() is reasonably low. To do so, the DestReceiver's receive() \nmethod would only need an additional \"force flush\" argument.\n\n\nA slightly different but related problem is when some workers have \nalready produced enough rows to answer the LIMIT query, but other \nworkers are still running without producing any new rows. In that case \nthe \"already done\" workers will stop running even though they haven't \nreached 1/4th of the queue size, because the for-loop in execMain.c \nbails out in the following condition:\n\n         if (numberTuples && numberTuples == current_tuple_count)\n             break;\n\nSubsequently, the leader will end the plan and then wait in the Gather \nnode for all workers to shut down. However, workers still running but not \nproducing any new rows will never reach the following condition in \nexecMain.c to check if they're supposed to stop (the shared memory queue \ndest receiver will return false on detached queues):\n\n             /*\n              * If we are not able to send the tuple, we assume the \ndestination\n              * has closed and no more tuples can be sent. If that's the \ncase,\n              * end the loop.\n              */\n             if (!dest->receiveSlot(slot, dest))\n                 break;\n\nReproduction steps for this problem are below. 

Here the worker getting \nthe first table page will be done right away, but the query takes as \nlong as it takes to scan all pages of the entire table.\n\nCREATE TABLE bar (col INT);\nINSERT INTO bar SELECT generate_series(1, 5000000);\nSET max_parallel_workers_per_gather = 8;\nEXPLAIN ANALYZE VERBOSE SELECT col FROM bar WHERE col = 1 LIMIT 1;\n\n  Limit  (cost=0.00..1.10 rows=1 width=4) (actual time=32.289..196.200 \nrows=1 loops=1)\n    Output: col\n    ->  Gather  (cost=0.00..30939.03 rows=28208 width=4) (actual \ntime=32.278..196.176 rows=1 loops=1)\n          Output: col\n          Workers Planned: 8\n          Workers Launched: 7\n          ->  Parallel Seq Scan on public.bar (cost=0.00..30939.03 \nrows=3526 width=4) (actual time=137.251..137.255 rows=0 loops=7)\n                Output: col\n                Filter: (bar.col = 1)\n                Rows Removed by Filter: 713769\n                Worker 0:  actual time=160.177..160.181 rows=0 loops=1\n                Worker 1:  actual time=160.111..160.115 rows=0 loops=1\n                Worker 2:  actual time=0.043..0.047 rows=1 loops=1\n                Worker 3:  actual time=160.040..160.044 rows=0 loops=1\n                Worker 4:  actual time=160.167..160.171 rows=0 loops=1\n                Worker 5:  actual time=160.018..160.022 rows=0 loops=1\n                Worker 6:  actual time=160.201..160.205 rows=0 loops=1\n  Planning Time: 0.087 ms\n  Execution Time: 196.247 ms\n\nWe would need something similar to CHECK_FOR_INTERRUPTS() which returns \na NULL slot if a parallel worker is supposed to stop execution (we could \ne.g. check if the queue got detached). 
Or could we amend \nCHECK_FOR_INTERRUPTS() to just stop the worker gracefully if the queue \ngot detached?\n\nJasper Smit, Spiros Agathos and Dimos Stamatakis helped working on this.\n\n[1] \nhttps://www.postgresql.org/message-id/flat/CAFiTN-tVXqn_OG7tHNeSkBbN%2BiiCZTiQ83uakax43y1sQb2OBA%40mail.gmail.com\n\n-- \nDavid Geier\n(ServiceNow)\n\n\n", "msg_date": "Wed, 1 Feb 2023 14:41:02 +0100", "msg_from": "David Geier <geidav.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Performance issues with parallelism and LIMIT" }, { "msg_contents": "On 2/1/23 14:41, David Geier wrote:\n> Hi hackers,\n> \n> While migrating from PostgreSQL 14 to 15, we encountered the following\n> performance degradation caused by commit 46846433a03dff: \"shm_mq: Update\n> mq_bytes_written less often\", discussion in [1].\n> \n> The batching can make queries with a LIMIT clause run significantly\n> slower compared to PostgreSQL 14, because neither the ring buffer write\n> position is updated, nor the latch to inform the leader that there's\n> data available is set, before a worker's queue is 1/4th full. This can\n> be seen in the number of rows produced by a parallel worker. Worst-case,\n> the data set is large and all rows to answer the query appear early, but\n> are not big enough to fill the queue to 1/4th (e.g. when the LIMIT and\n> the tuple sizes are small). Here is an example to reproduce the problem.\n> \n\nYeah, this is a pretty annoying regression. We already can hit poor\nbehavior when matching rows are not distributed uniformly in the tables\n(which is what LIMIT costing assumes), and this makes it more likely to\nhit similar issues. 
A bit like when doing many HTTP requests makes it\nmore likely to hit at least one 99% outlier.\n\n> ...\n> \n> I had a quick look at the code and I started wondering if we can't\n> achieve the same performance improvement without batching by e.g.:\n> \n> - Only set the latch if new data is written to an empty queue.\n> Otherwise, the leader should anyways keep try reading from the queues\n> without waiting for the latch, so no need to set the latch again.\n> \n> - Reorganize struct shm_mq. There seems to be false sharing happening\n> between at least mq_ring_size and the atomics and potentially also\n> between the atomics. I'm wondering if the that's not the root cause of\n> the \"slow atomics\" observed in [1]? I'm happy to do some profiling.\n> \n> Alternatively, we could always set the latch if numberTuples in\n> ExecutePlan() is reasonably low. To do so, the DestReceiver's receive()\n> method would only need an additional \"force flush\" argument.\n> \n\nNo opinion on these options, but worth a try. Alternatively, we could\ntry the usual doubling approach - start with a low threshold (and set\nthe latch frequently), and then gradually increase it up to the 1/4.\n\nThat should work both for queries expecting only few rows and those\nproducing a lot of data.\n\n> \n> A slightly different but related problem is when some workers have\n> already produced enough rows to answer the LIMIT query, but other\n> workers are still running without producing any new rows. In that case\n> the \"already done\" workers will stop running even though they haven't\n> reached 1/4th of the queue size, because the for-loop in execMain.c\n> bails out in the following condition:\n> \n>         if (numberTuples && numberTuples == current_tuple_count)\n>             break;\n> \n> Subsequently, the leader will end the plan and then wait in the Gather\n> node for all workers to shutdown. 
However, workers still running but not\n> producing any new rows will never reach the following condition in\n> execMain.c to check if they're supposed to stop (the shared memory queue\n> dest receiver will return false on detached queues):\n> \n>             /*\n>              * If we are not able to send the tuple, we assume the\n> destination\n>              * has closed and no more tuples can be sent. If that's the\n> case,\n>              * end the loop.\n>              */\n>             if (!dest->receiveSlot(slot, dest))\n>                 break;\n> \n> Reproduction steps for this problem are below. Here the worker getting\n> the first table page will be done right away, but the query takes as\n> long as it takes to scan all pages of the entire table.\n> \n\nOuch!\n\n> ...\n> \n> We would need something similar to CHECK_FOR_INTERRUPTS() which returns\n> a NULL slot if a parallel worker is supposed to stop execution (we could\n> e.g. check if the queue got detached). Or could we amend\n> CHECK_FOR_INTERRUPTS() to just stop the worker gracefully if the queue\n> got detached?\n> \n\nThat sounds reasonable, but I'm not very familiar with the leader-worker\ncommunication, so no opinion on how it should be done.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 8 Feb 2023 11:42:45 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Performance issues with parallelism and LIMIT" }, { "msg_contents": "Hi,\n\nOn 2/8/23 11:42, Tomas Vondra wrote:\n> On 2/1/23 14:41, David Geier wrote:\n>\n> Yeah, this is a pretty annoying regression. We already can hit poor\n> behavior when matching rows are not distributed uniformly in the tables\n> (which is what LIMIT costing assumes), and this makes it more likely to\n> hit similar issues. 
A bit like when doing many HTTP requests makes it\n> more likely to hit at least one 99% outlier.\nAre you talking about the use of ordering vs filtering indexes in \nqueries where there's both an ORDER BY and a filter present (e.g. using \nan ordering index but then all rows passing the filter are at the end of \nthe table)? If not, can you elaborate a bit more on that and maybe give \nan example.\n> No opinion on these options, but worth a try. Alternatively, we could\n> try the usual doubling approach - start with a low threshold (and set\n> the latch frequently), and then gradually increase it up to the 1/4.\n>\n> That should work both for queries expecting only few rows and those\n> producing a lot of data.\n\nI was thinking about this variant as well. One more alternative would be \nlatching the leader once a worker has produced 1/Nth of the LIMIT where \nN is the number of workers. Both variants have the disadvantage that \nthere are still corner cases where the latch is set too late; but it \nwould for sure be much better than what we have today.\n\nI also did some profiling and - at least on my development laptop with 8 \nphysical cores - the original example, motivating the batching change is \nslower than when it's disabled by commenting out:\n\n     if (force_flush || mqh->mqh_send_pending > (mq->mq_ring_size >> 2))\n\nSET parallel_tuple_cost TO 0;\nCREATE TABLE b (a int);\nINSERT INTO b SELECT generate_series(1, 200000000);\nANALYZE b;\nEXPLAIN (ANALYZE, TIMING OFF) SELECT * FROM b;\n\n  Gather  (cost=1000.00..1200284.61 rows=200375424 width=4) (actual \nrows=200000000 loops=1)\n    Workers Planned: 7\n    Workers Launched: 7\n    ->  Parallel Seq Scan on b  (cost=0.00..1199284.61 rows=28625061 \nwidth=4) (actual rows=25000000 loops=8)\n\nAlways latch: 19055 ms\nBatching:     19575 ms\n\nIf I find some time, I'll play around a bit more and maybe propose a patch.\n\n>> ...\n>>\n>> We would need something similar to CHECK_FOR_INTERRUPTS() which 
returns\n>> a NULL slot if a parallel worker is supposed to stop execution (we could\n>> e.g. check if the queue got detached). Or could we amend\n>> CHECK_FOR_INTERRUPTS() to just stop the worker gracefully if the queue\n>> got detached?\n>>\n> That sounds reasonable, but I'm not very familiar the leader-worker\n> communication, so no opinion on how it should be done.\n\nI think an extra macro that needs to be called from dozens of places to \ncheck if parallel execution is supposed to end is the least preferred \napproach. I'll read up more on how CHECK_FOR_INTERRUPTS() works and if \nwe cannot actively signal the workers that they should stop.\n\n-- \nDavid Geier\n(ServiceNow)\n\n\n\n", "msg_date": "Mon, 20 Feb 2023 19:18:22 +0100", "msg_from": "David Geier <geidav.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Performance issues with parallelism and LIMIT" }, { "msg_contents": "\nOn 2/20/23 19:18, David Geier wrote:\n> Hi,\n> \n> On 2/8/23 11:42, Tomas Vondra wrote:\n>> On 2/1/23 14:41, David Geier wrote:\n>>\n>> Yeah, this is a pretty annoying regression. We already can hit poor\n>> behavior when matching rows are not distributed uniformly in the tables\n>> (which is what LIMIT costing assumes), and this makes it more likely to\n>> hit similar issues. A bit like when doing many HTTP requests makes it\n>> more likely to hit at least one 99% outlier.\n> Are you talking about the use of ordering vs filtering indexes in\n> queries where there's both an ORDER BY and a filter present (e.g. using\n> an ordering index but then all rows passing the filter are at the end of\n> the table)? If not, can you elaborate a bit more on that and maybe give\n> an example.\n\nYeah, roughly. I don't think the explicit ORDER BY is a requirement for\nthis to happen - it's enough when the part of the plan below LIMIT\nproduces many rows, but the matching rows are at the end.\n\n>> No opinion on these options, but worth a try. 
Alternatively, we could\n>> try the usual doubling approach - start with a low threshold (and set\n>> the latch frequently), and then gradually increase it up to the 1/4.\n>>\n>> That should work both for queries expecting only few rows and those\n>> producing a lot of data.\n> \n> I was thinking about this variant as well. One more alternative would be\n> latching the leader once a worker has produced 1/Nth of the LIMIT where\n> N is the number of workers. Both variants have the disadvantage that\n> there are still corner cases where the latch is set too late; but it\n> would for sure be much better than what we have today.\n> \n> I also did some profiling and - at least on my development laptop with 8\n> physical cores - the original example, motivating the batching change is\n> slower than when it's disabled by commenting out:\n> \n>     if (force_flush || mqh->mqh_send_pending > (mq->mq_ring_size >> 2))\n> \n> SET parallel_tuple_cost TO 0;\n> CREATE TABLE b (a int);\n> INSERT INTO b SELECT generate_series(1, 200000000);\n> ANALYZE b;\n> EXPLAIN (ANALYZE, TIMING OFF) SELECT * FROM b;\n> \n>  Gather  (cost=1000.00..1200284.61 rows=200375424 width=4) (actual\n> rows=200000000 loops=1)\n>    Workers Planned: 7\n>    Workers Launched: 7\n>    ->  Parallel Seq Scan on b  (cost=0.00..1199284.61 rows=28625061\n> width=4) (actual rows=25000000 loops=8)\n> \n> Always latch: 19055 ms\n> Batching:     19575 ms\n> \n> If I find some time, I'll play around a bit more and maybe propose a patch.\n> \n\nOK. Once you have a WIP patch maybe share it and I'll try to do some\nprofiling too.\n\n>>> ...\n>>>\n>>> We would need something similar to CHECK_FOR_INTERRUPTS() which returns\n>>> a NULL slot if a parallel worker is supposed to stop execution (we could\n>>> e.g. check if the queue got detached). 
Or could we amend\n>>> CHECK_FOR_INTERRUPTS() to just stop the worker gracefully if the queue\n>>> got detached?\n>>>\n>> That sounds reasonable, but I'm not very familiar the leader-worker\n>> communication, so no opinion on how it should be done.\n> \n> I think an extra macro that needs to be called from dozens of places to\n> check if parallel execution is supposed to end is the least preferred\n> approach. I'll read up more on how CHECK_FOR_INTERRUPTS() works and if\n> we cannot actively signal the workers that they should stop.\n> \n\nIMHO, if this requires adding another macro to a bunch of ad hoc places,\nthat is rather inconvenient. It'd be much better to fix this in a localized\nmanner (especially as it seems related to a fairly specific place).\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 22 Feb 2023 13:22:27 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Performance issues with parallelism and LIMIT" }, { "msg_contents": "On 2/22/23 13:22, Tomas Vondra wrote:\n> ...\n> \n>>> No opinion on these options, but worth a try. Alternatively, we could\n>>> try the usual doubling approach - start with a low threshold (and set\n>>> the latch frequently), and then gradually increase it up to the 1/4.\n>>>\n>>> That should work both for queries expecting only few rows and those\n>>> producing a lot of data.\n>>\n>> I was thinking about this variant as well. One more alternative would be\n>> latching the leader once a worker has produced 1/Nth of the LIMIT where\n>> N is the number of workers. 
Both variants have the disadvantage that\n>> there are still corner cases where the latch is set too late; but it\n>> would for sure be much better than what we have today.\n>>\n>> I also did some profiling and - at least on my development laptop with 8\n>> physical cores - the original example, motivating the batching change is\n>> slower than when it's disabled by commenting out:\n>>\n>>     if (force_flush || mqh->mqh_send_pending > (mq->mq_ring_size >> 2))\n>>\n>> SET parallel_tuple_cost TO 0;\n>> CREATE TABLE b (a int);\n>> INSERT INTO b SELECT generate_series(1, 200000000);\n>> ANALYZE b;\n>> EXPLAIN (ANALYZE, TIMING OFF) SELECT * FROM b;\n>>\n>>  Gather  (cost=1000.00..1200284.61 rows=200375424 width=4) (actual\n>> rows=200000000 loops=1)\n>>    Workers Planned: 7\n>>    Workers Launched: 7\n>>    ->  Parallel Seq Scan on b  (cost=0.00..1199284.61 rows=28625061\n>> width=4) (actual rows=25000000 loops=8)\n>>\n>> Always latch: 19055 ms\n>> Batching:     19575 ms\n>>\n>> If I find some time, I'll play around a bit more and maybe propose a patch.\n>>\n> \n> OK. Once you have a WIP patch maybe share it and I'll try to do some\n> profiling too.\n> \n>>>> ...\n>>>>\n>>>> We would need something similar to CHECK_FOR_INTERRUPTS() which returns\n>>>> a NULL slot if a parallel worker is supposed to stop execution (we could\n>>>> e.g. check if the queue got detached). Or could we amend\n>>>> CHECK_FOR_INTERRUPTS() to just stop the worker gracefully if the queue\n>>>> got detached?\n>>>>\n>>> That sounds reasonable, but I'm not very familiar the leader-worker\n>>> communication, so no opinion on how it should be done.\n>>\n>> I think an extra macro that needs to be called from dozens of places to\n>> check if parallel execution is supposed to end is the least preferred\n>> approach. 
I'll read up more on how CHECK_FOR_INTERRUPTS() works and if\n>> we cannot actively signal the workers that they should stop.\n>>\n> \n> IMHO if this requires adding another macro to a bunch of ad hoc places\n> is rather inconvenient. It'd be much better to fix this in a localized\n> manner (especially as it seems related to a fairly specific place).\n> \n\nDavid, do you still plan to try fixing these issues? I have a feeling\nthose issues may be fairly common but often undetected, or just brushed\nof as \"slow query\" (AFAICS it was only caught thanks to comparing\ntimings before/after upgrade). Would be great to improve this.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 3 Nov 2023 21:48:25 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Performance issues with parallelism and LIMIT" } ]
[ { "msg_contents": "Over at [1] we have a complaint that dump-and-restore fails for\nhash-partitioned tables if a partitioning column is an enum,\nbecause the enum values are unlikely to receive the same OIDs\nin the destination database as they had in the source, and the\nhash codes are dependent on those OIDs. So restore tries to\nload rows into the wrong leaf tables, and it's all a mess.\nThe patch approach proposed at [1] doesn't really work, but\nwhat does work is to use pg_dump's --load-via-partition-root\noption, so that the tuple routing decisions are all re-made.\n\nI'd initially proposed that we force --load-via-partition-root\nif we notice that we have hash partitioning on an enum column.\nBut the more I thought about this, the more comparable problems\ncame to mind:\n\n1. Hash partitioning on text columns will likely fail if the\ndestination uses a different encoding.\n\n2. Hash partitioning on float columns will fail if you use\n--extra-float-digits to round off the values. And then\nthere's the fact that the behavior of strtod() might vary\nacross platforms.\n\n3. Hash partitioning on floats is also endian-dependent,\nand the same is likely true for some other types.\n\n4. Anybody want to bet that complex types such as jsonb\nare entirely free of similar hazards? (Yes, somebody\nthought it'd be a good idea to provide jsonb_hash.)\n\nIn general, we've never thought that hash values are\nrequired to be consistent across platforms.\n\nThat was leading me to think that we should force\n--load-via-partition-root for any hash-partitioned table,\njust to pre-emptively avoid these problems. But then\nI remembered that\n\n5. Range partitioning on text columns will likely fail if the\ndestination uses a different collation.\n\nThis is looking like a very large-caliber foot-gun, isn't it?\nAnd remember that --load-via-partition-root acts at pg_dump\ntime, not restore. 
If all you have is a dump file with no\nopportunity to go back and get a new one, and it won't load\ninto your new server, you have got a nasty problem.\n\nI don't think this is an acceptable degree of risk, considering\nthat the primary use-cases for pg_dump involve target systems\nthat aren't 100.00% identical to the source.\n\nSo here's what I think we should actually do: make\n--load-via-partition-root the default. We can provide a\nswitch to turn it off, for those who are brave or foolish\nenough to risk that in the name of saving a few cycles,\nbut it ought to be the default.\n\nFurthermore, I think we should make this happen in time for\nnext week's releases. I can write the patch easily enough,\nbut we need a consensus PDQ that this is what to do.\n\nAnyone want to bikeshed on the spelling of the new switch?\nI'm contemplating \"--load-via-partition-leaf\" or perhaps\n\"--no-load-via-partition-root\".\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/765e5968-6c39-470f-95bf-7b14e6b9a1c0%40app.fastmail.com\n\n\n", "msg_date": "Wed, 01 Feb 2023 11:17:59 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "pg_dump versus hash partitioning" }, { "msg_contents": "On Wed, Feb 1, 2023 at 11:18 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Over at [1] we have a complaint that dump-and-restore fails for\n> hash-partitioned tables if a partitioning column is an enum,\n> because the enum values are unlikely to receive the same OIDs\n> in the destination database as they had in the source, and the\n> hash codes are dependent on those OIDs.\n\nIt seems to me that this is the root of the problem. We can't expect\nto hash on something that's not present in the dump file and have\nanything work.\n\n> So here's what I think we should actually do: make\n> --load-via-partition-root the default. 
We can provide a\n> switch to turn it off, for those who are brave or foolish\n> enough to risk that in the name of saving a few cycles,\n> but it ought to be the default.\n>\n> Furthermore, I think we should make this happen in time for\n> next week's releases. I can write the patch easily enough,\n> but we need a consensus PDQ that this is what to do.\n\nThis seems extremely precipitous to me and I'm against it. I like the\nfact that we have --load-via-partition-root, but it is a bit of a\nhack. You don't get a single copy into the partition root, you get one\nper child table -- and those COPY statements are listed as data for\nthe partitions where the data lives now, not for the parent table. I\nam not completely sure whether there is a scenario where that's an\nissue, but it's certainly an oddity. Also, and I think pretty\nsignificantly, using --load-via-partition-root forces you to pay the\noverhead of rerouting every tuple to the target partition whether you\nneed it or not, which is potentially a large unnecessary expense. I\ndon't think we should just foist that kind of overhead onto everyone\nin every situation for every data type because somebody had a problem\nin a certain case.\n\nAnd even if we do decide to do that at some point, I don't think it is\nright at all to rush such a change out on a short time scale, with\nlittle time to mull over consequences and alternative fixes. I think\nthat could easily hurt more people than it helps.\n\nI think that not all of the cases that you list are of the same type.\nLoading a dump under a different encoding or on a different endianness\nare surely corner cases. They might come up for some people\noccasionally, but they're not typical. In the case of endianness,\nthat's because little-Endian has pretty much taken over the world; in\nthe case of encoding, that's because converting data between encodings\nis a real pain, and combining that with a database dump and restore is\nlikely to be very little fun. 
It's hard to argue that collation\nchanges fall into the same category: we know that they get changed all\nthe time, often silently. But none of us database geeks think that's a\ngood thing: just that it's a thing that we have to deal with.\n\nThe enum case seems completely different to me. That's not the result\nof trying to migrate your data to another architecture or of the glibc\nmaintainers not believing in sorting working the same way on Tuesday\nthat it did on Monday. That's the result of the PostgreSQL project\nhashing data in a way that is does not make any real sense for the\napplication at hand. Any hash function that we use for partitioning\nhas to work based on data that is preserved by the dump-and-restore\nprocess. I would argue that the float case is not of the same kind:\nyes, if you round your data off, then the values are going to hash\ndifferently, but if you truncate your strings, those will hash\ndifferently too. Duh. Intentionally changing the value is supposed to\nchange the hash code, that's kind of the point of hashing.\n\nSo I think we should be asking ourselves what we could do about the\nenum case specifically, rather than resorting to a bazooka.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 1 Feb 2023 12:39:32 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus hash partitioning" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Feb 1, 2023 at 11:18 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Over at [1] we have a complaint that dump-and-restore fails for\n>> hash-partitioned tables if a partitioning column is an enum,\n>> because the enum values are unlikely to receive the same OIDs\n>> in the destination database as they had in the source, and the\n>> hash codes are dependent on those OIDs.\n\n> It seems to me that this is the root of the problem.\n\nWell, that was what I thought too to start with, but I now 
think that\nit is far too narrow-minded a view of the problem. The real issue\nis something I said that you trimmed:\n\n>> In general, we've never thought that hash values are\n>> required to be consistent across platforms.\n\nhashenum() is doing something that is perfectly fine in the context\nof hash indexes, which have traditionally been our standard of how\nreproducible hash values need to be. Hash partitioning has moved\nthose goalposts, and if there was any discussion of the consequences\nof that, I didn't see it.\n\n>> Furthermore, I think we should make this happen in time for\n>> next week's releases. I can write the patch easily enough,\n>> but we need a consensus PDQ that this is what to do.\n\n> This seems extremely precipitous to me and I'm against it.\n\nYeah, it's been this way for awhile, so if there's debate then\nwe can let it go awhile longer.\n\n> So I think we should be asking ourselves what we could do about the\n> enum case specifically, rather than resorting to a bazooka.\n\nMy idea of solving this with a bazooka would be to deprecate hash\npartitioning. Quite frankly that's where I think we will end up\nsomeday, but I'm not going to try to make that argument today.\n\nIn the meantime, I think we need to recognize that hash values are\nnot very portable. I do not think we do our users a service by\nletting them discover the corner cases the hard way.\n\nI'd be willing to compromise on the intermediate idea of forcing\n--load-via-partition-root just for hashed partitioning.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 01 Feb 2023 13:23:26 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_dump versus hash partitioning" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> ... I like the\n> fact that we have --load-via-partition-root, but it is a bit of a\n> hack. 
You don't get a single copy into the partition root, you get one\n> per child table -- and those COPY statements are listed as data for\n> the partitions where the data lives now, not for the parent table. I\n> am not completely sure whether there is a scenario where that's an\n> issue, but it's certainly an oddity.\n\nI spent a bit more time thinking about that, and while I agree that\nit's an oddity, I don't see that it matters in the case of hash\npartitioning. You would notice an issue if you tried to do a selective\nrestore of just one partition --- but under what circumstance would\nthat be a useful thing to do? By definition, under hash partitioning\nthere is no user-meaningful difference between different partitions.\nMoreover, in the case at hand you would get constraint failures without\n--load-via-partition-root, or tuple routing failures with it,\nso what's the difference? (Unless you'd created all the partitions\nto start with and were doing a selective restore of just one partition's\ndata, in which case the outcome is \"fails\" or \"works\" respectively.)\n\n> Also, and I think pretty\n> significantly, using --load-via-partition-root forces you to pay the\n> overhead of rerouting every tuple to the target partition whether you\n> need it or not, which is potentially a large unnecessary expense.\n\nOddly, I always thought that we prioritize correctness over speed.\nI don't mind having an option that allows people to select a less-safe\nway of doing this, but I do think it's unwise for less-safe to be the\ndefault, especially when it's something you can't fix after the fact.\n\nWhat do you think of \"--load-via-partition-root=on/off/auto\", where\nauto means \"not with hash partitions\" or the like? 
I'm still\nuncomfortable about the collation aspect, but I'm willing to concede\nthat range partitioning is less likely to fail in this way than hash.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 01 Feb 2023 15:34:52 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_dump versus hash partitioning" }, { "msg_contents": "On Wed, Feb 1, 2023 at 1:23 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Well, that was what I thought too to start with, but I now think that\n> it is far too narrow-minded a view of the problem. The real issue\n> is something I said that you trimmed:\n>\n> >> In general, we've never thought that hash values are\n> >> required to be consistent across platforms.\n>\n> hashenum() is doing something that is perfectly fine in the context\n> of hash indexes, which have traditionally been our standard of how\n> reproducible hash values need to be. Hash partitioning has moved\n> those goalposts, and if there was any discussion of the consequences\n> of that, I didn't see it.\n\nRight, I don't think it was ever discussed, and I think that was a\nmistake on my part.\n\n> In the meantime, I think we need to recognize that hash values are\n> not very portable. I do not think we do our users a service by\n> letting them discover the corner cases the hard way.\n\nI think you're not really engaging with the argument that \"not\ncompletely portable\" and \"totally broken\" are two different things,\nand I still think that's an important point here. One idea that I had\nis to add a flag somewhere to indicate whether a particular opclass or\nopfamily is suitable for hash partitioning, or perhaps better, an\nalternative to opcdefault that sets the default for partitioning,\nwhich could be different from the default for indexing. Then we could\neither prohibit this case, or make it work. 
Of course we would have to\ndefine what \"suitable for hash partitioning\" means, but \"would be\nlikely to survive a dump and reload on the same machine without any\nsoftware changes\" is probably a reasonable minimum standard.\n\nI don't think the fact that our *traditional* standard for how stable\na hash function needs to be has been XYZ carries any water. Needs\nchange over time, and we adapt the code to meet the new needs. Since\nwe have no system for type properties in PostgreSQL -- a design\ndecision I find questionable -- we tie all such properties to operator\nclasses. That's why, for example, we have HASHEXTENDED_PROC, even\nthough hash indexes don't need 64-bit hash values or a seed. We added\nthat for hash partitioning, and it's now used in other places too,\nbecause 32-bits aren't enough for everything just because they're\nenough for hash indexes, and seeds are handy. That's also why we have\nBTINRANGE_PROC, which doesn't exist to support btree indexes, but\nrather window frames. The fact that a certain feature requires us to\ngraft some additional stuff into the operator class/family mechanism,\nor that it doesn't quite work with everything that's already part of\nthat mechanism, isn't an argument against the feature. That's just how\nwe do things around here. Indeed, if somebody, instead of implementing\nhash partitioning by tying it into hash opfamilies, were to make up\nsome completely new hashing infrastructure that had exactly the\nproperties they wanted for partitioning, that would be *totally*\nunacceptable and surely a reason for rejecting such a feature\noutright. 
The fact that it tries to make use of the existing\ninfrastructure is a good thing about that feature, not a bad thing,\neven though it is turning out that there are some problems.\n\nOn the question of whether hash partitioning is a good feature in\ngeneral, I can only say that I disagree with what seems to be your\nposition, which as best as I can tell is \"it sucks and we should kill\nit with fire\". I do think that partitioning in general leaves a lot to\nbe desired in PostgreSQL in general, and perhaps the issues are even\nworse for hash partitioning than they are elsewhere. However, I think\nthat the solution to that is for people to keep putting more work into\nmaking it better, not to give up and say \"ah, partitioning (or hash\npartitioning specifically) is a stupid feature that nobody wants\". To\nthink that, you have to be living in a bubble. It's unfortunate that\nwith all the work that so many people have put into this area we don't\nhave better results to show for it, but AFAICS there's no help for\nthat but to keep hacking. Amit Langote's work on locking before\npruning, for example, will be hugely impactful for some kinds of\nqueries if it gets committed, and it's been a long time coming, partly\nbecause so many other problems needed to be sorted out first. But you\ncan't run the simplest workload with any kind of partitioning, range,\nhash, whatever, and not run into that problem immediately.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 1 Feb 2023 15:38:44 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus hash partitioning" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Feb 1, 2023 at 1:23 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> In the meantime, I think we need to recognize that hash values are\n>> not very portable. 
I do not think we do our users a service by\n>> letting them discover the corner cases the hard way.\n\n> I think you're not really engaging with the argument that \"not\n> completely portable\" and \"totally broken\" are two different things,\n> and I still think that's an important point here.\n\nI don't buy that argument. From the user's perspective, it's broken\nif her use-case fails. \"It works for most people\" is cold comfort,\nmost especially so if there's no convenient path to fixing it after\na failure.\n\n> I don't think the fact that our *traditional* standard for how stable\n> a hash function needs to be has been XYZ carries any water.\n\nWell, it wouldn't need to if we had a practical way of changing the\nbehavior of an existing hash function, but guess what: we don't.\nAndrew's original proposal for fixing this was exactly to change the\nbehavior of hashenum(). There were some problems with the idea of\ndepending on enumsortorder instead of enum OID, but the really\nfundamental issue is that you can't change hashing behavior without\nbreaking pg_upgrade completely. Not only will your hash indexes be\ncorrupt, but your hash-partitioned tables will be broken, in exactly\nthe same way that we're trying to solve for dump/reload cases (which\nof course will *also* be broken by redefining the hash function, if\nyou didn't use --load-via-partition-root). Moreover, while we can\nalways advise people to reindex, there's no similarly easy way to fix\nbroken partitioning.\n\nThat being the case, I don't think moving the goalposts for hash\nfunction stability is going to lead to a workable solution.\n\n> On the question of whether hash partitioning is a good feature in\n> general, I can only say that I disagree with what seems to be your\n> position, which as best as I can tell is \"it sucks and we should kill\n> it with fire\".\n\nAs I said, I'm not prepared to litigate that case today ... 
but\nI do have a sneaking suspicion that we will eventually reach that\nconclusion. In any case, if we don't want to reach that conclusion,\nwe need some practical solution to these dump/reload problems.\nHave you got a better idea than --load-via-partition-root?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 01 Feb 2023 16:12:13 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_dump versus hash partitioning" }, { "msg_contents": "On Wed, Feb 1, 2023 at 12:34 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Also, and I think pretty\n> > significantly, using --load-via-partition-root forces you to pay the\n> > overhead of rerouting every tuple to the target partition whether you\n> > need it or not, which is potentially a large unnecessary expense.\n>\n> Oddly, I always thought that we prioritize correctness over speed.\n> I don't mind having an option that allows people to select a less-safe\n> way of doing this, but I do think it's unwise for less-safe to be the\n> default, especially when it's something you can't fix after the fact.\n\n+1\n\nAs you pointed out already, pg_dump's primary use-cases all involve\ntarget systems that aren't identical to the source system. Anybody\nusing pg_dump is unlikely to be particularly concerned about\nperformance.\n\n> What do you think of \"--load-via-partition-root=on/off/auto\", where\n> auto means \"not with hash partitions\" or the like? I'm still\n> uncomfortable about the collation aspect, but I'm willing to concede\n> that range partitioning is less likely to fail in this way than hash.\n\nCurrently, pg_dump ignores collation versions entirely, except when\nrun by pg_upgrade. So pg_dump already understands that sometimes it's\nimportant that the collation behavior be completely identical when the\ndatabase is restored, and sometimes it's desirable to produce a dump\nwith any available \"logically equivalent\" collation. 
This is about the\nhigh level requirements, which makes sense to me.\n\nISTM that range partitioning is missing a good high level model that\nbuilds on that. What's really needed is a fully worked out abstraction\nthat recognizes how collations can be equivalent for some purposes,\nbut not other purposes. The indirection between \"logical and physical\ncollations\" is underdeveloped. There isn't even an official name for\nthat idea.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 1 Feb 2023 13:12:31 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus hash partitioning" }, { "msg_contents": "On Wed, Feb 1, 2023 at 3:34 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I spent a bit more time thinking about that, and while I agree that\n> it's an oddity, I don't see that it matters in the case of hash\n> partitioning. You would notice an issue if you tried to do a selective\n> restore of just one partition --- but under what circumstance would\n> that be a useful thing to do? By definition, under hash partitioning\n> there is no user-meaningful difference between different partitions.\n> Moreover, in the case at hand you would get constraint failures without\n> --load-via-partition-root, or tuple routing failures with it,\n> so what's the difference? 
(Unless you'd created all the partitions\n> to start with and were doing a selective restore of just one partition's\n> data, in which case the outcome is \"fails\" or \"works\" respectively.)\n\nI guess I was worried that pg_dump's dependency ordering stuff might\ndo something weird in some case that I'm not smart enough to think up.\n\n> > Also, and I think pretty\n> > significantly, using --load-via-partition-root forces you to pay the\n> > overhead of rerouting every tuple to the target partition whether you\n> > need it or not, which is potentially a large unnecessary expense.\n>\n> Oddly, I always thought that we prioritize correctness over speed.\n\nThat's a bit rich.\n\nIt seems to me that the job of pg_dump is to produce a dump that, when\nreloaded on another system, recreates the same database state. That\nmeans that we end up with all of the same objects, each defined in the\nsame way, and that all of the tables end up with all the same contents\nthat they had before. Here, you'd like to argue that it's perfectly\nfine if we instead insert some of the rows into different tables than\nwhere they were on the original system. Under normal circumstances, of\ncourse, we wouldn't consider any such thing, because then we would not\nbe faithfully replicating the database state, which would be\nincorrect. But here you want to argue that it's OK to create a\ndifferent database state because trying to recreate the same one would\nproduce an error and the user might not like getting an error so let's\njust do something else instead and not even bother telling them.\n\nAs you have quite rightly pointed out, the --load-via-partition-root\nbehavior is useful for working around a variety of unfortunate things\nthat can happen. 
If a user is willing to say that getting a row into\none partition of some table is just as good as getting it into another\npartition of that same table and that you don't mind paying the cost\nassociated with that, then that is something that we can do for that\nuser. But just as we normally prioritize correctness over speed, so\nalso do we normally throw errors when things aren't right instead of\nsilently accepting bad input. The partitions in this scenario are\ntables that have constraints. If a dump contains a row that doesn't\nsatisfy some constraint on the table into which it is being loaded,\nthat's an error. Keep in mind that there's no rule that a user can't\nquery a partition directly.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 1 Feb 2023 16:14:10 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus hash partitioning" }, { "msg_contents": "On Wed, Feb 1, 2023 at 1:14 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> It seems to me that the job of pg_dump is to produce a dump that, when\n> reloaded on another system, recreates the same database state. That\n> means that we end up with all of the same objects, each defined in the\n> same way, and that all of the tables end up with all the same contents\n> that they had before. Here, you'd like to argue that it's perfectly\n> fine if we instead insert some of the rows into different tables than\n> where they were on the original system. Under normal circumstances, of\n> course, we wouldn't consider any such thing, because then we would not\n> be faithfully replicating the database state, which would be\n> incorrect. 
But here you want to argue that it's OK to create a\n> different database state because trying to recreate the same one would\n> produce an error and the user might not like getting an error so let's\n> just do something else instead and not even bother telling them.\n\nThis is a misrepresentation of Tom's words. It isn't actually\nself-evident what \"we end up with all of the same objects, each\ndefined in the same way, and that all of the tables end up with all\nthe same contents that they had before\" actually means here, in\ngeneral. Tom's main concern seems to be just that -- the ambiguity\nitself.\n\nIf there was a fully worked out idea of what that would mean, then I\nsuspect it would be quite subtle and complicated -- it's an inherently\ntricky area. You seem to be saying that the way that this stuff\ncurrently works is correct by definition, except when it isn't.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 1 Feb 2023 13:44:31 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus hash partitioning" }, { "msg_contents": "On Wed, Feb 1, 2023 at 12:39 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> I don't think the fact that our *traditional* standard for how stable\n> a hash function needs to be has been XYZ carries any water. Needs\n> change over time, and we adapt the code to meet the new needs. Since\n> we have no system for type properties in PostgreSQL -- a design\n> decision I find questionable -- we tie all such properties to operator\n> classes.\n\nAre you familiar with B-Tree opclass support function 4, equalimage?\nIt's used to determine whether a B-Tree index can use deduplication at\nCREATE INDEX time. 
ISTM that the requirements are rather similar here\n-- perhaps even identical.\n\nSee: https://www.postgresql.org/docs/devel/btree-support-funcs.html\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 1 Feb 2023 13:49:39 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus hash partitioning" }, { "msg_contents": "On Wed, Feb 1, 2023 at 4:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I don't think the fact that our *traditional* standard for how stable\n> > a hash function needs to be has been XYZ carries any water.\n>\n> Well, it wouldn't need to if we had a practical way of changing the\n> behavior of an existing hash function, but guess what: we don't.\n> Andrew's original proposal for fixing this was exactly to change the\n> behavior of hashenum(). There were some problems with the idea of\n> depending on enumsortorder instead of enum OID, but the really\n> fundamental issue is that you can't change hashing behavior without\n> breaking pg_upgrade completely. Not only will your hash indexes be\n> corrupt, but your hash-partitioned tables will be broken, in exactly\n> the same way that we're trying to solve for dump/reload cases (which\n> of course will *also* be broken by redefining the hash function, if\n> you didn't use --load-via-partition-root). Moreover, while we can\n> always advise people to reindex, there's no similarly easy way to fix\n> broken partitioning.\n>\n> That being the case, I don't think moving the goalposts for hash\n> function stability is going to lead to a workable solution.\n\nI don't see that there is any easy, clean way to solve this in\nreleased branches. The idea that I proposed could be implemented in\nmaster, and I think it is the right kind of fix, but it is not\nback-patchable. However, I think your argument rests on the premise\nthat making --load-via-partition-root the default behavior in some or\nall cases will not break anything for anyone, and I'm skeptical. 
I\nthink that's a significant behavior change and that some people will\nnotice, and some will find it an improvement while others will find it\nworse than the current behavior. I also think that there must be a lot\nmore people using partitioning in general, and even hash partitioning\nspecifically, than there are people using hash partitioning on an enum\ncolumn.\n\nPersonally, I would rather disallow this case in the back-branches --\ni.e. make pg_dump barf if it is encountered and block CREATE TABLE\nfrom setting up any new situations of this type -- than foist\n--load-via-partition-root on many people who aren't affected by the\nissue. I'm not saying that's a great answer, but we have to pick from\nthe choices we have.\n\nI also don't accept that if someone has hit this issue they are just\nhosed and there's no way out. Yeah, it's not a lot of fun: you\nprobably have to use \"CREATE TABLE unpartitioned AS SELECT * FROM\nborked; DROP TABLE borked;\" or so to rescue your data. But what would\nwe do if we discovered that the btree opclass sorts 1 before 0, or\nsomething? Surely we wouldn't refuse to fix the opclass just because\nsome users have existing indexes on disk that would be out of order\nwith the new opclass definition. We'd just change it and people would\nhave to deal. People with indexes would need to reindex. People with\npartitioning boundaries between 0 and 1 would need to repartition.\nThis case isn't the same because hashenum() isn't broken in general,\njust for this particular purpose. 
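That rescue recipe, spelled out as a sketch (the table names here are stand-ins, and the final redistribution step is implied above rather than stated):

```sql
-- "borked" stands in for the hash-partitioned table whose contents can
-- no longer be restored into the right partitions. Pull the rows out
-- into a plain table first:
CREATE TABLE unpartitioned AS SELECT * FROM borked;
DROP TABLE borked;
-- Then, if desired, recreate the partitioned table and let tuple
-- routing through the parent redistribute the rows, e.g.:
--   CREATE TABLE borked (...) PARTITION BY HASH (...);  -- plus partitions
--   INSERT INTO borked SELECT * FROM unpartitioned;
```
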
But I think you're trying to argue\nthat we should fix this by changing something other than the thing\nthat is broken, and I don't agree with that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 1 Feb 2023 16:52:30 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus hash partitioning" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> It seems to me that the job of pg_dump is to produce a dump that, when\n> reloaded on another system, recreates the same database state. That\n> means that we end up with all of the same objects, each defined in the\n> same way, and that all of the tables end up with all the same contents\n> that they had before.\n\nNo, its job is to produce *logically* the same state. We don't expect\nit to produce, say, the same CTID value that each row had before ---\nif you want that, you use pg_upgrade or physical replication, not\npg_dump. There are fairly strong constraints on how identical we\ncan make the results given that we're not replicating physical state.\nAnd that's not even touching the point that frequently what the user\nis after is not an identical copy anyway.\n\n> Here, you'd like to argue that it's perfectly\n> fine if we instead insert some of the rows into different tables than\n> where they were on the original system.\n\nI can agree with that argument for range or list partitioning, where\nthe partitions have some semantic meaning to the user. I don't buy it\nfor hash partitioning. It was an implementation artifact to begin\nwith that a given row ended up in partition 3 not partition 11, so why\nwould users care which partition it ends up in after a dump/reload?\nIf they think there is a difference between the partitions, they need\neducation.\n\n> ... 
But just as we normally prioritize correctness over speed, so\n> also do we normally throw errors when things aren't right instead of\n> silently accepting bad input. The partitions in this scenario are\n> tables that have constraints.\n\nAgain, for hash partitioning those constraints are implementation\nartifacts not something that users should have any concern with.\n\n> Keep in mind that there's no rule that a user can't\n> query a partition directly.\n\nSure, and in the case of a hash partition, what he is going to\nget is an implementation-dependent subset of the rows. I use\n\"implementation-dependent\" here in the same sense that the SQL\nstandard does, which is \"there is a rule but the implementation\ndoesn't have to tell you what it is\". In particular, we are not\nbound to make the subset be the same on different installations.\nWe already didn't promise that, because of issues like endianness.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 01 Feb 2023 17:08:37 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_dump versus hash partitioning" }, { "msg_contents": "On Wed, Feb 1, 2023 at 4:44 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> This is a misrepresentation of Tom's words. It isn't actually\n> self-evident what \"we end up with all of the same objects, each\n> defined in the same way, and that all of the tables end up with all\n> the same contents that they had before\" actually means here, in\n> general. Tom's main concern seems to be just that -- the ambiguity\n> itself.\n\nAs far as I can see, Tom didn't admit that there was any ambiguity. He\njust said that I was advocating for wrong behavior for the sake of\nperformance. I don't think that is what I was doing. I also don't\nreally think Tom thinks that that is what I was doing. But it is what\nhe said I was doing.\n\nI do agree with you that the ambiguity is the root of the issue. 
I\nmean, if we can't put the rows back into the same partitions where\nthey were before, does the user care about that, or do they only care\nthat the rows end up in some partition of the toplevel partitioned\ntable? I think that there's no single definition of correctness that\nis the only defensible one here, and we can't know which one the user\nwants a priori. I also think that they can care which one they're\ngetting, and thus I think that changing the default in a minor release\nis a bad plan. Tom, as I understand it, is arguing that the\n--load-via-partition-root behavior has negligible downsides and is\nalmost categorically better than the current default behavior, and\nthus making that the new default in some or all situations in a minor\nrelease is totally fine.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 1 Feb 2023 17:12:41 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus hash partitioning" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Tom, as I understand it, is arguing that the\n> --load-via-partition-root behavior has negligible downsides and is\n> almost categorically better than the current default behavior, and\n> thus making that the new default in some or all situations in a minor\n> release is totally fine.\n\nI think it's categorically better than a failed restore. I wouldn't\nbe proposing this if there were no such problem; but there is,\nand I don't buy your apparent position that we should leave affected\nusers to cope as best they can. Yes, it's clearly only a minority\nof users that are affected, else we'd have heard complaints before.\nBut it could be absolutely catastrophic for an affected user,\nif they're trying to restore their only backup. 
I'd rather impose\nan across-the-board cost on all users of hash partitioning than\nrisk such outcomes for a few.\n\nAlso, you've really offered no evidence for your apparent position\nthat --load-via-partition-root has unacceptable overhead. We've\ndone enough work on partition routing over the last few years that\nwhatever measurements might've originally justified that idea\ndon't necessarily apply anymore. Admittedly, I've not measured\nit either. But we don't tell people to avoid partitioning because\nINSERT is unduly expensive. Partition routing is just the cost of\ndoing business in that space.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 01 Feb 2023 17:33:16 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_dump versus hash partitioning" }, { "msg_contents": "On Wed, Feb 1, 2023 at 2:12 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Wed, Feb 1, 2023 at 4:44 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > This is a misrepresentation of Tom's words. It isn't actually\n> > self-evident what \"we end up with all of the same objects, each\n> > defined in the same way, and that all of the tables end up with all\n> > the same contents that they had before\" actually means here, in\n> > general. Tom's main concern seems to be just that -- the ambiguity\n> > itself.\n>\n> As far as I can see, Tom didn't admit that there was any ambiguity.\n\nTom said:\n\n\"I spent a bit more time thinking about that, and while I agree that\nit's an oddity, I don't see that it matters in the case of hash\npartitioning. You would notice an issue if you tried to do a\nselective restore of just one partition --- but under what\ncircumstance would that be a useful thing to do?\"\n\nWhile the word ambiguity may not have actually been used, Tom very\nclearly admitted some ambiguity. But even if he didn't, so what? 
It's\nperfectly obvious that that's the major underlying issue, and that\nthis is a high level problem rather than a low level problem.\n\n> He just said that I was advocating for wrong behavior for the sake of\n> performance. I don't think that is what I was doing. I also don't\n> really think Tom thinks that that is what I was doing. But it is what\n> he said I was doing.\n\nAnd I think that you're making a mountain out of a molehill.\n\n> I do agree with you that the ambiguity is the root of the issue. I\n> mean, if we can't put the rows back into the same partitions where\n> they were before, does the user care about that, or do they only care\n> that the rows end up in some partition of the toplevel partitioned\n> table?\n\nThat was precisely the question that Tom posed to you, in the same\nemail as the one that you found objectionable.\n\n> Tom, as I understand it, is arguing that the\n> --load-via-partition-root behavior has negligible downsides and is\n> almost categorically better than the current default behavior, and\n> thus making that the new default in some or all situations in a minor\n> release is totally fine.\n\nI don't know why you seem to think that it was such an absolutist\nposition as that.\n\nYou mentioned \"minor releases\" here. Who said anything about that?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 1 Feb 2023 14:33:31 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus hash partitioning" }, { "msg_contents": "On Wed, Feb 1, 2023 at 5:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Here, you'd like to argue that it's perfectly\n> > fine if we instead insert some of the rows into different tables than\n> > where they were on the original system.\n>\n> I can agree with that argument for range or list partitioning, where\n> the partitions have some semantic meaning to the user. I don't buy it\n> for hash partitioning. 
It was an implementation artifact to begin\n> with that a given row ended up in partition 3 not partition 11, so why\n> would users care which partition it ends up in after a dump/reload?\n> If they think there is a difference between the partitions, they need\n> education.\n\nI see your point. I think it's somewhat valid. However, I also think\nit muddies the definition of what pg_dump is allowed to do in a way\nthat I do not like. I think there's a difference between the CTID or\nXMAX of a tuple changing and it ending up in a totally different\npartition. It feels like it makes the definition of correctness\nsubjective: we do think that people care about range and list\npartitions as individual entities, so we'll put the rows back where\nthey were and complain if we can't, but we don't think they think\nabout hash partitions that way, so we will err on the side of making\nthe dump restoration succeed. That's a level of guessing what the user\nmeant that I think is uncomfortable. I feel like when somebody around\nhere discovers that sort of reasoning in some other software's code,\nor in a proposed patch, it pretty often gets blasted on this mailing\nlist with considerable vigor.\n\nI think you can construct plausible cases where it's not just\nacademic. For instance, suppose I intend to use some kind of logical\nreplication system, not necessarily the one built into PostgreSQL, to\nreplicate data between two systems. Before engaging that system, I\nneed to make the initial database contents match. The system is\noblivious to partitioning, and just replicates each table to a table\nwith a matching name. Well, if the partitions don't actually match up\n1:1, I kind of need to know about that. In this use case, the rows\nsilently moving around doesn't meet my needs. Or, suppose I dump and\nrestore two databases. It works perfectly. I then run a comparison\ntool of some sort that compares the two databases. EDB has such a\ntool! 
I don't know whether it would perform the\ncomparison via the partition root or not, because I don't know how it works. But I find\nit pretty plausible that some such tool would show differences between\nthe source and target databases. Now, if I had done the data migration\nusing --load-via-partition-root, I would expect that. I might even be\nlooking for it, to see what got moved around. But otherwise, it might\nbe unexpected.\n\nAnother subtle problem with this whole situation is: suppose that on\nhost A, I set up a table hash-partitioned by an enum column and make a\nbunch of hash partitions. Then, on host B, I set up the same table\nwith a bunch of foreign table partitions, each corresponding to the\nmatching partition on the other node. I guess that just doesn't work,\nwhereas if the column were of any other data type, it would work. If\nit were a collatable data type, it would need the collations and\ncollation definitions to match, too.\n\nI know these things are subtle, and maybe these specific things will\nnever happen to anyone, or nobody will care. I don't know. I just have\na really hard time accepting a categorical statement that this\njust can't ever matter to anyone.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 1 Feb 2023 17:36:47 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus hash partitioning" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> You mentioned \"minor releases\" here. Who said anything about that?\n\nI did: I'd like to back-patch the fix if possible. 
I think changing\nthe default --load-via-partition-root choice could be back-patchable.\n\nIf Robert is resistant to that but would accept it in master,\nI'd settle for that in preference to having no fix.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 01 Feb 2023 17:38:13 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_dump versus hash partitioning" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Feb 1, 2023 at 5:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I can agree with that argument for range or list partitioning, where\n>> the partitions have some semantic meaning to the user. I don't buy it\n>> for hash partitioning. It was an implementation artifact to begin\n>> with that a given row ended up in partition 3 not partition 11, so why\n>> would users care which partition it ends up in after a dump/reload?\n>> If they think there is a difference between the partitions, they need\n>> education.\n\n> I see your point. I think it's somewhat valid. However, I also think\n> it muddies the definition of what pg_dump is allowed to do in a way\n> that I do not like. I think there's a difference between the CTID or\n> XMAX of a tuple changing and it ending up in a totally different\n> partition. It feels like it makes the definition of correctness\n> subjective: we do think that people care about range and list\n> partitions as individual entities, so we'll put the rows back where\n> they were and complain if we can't, but we don't think they think\n> about hash partitions that way, so we will err on the side of making\n> the dump restoration succeed. That's a level of guessing what the user\n> meant that I think is uncomfortable.\n\nI see your point too, and to me it's evidence for the position that\nwe should never have done hash partitioning in the first place.\nIt's precisely because you want to analyze it in the same terms\nas range/list partitioning that we have these issues. 
Or we could\nhave built it on some other infrastructure than hash index opclasses\n... but we didn't do that, and now we have a mess. I don't see a way\nout other than relaxing the guarantees about how hash partitioning\nworks compared to the other kinds.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 01 Feb 2023 17:49:04 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_dump versus hash partitioning" }, { "msg_contents": "On Wed, Feb 1, 2023 at 3:38 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Peter Geoghegan <pg@bowt.ie> writes:\n> > You mentioned \"minor releases\" here. Who said anything about that?\n>\n> I did: I'd like to back-patch the fix if possible. I think changing\n> the default --load-via-partition-root choice could be back-patchable.\n>\n> If Robert is resistant to that but would accept it in master,\n> I'd settle for that in preference to having no fix.\n>\n\n I'm for accepting --load-via-partition-root as the solution to this problem.\nI think it is better than doing nothing and, at the moment, I don't see any\nalternatives to pick from.\n\nAs evidenced from the current influx of collation problems related to\nindexes, we would be foolish to discount the collation issues with just\nplain text, so limiting this only to the enum case (which is a must-have)\ndoesn't seem wise.\n\npg_dump should be conservative in what it produces - which in this\nsituation means having minimal environmental dependencies and internal\nvolatility.\n\nIn the interest of compatibility, having an escape hatch to do things as\nthey are done today is something we should provide. We got this one wrong\nand that is going to cause some pain. 
Though at least with the escape\nhatch we shouldn't be dealing with as many unresolvable complaints as when\nwe back-patched removing the public schema from search_path.\n\nIn the worst case, being conservative, the user can always at minimum\nrestore their dump file into a local database and get access to their data\nin a usable format with minimal hassle. The few that would like \"same\ntable name or bust\" behavior because of external or application-specific\nrequirements can still get that behavior.\n\nDavid J.\n\n
", "msg_date": "Wed, 1 Feb 2023 15:49:41 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus hash partitioning" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Feb 1, 2023 at 4:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> That being the case, I don't think moving the goalposts for hash\n>> function stability is going to lead to a workable solution.\n\n> I don't see that there is any easy, clean way to solve this in\n> released branches. The idea that I proposed could be implemented in\n> master, and I think it is the right kind of fix, but it is not\n> back-patchable.\n\nYou waved your arms about inventing some new hashing infrastructure,\nbut it was phrased in such a way that it wasn't clear to me if that\nwas actually a serious proposal or not. But if it is: how will you\nget around the fact that any change to hashing behavior will break\npg_upgrade of existing hash-partitioned tables? New infrastructure\navails nothing if it has to be bug-compatible with the old. 
So I'm\nnot sure how restricting the fix to master helps us.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 01 Feb 2023 18:14:36 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_dump versus hash partitioning" }, { "msg_contents": "On Wed, Feb 1, 2023 at 2:49 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> It's precisely because you want to analyze it in the same terms\n> as range/list partitioning that we have these issues. Or we could\n> have built it on some other infrastructure than hash index opclasses\n> ... but we didn't do that, and now we have a mess. I don't see a way\n> out other than relaxing the guarantees about how hash partitioning\n> works compared to the other kinds.\n\nI suspect that using hash index opclasses was the right design --\nsticking to the same approach to hashing seems valuable. I agree with\nyour overall conclusion, though, since it doesn't seem sensible to\nallow hashing behavior to ever be anything greater than an\nimplementation detail. On general principle. What happens when a hash\nfunction is discovered to have a huge flaw, as happened a couple of\ntimes before now?\n\nIt's just the same with collations, where a particular collation\nshouldn't be expected to have perfectly stable behavior across a dump\nand reload. While admitting that possibility does open the door to\nproblems, in particular problems when range partitioning is in use,\nthose problems at least make sense. And they probably won't come up\nvery often -- collation updates don't often contain enormous\ngratuitous differences that are liable to create dump/reload hazards\nwith range partitioning. It is the least worst approach, overall. 
In\ntheory, and in practice.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 1 Feb 2023 15:15:43 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus hash partitioning" }, { "msg_contents": "On Thu, 2 Feb 2023 at 11:38, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Peter Geoghegan <pg@bowt.ie> writes:\n> > You mentioned \"minor releases\" here. Who said anything about that?\n>\n> I did: I'd like to back-patch the fix if possible. I think changing\n> the default --load-via-partition-root choice could be back-patchable.\n>\n> If Robert is resistant to that but would accept it in master,\n> I'd settle for that in preference to having no fix.\n\nI'm not sure it'll help the discussion any, but on master, the\nperformance gap between using --load-via-partition-root and not using\nit should be somewhat closed due to 3592e0ff9. So using that is likely\nnot as terrible as it once was.\n\n[1] does not show results for inserting directly into partitions, but\nthe benchmark results do show that performance is better than it was\nwithout the caching. The order that pg_dump outputs the rows should\nmean the cache is hit most of the time for RANGE partitioned tables,\nat least, and likely more often than not for LIST. HASH partitioning\nis not really affected by that commit. The idea there is that it's\nprobably as cheap to hash as it is to do an equality check with the\nlast Datum.\n\nDigging into the history a bit, I found [2] and particularly [3] that\nseem to indicate this option was thought about due to concerns about\nhash functions not returning consistent results on different\narchitectures. I suspect it might have been defaulted to load into the\nleaf partitions for performance reasons, however. 
I mean, why else\nwould you?\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAApHDvqFeW5hvQqprXOLuGMMJSf%2B1C%2BWk4w_L-M03sVduF3oYg%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CAGPqQf0C1he087bz9xRBOGZBuESYz9X=Fp8Ca_g+TfHgAff75g@mail.gmail.com\n[3] https://www.postgresql.org/message-id/CA%2BTgmoZFn7TJ7QBsFatnuEE%3DGYGdZSNXqr9489n5JBsdy5rFfA%40mail.gmail.com\n\n\n", "msg_date": "Thu, 2 Feb 2023 13:15:28 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus hash partitioning" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> Digging into the history a bit, I found [2] and particularly [3] that\n> seem to indicate this option was thought about due to concerns about\n> hash functions not returning consistent results on different\n> architectures. I suspect it might have been defaulted to load into the\n> leaf partitions for performance reasons, however. I mean, why else\n> would you?\n> [3] https://www.postgresql.org/message-id/CA%2BTgmoZFn7TJ7QBsFatnuEE%3DGYGdZSNXqr9489n5JBsdy5rFfA%40mail.gmail.com\n\nHah ... so we went over almost exactly this same ground in 2017.\nThe consensus at that point seemed to be that actual problems\nwould be rare enough that we'd not need to impose the overhead\nof --load-via-partition-root by default. Now, AFAICS that was\nbased on exactly zero hard evidence, as to either the frequency\nof actual problems or the cost of --load-via-partition-root.\nOur optimism about the former seems to have been mostly borne out,\ngiven the lack of complaints since then; but I still think our\npessimism about the latter is on shaky grounds.\n\nAnyway, after re-reading the old thread I wonder if my first instinct\n(force --load-via-partition-root for enum hash cases only) was the\nbest compromise after all. 
I'm not sure how painful it is to get\npg_dump to detect such cases, but it's probably possible.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 01 Feb 2023 20:03:10 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_dump versus hash partitioning" }, { "msg_contents": "On Wed, 2023-02-01 at 17:49 -0500, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Wed, Feb 1, 2023 at 5:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > I can agree with that argument for range or list partitioning, where\n> > > the partitions have some semantic meaning to the user.  I don't buy it\n> > > for hash partitioning.  It was an implementation artifact to begin\n> > > with that a given row ended up in partition 3 not partition 11, so why\n> > > would users care which partition it ends up in after a dump/reload?\n> > > If they think there is a difference between the partitions, they need\n> > > education.\n> \n> > I see your point. I think it's somewhat valid. However, I also think\n> > it muddies the definition of what pg_dump is allowed to do in a way\n> > that I do not like. I think there's a difference between the CTID or\n> > XMAX of a tuple changing and it ending up in a totally different\n> > partition. It feels like it makes the definition of correctness\n> > subjective: we do think that people care about range and list\n> > partitions as individual entities, so we'll put the rows back where\n> > they were and complain if we can't, but we don't think they think\n> > about hash partitions that way, so we will err on the side of making\n> > the dump restoration succeed. That's a level of guessing what the user\n> > meant that I think is uncomfortable.\n> \n> I see your point too, and to me it's evidence for the position that\n> we should never have done hash partitioning in the first place.\n\nYou suggested earlier to deprecate hash partitioning. 
That's a bit\nmuch, but I'd say that most use cases of hash partitioning that I can\nimagine would involve integers. We could warn against using hash\npartitioning for data types other than numbers and date/time in the\ndocumentation.\n\nI also understand the bad feeling of changing partitions during a\ndump/restore, but I cannot think of a better way out.\n\n> What do you think of \"--load-via-partition-root=on/off/auto\", where\n> auto means \"not with hash partitions\" or the like?\n\nThat's perhaps the best way. So users who know that their hash\npartitions won't change and want the small speed benefit can have it.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Thu, 02 Feb 2023 10:51:14 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus hash partitioning" }, { "msg_contents": "On 2023-Feb-01, Robert Haas wrote:\n\n> I think you can construct plausible cases where it's not just\n> academic. For instance, suppose I intend to use some kind of logical\n> replication system, not necessarily the one built into PostgreSQL, to\n> replicate data between two systems. Before engaging that system, I\n> need to make the initial database contents match. The system is\n> oblivious to partitioning, and just replicates each table to a table\n> with a matching name.\n\nThis only works if that other system's hashing behavior is identical to\nPostgres' for hashing that particular enum; there's no other way that\nyou could make the tables match exactly in the way you propose. What\nthis tells me is that it's not really reasonable for users to expect\nthat this situation would actually work. It is totally reasonable for\nrange and list, but not for hash.\n\n\nIf the idea of --load-via-partition-root=auto is going to be the fix for\nthis problem, then it has to consider that hash partitioning might be in\na level below the topmost one. 
For example,\n\ncreate type colors as enum ('blue', 'red', 'green');\ncreate table topmost (prim int, col colors, a int) partition by range (prim);\ncreate table parent partition of topmost for values from (0) to (1000) partition by hash (col);\ncreate table child1 partition of parent for values with (modulus 3, remainder 0);\ncreate table child2 partition of parent for values with (modulus 3, remainder 1);\ncreate table child3 partition of parent for values with (modulus 3, remainder 2);\n\nIf you dump this with --load-via-partition-root, for child1 it'll give you this:\n\n--\n-- Data for Name: child1; Type: TABLE DATA; Schema: public; Owner: alvherre\n--\n\nCOPY public.topmost (prim, col, a) FROM stdin;\n\\.\n\nwhich is what we want; so for --load-via-partition-root=auto (or\nwhatever), we need to ensure that we detect hash partitioning all the\nway down from the topmost to the leaves.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"La libertad es como el dinero; el que no la sabe emplear la pierde\" (Alvarez)\n\n\n", "msg_date": "Thu, 2 Feb 2023 11:58:59 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus hash partitioning" }, { "msg_contents": "\nOn 2023-02-01 We 20:03, Tom Lane wrote:\n>\n> Anyway, after re-reading the old thread I wonder if my first instinct\n> (force --load-via-partition-root for enum hash cases only) was the\n> best compromise after all. 
I'm not sure how painful it is to get\n> pg_dump to detect such cases, but it's probably possible.\n>\n> \t\t\t\n\n\nGiven the other problems you enumerated upthread, I'd be more inclined\nto go with your other suggestion of\n\"--load-via-partition-root=on/off/auto\" (with the default presumably\n\"auto\").\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 2 Feb 2023 07:56:22 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus hash partitioning" }, { "msg_contents": "On Wed, Feb 1, 2023 at 6:14 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> You waved your arms about inventing some new hashing infrastructure,\n> but it was phrased in such a way that it wasn't clear to me if that\n> was actually a serious proposal or not. But if it is: how will you\n> get around the fact that any change to hashing behavior will break\n> pg_upgrade of existing hash-partitioned tables? New infrastructure\n> avails nothing if it has to be bug-compatible with the old. So I'm\n> not sure how restricting the fix to master helps us.\n\nI think what I'd propose to do is invent a new method of hashing enums\nthat can be used for hash partitioning on enum columns going forward,\nand make it the default for hash partitioning going forward. The\nexisting method can continue to be used for hash indexes, to avoid\nbreaking on disk compatibility.\n\nNow, for the back branches, if you wanted to force\n--load-via-partition-root specifically for the case of hash\npartitioning on an enum column, I think that would be fine. It's a\nhack, but there's no way out in the back branches that is not a hack.\nWhat I don't like is the idea of enabling --load-via-partition-root in\nthe back branches more broadly, e.g. in all cases whatsoever, or in\nall cases involving hash partitioning. 
If we really want to, we can\nmake --load-via-partition-root the new default categorically in\nmaster, and make the pg_dump option --no-load-via-partition-root. I'm\nnot convinced that's a good idea, but maybe it is, and you know, major\nreleases change behavior sometimes, that's how life goes. Minor\nreleases, though, should minimize behavior changes, IMHO. It's not at\nall nice to apply a critical security update and find that pg_dump\nworks differently to fix a problem you weren't having.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 2 Feb 2023 09:50:48 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus hash partitioning" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2023-02-01 We 20:03, Tom Lane wrote:\n>> Anyway, after re-reading the old thread I wonder if my first instinct\n>> (force --load-via-partition-root for enum hash cases only) was the\n>> best compromise after all. I'm not sure how painful it is to get\n>> pg_dump to detect such cases, but it's probably possible.\n\n> Given the other problems you enumerated upthread, I'd be more inclined\n> to go with your other suggestion of\n> \"--load-via-partition-root=on/off/auto\" (with the default presumably\n> \"auto\").\n\nHmm ... is there any actual value in \"off\" in this case? We can be\njust about certain that dump/reload of a hashed enum key will fail.\n\nIf we made \"auto\" also use --load-via-partition-root for range keys\nhaving collation properties, there'd be more of an argument for\nletting users override it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 02 Feb 2023 09:52:34 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_dump versus hash partitioning" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> ... 
so for --load-via-partition-root=auto (or\n> whatever), we need to ensure that we detect hash partitioning all the\n> way down from the topmost to the leaves.\n\nYeah, that had already occurred to me, which is why I was not feeling\nconfident about it being an easy hack in pg_dump.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 02 Feb 2023 10:04:10 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_dump versus hash partitioning" }, { "msg_contents": "Here's a set of draft patches around this issue.\n\n0001 does what I last suggested, ie force load-via-partition-root for\nleaf tables underneath a partitioned table with a partitioned-by-hash\nenum column. It wasn't quite as messy as I first feared, although we do\nneed a new query (and pg_dump now knows more about pg_partitioned_table\nthan it used to).\n\nI was a bit unhappy to read this in the documentation:\n\n It is best not to use parallelism when restoring from an archive made\n with this option, because <application>pg_restore</application> will\n not know exactly which partition(s) a given archive data item will\n load data into. This could result in inefficiency due to lock\n conflicts between parallel jobs, or perhaps even restore failures due\n to foreign key constraints being set up before all the relevant data\n is loaded.\n\nThis made me wonder if this could be a usable solution at all, but\nafter thinking for awhile, I don't see how the claim about foreign key\nconstraints is anything but FUD. pg_dump/pg_restore have sufficient\ndependency logic to prevent that from happening. I think we can just\ndrop the \"or perhaps ...\" clause here, and tolerate the possible\ninefficiency as better than failing.\n\n0002 and 0003 are not part of the bug fix, but are some performance\nimprovements I noticed while working on this. 
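For illustration, the kind of catalog search this needs boils down to a\nrecursive walk along these lines — a hypothetical sketch only, not the\npatch's actual query, and it ignores partition keys that are expressions:\n\n
```sql
-- Sketch: find every table sitting below a hash-partitioned ancestor
-- whose partition key includes an enum column.  (partattrs entries of 0,
-- i.e. expression keys, simply match no attribute here.)
WITH RECURSIVE ancestry AS (
    SELECT inhrelid AS tbl, inhparent AS anc
    FROM pg_inherits
    UNION ALL
    SELECT a.tbl, i.inhparent
    FROM ancestry a
    JOIN pg_inherits i ON i.inhrelid = a.anc
)
SELECT DISTINCT a.tbl::regclass
FROM ancestry a
JOIN pg_partitioned_table p ON p.partrelid = a.anc
WHERE p.partstrat = 'h'                 -- hash partitioning
  AND EXISTS (SELECT 1
              FROM pg_attribute att
              JOIN pg_type t ON t.oid = att.atttypid
              WHERE att.attrelid = p.partrelid
                AND att.attnum = ANY (p.partattrs)
                AND t.typtype = 'e');   -- enum key column
```
\nAny table this turns up is in the set that 0001 forces into\nload-via-partition-root mode.\n\n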
0002 is pretty minor,\nbut 0003 is possibly significant if you have a ton of partitions.\nI haven't done any performance measurement on it though.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 14 Feb 2023 14:21:33 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_dump versus hash partitioning" }, { "msg_contents": "On Tue, Feb 14, 2023 at 2:21 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> This made me wonder if this could be a usable solution at all, but\n> after thinking for awhile, I don't see how the claim about foreign key\n> constraints is anything but FUD. pg_dump/pg_restore have sufficient\n> dependency logic to prevent that from happening. I think we can just\n> drop the \"or perhaps ...\" clause here, and tolerate the possible\n> inefficiency as better than failing.\n\nRight, but isn't that dependency logic based around the fact that the\ninserts are targeting the original partition? Like, suppose partition\nA has a foreign key that is not present on partition B. A row that is\noriginally in partition B gets rerouted into partition A. It must now\nsatisfy the foreign key constraint when, previously, that was\nunnecessary.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 27 Feb 2023 10:50:42 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus hash partitioning" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, Feb 14, 2023 at 2:21 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> This made me wonder if this could be a usable solution at all, but\n>> after thinking for awhile, I don't see how the claim about foreign key\n>> constraints is anything but FUD. pg_dump/pg_restore have sufficient\n>> dependency logic to prevent that from happening. 
I think we can just\n>> drop the \"or perhaps ...\" clause here, and tolerate the possible\n>> inefficiency as better than failing.\n\n> Right, but isn't that dependency logic based around the fact that the\n> inserts are targeting the original partition? Like, suppose partition\n> A has a foreign key that is not present on partition B. A row that is\n> originally in partition B gets rerouted into partition A. It must now\n> satisfy the foreign key constraint when, previously, that was\n> unnecessary.\n\nWell, that's a user error not pg_dump's fault. Particularly so for hash\npartitioning, where there is no defensible reason to make the partitions\nsemantically different.\n\nThere could be a risk of a timing problem, namely that parallel pg_restore\ntries to check an FK constraint before all the relevant data has arrived.\nBut in practice I don't believe that either. We load all the data in\n\"data\" phase and then create indexes and check FKs in \"post-data\" phase,\nand I do not believe that parallel restore weakens that separation\n(because it's enforced by a post-data boundary object in the dependencies).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 27 Feb 2023 11:20:34 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_dump versus hash partitioning" }, { "msg_contents": "On Mon, Feb 27, 2023 at 11:20 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Well, that's a user error not pg_dump's fault. 
Particularly so for hash\n> partitioning, where there is no defensible reason to make the partitions\n> semantically different.\n\nI am still of the opinion that you're going down a dangerous path of\nredefining pg_dump's mission from \"dump and restore the database, as\nit actually exists\" to \"dump and restore the database, unless the user\ndid something that I think is silly\".\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 27 Feb 2023 12:02:03 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus hash partitioning" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Feb 27, 2023 at 11:20 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Well, that's a user error not pg_dump's fault. Particularly so for hash\n>> partitioning, where there is no defensible reason to make the partitions\n>> semantically different.\n\n> I am still of the opinion that you're going down a dangerous path of\n> redefining pg_dump's mission from \"dump and restore the database, as\n> it actually exists\" to \"dump and restore the database, unless the user\n> did something that I think is silly\".\n\nLet's not attack straw men, shall we? I'm defining pg_dump's mission\nas \"dump and restore the database successfully\". Failure to restore\ndoes not help anyone, especially if they are in a disaster recovery\nsituation where it's not possible to re-take the dump. 
It's not like\nthere's no precedent for having pg_dump tweak things to ensure a\nsuccessful restore.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 27 Feb 2023 12:50:15 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_dump versus hash partitioning" }, { "msg_contents": "On Mon, Feb 27, 2023 at 12:50 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Mon, Feb 27, 2023 at 11:20 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Well, that's a user error not pg_dump's fault. Particularly so for hash\n> >> partitioning, where there is no defensible reason to make the partitions\n> >> semantically different.\n>\n> > I am still of the opinion that you're going down a dangerous path of\n> > redefining pg_dump's mission from \"dump and restore the database, as\n> > it actually exists\" to \"dump and restore the database, unless the user\n> > did something that I think is silly\".\n>\n> Let's not attack straw men, shall we? I'm defining pg_dump's mission\n> as \"dump and restore the database successfully\". Failure to restore\n> does not help anyone, especially if they are in a disaster recovery\n> situation where it's not possible to re-take the dump. It's not like\n> there's no precedent for having pg_dump tweak things to ensure a\n> successful restore.\n\nSure, but I was responding to your assertion that there's no case in\nwhich --load-via-partition-root could cause a restore failure. I'm not\nsure that's accurate. 
The fact that the case is something that nobody\nis especially likely to do doesn't mean it doesn't exist, and ISTM we\nhave had more than a few pg_dump bug reports that come down to someone\nhaving done something which we didn't foresee.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 27 Feb 2023 12:55:01 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus hash partitioning" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Sure, but I was responding to your assertion that there's no case in\n> which --load-via-partition-root could cause a restore failure. I'm not\n> sure that's accurate.\n\nPerhaps it's not, but it's certainly far less likely to cause a restore\nfailure than the behavior I want to replace.\n\nMore to the current point perhaps, I doubt that it's likely enough to\ncause a restore failure to justify the existing docs warning. There\nmay have been a time when the warning was justified, but I don't believe\nit today.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 27 Feb 2023 13:04:16 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_dump versus hash partitioning" }, { "msg_contents": "On Tue, Feb 14, 2023 at 02:21:33PM -0500, Tom Lane wrote:\n> Here's a set of draft patches around this issue.\n>\n> 0001 does what I last suggested, ie force load-via-partition-root for\n> leaf tables underneath a partitioned table with a partitioned-by-hash\n> enum column. 
It wasn't quite as messy as I first feared, although we do\n> need a new query (and pg_dump now knows more about pg_partitioned_table\n> than it used to).\n>\n> I was a bit unhappy to read this in the documentation:\n>\n> It is best not to use parallelism when restoring from an archive made\n> with this option, because <application>pg_restore</application> will\n> not know exactly which partition(s) a given archive data item will\n> load data into. This could result in inefficiency due to lock\n> conflicts between parallel jobs, or perhaps even restore failures due\n> to foreign key constraints being set up before all the relevant data\n> is loaded.\n>\n> This made me wonder if this could be a usable solution at all, but\n> after thinking for awhile, I don't see how the claim about foreign key\n> constraints is anything but FUD. pg_dump/pg_restore have sufficient\n> dependency logic to prevent that from happening. I think we can just\n> drop the \"or perhaps ...\" clause here, and tolerate the possible\n> inefficiency as better than failing.\n\nWorking on some side project that can cause dump of hash partitions to be\nrouted to a different partition, I realized that --load-via-partition-root can\nindeed cause deadlock in such case without FK dependency or anything else.\n\nThe problem is that each worker will perform a TRUNCATE TABLE ONLY followed by\na copy of the original partition's data in a transaction, and that obviously\nwill lead to deadlock if the original and locked partition and the restored\npartition are different.\n\n\n", "msg_date": "Sat, 11 Mar 2023 11:01:54 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus hash partitioning" }, { "msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> Working on some side project that can cause dump of hash partitions to be\n> routed to a different partition, I realized that --load-via-partition-root can\n> indeed cause deadlock in such case 
without FK dependency or anything else.\n\n> The problem is that each worker will perform a TRUNCATE TABLE ONLY followed by\n> a copy of the original partition's data in a transaction, and that obviously\n> will lead to deadlock if the original and locked partition and the restored\n> partition are different.\n\nOh, interesting. I wonder if we can rearrange things to avoid that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 10 Mar 2023 22:10:14 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_dump versus hash partitioning" }, { "msg_contents": "On Fri, Mar 10, 2023 at 10:10:14PM -0500, Tom Lane wrote:\n> Julien Rouhaud <rjuju123@gmail.com> writes:\n> > Working on some side project that can cause dump of hash partitions to be\n> > routed to a different partition, I realized that --load-via-partition-root can\n> > indeed cause deadlock in such case without FK dependency or anything else.\n>\n> > The problem is that each worker will perform a TRUNCATE TABLE ONLY followed by\n> > a copy of the original partition's data in a transaction, and that obviously\n> > will lead to deadlock if the original and locked partition and the restored\n> > partition are different.\n>\n> Oh, interesting. I wonder if we can rearrange things to avoid that.\n\nThe BEGIN + TRUNCATE is only there to avoid generating WAL records just in case\nthe wal_level is minimal. I don't remember if that optimization still exists,\nbut if yes we could avoid doing that if the server's wal_level is replica or\nhigher? That's not perfect but it would help in many cases.\n\n\n", "msg_date": "Sat, 11 Mar 2023 11:32:32 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus hash partitioning" }, { "msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> The BEGIN + TRUNCATE is only there to avoid generating WAL records just in case\n> the wal_level is minimal. 
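Spelled out with hypothetical partition names (each worker in its own\nsession, both restoring through a common root \"parent\"), the schedule I'm\ndescribing is roughly:\n\n
```sql
-- worker 1, restoring the data item originally dumped from child1:
BEGIN;
TRUNCATE TABLE ONLY child1;   -- ACCESS EXCLUSIVE lock on child1
COPY parent FROM stdin;       -- rows rerouted to child2 wait on worker 2

-- worker 2, concurrently restoring the item dumped from child2:
BEGIN;
TRUNCATE TABLE ONLY child2;   -- ACCESS EXCLUSIVE lock on child2
COPY parent FROM stdin;       -- rows rerouted to child1 wait on worker 1
```
\nEach worker holds the exclusive lock from its TRUNCATE while its COPY\nwaits for a lock on the other worker's partition — a lock cycle.\n\n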
I don't remember if that optimization still exists,\n> but if yes we could avoid doing that if the server's wal_level is replica or\n> higher? That's not perfect but it would help in many cases.\n\nAfter thinking about this, it seems like a better idea is to skip the\nTRUNCATE if we're doing load-via-partition-root. In that case it's\nclearly a dangerous thing to do regardless of deadlock worries, since\nit risks discarding previously-loaded data that came over from another\npartition. (IOW this seems like an independent, pre-existing bug in\nload-via-partition-root mode.)\n\nThe trick is to detect in pg_restore whether pg_dump chose to do\nload-via-partition-root. If we have a COPY statement we can fairly\neasily examine it to see if the target table is what we expect or\nsomething else. However, if the table was dumped as INSERT statements\nit'd be far messier; the INSERTs are not readily accessible from the\ncode that needs to make the decision.\n\nWhat I propose we do about that is further tweak things so that\nload-via-partition-root forces dumping via COPY. AFAIK the only\ncompelling use-case for dump-as-INSERTs is in transferring data\nto a non-Postgres database, which is a context in which dumping\npartitioned tables as such is pretty hopeless anyway. (I wonder if\nwe should have some way to dump all the contents of a partitioned\ntable as if it were unpartitioned, to support such migration.)\n\nAn alternative could be to extend the archive TOC format to record\ndirectly whether a given TABLE DATA object loads data via partition\nroot or normally. Potentially we could do that without an archive\nformat break by defining te->defn for TABLE DATA to be empty for\nnormal dumps (as it is now) or something like \"-- load via partition root\"\nfor the load-via-partition-root case. 
However, depending on examination\nof the COPY command would already work for the vast majority of existing\narchive files, so I feel like it might be the preferable choice.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 12 Mar 2023 15:46:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_dump versus hash partitioning" }, { "msg_contents": "On Sun, Mar 12, 2023 at 03:46:52PM -0400, Tom Lane wrote:\n> What I propose we do about that is further tweak things so that\n> load-via-partition-root forces dumping via COPY. AFAIK the only\n> compelling use-case for dump-as-INSERTs is in transferring data\n> to a non-Postgres database, which is a context in which dumping\n> partitioned tables as such is pretty hopeless anyway. (I wonder if\n> we should have some way to dump all the contents of a partitioned\n> table as if it were unpartitioned, to support such migration.)\n\nI think that what this other thread is about.\n\nhttps://commitfest.postgresql.org/42/4130/\npg_dump all child tables with the root table\n\n\n", "msg_date": "Sun, 12 Mar 2023 14:54:50 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus hash partitioning" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Sun, Mar 12, 2023 at 03:46:52PM -0400, Tom Lane wrote:\n>> What I propose we do about that is further tweak things so that\n>> load-via-partition-root forces dumping via COPY. AFAIK the only\n>> compelling use-case for dump-as-INSERTs is in transferring data\n>> to a non-Postgres database, which is a context in which dumping\n>> partitioned tables as such is pretty hopeless anyway. 
(I wonder if\n>> we should have some way to dump all the contents of a partitioned\n>> table as if it were unpartitioned, to support such migration.)\n\n> I think that what this other thread is about.\n> https://commitfest.postgresql.org/42/4130/\n> pg_dump all child tables with the root table\n\nAs far as I understood (didn't actually read the latest patch) that\none is just about easily selecting all the partitions of a partitioned\ntable when doing a selective dump. It's not helping you produce a\nnon-Postgres-specific dump.\n\nAlthough I guess by combining load-via-partition-root, data-only mode,\nand dump-as-inserts you could produce a clean collection of\nnon-partition-dependent INSERT commands ... so maybe we'd better not\nforce dump-as-inserts off. I'm starting to like the te->defn hack\nmore.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 12 Mar 2023 16:02:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_dump versus hash partitioning" }, { "msg_contents": "On Sun, Mar 12, 2023 at 03:46:52PM -0400, Tom Lane wrote:\n> Julien Rouhaud <rjuju123@gmail.com> writes:\n> > The BEGIN + TRUNCATE is only there to avoid generating WAL records just in case\n> > the wal_level is minimal. I don't remember if that optimization still exists,\n> > but if yes we could avoid doing that if the server's wal_level is replica or\n> > higher? That's not perfect but it would help in many cases.\n>\n> After thinking about this, it seems like a better idea is to skip the\n> TRUNCATE if we're doing load-via-partition-root. In that case it's\n> clearly a dangerous thing to do regardless of deadlock worries, since\n> it risks discarding previously-loaded data that came over from another\n> partition. 
(IOW this seems like an independent, pre-existing bug in\n> load-via-partition-root mode.)\n\nIt seems quite unlikely to be able to actually truncate already restored data\nwithout eventually going into a deadlock, but it's still possible so agreed.\n\n> The trick is to detect in pg_restore whether pg_dump chose to do\n> load-via-partition-root. If we have a COPY statement we can fairly\n> easily examine it to see if the target table is what we expect or\n> something else. However, if the table was dumped as INSERT statements\n> it'd be far messier; the INSERTs are not readily accessible from the\n> code that needs to make the decision.\n>\n> What I propose we do about that is further tweak things so that\n> load-via-partition-root forces dumping via COPY. AFAIK the only\n> compelling use-case for dump-as-INSERTs is in transferring data\n> to a non-Postgres database, which is a context in which dumping\n> partitioned tables as such is pretty hopeless anyway.\n\nIt seems acceptable to me.\n\n> (I wonder if\n> we should have some way to dump all the contents of a partitioned\n> table as if it were unpartitioned, to support such migration.)\n\n(this would be nice to have)\n\n> An alternative could be to extend the archive TOC format to record\n> directly whether a given TABLE DATA object loads data via partition\n> root or normally. Potentially we could do that without an archive\n> format break by defining te->defn for TABLE DATA to be empty for\n> normal dumps (as it is now) or something like \"-- load via partition root\"\n> for the load-via-partition-root case.
However, depending on examination\n> of the COPY command would already work for the vast majority of existing\n> archive files, so I feel like it might be the preferable choice.\n\nGiven that this approach wouldn't help with existing dump files (at least if\nusing COPY, in any case the one using INSERT are doomed), so I'm slightly in\nfavor of the first approach, and later add an easy and non magic incantation\nway to produce dumps that don't depend on partitioning. It would mean that you\nwould only be able to produce such dumps using pg16 client binaries, but such\nversion would also work with older server versions so it doesn't seem like a\nhuge problem in the long run.\n\n\n", "msg_date": "Mon, 13 Mar 2023 16:33:14 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus hash partitioning" }, { "msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Sun, Mar 12, 2023 at 03:46:52PM -0400, Tom Lane wrote:\n>> The trick is to detect in pg_restore whether pg_dump chose to do\n>> load-via-partition-root.\n\n> Given that this approach wouldn't help with existing dump files (at least if\n> using COPY, in any case the one using INSERT are doomed), so I'm slightly in\n> favor of the first approach, and later add an easy and non magic incantation\n> way to produce dumps that don't depend on partitioning.\n\nYeah, we need to do both. Attached find an updated patch series:\n\n0001: TAP test that exhibits both this deadlock problem and the\ndifferent-hash-codes problem. I'm not sure if we want to commit\nthis, or if it should be in exactly this form --- the second set\nof tests with a manual --load-via-partition-root switch will be\npretty redundant after this patch series.\n\n0002: Make pg_restore detect load-via-partition-root by examining the\nCOPY commands embedded in the dump, and skip the TRUNCATE if so,\nthereby fixing the deadlock issue. 
This is the best we can do for\nlegacy dump files, I think, but it should be good enough.\n\n0003: Also detect load-via-partition-root by adding a label in the\ndump. This is a more bulletproof solution going forward.\n\n0004-0006: same as previous patches, but rebased over these.\nThis gets us to a place where the new TAP test passes.\n\nI've not done anything about modifying the documentation, but I still\nthink we could remove the warning label on --load-via-partition-root.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 13 Mar 2023 19:39:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_dump versus hash partitioning" }, { "msg_contents": "On Mon, Mar 13, 2023 at 07:39:12PM -0400, Tom Lane wrote:\n> Julien Rouhaud <rjuju123@gmail.com> writes:\n> > On Sun, Mar 12, 2023 at 03:46:52PM -0400, Tom Lane wrote:\n> >> The trick is to detect in pg_restore whether pg_dump chose to do\n> >> load-via-partition-root.\n>\n> > Given that this approach wouldn't help with existing dump files (at least if\n> > using COPY, in any case the one using INSERT are doomed), so I'm slightly in\n> > favor of the first approach, and later add an easy and non magic incantation\n> > way to produce dumps that don't depend on partitioning.\n>\n> Yeah, we need to do both. Attached find an updated patch series:\n\nI didn't find a CF entry, is it intended?\n\n> 0001: TAP test that exhibits both this deadlock problem and the\n> different-hash-codes problem. I'm not sure if we want to commit\n> this, or if it should be in exactly this form --- the second set\n> of tests with a manual --load-via-partition-root switch will be\n> pretty redundant after this patch series.\n\nI think there should be at least the first set of tests committed. 
I would\nalso be happy to see some test with the --insert case, even if those are\ntechnically redundant too, as it's quite cheap to do once you setup the\ncluster.\n\n> 0002: Make pg_restore detect load-via-partition-root by examining the\n> COPY commands embedded in the dump, and skip the TRUNCATE if so,\n> thereby fixing the deadlock issue. This is the best we can do for\n> legacy dump files, I think, but it should be good enough.\n\nis_load_via_partition_root():\n\n+ * In newer archive files this can be detected by checking for a special\n+ * comment placed in te->defn. In older files we have to fall back to seeing\n+ * if the COPY statement targets the named table or some other one. This\n+ * will not work for data dumped as INSERT commands, so we could give a false\n+ * negative in that case; fortunately, that's a rarely-used option.\n\nI'm not sure if you intend to keep the current 0002 - 0003 separation, but if\nyes the part about te->defn and possible fallback should be moved to 0003. In\n0002 we're only looking at the COPY statement.\n\n- * the run then we wrap the COPY in a transaction and\n- * precede it with a TRUNCATE. If archiving is not on\n- * this prevents WAL-logging the COPY. This obtains a\n- * speedup similar to that from using single_txn mode in\n- * non-parallel restores.\n+ * the run and we are not restoring a\n+ * load-via-partition-root data item then we wrap the COPY\n+ * in a transaction and precede it with a TRUNCATE. If\n+ * archiving is not on this prevents WAL-logging the COPY.\n+ * This obtains a speedup similar to that from using\n+ * single_txn mode in non-parallel restores.\n\nI know you're only inheriting this comment, but isn't it well outdated and not\naccurate anymore? 
I'm assuming that \"archiving is not on\" was an acceptable\nway to mean \"wal_level < archive\" at some point, but it's now completely\nmisleading.\n\nMinor nitpicking:\n- should the function name be prefixed with a \"_\" like almost all nearby code?\n- should there be an assert that the given toc entry is indeed a TABLE DATA?\n\nFWIW it unsurprisingly fixes the problem on my original use case.\n\n> 0003: Also detect load-via-partition-root by adding a label in the\n> dump. This is a more bulletproof solution going forward.\n>\n> 0004-0006: same as previous patches, but rebased over these.\n> This gets us to a place where the new TAP test passes.\n\n+getPartitioningInfo(Archive *fout)\n+{\n+ PQExpBuffer query;\n+ PGresult *res;\n+ int ntups;\n+\n+ /* no partitions before v10 */\n+ if (fout->remoteVersion < 100000)\n+ return;\n+ [...]\n+ /*\n+ * Unsafe partitioning schemes are exactly those for which hash enum_ops\n+ * appears among the partition opclasses. We needn't check partstrat.\n+ *\n+ * Note that this query may well retrieve info about tables we aren't\n+ * going to dump and hence have no lock on. That's okay since we need not\n+ * invoke any unsafe server-side functions.\n+ */\n+ appendPQExpBufferStr(query,\n+ \"SELECT partrelid FROM pg_partitioned_table WHERE\\n\"\n+ \"(SELECT c.oid FROM pg_opclass c JOIN pg_am a \"\n+ \"ON c.opcmethod = a.oid\\n\"\n+ \"WHERE opcname = 'enum_ops' \"\n+ \"AND opcnamespace = 'pg_catalog'::regnamespace \"\n+ \"AND amname = 'hash') = ANY(partclass)\");\n\nHash partitioning was added with pg11, should we bypass pg10 too with a comment\nsaying that we only care about hash, at least for the foreseeable future?\n\nOther than that, the patchset looks quite good to me, modulo the upcoming doc\nchanges.\n\nMore generally, I also think that forcing --load-via-partition-root for\nknown unsafe partitioning seems like the best compromise.
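For reference, the probe embedded in that hunk can be unwrapped and run standalone to see which partitioned tables would be flagged as unsafe; only the regclass cast is added here for readability:

```sql
SELECT partrelid::regclass AS unsafe_partitioned_table
FROM pg_partitioned_table
WHERE (SELECT c.oid
       FROM pg_opclass c
       JOIN pg_am a ON c.opcmethod = a.oid
       WHERE opcname = 'enum_ops'
         AND opcnamespace = 'pg_catalog'::regnamespace
         AND amname = 'hash') = ANY (partclass);
```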
I'm not sure if we\nshould have an option to turn it off though.\n\n\n", "msg_date": "Thu, 16 Mar 2023 17:03:52 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus hash partitioning" }, { "msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Mon, Mar 13, 2023 at 07:39:12PM -0400, Tom Lane wrote:\n>> Yeah, we need to do both. Attached find an updated patch series:\n\n> I didn't find a CF entry, is it intended?\n\nYeah, it's there:\n\nhttps://commitfest.postgresql.org/42/4226/\n\n> I'm not sure if you intend to keep the current 0002 - 0003 separation, but if\n> yes the part about te->defn and possible fallback should be moved to 0003. In\n> 0002 we're only looking at the COPY statement.\n\nI was intending to smash it all into one commit --- the separation is\njust to ease review.\n\n> I know you're only inheriting this comment, but isn't it well outdated and not\n> accurate anymore? I'm assuming that \"archiving is not on\" was an acceptable\n> way to mean \"wal_level < archive\" at some point, but it's now completely\n> misleading.\n\nSure, want to propose wording?\n\n> Hash partitioning was added with pg11, should we bypass pg10 too with a comment\n> saying that we only care about hash, at least for the forseeable future?\n\nGood point. With v10 already out of support, it hardly matters in the\nreal world, but we might as well not expend cycles when we clearly\nneedn't.\n\n> More generally, I also think that forcing --load-via-partition-root for\n> known unsafe partitioning seems like the best compromise. 
I'm not sure if we\n> should have an option to turn it off though.\n\nI think the odds of that yielding a usable dump are nil, so I don't\nsee why we should bother.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 16 Mar 2023 08:43:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_dump versus hash partitioning" }, { "msg_contents": "On Thu, Mar 16, 2023 at 08:43:56AM -0400, Tom Lane wrote:\n> Julien Rouhaud <rjuju123@gmail.com> writes:\n> > On Mon, Mar 13, 2023 at 07:39:12PM -0400, Tom Lane wrote:\n> >> Yeah, we need to do both. Attached find an updated patch series:\n>\n> > I didn't find a CF entry, is it intended?\n>\n> Yeah, it's there:\n>\n> https://commitfest.postgresql.org/42/4226/\n\nAh thanks! I was looking for \"pg_dump\" in the page.\n\n> > I'm not sure if you intend to keep the current 0002 - 0003 separation, but if\n> > yes the part about te->defn and possible fallback should be moved to 0003. In\n> > 0002 we're only looking at the COPY statement.\n>\n> I was intending to smash it all into one commit --- the separation is\n> just to ease review.\n\nOk, no problem then and I agree it's better to squash them.\n\n> > I know you're only inheriting this comment, but isn't it well outdated and not\n> > accurate anymore? I'm assuming that \"archiving is not on\" was an acceptable\n> > way to mean \"wal_level < archive\" at some point, but it's now completely\n> > misleading.\n>\n> Sure, want to propose wording?\n\nJust mentioning the exact wal_level, something like\n\n * [...]. If wal_level is set to minimal this prevents\n * WAL-logging the COPY. This obtains a speedup similar to\n * [...]\n\n\n> > More generally, I also think that forcing --load-via-partition-root for\n> > known unsafe partitioning seems like the best compromise. 
I'm not sure if we\n> > should have an option to turn it off though.\n>\n> I think the odds of that yielding a usable dump are nil, so I don't\n> see why we should bother.\n\nNo objection from me.\n\n\n", "msg_date": "Fri, 17 Mar 2023 10:07:20 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus hash partitioning" }, { "msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Thu, Mar 16, 2023 at 08:43:56AM -0400, Tom Lane wrote:\n>> I think the odds of that yielding a usable dump are nil, so I don't\n>> see why we should bother.\n\n> No objection from me.\n\nOK, pushed with the discussed changes.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 17 Mar 2023 13:44:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_dump versus hash partitioning" }, { "msg_contents": "On Fri, Mar 17, 2023 at 01:44:12PM -0400, Tom Lane wrote:\n> Julien Rouhaud <rjuju123@gmail.com> writes:\n> > On Thu, Mar 16, 2023 at 08:43:56AM -0400, Tom Lane wrote:\n> >> I think the odds of that yielding a usable dump are nil, so I don't\n> >> see why we should bother.\n>\n> > No objection from me.\n>\n> OK, pushed with the discussed changes.\n\nGreat news, thanks a lot!\n\n\n", "msg_date": "Sat, 18 Mar 2023 09:06:43 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump versus hash partitioning" } ]
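For readers coming to the thread cold, the \"different-hash-codes problem\" that makes hash partitioning on enum columns unsafe across dump/reload can be seen from a minimal hypothetical schema:

```sql
CREATE TYPE mood AS ENUM ('sad', 'ok', 'happy');

CREATE TABLE t (m mood) PARTITION BY HASH (m);
CREATE TABLE t_p0 PARTITION OF t FOR VALUES WITH (MODULUS 2, REMAINDER 0);
CREATE TABLE t_p1 PARTITION OF t FOR VALUES WITH (MODULUS 2, REMAINDER 1);

-- hashenum() hashes the enum value's OID, and OIDs are assigned afresh by
-- CREATE TYPE, so after restoring this schema into another cluster the same
-- labels can hash into different partitions; per-partition data then only
-- reloads cleanly when routed through the root.
```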
[ { "msg_contents": "\nAs a project, do we want to nudge users toward ICU as the collation\nprovider as the best practice going forward?\n\nIf so, is version 16 the right time to adjust defaults to favor ICU?\n\n * At build time, default to --with-icu (-Dicu=enabled); users who\n don't want ICU can specify --without-icu (-Dicu=disabled/auto)\n * At initdb time, default to --locale-provider=icu if built with\n ICU support\n\nIf we don't want to nudge users toward ICU, is it because we are\nwaiting for something, or is there a lack of consensus that ICU is\nactually better?\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Thu, 02 Feb 2023 05:13:16 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Move defaults toward ICU in 16?" }, { "msg_contents": "On Thu, Feb 2, 2023 at 8:13 AM Jeff Davis <pgsql@j-davis.com> wrote:\n> If we don't want to nudge users toward ICU, is it because we are\n> waiting for something, or is there a lack of consensus that ICU is\n> actually better?\n\nDo you think it's better?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 2 Feb 2023 08:44:58 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Move defaults toward ICU in 16?" }, { "msg_contents": "On Thu, 2023-02-02 at 08:44 -0500, Robert Haas wrote:\n> On Thu, Feb 2, 2023 at 8:13 AM Jeff Davis <pgsql@j-davis.com> wrote:\n> > If we don't want to nudge users toward ICU, is it because we are\n> > waiting for something, or is there a lack of consensus that ICU is\n> > actually better?\n> \n> Do you think it's better?\n\nYes:\n\n * ICU more featureful: e.g. 
supports case-insensitive collations (the\ncitext docs suggest looking at ICU instead).\n * It's faster: a simple non-contrived sort is something like 70%\nfaster[1] than one using glibc.\n * It can provide consistent semantics across platforms.\n\nI believe the above reasons are enough to call ICU \"better\", but it\nalso seems like a better path for addressing/mitigating collator\nversioning problems:\n\n * Easier for users to control what library version is available on\ntheir system. We can also ask packagers to keep some old versions of\nICU available for an extended period of time.\n * If one of the ICU multilib patches makes it in, it will be easier\nfor users to select which of the library versions Postgres will use.\n * Reports versions for individual collators, distinct from the\nlibrary version.\n\nThe biggest disadvantage (rather, the flip side of its advantages) is\nthat it's a separate dependency. Will ICU still be maintained in 10\nyears or will we end up stuck maintaining it ourselves? Then again,\nwe've already been shipping it, so I don't know if we can avoid that\nproblem entirely now even if we wanted to.\n\nI don't mean that ICU solves all of our problems -- far from it. But\nyou asked if I think it's better, and my answer is yes.\n\nRegards,\n\tJeff Davis\n\n[1]\nhttps://postgr.es/m/64039a2dbcba6f42ed2f32bb5f0371870a70afda.camel@j-davis.com\n\n\n\n", "msg_date": "Thu, 02 Feb 2023 08:31:46 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Move defaults toward ICU in 16?"
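As an aside on the first bullet, the case-insensitive behavior the citext docs point to takes a single nondeterministic ICU collation on PostgreSQL 12 or later (the collation name here is arbitrary):

```sql
CREATE COLLATION case_insensitive (
    provider = icu,
    locale = 'und-u-ks-level2',   -- root locale, strength = secondary
    deterministic = false
);

SELECT 'Hello' = 'hello' COLLATE case_insensitive;   -- true
SELECT 'Hello' = 'héllo' COLLATE case_insensitive;   -- false: accents still count
```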
}, { "msg_contents": "On Fri, Feb 3, 2023 at 5:31 AM Jeff Davis <pgsql@j-davis.com> wrote:\n> On Thu, 2023-02-02 at 08:44 -0500, Robert Haas wrote:\n> > On Thu, Feb 2, 2023 at 8:13 AM Jeff Davis <pgsql@j-davis.com> wrote:\n> > > If we don't want to nudge users toward ICU, is it because we are\n> > > waiting for something, or is there a lack of consensus that ICU is\n> > > actually better?\n> >\n> > Do you think it's better?\n>\n> Yes:\n>\n> * ICU more featureful: e.g. supports case-insensitive collations (the\n> citext docs suggest looking at ICU instead).\n> * It's faster: a simple non-contrived sort is something like 70%\n> faster[1] than one using glibc.\n> * It can provide consistent semantics across platforms.\n\n+1\n\n> * Easier for users to control what library version is available on\n> their system. We can also ask packagers to keep some old versions of\n> ICU available for an extended period of time.\n> * If one of the ICU multilib patches makes it in, it will be easier\n> for users to select which of the library versions Postgres will use.\n> * Reports versions for indiividual collators, distinct from the\n> library version.\n\n+1\n\n> The biggest disadvantage (rather, the flip side of its advantages) is\n> that it's a separate dependency. Will ICU still be maintained in 10\n> years or will we end up stuck maintaining it ourselves? Then again,\n> we've already been shipping it, so I don't know if we can avoid that\n> problem entirely now even if we wanted to.\n\nIt has a pretty special status, with an absolutely enormous amount of\ntechnology depending on it.\n\nhttp://blog.unicode.org/2016/05/icu-joins-unicode-consortium.html\nhttps://unicode.org/consortium/consort.html\nhttps://home.unicode.org/membership/members/\nhttps://home.unicode.org/about-unicode/\n\nI mean, who knows what the future holds, but ultimately what we're\ndoing here is taking the de facto reference implementation of the\nUnicode collation algorithm. 
Are Unicode and the consortium still\ngoing to be here in 10 years? We're all in on Unicode, and it's also\ntangled up with ISO standards, as are parts of the collation stuff.\nSure, there could be a clean-room implementation that replaces it in\nsome sense (just as there is a Java implementation) but it would very\nlikely be \"the same\" because the real thing we're buying here is the\nset of algorithms and data maintenance that the whole industry has\nagreed on.\n\nUnless Britain decides to exit the Latin alphabet, terminate\nmembership of ISO and revert to anglo-saxon runes with a sort order\nthat is defined in the new constitution as \"the opposite of whatever\nUnicode says\", it's hard to see obstacles to ICU's long term universal\napplicability.\n\nIt's still important to have libc support as an option, though,\nbecause it's a totally reasonable thing to want sort order to agree\nwith the \"sort\" command on the same host, and you are willing to deal\nwith all the complexities that we're trying to escape.\n\n\n", "msg_date": "Fri, 3 Feb 2023 11:14:40 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Move defaults toward ICU in 16?" }, { "msg_contents": "On Thu, Feb 2, 2023 at 2:15 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Sure, there could be a clean-room implementation that replaces it in\n> some sense (just as there is a Java implementation) but it would very\n> likely be \"the same\" because the real thing we're buying here is the\n> set of algorithms and data maintenance that the whole industry has\n> agreed on.\n\nI don't think that a clean room implementation is implausible. 
They\nseem to already exist, and be explicitly provided for by CLDR, which\nis not joined at the hip to ICU:\n\nhttps://github.com/elixir-cldr/cldr\n\nMost of the value that we tend to think of as coming from ICU actually\ncomes from CLDR itself, as well as related Unicode Consortium and IETF\nstandards/RFCs such as BCP-47.\n\n> Unless Britain decides to exit the Latin alphabet, terminate\n> membership of ISO and revert to anglo-saxon runes with a sort order\n> that is defined in the new constitution as \"the opposite of whatever\n> Unicode says\", it's hard to see obstacles to ICU's long term universal\n> applicability.\n\nIt would have to literally be defined as \"not unicode\" for it to\npresent a real problem. A key goal of Unicode is to accommodate\npolitical and cultural shifts, since even countries can come and go.\nIn principle Unicode should be able to accommodate just about any\nchange in preferences, except when there is an irreconcilable\ndifference of opinion among people that are from the same natural\nlanguage group. For example it can accommodate relatively minor\ndifferences of opinion about how text should be sorted among groups\nthat each speak a regional dialect of the same language. Hardly\nanybody even notices this.\n\nAccommodating these variations can only come from making a huge\ninvestment. Most of the work is actually done by natural language\nscholars, not technologists. That effort is very unlikely to be\nduplicated by some other group with its own conflicting goals. AFAICT\nthere is no great need for any schisms, since differences of opinion\ncan usually be accommodated under the umbrella of Unicode.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 2 Feb 2023 14:48:05 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Move defaults toward ICU in 16?" 
}, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> It's still important to have libc support as an option, though,\n> because it's a totally reasonable thing to want sort order to agree\n> with the \"sort\" command on the same host, and you are willing to deal\n> with all the complexities that we're trying to escape.\n\nYeah. I would be resistant to making ICU a required dependency,\nbut it doesn't seem unreasonable to start moving towards it being\nour default collation support.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 02 Feb 2023 18:10:49 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Move defaults toward ICU in 16?" }, { "msg_contents": "On Thu, 2023-02-02 at 18:10 -0500, Tom Lane wrote:\n> Yeah.  I would be resistant to making ICU a required dependency,\n> but it doesn't seem unreasonable to start moving towards it being\n> our default collation support.\n\nPatch attached.\n\nTo get the default locale, the patch initializes a UCollator with NULL\nfor the locale name, and then queries it for the locale name. Then it's\nconverted to a language tag, which is consistent with the initial\ncollation import. I'm not sure that's the best way, but it seems\nreasonable.\n\nIf it's a user-provided locale (--icu-locale=), then the patch leaves\nit as-is, and does not convert it to a language tag (consistent with\nCREATE COLLATION and CREATE DATABASE).\n\nI opened another discussion about whether we want to try harder to\nvalidate or canonicalize the locale name:\n\nhttps://www.postgresql.org/message-id/11b1eeb7e7667fdd4178497aeb796c48d26e69b9.camel@j-davis.com\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS", "msg_date": "Wed, 08 Feb 2023 12:16:46 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Move defaults toward ICU in 16?" 
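The language-tag form used by the initial collation import can be inspected directly from the catalog; on v15/v16 the per-collation ICU locale is stored in pg_collation.colliculocale (a sketch, and the column name is version-specific):

```sql
SELECT collname, colliculocale
FROM pg_collation
WHERE collprovider = 'i'
ORDER BY collname
LIMIT 5;
-- imported names like 'af-x-icu' pair with a bare language tag such as 'af'
```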
}, { "msg_contents": "On 2023-02-08 12:16:46 -0800, Jeff Davis wrote:\n> On Thu, 2023-02-02 at 18:10 -0500, Tom Lane wrote:\n> > Yeah.� I would be resistant to making ICU a required dependency,\n> > but it doesn't seem unreasonable to start moving towards it being\n> > our default collation support.\n>\n> Patch attached.\n\nUnfortunately this fails widely on CI, with both compile time and runtime\nissues:\nhttps://cirrus-ci.com/build/5116408950947840\n\n\n", "msg_date": "Wed, 8 Feb 2023 18:22:34 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Move defaults toward ICU in 16?" }, { "msg_contents": "On Wed, 2023-02-08 at 18:22 -0800, Andres Freund wrote:\n> On 2023-02-08 12:16:46 -0800, Jeff Davis wrote:\n> > On Thu, 2023-02-02 at 18:10 -0500, Tom Lane wrote:\n> > > Yeah.  I would be resistant to making ICU a required dependency,\n> > > but it doesn't seem unreasonable to start moving towards it being\n> > > our default collation support.\n> > \n> > Patch attached.\n> \n> Unfortunately this fails widely on CI, with both compile time and\n> runtime\n\nNew patches attached.\n\n 0001: build defaults to requiring ICU\n 0002: initdb defaults to using ICU (if server built with ICU)\n\nOne CI test is failing: \"Windows - Server 2019, VS 2019 - Meson &\nninja\"; if I apply Andres patch (\nhttps://github.com/anarazel/postgres/commit/dde7c68 ), then it works.\n\nI ran into one annoyance with pg_upgrade, which is that a v15 cluster\ninitialized with the defaults requires that the v16 cluster is\ninitialized with --locale-provider=libc, because otherwise the old and\nnew cluster will have mismatching template databases. Simple to fix\nonce you see the error, but I wonder how many initdb scripts might be\nbroken? I suppose it's just the cost of changing a default? 
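The mismatch is at least easy to check for before running pg_upgrade: from v15 on, each database's default collation provider is recorded in pg_database, so comparing the two clusters shows whether the template databases agree ('c' means libc, 'i' means ICU):

```sql
SELECT datname, datlocprovider
FROM pg_database
WHERE datname IN ('template0', 'template1', 'postgres');
```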
Would an\nenvironment variable help for cases where it's difficult to pass that\nextra option down through a script?\n\nI also considered posting another patch to change the default for\nCREATE COLLATION, but there are a few issues I'm not sure about. Should\nthe default be based on whether ICU support is available? Or the\ndatlocprovider for the current database? And/or some kind of\ncompatibility GUC?\n\nNotes on the tests I needed to fix, in case they are interesting or\npoint to some kind of larger problem:\n\n * ecpg has a test that involves setting the client_encoding to LATIN1\nwhich required a compatible server encoding so it was setting\nENCODING=SQL_ASCII, which ICU doesn't support. The ecpg test did not\nlook particularly sensitive to the locale, so I changed it to use\nclient_encoding=SQL_ASCII instead, so that the server encoding doesn't\nmatter.\n * citext has a test involving Turkish characters, which works for all\nlibc locales, but in ICU the test only works in Turkish locales. I skip\nthe test if datlocprovider='i', because citext doesn't seem very\nimportant in an ICU world.\n * unaccent is broken if the database provider is ICU and LC_CTYPE=C,\nbecause the t_isspace() (etc.) functions do not properly handle ICU.\nProbably some other things are broken with that combination, but only\nthis test seems to exercise it. I just skipped the test for that broken\ncombination, but perhaps it should be fixed in the future.\n * initdb was being built with ICU as a dependency in meson, but not\nautoconf. I assume it's fine to link ICU into initdb, so I changed the\nMakefile.\n * I changed a couple tests to initialize with --locale-provider=libc.\nThey were testing that creating a database with the ICU provider but no\nICU locale fails, and that's easiest to test if the template is libc.\n * The CI test CompilerWarnings:mingw_cross_warning was failing because\nICU is not available. 
I added --without-icu in the .cirrus.yml file and\nit works.\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS", "msg_date": "Fri, 10 Feb 2023 16:17:00 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Move defaults toward ICU in 16?" }, { "msg_contents": "Hi,\n\nOn 2023-02-10 16:17:00 -0800, Jeff Davis wrote:\n> One CI test is failing: \"Windows - Server 2019, VS 2019 - Meson &\n> ninja\"; if I apply Andres patch (\n> https://github.com/anarazel/postgres/commit/dde7c68 ), then it works.\n\nUntil something like my patch above is done more generally applicable, I think\nyour patch should disable ICU on windows. Can't just fail to build.\n\nPerhaps we don't need to force ICU use to on with the meson build, given that\nit defaults to auto-detection?\n\n\n> I ran into one annoyance with pg_upgrade, which is that a v15 cluster\n> initialized with the defaults requires that the v16 cluster is\n> initialized with --locale-provider=libc, because otherwise the old and\n> new cluster will have mismatching template databases. Simple to fix\n> once you see the error, but I wonder how many initdb scripts might be\n> broken? I suppose it's just the cost of changing a default? Would an\n> environment variable help for cases where it's difficult to pass that\n> extra option down through a script?\n\nThat seems problematic to me.\n\nBut, shouldn't pg_upgrade be able to deal with this? As long as the databases\nare created with template0, we can create the collations at that point?\n\n\n> @@ -15323,7 +15311,7 @@ else\n> We can't simply define LARGE_OFF_T to be 9223372036854775807,\n> since some C++ compilers masquerading as C compilers\n> incorrectly reject 9223372036854775807. */\n> -#define LARGE_OFF_T (((off_t) 1 << 62) - 1 + ((off_t) 1 << 62))\n> +#define LARGE_OFF_T ((((off_t) 1 << 31) << 31) - 1 + (((off_t) 1 << 31) << 31))\n> int off_t_is_large[(LARGE_OFF_T % 2147483629 == 721\n> \t\t && LARGE_OFF_T % 2147483647 == 1)\n> \t\t ? 
1 : -1];\n\nThis stuff shouldn't be in here, it's due to a debian patched autoconf.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 10 Feb 2023 18:00:42 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Move defaults toward ICU in 16?" }, { "msg_contents": "On Thu, 2023-02-02 at 05:13 -0800, Jeff Davis wrote:\n> As a project, do we want to nudge users toward ICU as the collation\n> provider as the best practice going forward?\n\nOne consideration here is security. Any vulnerability in ICU collation\nroutines could easily become a vulnerability in Postgres.\n\nI looked at these lists:\n\nhttps://www.cvedetails.com/vulnerability-list/vendor_id-17477/Icu-project.html\nhttps://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=icu\nhttps://unicode-org.atlassian.net/issues/?jql=labels%20%3D%20%22security%22\nhttps://unicode-org.atlassian.net/issues/?jql=labels%20%3D%20%22was_sensitive%22\n\nHere are the recent CVEs:\n\nCVE-2021-30535 https://unicode-org.atlassian.net/browse/ICU-21587\nCVE-2020-21913 https://unicode-org.atlassian.net/browse/ICU-20850\nCVE-2020-10531 https://unicode-org.atlassian.net/browse/ICU-20958\n\nBut there are quite a few JIRAs that look concerning that don't have a\nCVE assigned:\n\n2021 https://unicode-org.atlassian.net/browse/ICU-21537\n2021 https://unicode-org.atlassian.net/browse/ICU-21597\n2021 https://unicode-org.atlassian.net/browse/ICU-21676\n2021 https://unicode-org.atlassian.net/browse/ICU-21749\n\nNot sure which of these are exploitable, and if they are, why they\ndon't have a CVE. 
If someone else finds more issues, please let me\nknow.\n\nThe good news is that the Chrome/Chromium projects are actively finding\nand reporting issues.\n\nI didn't look for comparable information about glibc, but I would guess\nthat exploitable memory errors in setlocale/strcoll are very rare,\notherwise it would be a security disaster for many projects.\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n", "msg_date": "Mon, 13 Feb 2023 17:11:29 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Move defaults toward ICU in 16?" }, { "msg_contents": "On Fri, 2023-02-10 at 18:00 -0800, Andres Freund wrote:\n> Until something like my patch above is done more generally\n> applicable, I think\n> your patch should disable ICU on windows. Can't just fail to build.\n> \n> Perhaps we don't need to force ICU use to on with the meson build,\n> given that\n> it defaults to auto-detection?\n\nDone. I changed it back to 'auto', and tests pass.\n\n> \n> But, shouldn't pg_upgrade be able to deal with this? As long as the\n> databases\n> are created with template0, we can create the collations at that\n> point?\n\nAre you saying that the upgraded cluster could have a different default\ncollation for the template databases than the original cluster?\n\nThat would be wrong to do, at least by default, but I could see it\nbeing a useful option.\n\nOr maybe I misunderstand what you're saying?\n\n> \n> This stuff shouldn't be in here, it's due to a debian patched\n> autoconf.\n\nRemoved, thank you.\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS", "msg_date": "Tue, 14 Feb 2023 09:48:08 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Move defaults toward ICU in 16?" }, { "msg_contents": "Hi,\n\nOn 2023-02-14 09:48:08 -0800, Jeff Davis wrote:\n> On Fri, 2023-02-10 at 18:00 -0800, Andres Freund wrote:\n> > But, shouldn't pg_upgrade be able to deal with this? 
As long as the\n> > databases\n> > are created with template0, we can create the collations at that\n> > point?\n> \n> Are you saying that the upgraded cluster could have a different default\n> collation for the template databases than the original cluster?\n\n> That would be wrong to do, at least by default, but I could see it\n> being a useful option.\n> \n> Or maybe I misunderstand what you're saying?\n\nI am saying that pg_upgrade should be able to deal with the difference. The\ndetails of how to implement that, don't matter that much.\n\nFWIW, I don't think it matters much what collation template0 has, since we\nallow to change the locale provider when using template0 as the template.\n\nWe could easily update template0, if we think that's necessary. But I don't\nthink it really is. As long as the newly created databases have the right\nprovider, I'd lean towards not touching template0. But whatever...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 14 Feb 2023 09:59:57 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Move defaults toward ICU in 16?" }, { "msg_contents": "On 2/13/23 8:11 PM, Jeff Davis wrote:\r\n> On Thu, 2023-02-02 at 05:13 -0800, Jeff Davis wrote:\r\n>> As a project, do we want to nudge users toward ICU as the collation\r\n>> provider as the best practice going forward?\r\n> \r\n> One consideration here is security. Any vulnerability in ICU collation\r\n> routines could easily become a vulnerability in Postgres.\r\n\r\nWould it be any different than a vulnerability in OpenSSL et al? I know \r\nthat's a general, nuanced question but it would be good to understand if \r\nwe are exposing ourselves to any more vulnerabilities. And would it be \r\nany different than today, given people can build PG with libicu as is?\r\n\r\nContinuing on $SUBJECT, I wanted to understand performance comparisons. 
\r\nI saw your comments[1] in response to Robert's question, looked at your \r\nbenchmarks[2] and one that ICU ran on older versions[3]. It seems that \r\nin general, users would see performance gains switching to ICU. The only \r\none in [3] that stood out to me was the tests on the \"ko_KR\" collation \r\nunderperformed on a list of Korean names, but maybe that is better in \r\nnewer versions.\r\n\r\nI agree with most of your points in [1]. The platform-consistent \r\nbehavior is a good point, especially with more PG deployments running on \r\ndifferent systems. While taking on a new dependency is a concern, ICU \r\nwas released in 1999[4], has an active community, and seems to follow \r\nstandards (i.e. the Unicode Consortium).\r\n\r\nI do wonder about upgrades, beyond the ongoing work with pg_upgrade. I \r\nthink the logical methods (pg_dumpall, logical replication) should \r\ngenerally be OK, but we should ensure we think of things that could go \r\nwrong and how we'd answer them.\r\n\r\nBased on the available data, I think it's OK to move towards ICU as the \r\ndefault, or preferred, collation provider. I agree (for now) in not \r\ntaking a hard dependency on ICU.\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] \r\nhttps://www.postgresql.org/message-id/b676252eeb57ab8da9dbb411d0ccace95caeda0a.camel%40j-davis.com\r\n[2] \r\nhttps://www.postgresql.org/message-id/64039a2dbcba6f42ed2f32bb5f0371870a70afda.camel@j-davis.com\r\n[3] https://icu.unicode.org/charts/collation-icu4c48-glibc\r\n[4] https://en.wikipedia.org/wiki/International_Components_for_Unicode", "msg_date": "Tue, 14 Feb 2023 16:27:50 -0500", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Move defaults toward ICU in 16?" }, { "msg_contents": "On Tue, 2023-02-14 at 16:27 -0500, Jonathan S. Katz wrote:\n> Would it be any different than a vulnerability in OpenSSL et al?\n\nIn principle, no, but sometimes the details matter. 
I'm just trying to\nadd data to the discussion.\n\n> It seems that \n> in general, users would see performance gains switching to ICU.\n\nThat's great news, and consistent with my experience. I don't think it\nshould be a driving factor though. If there's a choice between\nplatform-independent semantics (ICU) and performance, platform-\nindependence should be the default.\n\n> I agree with most of your points in [1]. The platform-consistent \n> behavior is a good point, especially with more PG deployments running\n> on \n> different systems.\n\nNow I think semantics are the most important driver, being consistent\nacross platforms and based on some kind of trusted independent\norganization that we can point to.\n\nIt feels very wrong to me to explain that sort order is defined by the\noperating system on which Postgres happens to run. Saying that it's\ndefined by ICU, which is part of the Unicode consortium, is much better.\nIt doesn't eliminate versioning issues, of course, but I think it's a\nbetter explanation for users.\n\nMany users have other systems in their data infrastructure, running on\na variety of platforms, and could (in theory) try to synchronize around\na common ICU version to avoid subtle bugs in their data pipeline.\n\n> Based on the available data, I think it's OK to move towards ICU as\n> the \n> default, or preferred, collation provider. I agree (for now) in not \n> taking a hard dependency on ICU.\n\nI count several favorable responses, so I'll take it that we (as a\ncommunity) are intending to change the default for build and initdb in\nv16.\n\nRobert expressed some skepticism[1], though I don't see an objection.\nIf I read his concerns correctly, he's mainly concerned with quality\nissues like documentation, bugs, etc. I understand those concerns (I'm\nthe one that raised them), but they seem like the kind of issues that\none finds any time they dig into a dependency enough. 
\"Setting our\nsights very high\"[1], to me, would just be ICU with a bit more rigorous\nattention to quality issues.\n\n\n[1]\nhttps://www.postgresql.org/message-id/CA%2BTgmoYmeGJaW%3DPy9tAZtrnCP%2B_Q%2BzRQthv%3Dzn_HyA_nqEDM-A%40mail.gmail.com\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n", "msg_date": "Wed, 15 Feb 2023 11:31:32 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Move defaults toward ICU in 16?" }, { "msg_contents": "On Thu, Feb 16, 2023 at 1:01 AM Jeff Davis <pgsql@j-davis.com> wrote:\n> It feels very wrong to me to explain that sort order is defined by the\n> operating system on which Postgres happens to run. Saying that it's\n> defined by ICU, which is part of the Unicode consotium, is much better.\n> It doesn't eliminate versioning issues, of course, but I think it's a\n> better explanation for users.\n\nThe fact that we can't use ICU on Windows, though, weakens this\nargument a lot. In my experience, we have a lot of Windows users, and\nthey're not any happier with the operating system collations than\nLinux users. Possibly less so.\n\nI feel like this is a very difficult kind of change to judge. If\neveryone else feels this is a win, we should go with it, and hopefully\nwe'll end up better off. I do feel like there are things that could go\nwrong, though, between the imperfect documentation, the fact that a\nsubstantial chunk of our users won't be able to use it because they\nrun Windows, and everybody having to adjust to the behavior change.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 16 Feb 2023 15:05:10 +0530", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Move defaults toward ICU in 16?" 
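[Editor's note: a minimal sketch of what "not relying on the default" looks like for scripts that must behave identically whichever provider initdb picks. Syntax as of PostgreSQL 15; the database, collation, and locale names are illustrative only.]

```sql
-- Pin the provider per database rather than relying on the cluster default.
CREATE DATABASE appdb
    TEMPLATE template0
    LOCALE_PROVIDER icu
    ICU_LOCALE 'en-US'
    LOCALE 'en_US.UTF-8';

-- Or pin it per collation, for individual columns and expressions.
CREATE COLLATION german_icu (provider = icu, locale = 'de-DE');
```

Either form makes the sort semantics explicit, so a later change of the initdb default does not silently change behavior.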
}, { "msg_contents": "On Thu, 2023-02-16 at 15:05 +0530, Robert Haas wrote:\n> On Thu, Feb 16, 2023 at 1:01 AM Jeff Davis <pgsql@j-davis.com> wrote:\n> > It feels very wrong to me to explain that sort order is defined by the\n> > operating system on which Postgres happens to run. Saying that it's\n> > defined by ICU, which is part of the Unicode consotium, is much better.\n> > It doesn't eliminate versioning issues, of course, but I think it's a\n> > better explanation for users.\n> \n> The fact that we can't use ICU on Windows, though, weakens this\n> argument a lot. In my experience, we have a lot of Windows users, and\n> they're not any happier with the operating system collations than\n> Linux users. Possibly less so.\n> \n> I feel like this is a very difficult kind of change to judge. If\n> everyone else feels this is a win, we should go with it, and hopefully\n> we'll end up better off. I do feel like there are things that could go\n> wrong, though, between the imperfect documentation, the fact that a\n> substantial chunk of our users won't be able to use it because they\n> run Windows, and everybody having to adjust to the behavior change.\n\nUnless I misunderstand, the lack of Windows support is not a matter\nof principle and can be added later on, right?\n\nI am in favor of changing the default. It might be good to add a section\nto the documentation in \"Server setup and operation\" recommending that\nif you go with the default choice of ICU, you should configure your\npackage manager not to upgrade the ICU library.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Thu, 16 Feb 2023 11:32:16 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Move defaults toward ICU in 16?" 
}, { "msg_contents": "On 2/16/23 4:35 AM, Robert Haas wrote:\r\n> On Thu, Feb 16, 2023 at 1:01 AM Jeff Davis <pgsql@j-davis.com> wrote:\r\n>> It feels very wrong to me to explain that sort order is defined by the\r\n>> operating system on which Postgres happens to run. Saying that it's\r\n>> defined by ICU, which is part of the Unicode consotium, is much better.\r\n>> It doesn't eliminate versioning issues, of course, but I think it's a\r\n>> better explanation for users.\r\n> \r\n> The fact that we can't use ICU on Windows, though, weakens this\r\n> argument a lot. In my experience, we have a lot of Windows users, and\r\n> they're not any happier with the operating system collations than\r\n> Linux users. Possibly less so.\r\n\r\nThis is one reason why we're discussing ICU as the \"preferred default\" \r\nvs. \"the default.\" While it may not completely eliminate platform \r\ndependent behavior for collations, it takes a step forward.\r\n\r\nAnd AIUI, it does sound like ICU is available on newer versions of \r\nWindows[1].\r\n\r\n> I feel like this is a very difficult kind of change to judge. If\r\n> everyone else feels this is a win, we should go with it, and hopefully\r\n> we'll end up better off. I do feel like there are things that could go\r\n> wrong, though, between the imperfect documentation, the fact that a\r\n> substantial chunk of our users won't be able to use it because they\r\n> run Windows, and everybody having to adjust to the behavior change.\r\n\r\nWe should continue to improve our documentation. Personally, I found the \r\nbiggest challenge was understanding how to set ICU locales / rules, \r\nparticularly for nondeterministic collations as it was challenging to \r\nfind where these were listed. 
I was able to overcome this with the \r\nexamples in our docs + blogs, but I agree it's an area we can continue \r\nto improve upon.\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] \r\nhttps://learn.microsoft.com/en-us/dotnet/core/extensions/globalization-icu", "msg_date": "Thu, 16 Feb 2023 10:06:37 -0500", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Move defaults toward ICU in 16?" }, { "msg_contents": "Hi,\n\nOn 2023-02-16 15:05:10 +0530, Robert Haas wrote:\n> The fact that we can't use ICU on Windows, though, weakens this\n> argument a lot. In my experience, we have a lot of Windows users, and\n> they're not any happier with the operating system collations than\n> Linux users. Possibly less so.\n\nWhy can't you use ICU on windows? It works today, afaict?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 16 Feb 2023 08:15:39 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Move defaults toward ICU in 16?" }, { "msg_contents": "On Thu, Feb 16, 2023 at 9:45 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2023-02-16 15:05:10 +0530, Robert Haas wrote:\n> > The fact that we can't use ICU on Windows, though, weakens this\n> > argument a lot. In my experience, we have a lot of Windows users, and\n> > they're not any happier with the operating system collations than\n> > Linux users. Possibly less so.\n>\n> Why can't you use ICU on windows? It works today, afaict?\n\nUh, I had the contrary impression from the discussion upthread, but it\nsounds like I might be misunderstanding the situation?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 17 Feb 2023 10:44:05 +0530", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Move defaults toward ICU in 16?" 
}, { "msg_contents": "Hi, \n\nOn February 16, 2023 9:14:05 PM PST, Robert Haas <robertmhaas@gmail.com> wrote:\n>On Thu, Feb 16, 2023 at 9:45 PM Andres Freund <andres@anarazel.de> wrote:\n>> On 2023-02-16 15:05:10 +0530, Robert Haas wrote:\n>> > The fact that we can't use ICU on Windows, though, weakens this\n>> > argument a lot. In my experience, we have a lot of Windows users, and\n>> > they're not any happier with the operating system collations than\n>> > Linux users. Possibly less so.\n>>\n>> Why can't you use ICU on windows? It works today, afaict?\n>\n>Uh, I had the contrary impression from the discussion upthread, but it\n>sounds like I might be misunderstanding the situation?\n\nThat was about the build environment in CI / cfbot, I think. Jeff was making icu a hard requirement by default, but ICU wasn't installed in a usable way, so the build failed. The patch he referred to was just building ICU during the CI run. \n\nI do remember encountering issues with the mkvcbuild.pl build not building against a downloaded modern icu build, but that was just about library naming or directory structure, or such. \n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n", "msg_date": "Thu, 16 Feb 2023 21:30:14 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Move defaults toward ICU in 16?" }, { "msg_contents": "On Fri, Feb 17, 2023 at 10:44:05AM +0530, Robert Haas wrote:\n> Uh, I had the contrary impression from the discussion upthread, but it\n> sounds like I might be misunderstanding the situation?\n\nIMO, it would be nice to be able to have the automatic detection of\nmeson work in the CFbot to see how this patch goes. Perhaps that's\nnot a reason enough to hold on this patch, though..\n\nSeparate question: what's the state of the Windows installers provided\nby the community regarding libicu? 
Is that embedded in the MSI?\n--\nMichael", "msg_date": "Fri, 17 Feb 2023 14:40:17 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Move defaults toward ICU in 16?" }, { "msg_contents": "Hi, \n\nOn February 16, 2023 9:40:17 PM PST, Michael Paquier <michael@paquier.xyz> wrote:\n>On Fri, Feb 17, 2023 at 10:44:05AM +0530, Robert Haas wrote:\n>> Uh, I had the contrary impression from the discussion upthread, but it\n>> sounds like I might be misunderstanding the situation?\n>\n>IMO, it would be nice to be able to have the automatic detection of\n>meson work in the CFbot to see how this patch goes. Perhaps that's\n>not a reason enough to hold on this patch, though..\n\nFwiw, the manually triggered mingw task today builds with ICU support. One thing the patch could do is to just comment out the \"manual\" piece in .cirrus.yml, then cfbot would run it for just this cf entry.\n\nI am planning to build the optional libraries that are easily built, as part of the image build for use by CI. Just haven't gotten around to it. The patch Jeff linked to is part of the experimentation on the way to that. If somebody else wants to finish that, even better. IIRC that prototype builds all optional dependencies except for kerberos and ossp-uuid. \n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n", "msg_date": "Thu, 16 Feb 2023 21:51:58 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Move defaults toward ICU in 16?" }, { "msg_contents": "On Tue, 2023-02-14 at 09:59 -0800, Andres Freund wrote:\n> I am saying that pg_upgrade should be able to deal with the\n> difference. 
The\n> details of how to implement that, don't matter that much.\n\nTo clarify, you're saying that pg_upgrade should simply update\npg_database to set the new databases' collation fields equal to that of\nthe old cluster?\n\nI'll submit it as a separate patch because it would be independently\nuseful.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Fri, 17 Feb 2023 00:06:06 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Move defaults toward ICU in 16?" }, { "msg_contents": "On Fri, 2023-02-17 at 14:40 +0900, Michael Paquier wrote:\n> Separate question: what's the state of the Windows installers provided\n> by the community regarding libicu?  Is that embedded in the MSI?\n\nThe EDB installer installs a quite old version of the ICU library\nfor compatibility reasons, as far as I know.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Fri, 17 Feb 2023 13:23:12 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Move defaults toward ICU in 16?" }, { "msg_contents": "On Fri, 2023-02-17 at 00:06 -0800, Jeff Davis wrote:\n> On Tue, 2023-02-14 at 09:59 -0800, Andres Freund wrote:\n> > I am saying that pg_upgrade should be able to deal with the\n> > difference. The\n> > details of how to implement that, don't matter that much.\n> \n> To clarify, you're saying that pg_upgrade should simply update\n> pg_database to set the new databases' collation fields equal to that\n> of\n> the old cluster?\n\nThinking about this more, it's not clear to me if this would be in\nscope for pg_upgrade or not. If pg_upgrade is fixing up the new cluster\nrather than checking for compatibility, why doesn't it just take over\nand do the initdb for the new cluster itself? 
That would be less\nconfusing for users, and avoid some weirdness (like, if you drop the\ndatabase \"postgres\" on the original, why does it reappear after an\nupgrade?).\n\nSomeone might want to do something interesting to the new cluster\nbefore the upgrade, but it's not clear from the docs what would be both\nuseful and safe.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Fri, 17 Feb 2023 09:01:54 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Move defaults toward ICU in 16?" }, { "msg_contents": "Hi,\n\nOn 2023-02-17 09:01:54 -0800, Jeff Davis wrote:\n> On Fri, 2023-02-17 at 00:06 -0800, Jeff Davis wrote:\n> > On Tue, 2023-02-14 at 09:59 -0800, Andres Freund wrote:\n> > > I am saying that pg_upgrade should be able to deal with the\n> > > difference. The\n> > > details of how to implement that, don't matter that much.\n> > \n> > To clarify, you're saying that pg_upgrade should simply update\n> > pg_database to set the new databases' collation fields equal to that\n> > of\n> > the old cluster?\n\nYes.\n\n> Thinking about this more, it's not clear to me if this would be in\n> scope for pg_upgrade or not.\n\nI don't think we should consider changing the default collation provider\nwithout making this more seamless, one way or another.\n\n\n> If pg_upgrade is fixing up the new cluster rather than checking for\n> compatibility, why doesn't it just take over and do the initdb for the new\n> cluster itself? That would be less confusing for users, and avoid some\n> weirdness (like, if you drop the database \"postgres\" on the original, why\n> does it reappear after an upgrade?).\n\nI've wondered about that as well. There are some initdb-time options you can\nset, but pg_upgrade could forward those.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 17 Feb 2023 09:05:23 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Move defaults toward ICU in 16?" 
}, { "msg_contents": "On Fri, Feb 17, 2023 at 18:02, Jeff Davis <pgsql@j-davis.com> wrote:\n\n> On Fri, 2023-02-17 at 00:06 -0800, Jeff Davis wrote:\n> > On Tue, 2023-02-14 at 09:59 -0800, Andres Freund wrote:\n> > > I am saying that pg_upgrade should be able to deal with the\n> > > difference. The\n> > > details of how to implement that, don't matter that much.\n> >\n> > To clarify, you're saying that pg_upgrade should simply update\n> > pg_database to set the new databases' collation fields equal to that\n> > of\n> > the old cluster?\n>\n> Thinking about this more, it's not clear to me if this would be in\n> scope for pg_upgrade or not. If pg_upgrade is fixing up the new cluster\n> rather than checking for compatibility, why doesn't it just take over\n> and do the initdb for the new cluster itself? That would be less\n> confusing for users, and avoid some weirdness (like, if you drop the\n> database \"postgres\" on the original, why does it reappear after an\n> upgrade?).\n>\n> Someone might want to do something interesting to the new cluster\n> before the upgrade, but it's not clear from the docs what would be both\n> useful and safe.\n>\n\nToday I tested ICU for Czech sorting. It is a little bit slower, but not\ntoo much, and it produces partially different results:\n\nselect row_number() over (order by nazev collate \"cs-x-icu\"), nazev from\nobce\nexcept select row_number() over (order by nazev collate \"cs_CZ\"), nazev\nfrom obce;\n\nThis returns a non-empty set, so minimally for the Czech collation an index\nrebuild is necessary.\n\nRegards\n\nPavel", "msg_date": "Fri, 17 Feb 2023 18:27:13 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Move defaults toward ICU in 16?" 
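[Editor's note: a hedged sketch of the rebuild step this finding implies. The table and index names are illustrative; any index on a collated column whose collation semantics changed must be rebuilt.]

```sql
-- Rebuild a specific index that was built under the old (libc) ordering:
REINDEX INDEX obce_nazev_idx;

-- Or, more coarsely, rebuild all indexes in the current database:
REINDEX DATABASE obce_db;
```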
}, { "msg_contents": "On Fri, 2023-02-17 at 09:05 -0800, Andres Freund wrote:\n> > Thinking about this more, it's not clear to me if this would be in\n> > scope for pg_upgrade or not.\n> \n> I don't think we should consider changing the default collation\n> provider\n> without making this more seamless, one way or another.\n\nI guess I'm fine hacking pg_upgrade, but I think I'd like to make it\nconditional on this specific case: only perform the fixup if the old\ncluster is 15 or earlier and using libc and the newer cluster is 16 or\nlater and using icu.\n\nThere's already a check that the new cluster is empty, so I think it's\nsafe to hack the pg_database locale fields.\n\nRegards,\n\tJeff Davis\n\n> \n\n\n", "msg_date": "Fri, 17 Feb 2023 10:00:41 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Move defaults toward ICU in 16?" }, { "msg_contents": "Hi,\n\nOn 2023-02-17 10:00:41 -0800, Jeff Davis wrote:\n> I guess I'm fine hacking pg_upgrade, but I think I'd like to make it\n> conditional on this specific case: only perform the fixup if the old\n> cluster is 15 or earlier and using libc and the newer cluster is 16 or\n> later and using icu.\n\n-1. That's just going to cause pain one major version upgrade further down the\nline. Why would we want to incur that pain?\n\n\n> There's already a check that the new cluster is empty, so I think it's\n> safe to hack the pg_database locale fields.\n\nI don't think we need to, we do issue the CREATE DATABASEs. So we just need to\nmake sure that includes the collation provider info, and the proper template\ndatabase, in pg_upgrade mode.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 17 Feb 2023 10:09:09 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Move defaults toward ICU in 16?" 
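[Editor's note: for illustration only, "hacking the pg_database locale fields" could look roughly like the following. Column names are per the PostgreSQL 15 catalog; this is not a supported procedure and is sketched only to make the discussion concrete.]

```sql
-- Make the new cluster's pre-existing databases match an old libc cluster
-- before restoring, so default-collation-dependent objects stay valid.
UPDATE pg_database
SET datlocprovider = 'c',
    daticulocale   = NULL
WHERE datname IN ('template0', 'template1', 'postgres');
```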
}, { "msg_contents": "On Fri, Feb 17, 2023 at 09:01:54AM -0800, Jeff Davis wrote:\n> On Fri, 2023-02-17 at 00:06 -0800, Jeff Davis wrote:\n> > On Tue, 2023-02-14 at 09:59 -0800, Andres Freund wrote:\n> > > I am saying that pg_upgrade should be able to deal with the\n> > > difference. The\n> > > details of how to implement that, don't matter that much.\n> > \n> > To clarify, you're saying that pg_upgrade should simply update\n> > pg_database to set the new databases' collation fields equal to that\n> > of\n> > the old cluster?\n> \n> Thinking about this more, it's not clear to me if this would be in\n> scope for pg_upgrade or not. If pg_upgrade is fixing up the new cluster\n> rather than checking for compatibility, why doesn't it just take over\n> and do the initdb for the new cluster itself? That would be less\n> confusing for users, and avoid some weirdness (like, if you drop the\n> database \"postgres\" on the original, why does it reappear after an\n> upgrade?).\n> \n> Someone might want to do something interesting to the new cluster\n> before the upgrade, but it's not clear from the docs what would be both\n> useful and safe.\n\nThis came up before - I'm of the opinion that it's unsupported and/or\nuseless to try to do anything on the new cluster between initdb and\npg_upgrade.\n\nhttps://www.postgresql.org/message-id/20220707184410.GB13040@telsasoft.com\nhttps://www.postgresql.org/message-id/20220905170322.GM31833@telsasoft.com\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 17 Feb 2023 12:41:40 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Move defaults toward ICU in 16?" }, { "msg_contents": "On Fri, 2023-02-17 at 10:09 -0800, Andres Freund wrote:\n> -1. That's just going to cause pain one major version upgrade further\n> down the\n> line. Why would we want to incur that pain?\n\nOK, we can just always do the fixup as long as the old one is libc and\nthe new one is ICU. 
I'm just trying to avoid this becoming a general\nmechanism to fix up an incompatible new cluster.\n\n> > There's already a check that the new cluster is empty, so I think\n> > it's\n> > safe to hack the pg_database locale fields.\n> \n> I don't think we need to, we do issue the CREATE DATABASEs. So we\n> just need to\n> make sure that includes the collation provider info, and the proper\n> template\n> database, in pg_upgrade mode.\n\nWe must fixup template1/postgres in the new cluster (change it to libc\nto match the old cluster), because any objects existing in those\ndatabases in the old cluster may depend on the default collation. I\ndon't see how we can do that without updating pg_database, so I'm not\nfollowing your point.\n\n(I think you're right that template0 is optional; but since we're\nfixing up the other databases it would be less surprising if we also\nfixed up template0.)\n\nAnd if we do fixup template0/template1/postgres to match the old\ncluster, then CREATE DATABASE will have no issue.\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n", "msg_date": "Fri, 17 Feb 2023 12:36:05 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Move defaults toward ICU in 16?" }, { "msg_contents": "On Fri, 2023-02-17 at 18:27 +0100, Pavel Stehule wrote:\n> Today I tested icu for Czech sorting. It is a little bit slower, but\n> not too much, but it produces partially different results.\n\nThank you for trying it.\n\nIf it's a significant slowdown, can you please send more information?\nICU version, libc version, and testcase?\n\n> select row_number() over (order by nazev collate \"cs-x-icu\"), nazev\n> from obce \n> except select row_number() over (order by nazev collate \"cs_CZ\"),\n> nazev from obce;\n> \n> returns a not empty set. 
So minimally for Czech collate, an index\n> rebuild is necessary\n\nYes, that's true of any locale change, provider change, or even\nprovider version change.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n", "msg_date": "Fri, 17 Feb 2023 12:43:47 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Move defaults toward ICU in 16?" }, { "msg_contents": "Hi,\n\nOn 2023-02-17 12:36:05 -0800, Jeff Davis wrote:\n> > > There's already a check that the new cluster is empty, so I think\n> > > it's\n> > > safe to hack the pg_database locale fields.\n> > \n> > I don't think we need to, we do issue the CREATE DATABASEs. So we\n> > just need to\n> > make sure that includes the collation provider info, and the proper\n> > template\n> > database, in pg_upgrade mode.\n> \n> We must fixup template1/postgres in the new cluster (change it to libc\n> to match the old cluster), because any objects existing in those\n> databases in the old cluster may depend on the default collation. I\n> don't see how we can do that without updating pg_database, so I'm not\n> following your point.\n\nI think we just drop/recreate template1 and postgres during pg_upgrade. Yep,\nlooks like it. 
See create_new_objects():\n\n\t\t/*\n\t\t * template1 database will already exist in the target installation,\n\t\t * so tell pg_restore to drop and recreate it; otherwise we would fail\n\t\t * to propagate its database-level properties.\n\t\t */\n\t\tcreate_opts = \"--clean --create\";\n\nand then:\n\n\t\t/*\n\t\t * postgres database will already exist in the target installation, so\n\t\t * tell pg_restore to drop and recreate it; otherwise we would fail to\n\t\t * propagate its database-level properties.\n\t\t */\n\t\tif (strcmp(old_db->db_name, \"postgres\") == 0)\n\t\t\tcreate_opts = \"--clean --create\";\n\t\telse\n\t\t\tcreate_opts = \"--create\";\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 17 Feb 2023 12:50:09 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Move defaults toward ICU in 16?" }, { "msg_contents": "On Fri, 2023-02-17 at 12:50 -0800, Andres Freund wrote:\n> I think we just drop/recreate template1 and postgres during\n> pg_upgrade.\n\nThank you, that makes much more sense now.\n\nI was confused because pg_upgrade loops through to check compatibility\nwith all the databases, which makes zero sense if it's going to drop\nall of them except template0 anyway. The checks on template1/postgres\nshould be bypassed.\n\nSo the two approaches are:\n\n1. Don't bother with locale provider compatibility checks at all (even\non template0). The emitted CREATE DATABASE statements already specify\nthe locale provider, so that will take care of everything except\ntemplate0. Maybe the locale provider of template0 shouldn't matter, but\nsome users might be surprised if it changes during upgrade. It also\nraises some questions about the other properties of template0 like\nencoding, lc_collate, and lc_ctype, which also don't matter a whole lot\n(because they can all be changed when using template0 as a template).\n\n2. Update the pg_database entry for template0. 
This has less potential\nfor surprise in case people are actually using template0 for a\ntemplate.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n", "msg_date": "Fri, 17 Feb 2023 15:07:15 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Move defaults toward ICU in 16?" }, { "msg_contents": "pá 17. 2. 2023 v 21:43 odesílatel Jeff Davis <pgsql@j-davis.com> napsal:\n\n> On Fri, 2023-02-17 at 18:27 +0100, Pavel Stehule wrote:\n> > Today I tested icu for Czech sorting. It is a little bit slower, but\n> > not too much, but it produces partially different results.\n>\n> Thank you for trying it.\n>\n> If it's a significant slowdown, can you please send more information?\n> ICU version, libc version, and testcase?\n>\n\nno - this slowdown is not significant - although 1% can looks too much -\nbut it is just two ms\n\nIt looks so libicu has little bit more expensive initialization, but the\nexecution is little bit faster\n\nBut when I try to repeat the measurements, the results are very unstable on\nmy desktop :-/\n\nSELECT * FROM obce ORDER BY nazev LIMIT 10 // is faster with glibc little\nbit\nSELECT * FROM obce ORDER BY nazev // is faster with libicu\n\nYou can download dataset https://pgsql.cz/files/obce.sql\n\nIt is table of municipalities in czech republic (real names) - about 6000\nrows\n\nI use fedora 37 - so libicu 71.1, glibc 2.36\n\nRegards\n\nPavel\n\n\n\n>\n> > select row_number() over (order by nazev collate \"cs-x-icu\"), nazev\n> > from obce \n> > except select row_number() over (order by nazev collate \"cs_CZ\"),\n> > nazev from obce;\n> > \n> > returns a not empty set. So minimally for Czech collate, an index\n> > rebuild is necessary\n>\n> Yes, that's true of any locale change, provider change, or even\n> provider version change.\n>\n>\n> -- \n> Jeff Davis\n> PostgreSQL Contributor Team - AWS\n>\n>\n>\n", "msg_date": "Sat, 18 Feb 2023 05:52:54 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Move defaults toward ICU in 16?" }, { "msg_contents": "On Fri, Feb 17, 2023 at 10:32 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> Thinking about this more, it's not clear to me if this would be in\n> scope for pg_upgrade or not. If pg_upgrade is fixing up the new cluster\n> rather than checking for compatibility, why doesn't it just take over\n> and do the initdb for the new cluster itself? 
That would be less\n> confusing for users, and avoid some weirdness (like, if you drop the\n> database \"postgres\" on the original, why does it reappear after an\n> upgrade?).\n>\n> Someone might want to do something interesting to the new cluster\n> before the upgrade, but it's not clear from the docs what would be both\n> useful and safe.\n\nI agree with all of this. I think it would be fantastic if pg_upgrade\ndid the initdb itself. It would be simple to make this optional\nbehavior, too: if the destination directory does not exist or is\nempty, initdb into it, otherwise skip that. That might be too\nautomagical, so we could add a --no-initdb option. If not\nspecified, the destination directory must either not exist or be\nempty; else it must exist and look like a data directory.\n\nI completely concur with the idea that doing something with the new\ncluster before the upgrade is weird, and I don't think we should\nencourage people to do it. Nevertheless, as the threads to which\nJustin linked probably say, I'm not sure that it's a good idea to\ncompletely slam the door shut on that option. If we did want to move\nin that direction, though, having pg_upgrade do the initdb would be an\nexcellent first step. We could at some later time decide to remove the\n--no-initdb option; or maybe we'll decide that it's good to keep it\nfor emergencies, which is my present bias. In any event, the resulting\nsystem would be more usable and less error-prone than what we have\ntoday.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 18 Feb 2023 14:08:46 +0530", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Move defaults toward ICU in 16?" 
}, { "msg_contents": "On 17.02.23 21:43, Jeff Davis wrote:\n>> select row_number() over (order by nazev collate \"cs-x-icu\"), nazev\n>> from obce\n>> except select row_number() over (order by nazev collate \"cs_CZ\"),\n>> nazev from obce;\n>>\n>> returns a not empty set. So minimally for Czech collate, an index\n>> rebuild is necessary\n> Yes, that's true of any locale change, provider change, or even\n> provider version change.\n\nI'm confused. We are not going to try to change existing databases to a \ndifferent collation provider during pg_upgrade, are we?\n\n\n\n", "msg_date": "Mon, 20 Feb 2023 15:55:45 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Move defaults toward ICU in 16?" }, { "msg_contents": "On Mon, 2023-02-20 at 15:55 +0100, Peter Eisentraut wrote:\n> I'm confused.  We are not going to try to change existing databases\n> to a \n> different collation provider during pg_upgrade, are we?\n\nNo, certainly not.\n\nI interpreted Pavel's comments as a comparison of ICU and libc in\ngeneral and not specific to this patch. Changing providers obviously\nrequires an index rebuild to be safe.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n", "msg_date": "Mon, 20 Feb 2023 11:41:52 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Move defaults toward ICU in 16?" }, { "msg_contents": "On Fri, 2023-02-17 at 15:07 -0800, Jeff Davis wrote:\n> 2. Update the pg_database entry for template0. 
This has less\n> potential\n> for surprise in case people are actually using template0 for a\n> template.\n\nNew patches attached.\n\n 0001: default autoconf to build with ICU (meson already uses 'auto')\n 0002: update template0 in new cluster (as described above)\n 0003: default initdb to use ICU\n\nUpdating template0, as in 0002, seems straightforward and unsurprising,\nsince only template0 is preserved and it was only initialized for the\npurposes of upgrading. Also, template0 is not sensitive to locale\nsettings, and doesn't even have the datcollversion set. The patch\nupdates encoding, datlocprovider, datcollate, datctype, and\ndaticulocale on the new cluster. No doc update, because there are some\ninitdb settings (like checksums) which still need to be compatible\nbetween the old and the new cluster.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS", "msg_date": "Fri, 24 Feb 2023 15:54:15 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Move defaults toward ICU in 16?" }, { "msg_contents": "On Fri, 2023-02-24 at 15:54 -0800, Jeff Davis wrote:\n>   0001: default autoconf to build with ICU (meson already uses\n> 'auto')\n\nWhat's the best way to prepare for the impact of this on the buildfarm?\nHow should we migrate to using --without-icu for those animals not\ncurrently specifying --with-icu?\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Mon, 27 Feb 2023 11:30:41 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Move defaults toward ICU in 16?" 
}, { "msg_contents": "Jeff Davis <pgsql@j-davis.com> writes:\n> On Fri, 2023-02-24 at 15:54 -0800, Jeff Davis wrote:\n>> 0001: default autoconf to build with ICU (meson already uses\n>> 'auto')\n\n> What's the best way to prepare for the impact of this on the buildfarm?\n> How should we migrate to using --without-icu for those animals not\n> currently specifying --with-icu?\n\nTell the buildfarm owners to add --without-icu to their config if\nthey don't have and don't want to install ICU. Wait a couple weeks.\nCommit, then nag the owners whose machines turn red.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 27 Feb 2023 15:23:56 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Move defaults toward ICU in 16?" }, { "msg_contents": "On 25.02.23 00:54, Jeff Davis wrote:\n> On Fri, 2023-02-17 at 15:07 -0800, Jeff Davis wrote:\n>> 2. Update the pg_database entry for template0. This has less\n>> potential\n>> for surprise in case people are actually using template0 for a\n>> template.\n> \n> New patches attached.\n> \n> 0001: default autoconf to build with ICU (meson already uses 'auto')\n\nI would skip this. There was a brief discussion about this at [0], \nwhere I pointed out that if we are going to do something like that, \nthere would be other candidates among the optional dependencies to \npromote, such as certainly openssl and arguably lz4. If we don't do \nthis consistently across dependencies, then there will be confusion.\n\nIn practice, I don't think it matters. Almost all installations are \nmade by packagers, who will make their own choices. Flipping the \ndefault in configure is only going to cause some amount of confusion and \nannoyance in some places, but won't actually have the ostensibly desired \nimpact in practice.\n\n[0]: \nhttps://www.postgresql.org/message-id/flat/534fed4a262fee534662bd07a691c5ef%40postgrespro.ru\n\n> 0002: update template0 in new cluster (as described above)\n\nThis makes sense. 
I'm confused what the code originally wanted to \nachieve, e.g.,\n\n-/*\n- * Check that every database that already exists in the new cluster is\n- * compatible with the corresponding database in the old one.\n- */\n-static void\n-check_databases_are_compatible(void)\n\nWas there once support for the new cluster having additional databases \nin place? Weird.\n\nIn any case, I think you can remove additional code from get_db_infos() \nrelated to fields that are no longer used, such as db_encoding, and the \ncorresponding struct fields in DbInfo.\n\n> 0003: default initdb to use ICU\n\nWhat are the changes in the citext tests about? Is it the same issue as \nin unaccent? In that case, the OR should be an AND? Maybe add a comment?\n\nWhy is unaccent is \"broken\" if the default collation is provided by ICU \nand LC_CTYPE=C? Is that a general problem? Should we prevent this \ncombination?\n\nWhat are the changes in the ecpg tests about? Looks harmless, but if \nthere is a need, maybe it should be commented somewhere, otherwise what \nprevents someone from changing it back?\n\n\n\n", "msg_date": "Thu, 2 Mar 2023 10:37:27 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Move defaults toward ICU in 16?" }, { "msg_contents": "On Thu, 2023-03-02 at 10:37 +0100, Peter Eisentraut wrote:\n> I would skip this.  There was a brief discussion about this at [0], \n> where I pointed out that if we are going to do something like that, \n> there would be other candidates among the optional dependencies to \n> promote, such as certainly openssl and arguably lz4.  If we don't do \n> this consistently across dependencies, then there will be confusion.\n\nThe difference is that ICU affects semantics of collations, and\ncollations are not really an optional feature. 
If postgres is built\nwithout ICU, that will affect the default at initdb time (after patch\n003, anyway), which will then affect the default collations in all\ndatabases.\n\n> In practice, I don't think it matters.  Almost all installations are \n> made by packagers, who will make their own choices.\n\nMostly true, but the discussion at [0] reveals that some people do\nbuild postgresql themselves for whatever reason.\n\nWhen I first started out with postgres I always built from source. That\nwas quite a while ago, so maybe that means nothing; but it would be sad\nto think that the build-it-yourself experience doesn't matter.\n\n> >    0002: update template0 in new cluster (as described above)\n> \n> This makes sense.  I'm confused what the code originally wanted to \n> achieve, e.g.,\n> \n> -/*\n> - * Check that every database that already exists in the new cluster\n> is\n> - * compatible with the corresponding database in the old one.\n> - */\n> -static void\n> -check_databases_are_compatible(void)\n> \n> Was there once support for the new cluster having additional\n> databases \n> in place?  Weird.\n\nIt looks like 33755e8edf was the last significant change here. CC\nHeikki for comment.\n\n> In any case, I think you can remove additional code from\n> get_db_infos() \n> related to fields that are no longer used, such as db_encoding, and\n> the \n> corresponding struct fields in DbInfo.\n\nYou're right: there's not much of an intersection between the code that\nneeds a DbInfo and the code that needs the locale fields. I created a\nseparate DbLocaleInfo struct for the template0 locale information, and\nremoved the locale fields from DbInfo.\n\nI also added a TAP test.\n\n> >    0003: default initdb to use ICU\n> \n> What are the changes in the citext tests about?\n\nThere's a test in citext_utf8.sql for:\n\n SELECT 'i'::citext = 'İ'::citext AS t;\n\ncitext_eq uses DEFAULT_COLLATION_OID, comparing the results after\napplying lower(). 
Apparently:\n\n lower('İ' collate \"en_US\") = 'i' -- true\n lower('İ' collate \"tr-TR-x-icu\") = 'i' -- true\n lower('İ' collate \"en-US-x-icu\") = 'i' -- false\n\nthe test was passing before because it seems to be true for all libc\nlocales. But for ICU, it seems to only be true in the \"tr-TR\" locale at\ncolstrength=secondary (and true for other ICU locales at\ncolstrength=primary).\n\nWe can't fix the test by being explicit about the collation, because\ncitext doesn't pass it down; it always uses the default collation. We\ncould fix citext to pass it down properly, but that seems like a\ndifferent patch.\n\nIn any case, citext doesn't seem very important to those using ICU (we\nhave a doc note suggesting ICU instead), so I don't see a strong reason\nto test the combination. So, I just exit the test early if it's ICU. I\nadded a better comment.\n\n\n>   Is it the same issue as \n> in unaccent?  In that case, the OR should be an AND?  Maybe add a\n> comment?\n> \n> Why is unaccent is \"broken\" if the default collation is provided by\n> ICU \n> and LC_CTYPE=C?  Is that a general problem?  Should we prevent this \n> combination?\n\nA different issue: unaccent is calling t_isspace(), which is just not\nproperly handling locales. t_isspace() always passes NULL as the last\nargument to char2wchar. There are TODO comments throughout that file.\n\nSpecifically what happens:\n lc_ctype_is_c(DEFAULT_COLLATION_OID) returns false\n so it calls char2wchar(), which calls mbstowcs()\n which returns an error because the LC_CTYPE=C\n\nRight now, that's a longstanding issue for all users of t_isspace() and\nrelated functions: tsearch, ltree, pg_trgm, dict_xsyn, and unaccent. I\nassume it was known and considered unimportant, otherwise we wouldn't\nhave left the TODO comments in there.\n\nI believe it's only a problem when the provider is ICU and the LC_CTYPE\nis C. 
I think a quick fix would be to just test LC_CTYPE directly (from\nthe environment or setlocale(LC_CTYPE, NULL)) rather than try to\nextract it from the default collation. It sounds like a separate patch,\nand should be handled as a bugfix and backported.\n\nA better fix would be to support character classification in ICU. I\ndon't think that's hard, but ICU has quite a few options, and we'd need\nto discuss which are the right ones to support. We may also want to\npass collation information down rather than just using the database\ndefault, but that may depend on the caller and we should discuss that,\nas well.\n\n> What are the changes in the ecpg tests about?  Looks harmless, but if\n> there is a need, maybe it should be commented somewhere, otherwise\n> what \n> prevents someone from changing it back?\n\nICU is not compatible with SQL_ASCII, so I had to remove the\nENCODING=SQL_ASCII line from the ecpg test build. CC Michael Meskes who\nadded the line in 1fa6be6f69 in case he has a comment.\n\nBut when I did that, I got CI failures on windows because it couldn't\ntranscode between LATIN1 and WIN1252. So I changed the ecpg test to\njust use SQL_ASCII for the client_encoding (not the server encoding).\nMichael Meskes added the client_encoding parameter test in 5e7710e725,\nso he might have a comment about that as well.\n\nSince I removed the code, I didn't see a clear place to add a comment,\nbut if you have a suggestion I'll take it.\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS", "msg_date": "Fri, 03 Mar 2023 21:45:06 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Move defaults toward ICU in 16?" }, { "msg_contents": "On Fri, 2023-03-03 at 21:45 -0800, Jeff Davis wrote:\n> > >    0002: update template0 in new cluster (as described above)\n\nI think 0002 is about ready and I plan to commit it soon unless someone\nhas more comments.\n\nI'm holding off on 0001 for now, because you objected. 
But I still\nthink 0001 is a good idea so I'd like to hear more before I withdraw\nit.\n\n> > >    0003: default initdb to use ICU\n\nThis is also about ready, and I plan to commit this soon after 0002.\n\n> A different issue: unaccent is calling t_isspace(), which is just not\n> properly handling locales. t_isspace() always passes NULL as the last\n> argument to char2wchar. There are TODO comments throughout that file.\n\nI posted a bug report and patch for this issue:\n\nhttps://www.postgresql.org/message-id/79e4354d9eccfdb00483146a6b9f6295202e7890.camel@j-davis.com\n\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Tue, 07 Mar 2023 21:55:07 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Move defaults toward ICU in 16?" }, { "msg_contents": "On 08.03.23 06:55, Jeff Davis wrote:\n> On Fri, 2023-03-03 at 21:45 -0800, Jeff Davis wrote:\n>>>>    0002: update template0 in new cluster (as described above)\n> \n> I think 0002 is about ready and I plan to commit it soon unless someone\n> has more comments.\n\n0002 seems fine to me.\n\n> I'm holding off on 0001 for now, because you objected. But I still\n> think 0001 is a good idea so I'd like to hear more before I withdraw\n> it.\n\nLet's come back to that after dealing with the other two.\n\n>>>>    0003: default initdb to use ICU\n> \n> This is also about ready, and I plan to commit this soon after 0002.\n\nThis seems mostly ok to me. I have a few small comments.\n\n+ default, ICU obtains the ICU locale from the ICU default collator.\n\nThis seems to be a fancy way of saying, the default ICU locale will be \nset to something that approximates what you have set your operating \nsystem to. Which is what we want, I think. 
Can we say this in a more \nuser-friendly way?\n\n+static void\n+check_icu_locale()\n\nshould be check_icu_locale(void)\n\n+ if (U_ICU_VERSION_MAJOR_NUM >= 54)\n+ {\n\nIf we're going to add more of these mysterious version checks, could we \nadd a comment about what they are for?\n\nHowever, I suspect what this chunk is doing is some sort of \ncanonicalization/language-tag conversion, which per the other thread, I \nhave some questions about.\n\nHow about for this patch, we skip this part and just do the else branch\n\n+ icu_locale = pg_strdup(default_locale);\n\nand then put the canonicalization business into the canonicalization \npatch set?\n\n\n\n", "msg_date": "Thu, 9 Mar 2023 10:36:28 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Move defaults toward ICU in 16?" }, { "msg_contents": "On Thu, 2023-03-09 at 10:36 +0100, Peter Eisentraut wrote:\n> 0002 seems fine to me.\n\nCommitted 0002 with some test improvements.\n\n> \n> Let's come back to that after dealing with the other two.\n\nLeaving 0001 open for now.\n\n0003 committed after addressing your comments.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n", "msg_date": "Thu, 09 Mar 2023 11:14:25 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Move defaults toward ICU in 16?" }, { "msg_contents": "On 09.03.23 20:14, Jeff Davis wrote:\n>> Let's come back to that after dealing with the other two.\n> \n> Leaving 0001 open for now.\n\nI suspect making a change like this now would result in a bloodbath on \nthe build farm that we could do without. I suggest revisiting this \nafter the commit fest ends.\n\n\n", "msg_date": "Thu, 16 Mar 2023 14:52:51 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Move defaults toward ICU in 16?" 
}, { "msg_contents": "On 16.03.23 14:52, Peter Eisentraut wrote:\n> On 09.03.23 20:14, Jeff Davis wrote:\n>>> Let's come back to that after dealing with the other two.\n>>\n>> Leaving 0001 open for now.\n> \n> I suspect making a change like this now would result in a bloodbath on \n> the build farm that we could do without.  I suggest revisiting this \n> after the commit fest ends.\n\nI don't object to this patch. I suggest waiting until next week to \ncommit it and then see what happens. It's easy to revert if it goes \nterribly.\n\n\n", "msg_date": "Wed, 5 Apr 2023 09:33:25 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Move defaults toward ICU in 16?" }, { "msg_contents": "On Wed, Apr 05, 2023 at 09:33:25AM +0200, Peter Eisentraut wrote:\n> On 16.03.23 14:52, Peter Eisentraut wrote:\n> > On 09.03.23 20:14, Jeff Davis wrote:\n> > > > Let's come back to that after dealing with the other two.\n> > > \n> > > Leaving 0001 open for now.\n> > \n> > I suspect making a change like this now would result in a bloodbath on\n> > the build farm that we could do without.� I suggest revisiting this\n> > after the commit fest ends.\n> \n> I don't object to this patch. I suggest waiting until next week to commit\n> it and then see what happens. It's easy to revert if it goes terribly.\n\nIs this still being considered for v16 ?\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 17 Apr 2023 08:23:03 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Move defaults toward ICU in 16?" }, { "msg_contents": "On Mon, 2023-04-17 at 08:23 -0500, Justin Pryzby wrote:\n> > I don't object to this patch.  I suggest waiting until next week to\n> > commit\n> > it and then see what happens.  
It's easy to revert if it goes\n> > terribly.\n> \n> Is this still being considered for v16 ?\n\nYes, unless someone raises a procedural objection.\n\nIs now a reasonable time to check it in and see what breaks? It looks\nlike there are quite a few buildfarm members that specify neither --\nwith-icu nor --without-icu.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Mon, 17 Apr 2023 11:02:15 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Move defaults toward ICU in 16?" }, { "msg_contents": "Jeff Davis <pgsql@j-davis.com> writes:\n> Is now a reasonable time to check it in and see what breaks? It looks\n> like there are quite a few buildfarm members that specify neither --\n> with-icu nor --without-icu.\n\nI see you just pinged buildfarm-members again, so I'd think it's\npolite to give people 24 hours or so to deal with that before\nyou break things.\n\n(My animals are all set, I believe.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 17 Apr 2023 14:33:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Move defaults toward ICU in 16?" }, { "msg_contents": "On 4/17/23 2:33 PM, Tom Lane wrote:\r\n> Jeff Davis <pgsql@j-davis.com> writes:\r\n>> Is now a reasonable time to check it in and see what breaks? It looks\r\n>> like there are quite a few buildfarm members that specify neither --\r\n>> with-icu nor --without-icu.\r\n> \r\n> I see you just pinged buildfarm-members again, so I'd think it's\r\n> polite to give people 24 hours or so to deal with that before\r\n> you break things.\r\n\r\n[RMT hat]\r\n\r\nThis thread has fallen silent and the RMT wanted to check in.\r\n\r\nThe RMT did have a brief discussion on $SUBJECT. We agree with several \r\npoints that regardless of if/when ICU becomes the default collation \r\nprovider for PostgreSQL, we'll likely have to flush out several issues. 
\r\nThe question is how long we want that period to be before releasing the \r\ndefault.\r\n\r\nRight now, and in absence of critical issues or objections, the RMT is \r\nOK with leaving in ICU as the default collation provider for Beta 1. If \r\nwe're to revert back to glibc, we recommend doing this before Beta 2.\r\n\r\nHowever, if there are strong objections to this proposal, please do \r\nstate them.\r\n\r\nThanks,\r\n\r\nJonathan", "msg_date": "Wed, 3 May 2023 11:29:14 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Move defaults toward ICU in 16?" } ]
[ { "msg_contents": "\nHi, I'm trying to construct a new tuple type, that's not heaptuple.\nWhen I get a tupleTableSlot, I will get data info from it and then I\nwill construct a new tuple, and now I need to put it into a physical\npage. How should I do it?\n\n--------------\n\njacktby@gmail.com\n\n\n", "msg_date": "Thu, 2 Feb 2023 22:00:56 +0800", "msg_from": "\"jacktby@gmail.com\" <jacktby@gmail.com>", "msg_from_op": true, "msg_subject": "How to write a new tuple into page?" }, { "msg_contents": "On Thu, Feb 2, 2023 at 7:31 PM jacktby@gmail.com <jacktby@gmail.com> wrote:\n>\n> Hi, I'm trying to construct a new tuple type, that's not heaptuple.\n> When I get a tupleTableSlot, I will get data info from it and then I\n> will construct a new tuple, and now I need to put it into a physical\n> page. How should I do it?\n\nPostgres writes table pages from shared buffers/buffer pool to disk\nvia storage manager (smgr.c). I think looking at the code around\nFlushBuffer()/FlushRelationBuffers()/smgrwrite() might help.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 3 Feb 2023 09:21:06 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: How to write a new tuple into page?" } ]
[ { "msg_contents": "In many cases, a DELETE or UPDATE not having a WHERE clause (or having it\nwith a condition matching all rows in the table) is a sign of some kind of\nmistake, leading to accidental data loss, performance issues, producing a\nlot of dead tuples, and so on. Recently, this topic was again discussed [1]\n\nAttached is a patch implemented by Andrey Boroding during today's online\nsession [2], containing a rough prototype for two new GUCs:\n\n- prevent_unqualified_deletes\n- prevent_unqualified_updates\n\nBoth are \"false\" by default; for superusers, they are not applied.\n\nThere is also another implementation of this idea, in the form of an\nextension [3], but I think having this in the core would be beneficial to\nmany users.\n\nLooking forward to your feedback.\n\n[1] https://news.ycombinator.com/item?id=34560332\n[2] https://www.youtube.com/watch?v=samLkrC5xQA\n[3] https://github.com/eradman/pg-safeupdate", "msg_date": "Thu, 2 Feb 2023 12:21:39 -0800", "msg_from": "Nikolay Samokhvalov <samokhvalov@gmail.com>", "msg_from_op": true, "msg_subject": "Prevent accidental whole-table DELETEs and UPDATEs" }, { "msg_contents": "Nikolay Samokhvalov <samokhvalov@gmail.com> writes:\n> In many cases, a DELETE or UPDATE not having a WHERE clause (or having it\n> with a condition matching all rows in the table) is a sign of some kind of\n> mistake, leading to accidental data loss, performance issues, producing a\n> lot of dead tuples, and so on. Recently, this topic was again discussed [1]\n\n> Attached is a patch implemented by Andrey Boroding during today's online\n> session [2], containing a rough prototype for two new GUCs:\n\n> - prevent_unqualified_deletes\n> - prevent_unqualified_updates\n\nThis sort of thing has been proposed before and rejected before.\nI do not think anything has changed. 
In any case, I seriously\ndoubt that something that's basically a one-line test (excluding\noverhead such as GUC definitions) is going to meaningfully\nimprove users' lives. The cases that I actually see reported\nare not \"I left off the WHERE\" but more like \"I fat-fingered\na variable in a sub-select so that it's an outer reference,\ncausing the test to degenerate to WHERE x = x\", or perhaps\n\"I misunderstood the behavior of NOT IN with nulls, ending up\nwith a constant-false or constant-true condition\". I'm not sure\nif there's a reliable way to spot those sorts of not-so-trivial\nsemantic errors ... but if we could, that'd be worth discussing.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 02 Feb 2023 16:32:52 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Prevent accidental whole-table DELETEs and UPDATEs" } ]
[ { "msg_contents": "Here are a few small patches for basebackup.c:\n\n0001 fixes what I believe to be a slight logical error in sendFile(),\nintroduced by me during the v15 development cycle when I introduced\nthe bbsink abstraction. I believe that it is theoretically possible\nfor this to cause an assertion failure, although the chances of that\nactually happening seem extremely remote in practice. I don't believe\nthere are any consequences worse than that; for instance, I don't\nthink this can result in your backup getting corrupted. See the\nproposed commit message for full details. Because I believe that this\nis formally a bug, I am inclined to think that this should be\nback-patched, but I also think it's fairly likely that no one would\never notice if we didn't. However, patch authors have been known to be\nwrong about the consequences of their own bugs from time to time, so\nplease do let me know if this seems more serious to you than what I'm\nindicating, or conversely if you think it's not a problem at all for\nsome reason.\n\n0002 removes an old comment from the file that I find useless and\nslightly misleading.\n\n0003 rewrites a comment about the way that we verify checksums during\nbackups. If we get a checksum mismatch, we reread the block and see if\nthe perceived problem goes away. If it doesn't, then we report it.\nThis is intended as protection against the backup reading a block\nwhile some other process is in the midst of writing it, but there's no\nguarantee that any concurrent write will finish quickly enough for our\nsecond read attempt to see the updated contents. The comment claims\notherwise, and that's false, and I'm getting tired of reading this\nfalse claim every time I read this code, so I rewrote the comment to\nsay what I believe to be true, namely, that our algorithm is flaky and\nwe have no good way to fix that right now. 
I'm pretty sure that Andres\npointed this problem out when this feature was under discussion, but\nsomehow it's still like this. There's another nearby comment which is\nalso false or at least misleading for basically the same reasons which\nprobably should be rewritten too, but I was a bit less certain how to\nrewrite it and it wasn't making me as annoyed as this one, so for now\nI only rewrote the one.\n\nComments?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 2 Feb 2023 15:23:51 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "a very minor bug and a couple of comment changes for basebackup.c" }, { "msg_contents": "On Thu, Feb 02, 2023 at 03:23:51PM -0500, Robert Haas wrote:\n> 0001 fixes what I believe to be a slight logical error in sendFile(),\n> introduced by me during the v15 development cycle when I introduced\n> the bbsink abstraction. I believe that it is theoretically possible\n> for this to cause an assertion failure, although the chances of that\n> actually happening seem extremely remote in practice. I don't believe\n> there are any consequences worse than that; for instance, I don't\n> think this can result in your backup getting corrupted. See the\n> proposed commit message for full details. Because I believe that this\n> is formally a bug, I am inclined to think that this should be\n> back-patched, but I also think it's fairly likely that no one would\n> ever notice if we didn't. 
However, patch authors have been known to be\n> wrong about the consequences of their own bugs from time to time, so\n> please do let me know if this seems more serious to you than what I'm\n> indicating, or conversely if you think it's not a problem at all for\n> some reason.\n\nSeems right, I think that you should backpatch that as\nVERIFY_CHECKSUMS is the default.\n\n> 0002 removes an old comment from the file that I find useless and\n> slightly misleading.\n\nOkay.\n\n> 0003 rewrites a comment about the way that we verify checksums during\n> backups. If we get a checksum mismatch, we reread the block and see if\n> the perceived problem goes away. If it doesn't, then we report it.\n> This is intended as protection against the backup reading a block\n> while some other process is in the midst of writing it, but there's no\n> guarantee that any concurrent write will finish quickly enough for our\n> second read attempt to see the updated contents.\n\nThere is more to it: the page LSN is checked before its checksum.\nHence, if the page's LSN is corrupted in a way that it is higher than\nsink->bbs_state->startptr, the checksum verification is just skipped\nwhile the page is broken but not reported as such. (Spoiler: this has\nbeen mentioned in the past, and maybe we'd better remove this stuff in\nits entirety.)\n\n> The comment claims\n> otherwise, and that's false, and I'm getting tired of reading this\n> false claim every time I read this code, so I rewrote the comment to\n> say what I believe to be true, namely, that our algorithm is flaky and\n> we have no good way to fix that right now. I'm pretty sure that Andres\n> pointed this problem out when this feature was under discussion, but\n> somehow it's still like this. 
There's another nearby comment which is\n> also false or at least misleading for basically the same reasons which\n> probably should be rewritten too, but I was a bit less certain how to\n> rewrite it and it wasn't making me as annoyed as this one, so for now\n> I only rewrote the one.\n\nIndeed, that's an improvement ;)\n--\nMichael", "msg_date": "Thu, 2 Mar 2023 16:59:35 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: a very minor bug and a couple of comment changes for basebackup.c" }, { "msg_contents": "Thanks for the review. I have committed the patches.\n\nOn Thu, Mar 2, 2023 at 2:59 AM Michael Paquier <michael@paquier.xyz> wrote:\n> Seems right, I think that you should backpatch that as\n> VERIFY_CHECKSUMS is the default.\n\nDone.\n\n> There is more to it: the page LSN is checked before its checksum.\n> Hence, if the page's LSN is corrupted in a way that it is higher than\n> sink->bbs_state->startptr, the checksum verification is just skipped\n> while the page is broken but not reported as such. (Spoiler: this has\n> been mentioned in the past, and maybe we'd better remove this stuff in\n> its entirety.)\n\nYep. It's pretty embarrassing that we have a checksum verification\nfeature that has both known ways of producing false positives and\nknown ways of producing false negatives and we have no plan to ever\nfix that, we're just going to keep shipping what we've got. I think\nit's pretty clear that the feature shouldn't have been committed like\nthis; valid criticisms of the design were offered and simply not\naddressed, not even by updating the comments or documentation with\ndisclaimers. I find the code in sendFile() pretty ugly, too. For all\nof that, I'm a bit uncertain whether ripping it out is the right thing\nto do. It might be (I'm not sure) that it tends to work decently well\nin practice. Like, yes, it could produce false checksum warnings, but\ndoes that actually happen to people? 
It's probably not too likely that\na read() or write() of 8kB gets updated after doing only part of the\nI/O, so the concurrent read or write is fairly likely to be on-CPU, I\nwould guess, and therefore this algorithm might kind of work OK in\npractice despite begin garbage on a theoretical level. Similarly, the\nproblems with how the LSN is vetted make it likely that a page\nreplaced with random garbage will go undetected, but a page where a\nfew bytes get flipped in a random position within the page is likely\nto get caught, and maybe the latter is actually a bigger risk than the\nformer. I don't really know. I'd be interested to hear from anyone\nwith a lot of practical experience using the feature. A few anecdotes\nof the form \"this routine fails to tell us about problems\" or \"this\noften complains about problems that are not real\" or \"this has really\nhelped us out on a number of occasions and we have had no problems\nwith it\" would be really helpful.\n\nOn the other hand, we could just say that the code is nonsense and\ntherefore, regardless of practical experience, it ought to be removed.\nI'm somewhat open to that idea, too.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 6 Mar 2023 11:07:07 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: a very minor bug and a couple of comment changes for basebackup.c" }, { "msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> Thanks for the review. I have committed the patches.\n\nNo objections to what was committed.\n\n> On Thu, Mar 2, 2023 at 2:59 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > There is more to it: the page LSN is checked before its checksum.\n> > Hence, if the page's LSN is corrupted in a way that it is higher than\n> > sink->bbs_state->startptr, the checksum verification is just skipped\n> > while the page is broken but not reported as such. 
(Spoiler: this has\n> > been mentioned in the past, and maybe we'd better remove this stuff in\n> > its entirety.)\n> \n> Yep. It's pretty embarrassing that we have a checksum verification\n> feature that has both known ways of producing false positives and\n> known ways of producing false negatives and we have no plan to ever\n> fix that, we're just going to keep shipping what we've got. I think\n> it's pretty clear that the feature shouldn't have been committed like\n> this; valid criticisms of the design were offered and simply not\n> addressed, not even by updating the comments or documentation with\n> disclaimers. I find the code in sendFile() pretty ugly, too. For all\n> of that, I'm a bit uncertain whether ripping it out is the right thing\n> to do. It might be (I'm not sure) that it tends to work decently well\n> in practice. Like, yes, it could produce false checksum warnings, but\n> does that actually happen to people? It's probably not too likely that\n> a read() or write() of 8kB gets updated after doing only part of the\n> I/O, so the concurrent read or write is fairly likely to be on-CPU, I\n> would guess, and therefore this algorithm might kind of work OK in\n> practice despite begin garbage on a theoretical level. Similarly, the\n> problems with how the LSN is vetted make it likely that a page\n> replaced with random garbage will go undetected, but a page where a\n> few bytes get flipped in a random position within the page is likely\n> to get caught, and maybe the latter is actually a bigger risk than the\n> former. I don't really know. I'd be interested to hear from anyone\n> with a lot of practical experience using the feature. 
A few anecdotes\n> of the form \"this routine fails to tell us about problems\" or \"this\n> often complains about problems that are not real\" or \"this has really\n> helped us out on a number of occasions and we have had no problems\n> with it\" would be really helpful.\n\nThe concern about the LSN is certainly a valid one and we ended up\nripping that check out of pgbackrest in favor of taking a different\napproach- simply re-read the page and see if it changed. If it changed,\nthen we punt and figure that it was a hot page that PG was actively\nwriting to and so it'll be in the WAL and we don't have to worry about\nit. We're generally concerned more about latent on-disk corruption that\nis missed than about some kind of in-memory corruption and the page\nchanging in the filesystem cache without some other process writing to\nit just seems to be awfully unlikely.\n\nhttps://github.com/pgbackrest/pgbackrest/commit/9eec98c61302121134d2067326dbd2cd0f2f0b9c\n\nFrom a practical perspective, while this has the afore-mentioned risk\nregarding our loop happening twice fast enough that it re-reads the same\npage without the i/o on the page progressing at all, I'm not aware of\neven one report of that actually happening. We absolutely have seen\ncases where the first read picks up a torn page and that's not even\nuncommon in busy environments.\n\nOne of the ideas I've had around how to address all of this is to not\ndepend on inferring things from what's been written out but instead to\njust ask PG and that's one of the (relatively few...) benefits that I\nsee to an archive_library- the ability to check if a given page is in\nshared buffers and, if so, what its LSN is to see if it's past the start\nof the backup. Given that pg_basebackup is already working with a lot\nof server-side code.. 
perhaps this could be a direction to go in for it.\n\nAnother idea we played around with was keeping track of the LSNs of the\npages with invalid checksums and checking that they fall within the\nrange of start LSN-end LSN of the backup; while that wouldn't\ncompletely prevent corrupted LSNs from skipping detection using the LSN\napproach, it'd sure make it a whole lot less likely.\n\nLastly, I've also wondered about trying to pull (clean) pages from\nshared_buffers directly instead of reading them off of disk at all,\nperhaps using a ring buffer in shared_buffers to read them in if\nthey're not already there, and letting all the usual checksum validation\nand such happening with PG handling it. Dirty pages would have to be\nhandled though, presumably by locking them and then reading the page\nfrom disk as I don't think folks would be pleased if we decided to\nforcibly write them out. For an incremental backup, if you have\nreliable knowledge that the page is in the WAL from PG, perhaps you\ncould even skip the page entirely.\n\nIndependently of verifying PG checksums, we've also considered an\napproach where we take the file-level checksums we have in the\npgbackrest manifest and check if the file has changed on the filesystem\nwithout the timestamp changing as that would certainly be indicative of\nsomething gone wrong.\n\nCertainly interested in other ideas about how to improve things in this\narea. I don't think we should rip it out though.\n\nThanks,\n\nStephen", "msg_date": "Mon, 6 Mar 2023 19:00:38 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: a very minor bug and a couple of comment changes for basebackup.c" } ]
[ { "msg_contents": "Hi,\n\nThis small patch introduces a XML pretty print function. It basically \ntakes advantage of the indentation feature of xmlDocDumpFormatMemory \nfrom libxml2 to format XML strings.\n\npostgres=# SELECT xmlpretty('<foo id=\"x\"><bar id=\"y\"><var \nid=\"z\">42</var></bar></foo>');\n         xmlpretty\n--------------------------\n  <foo id=\"x\">            +\n    <bar id=\"y\">          +\n      <var id=\"z\">42</var>+\n    </bar>                +\n  </foo>                  +\n\n(1 row)\n\n\nThe patch also contains regression tests and documentation.\n\nFeedback is very welcome!\n\nJim", "msg_date": "Thu, 2 Feb 2023 21:35:39 +0100", "msg_from": "Jim Jones <jim.jones@uni-muenster.de>", "msg_from_op": true, "msg_subject": "[PATCH] Add pretty-printed XML output option" }, { "msg_contents": "The system somehow returns different error messages in Linux and \nMacOS/Windows, which was causing the cfbot to fail.\n\nSELECT xmlpretty('<foo>')::xml;\n                           ^\n-DETAIL:  line 1: chunk is not well balanced\n+DETAIL:  line 1: Premature end of data in tag foo line 1\n\nTest removed in v2.\n\nOn 02.02.23 21:35, Jim Jones wrote:\n> Hi,\n>\n> This small patch introduces a XML pretty print function. 
It basically \n> takes advantage of the indentation feature of xmlDocDumpFormatMemory \n> from libxml2 to format XML strings.\n>\n> postgres=# SELECT xmlpretty('<foo id=\"x\"><bar id=\"y\"><var \n> id=\"z\">42</var></bar></foo>');\n>         xmlpretty\n> --------------------------\n>  <foo id=\"x\">            +\n>    <bar id=\"y\">          +\n>      <var id=\"z\">42</var>+\n>    </bar>                +\n>  </foo>                  +\n>\n> (1 row)\n>\n>\n> The patch also contains regression tests and documentation.\n>\n> Feedback is very welcome!\n>\n> Jim", "msg_date": "Fri, 3 Feb 2023 08:05:46 +0100", "msg_from": "Jim Jones <jim.jones@uni-muenster.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "Hi,\n\nThe cfbot on \"Windows - Server 2019, VS 2019 - Meson & ninja\" is failing \nthe regression tests with the error:\n\nERROR:  unsupported XML feature\nDETAIL:  This functionality requires the server to be built with libxml \nsupport.\n\nIs there anything I can do to enable libxml to run my regression tests?\n\nv3 adds a missing xmlFree call.\n\nBest, Jim", "msg_date": "Mon, 6 Feb 2023 17:19:02 +0100", "msg_from": "Jim Jones <jim.jones@uni-muenster.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "Jim Jones <jim.jones@uni-muenster.de> writes:\n> The cfbot on \"Windows - Server 2019, VS 2019 - Meson & ninja\" is failing \n> the regression tests with the error:\n\n> ERROR:  unsupported XML feature\n> DETAIL:  This functionality requires the server to be built with libxml \n> support.\n\n> Is there anything I can do to enable libxml to run my regression tests?\n\nYour patch needs to also update expected/xml_1.out to match the output\nthe test produces when run without --with-libxml.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 06 Feb 2023 11:23:04 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, 
"msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "On 06.02.23 17:23, Tom Lane wrote:\n>> Your patch needs to also update expected/xml_1.out to match the output\n>> the test produces when run without --with-libxml.\n\nThanks a lot! It seems to do the trick.\n\nxml_1.out updated in v4.\n\nBest, Jim", "msg_date": "Mon, 6 Feb 2023 18:58:53 +0100", "msg_from": "Jim Jones <jim.jones@uni-muenster.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "while working on another item of the TODO list I realized that I should \nbe using a PG_TRY() block in he xmlDocDumpFormatMemory call.\n\nFixed in v5.\n\nBest regards, Jim", "msg_date": "Wed, 8 Feb 2023 21:30:52 +0100", "msg_from": "Jim Jones <jim.jones@uni-muenster.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "On Thu, Feb 9, 2023 at 7:31 AM Jim Jones <jim.jones@uni-muenster.de> wrote:\n>\n> while working on another item of the TODO list I realized that I should\n> be using a PG_TRY() block in he xmlDocDumpFormatMemory call.\n>\n> Fixed in v5.\n>\n\nI noticed the xmlFreeDoc(doc) within the PG_CATCH is guarded but the\nother xmlFreeDoc(doc) is not. As the doc is assigned outside the\nPG_TRY shouldn't those both be the same?\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n", "msg_date": "Thu, 9 Feb 2023 10:09:53 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "On 09.02.23 00:09, Peter Smith wrote:\n> I noticed the xmlFreeDoc(doc) within the PG_CATCH is guarded but the\n> other xmlFreeDoc(doc) is not. 
As the doc is assigned outside the\n> PG_TRY shouldn't those both be the same?\n\nHi Peter,\n\nMy logic there was the following: if program reached that part of the \ncode it means that the xml_parse() and xmlDocDumpFormatMemory() worked, \nwhich consequently means that the variables doc and xmlbuf are != NULL, \ntherefore not needing to be checked. Am I missing something?\n\nThanks a lot for the review!\n\nBest, Jim", "msg_date": "Thu, 9 Feb 2023 00:42:20 +0100", "msg_from": "Jim Jones <jim.jones@uni-muenster.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "On Thu, Feb 9, 2023 at 10:42 AM Jim Jones <jim.jones@uni-muenster.de> wrote:\n>\n> On 09.02.23 00:09, Peter Smith wrote:\n> > I noticed the xmlFreeDoc(doc) within the PG_CATCH is guarded but the\n> > other xmlFreeDoc(doc) is not. As the doc is assigned outside the\n> > PG_TRY shouldn't those both be the same?\n>\n> Hi Peter,\n>\n> My logic there was the following: if program reached that part of the\n> code it means that the xml_parse() and xmlDocDumpFormatMemory() worked,\n> which consequently means that the variables doc and xmlbuf are != NULL,\n> therefore not needing to be checked. Am I missing something?\n>\n\nThanks. I think I understand it better now -- I expect\nxmlDocDumpFormatMemory will cope OK when passed a NULL doc (see this\nsource [1]), but it will return nbytes of 0, but your code will still\nthrow ERROR, meaning the guard for doc NULL is necessary for the\nPG_CATCH.\n\nIn that case, everything LGTM.\n\n~\n\nOTOH, if you are having to check for NULL doc anyway, maybe it's just\nas easy only doing that up-front. Then you could quick-exit the\nfunction without calling xmlDocDumpFormatMemory etc. in the first\nplace. 
For example:\n\ndoc = xml_parse(arg, XMLOPTION_DOCUMENT, false, GetDatabaseEncoding(), NULL);\nif (!doc)\n return 0;\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n", "msg_date": "Thu, 9 Feb 2023 12:01:10 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "On 09.02.23 02:01, Peter Smith wrote:\n> OTOH, if you are having to check for NULL doc anyway, maybe it's just\n> as easy only doing that up-front. Then you could quick-exit the\n> function without calling xmlDocDumpFormatMemory etc. in the first\n> place. For example:\n>\n> doc = xml_parse(arg, XMLOPTION_DOCUMENT, false, GetDatabaseEncoding(), NULL);\n> if (!doc)\n> return 0;\n\nI see your point. If I got it right, you're suggesting the following \nchange in the PG_TRY();\n\n    PG_TRY();\n     {\n\n         int nbytes;\n\n         if(!doc)\n             xml_ereport(xmlerrcxt, ERROR, ERRCODE_INTERNAL_ERROR,\n                     \"could not parse the given XML document\");\n\n         xmlDocDumpFormatMemory(doc, &xmlbuf, &nbytes, 1);\n\n         if(!nbytes || xmlerrcxt->err_occurred)\n             xml_ereport(xmlerrcxt, ERROR, ERRCODE_INTERNAL_ERROR,\n                     \"could not indent the given XML document\");\n\n\n         initStringInfo(&buf);\n         appendStringInfoString(&buf, (const char *)xmlbuf);\n\n     }\n\n.. which will catch the doc == NULL before calling xmlDocDumpFormatMemory.\n\nIs it what you suggest?\n\nThanks a lot for the thorough review!\n\nBest, Jim\n\n\n\n", "msg_date": "Thu, 9 Feb 2023 08:16:50 +0100", "msg_from": "Jim Jones <jim.jones@uni-muenster.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "Jim Jones <jim.jones@uni-muenster.de> writes:\n> I see your point. 
If I got it right, you're suggesting the following \n> change in the PG_TRY();\n\n>    PG_TRY();\n>     {\n\n>         int nbytes;\n\n>         if(!doc)\n>             xml_ereport(xmlerrcxt, ERROR, ERRCODE_INTERNAL_ERROR,\n>                     \"could not parse the given XML document\");\n\n>         xmlDocDumpFormatMemory(doc, &xmlbuf, &nbytes, 1);\n\n>         if(!nbytes || xmlerrcxt->err_occurred)\n>             xml_ereport(xmlerrcxt, ERROR, ERRCODE_INTERNAL_ERROR,\n>                     \"could not indent the given XML document\");\n\n\n>         initStringInfo(&buf);\n>         appendStringInfoString(&buf, (const char *)xmlbuf);\n\n>     }\n\n> .. which will catch the doc == NULL before calling xmlDocDumpFormatMemory.\n\nUm ... why are you using PG_TRY here at all? It seems like\nyou have full control of the actually likely error cases.\nThe only plausible error out of the StringInfo calls is OOM,\nand you probably don't want to trap that at all.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 09 Feb 2023 02:23:04 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "On 09.02.23 08:23, Tom Lane wrote:\n> Um ... why are you using PG_TRY here at all? It seems like\n> you have full control of the actually likely error cases.\n> The only plausible error out of the StringInfo calls is OOM,\n> and you probably don't want to trap that at all.\n\nMy intention was to catch any unexpected error from \nxmlDocDumpFormatMemory and handle it properly. 
But I guess you're right, \nI can control the likely error cases by checking doc and nbytes.\n\nYou suggest something along these lines?\n\n     xmlDocPtr  doc;\n     xmlChar    *xmlbuf = NULL;\n     text       *arg = PG_GETARG_TEXT_PP(0);\n     StringInfoData buf;\n     int nbytes;\n\n     doc = xml_parse(arg, XMLOPTION_DOCUMENT, false, \nGetDatabaseEncoding(), NULL);\n\n     if(!doc)\n         elog(ERROR, \"could not parse the given XML document\");\n\n     xmlDocDumpFormatMemory(doc, &xmlbuf, &nbytes, 1);\n\n     xmlFreeDoc(doc);\n\n     if(!nbytes)\n         elog(ERROR, \"could not indent the given XML document\");\n\n     initStringInfo(&buf);\n     appendStringInfoString(&buf, (const char *)xmlbuf);\n\n     xmlFree(xmlbuf);\n\n     PG_RETURN_XML_P(stringinfo_to_xmltype(&buf));\n\n\nThanks!\n\nBest, Jim", "msg_date": "Thu, 9 Feb 2023 09:17:08 +0100", "msg_from": "Jim Jones <jim.jones@uni-muenster.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "On 02.02.23 21:35, Jim Jones wrote:\n> This small patch introduces a XML pretty print function. It basically \n> takes advantage of the indentation feature of xmlDocDumpFormatMemory \n> from libxml2 to format XML strings.\n\nI suggest we call it \"xmlformat\", which is an established term for this.\n\n\n\n", "msg_date": "Thu, 9 Feb 2023 11:31:08 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "On Thu, Feb 9, 2023 at 7:17 PM Jim Jones <jim.jones@uni-muenster.de> wrote:\n>\n> On 09.02.23 08:23, Tom Lane wrote:\n> > Um ... why are you using PG_TRY here at all? 
It seems like\n> > you have full control of the actually likely error cases.\n> > The only plausible error out of the StringInfo calls is OOM,\n> > and you probably don't want to trap that at all.\n>\n> My intention was to catch any unexpected error from\n> xmlDocDumpFormatMemory and handle it properly. But I guess you're right,\n> I can control the likely error cases by checking doc and nbytes.\n>\n> You suggest something along these lines?\n>\n> xmlDocPtr doc;\n> xmlChar *xmlbuf = NULL;\n> text *arg = PG_GETARG_TEXT_PP(0);\n> StringInfoData buf;\n> int nbytes;\n>\n> doc = xml_parse(arg, XMLOPTION_DOCUMENT, false,\n> GetDatabaseEncoding(), NULL);\n>\n> if(!doc)\n> elog(ERROR, \"could not parse the given XML document\");\n>\n> xmlDocDumpFormatMemory(doc, &xmlbuf, &nbytes, 1);\n>\n> xmlFreeDoc(doc);\n>\n> if(!nbytes)\n> elog(ERROR, \"could not indent the given XML document\");\n>\n> initStringInfo(&buf);\n> appendStringInfoString(&buf, (const char *)xmlbuf);\n>\n> xmlFree(xmlbuf);\n>\n> PG_RETURN_XML_P(stringinfo_to_xmltype(&buf));\n>\n>\n> Thanks!\n>\n\nSomething like that LGTM, but here are some other minor comments.\n\n======\n\n1.\nFYI, there are some whitespace warnings applying the v5 patch\n\n[postgres@CentOS7-x64 oss_postgres_misc]$ git apply\n../patches_misc/v5-0001-Add-pretty-printed-XML-output-option.patch\n../patches_misc/v5-0001-Add-pretty-printed-XML-output-option.patch:26:\ntrailing whitespace.\n\n../patches_misc/v5-0001-Add-pretty-printed-XML-output-option.patch:29:\ntrailing whitespace.\n\n../patches_misc/v5-0001-Add-pretty-printed-XML-output-option.patch:33:\ntrailing whitespace.\n\n../patches_misc/v5-0001-Add-pretty-printed-XML-output-option.patch:37:\ntrailing whitespace.\n\n../patches_misc/v5-0001-Add-pretty-printed-XML-output-option.patch:41:\ntrailing whitespace.\n\nwarning: squelched 48 whitespace errors\nwarning: 53 lines add whitespace errors.\n\n======\nsrc/test/regress/sql/xml.sql\n\n2.\nThe v5 patch was already testing NULL, but it might 
be good to add\nmore tests to verify the function is behaving how you want for edge\ncases. For example,\n\n+-- XML pretty print: NULL, empty string, spaces only...\nSELECT xmlpretty(NULL);\nSELECT xmlpretty('');\nSELECT xmlpretty(' ');\n\n~~\n\n3.\nThe function is returning XML anyway, so is the '::xml' casting in\nthese tests necessary?\n\ne.g.\nSELECT xmlpretty(NULL)::xml; --> SELECT xmlpretty(NULL);\n\n======\nsrc/include/catalog/pg_proc.dat\n\n4.\n\n+ { oid => '4642', descr => 'Indented text from xml',\n+ proname => 'xmlpretty', prorettype => 'xml',\n+ proargtypes => 'xml', prosrc => 'xmlpretty' },\n\nSpurious leading space for this new entry.\n\n======\ndoc/src/sgml/func.sgml\n\n5.\n+ <foo id=\"x\">\n+ <bar id=\"y\">\n+ <var id=\"z\">42</var>\n+ </bar>\n+ </foo>\n+\n+(1 row)\n+\n+]]></screen>\n\nA spurious blank line in the example after the \"(1 row)\"\n\n~~~\n\n6.\nDoes this function docs belong in section 9.15.1 \"Producing XML\nContent\"? Or (since it is not really producing content) should it be\nmoved to the 9.15.3 \"Processing XML\" section?\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 10 Feb 2023 12:10:46 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "On 10.02.23 02:10, Peter Smith wrote:\n> On Thu, Feb 9, 2023 at 7:17 PM Jim Jones <jim.jones@uni-muenster.de> wrote:\n> 1.\n> FYI, there are some whitespace warnings applying the v5 patch\n>\nTrailing whitespaces removed. The patch applies now without warnings.\n> ======\n> src/test/regress/sql/xml.sql\n>\n> 2.\n> The v5 patch was already testing NULL, but it might be good to add\n> more tests to verify the function is behaving how you want for edge\n> cases. 
For example,\n>\n> +-- XML pretty print: NULL, empty string, spaces only...\n> SELECT xmlpretty(NULL);\n> SELECT xmlpretty('');\n> SELECT xmlpretty(' ');\n\nTest cases added.\n\n> 3.\n> The function is returning XML anyway, so is the '::xml' casting in\n> these tests necessary?\n>\n> e.g.\n> SELECT xmlpretty(NULL)::xml; --> SELECT xmlpretty(NULL);\nIt is indeed not necessary. Most likely I used it for testing and forgot \nto remove it afterwards. Now removed.\n> ======\n> src/include/catalog/pg_proc.dat\n>\n> 4.\n>\n> + { oid => '4642', descr => 'Indented text from xml',\n> + proname => 'xmlpretty', prorettype => 'xml',\n> + proargtypes => 'xml', prosrc => 'xmlpretty' },\n>\n> Spurious leading space for this new entry.\nRemoved.\n>\n> ======\n> doc/src/sgml/func.sgml\n>\n> 5.\n> + <foo id=\"x\">\n> + <bar id=\"y\">\n> + <var id=\"z\">42</var>\n> + </bar>\n> + </foo>\n> +\n> +(1 row)\n> +\n> +]]></screen>\n>\n> A spurious blank line in the example after the \"(1 row)\"\nRemoved.\n> ~~~\n>\n> 6.\n> Does this function docs belong in section 9.15.1 \"Producing XML\n> Content\"? Or (since it is not really producing content) should it be\n> moved to the 9.15.3 \"Processing XML\" section?\nMoved to the section 9.15.3\n\nFollowing the suggestion of Peter Eisentraut I renamed the function to \nxmlformat().\n\nv6 attached.\n\nThanks a lot for the review.\n\nBest, Jim", "msg_date": "Fri, 10 Feb 2023 09:15:50 +0100", "msg_from": "Jim Jones <jim.jones@uni-muenster.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "Something is misbehaving.\n\nUsing the latest HEAD, and before applying the v6 patch, 'make check'\nis working OK.\n\nBut after applying the v6 patch, some XML regression tests are failing\nbecause the DETAIL part of the message indicating the wrong syntax\nposition is not getting displayed. 
Not only for your new tests -- but\nfor other XML tests too.\n\nMy ./configure looks like this:\n./configure --prefix=/usr/local/pg_oss --with-libxml --enable-debug\n--enable-cassert --enable-tap-tests CFLAGS=\"-ggdb -O0 -g3\n-fno-omit-frame-pointer\"\n\nresulting in:\nchecking whether to build with XML support... yes\nchecking for libxml-2.0 >= 2.6.23... yes\n\n~\n\ne.g.(these are a sample of errors)\n\nxml ... FAILED 2561 ms\n\n@@ -344,8 +326,6 @@\n <twoerrors>&idontexist;</unbalanced>\n ^\n line 1: Opening and ending tag mismatch: twoerrors line 1 and unbalanced\n-<twoerrors>&idontexist;</unbalanced>\n- ^\n SELECT xmlparse(document '<nosuchprefix:tag/>');\n xmlparse\n ---------------------\n@@ -1696,14 +1676,8 @@\n SELECT xmlformat('');\n ERROR: invalid XML document\n DETAIL: line 1: switching encoding : no input\n-\n-^\n line 1: Document is empty\n-\n-^\n -- XML format: invalid string (whitespaces)\n SELECT xmlformat(' ');\n ERROR: invalid XML document\n DETAIL: line 1: Start tag expected, '<' not found\n-\n- ^\n\n~~\n\nSeparately (but maybe it's related?), the CF-bot test also reported a\nfailure [1] with strange error detail differences.\n\ndiff -U3 /tmp/cirrus-ci-build/src/test/regress/expected/xml.out\n/tmp/cirrus-ci-build/build/testrun/regress/regress/results/xml.out\n--- /tmp/cirrus-ci-build/src/test/regress/expected/xml.out 2023-02-12\n09:02:57.077569000 +0000\n+++ /tmp/cirrus-ci-build/build/testrun/regress/regress/results/xml.out\n2023-02-12 09:05:45.148100000 +0000\n@@ -1695,10 +1695,7 @@\n -- XML format: empty string\n SELECT xmlformat('');\n ERROR: invalid XML document\n-DETAIL: line 1: switching encoding : no input\n-\n-^\n-line 1: Document is empty\n+DETAIL: line 1: Document is empty\n\n ^\n -- XML format: invalid string (whitespaces)\n\n------\n[1] https://api.cirrus-ci.com/v1/artifact/task/4802219812323328/testrun/build/testrun/regress/regress/regression.diffs\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Mon, 13 Feb 2023 
12:15:13 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "On 13.02.23 02:15, Peter Smith wrote:\n> Something is misbehaving.\n>\n> Using the latest HEAD, and before applying the v6 patch, 'make check'\n> is working OK.\n>\n> But after applying the v6 patch, some XML regression tests are failing\n> because the DETAIL part of the message indicating the wrong syntax\n> position is not getting displayed. Not only for your new tests -- but\n> for other XML tests too.\n\nYes, I noticed it yesterday ... and I'm not sure how to solve it. It \nseems that the system is returning a different error message in the \nFreeBSD patch tester, which is causing a regression test in this \nparticular OS to fail.\n\n\ndiff -U3 /tmp/cirrus-ci-build/src/test/regress/expected/xml.out /tmp/cirrus-ci-build/build/testrun/regress/regress/results/xml.out\n--- /tmp/cirrus-ci-build/src/test/regress/expected/xml.out\t2023-02-12 09:02:57.077569000 +0000\n+++ /tmp/cirrus-ci-build/build/testrun/regress/regress/results/xml.out\t2023-02-12 09:05:45.148100000 +0000\n@@ -1695,10 +1695,7 @@\n -- XML format: empty string\n SELECT xmlformat('');\n ERROR: invalid XML document\n-DETAIL: line 1: switching encoding : no input\n-\n-^\n-line 1: Document is empty\n+DETAIL: line 1: Document is empty\n \n ^\n -- XML format: invalid string (whitespaces)\n\n\nDoes anyone know if there is anything I can do to make the error \nmessages be consistent among different OS?", "msg_date": "Mon, 13 Feb 2023 13:15:57 +0100", "msg_from": "Jim Jones <jim.jones@uni-muenster.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "On 13.02.23 13:15, Jim Jones wrote:\n> diff -U3 /tmp/cirrus-ci-build/src/test/regress/expected/xml.out /tmp/cirrus-ci-build/build/testrun/regress/regress/results/xml.out\n> --- /tmp/cirrus-ci-build/src/test/regress/expected/xml.out\t2023-02-12 09:02:57.077569000 +0000\n> +++ /tmp/cirrus-ci-build/build/testrun/regress/regress/results/xml.out\t2023-02-12 09:05:45.148100000 +0000\n> @@ -1695,10 +1695,7 @@\n> -- XML format: empty string\n> SELECT xmlformat('');\n> ERROR: invalid XML document\n> -DETAIL: line 1: switching encoding : no input\n> -\n> -^\n> -line 1: Document is empty\n> +DETAIL: line 1: Document is empty\n> \n> ^\n> -- XML format: invalid string (whitespaces)\n\nI couldn't figure out why the error messages are different -- 
I'm \nwondering if the issue is the test environment itself. I just removed \nthe troubling test case for now\n\nSELECT xmlformat('');\n\nv7 attached.\n\nThanks for reviewing this patch!\n\nBest, Jim", "msg_date": "Tue, 14 Feb 2023 22:55:20 +0100", "msg_from": "Jim Jones <jim.jones@uni-muenster.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "On Wed, Feb 15, 2023 at 8:55 AM Jim Jones <jim.jones@uni-muenster.de> wrote:\n>\n> On 13.02.23 13:15, Jim Jones wrote:\n>\n> diff -U3 /tmp/cirrus-ci-build/src/test/regress/expected/xml.out /tmp/cirrus-ci-build/build/testrun/regress/regress/results/xml.out\n> --- /tmp/cirrus-ci-build/src/test/regress/expected/xml.out 2023-02-12 09:02:57.077569000 +0000\n> +++ /tmp/cirrus-ci-build/build/testrun/regress/regress/results/xml.out 2023-02-12 09:05:45.148100000 +0000\n> @@ -1695,10 +1695,7 @@\n> -- XML format: empty string\n> SELECT xmlformat('');\n> ERROR: invalid XML document\n> -DETAIL: line 1: switching encoding : no input\n> -\n> -^\n> -line 1: Document is empty\n> +DETAIL: line 1: Document is empty\n>\n> ^\n> -- XML format: invalid string (whitespaces)\n>\n> I couldn't figure out why the error messages are different -- I'm wondering if the issue is the test environment itself. 
I just removed the troubling test case for now\n>\n> SELECT xmlformat('');\n>\n> v7 attached.\n>\n> Thanks for reviewing this patch!\n>\n\nYesterday I looked at those cfbot configs and noticed all those\nmachines have different versions of libxml.\n\n2.10.3\n2.6.23\n2.9.10\n2.9.13\n\nBut I don't know if version numbers have any impact on the different error\ndetails or not.\n\n~\n\nThe thing that puzzled me most is that in MY environment (CentOS7;\nlibxml 20901; PG --with-libxml build) I get this behaviour.\n\n- Without your v6 patch 'make check' is all OK.\n\n- With your v6 patch other XML tests (not only yours) of 'make check'\nfailed with different error messages.\n\n- Similarly, if I keep the v6 patch but just change (in xmlformat) the\n#ifdef USE_LIBXML to be #if 0, then only the new xmlformat tests fail,\nbut the other XML tests are working OK again.\n\nThose results implied to me that this function code (in my environment\nanyway) is somehow introducing a side effect causing the *other* XML\ntests to fail.\n\nBut so far I was unable to identify the reason. Sorry, I don't know\nthis XML API well enough to help more.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n", "msg_date": "Wed, 15 Feb 2023 09:45:42 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "On 14.02.23 23:45, Peter Smith wrote:\n> Those results implied to me that this function code (in my environment\n> anyway) is somehow introducing a side effect causing the *other* XML\n> tests to fail.\n\nI believe I've found the issue. 
It is probably related to the XML OPTION \nsettings, as it seems to deliver different error messages when set to \nDOCUMENT or CONTENT:\n\npostgres=# SET XML OPTION CONTENT;\nSET\npostgres=# SELECT xmlformat('');\nERROR:  invalid XML document\nDETAIL:  line 1: switching encoding : no input\n\n^\nline 1: Document is empty\n\n^\npostgres=# SET XML OPTION DOCUMENT;\nSET\npostgres=# SELECT xmlformat('');\nERROR:  invalid XML document\nLINE 1: SELECT xmlformat('');\n                          ^\nDETAIL:  line 1: switching encoding : no input\n\n^\nline 1: Document is empty\n\n^\n\nv8 attached reintroduces the SELECT xmlformat('') test case and adds SET \nXML OPTION DOCUMENT to the regression tests.\n\nBest, Jim", "msg_date": "Wed, 15 Feb 2023 01:05:40 +0100", "msg_from": "Jim Jones <jim.jones@uni-muenster.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "On Wed, Feb 15, 2023 at 11:05 AM Jim Jones <jim.jones@uni-muenster.de> wrote:\n>\n> On 14.02.23 23:45, Peter Smith wrote:\n> > Those results implied to me that this function code (in my environment\n> > anyway) is somehow introducing a side effect causing the *other* XML\n> > tests to fail.\n>\n> I believe I've found the issue. 
It is probably related to the XML OPTION\n> settings, as it seems to deliver different error messages when set to\n> DOCUMENT or CONTENT:\n>\n> postgres=# SET XML OPTION CONTENT;\n> SET\n> postgres=# SELECT xmlformat('');\n> ERROR: invalid XML document\n> DETAIL: line 1: switching encoding : no input\n>\n> ^\n> line 1: Document is empty\n>\n> ^\n> postgres=# SET XML OPTION DOCUMENT;\n> SET\n> postgres=# SELECT xmlformat('');\n> ERROR: invalid XML document\n> LINE 1: SELECT xmlformat('');\n> ^\n> DETAIL: line 1: switching encoding : no input\n>\n> ^\n> line 1: Document is empty\n>\n> ^\n>\n> v8 attached reintroduces the SELECT xmlformat('') test case and adds SET\n> XML OPTION DOCUMENT to the regression tests.\n>\n\nWith v8, in my environment, in psql I see something slightly different:\n\n\ntest_pub=# SET XML OPTION CONTENT;\nSET\ntest_pub=# SELECT xmlformat('');\nERROR: invalid XML document\nDETAIL: line 1: switching encoding : no input\nline 1: Document is empty\ntest_pub=# SET XML OPTION DOCUMENT;\nSET\ntest_pub=# SELECT xmlformat('');\nERROR: invalid XML document\nLINE 1: SELECT xmlformat('');\n ^\nDETAIL: line 1: switching encoding : no input\nline 1: Document is empty\n\n~~\n\ntest_pub=# SET XML OPTION CONTENT;\nSET\ntest_pub=# INSERT INTO xmltest VALUES (3, '<wrong');\nERROR: relation \"xmltest\" does not exist\nLINE 1: INSERT INTO xmltest VALUES (3, '<wrong');\n ^\ntest_pub=# SET XML OPTION DOCUMENT;\nSET\ntest_pub=# INSERT INTO xmltest VALUES (3, '<wrong');\nERROR: relation \"xmltest\" does not exist\nLINE 1: INSERT INTO xmltest VALUES (3, '<wrong');\n ^\n\n~~\n\nBecause the expected extra detail stuff is missing the regression\ntests are still failing for me.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n", "msg_date": "Wed, 15 Feb 2023 12:09:13 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "On 15.02.23 02:09, Peter 
Smith wrote:\n> With v8, in my environment, in psql I see something slightly different:\n>\n>\n> test_pub=# SET XML OPTION CONTENT;\n> SET\n> test_pub=# SELECT xmlformat('');\n> ERROR: invalid XML document\n> DETAIL: line 1: switching encoding : no input\n> line 1: Document is empty\n> test_pub=# SET XML OPTION DOCUMENT;\n> SET\n> test_pub=# SELECT xmlformat('');\n> ERROR: invalid XML document\n> LINE 1: SELECT xmlformat('');\n> ^\n> DETAIL: line 1: switching encoding : no input\n> line 1: Document is empty\n>\n> ~~\n>\n> test_pub=# SET XML OPTION CONTENT;\n> SET\n> test_pub=# INSERT INTO xmltest VALUES (3, '<wrong');\n> ERROR: relation \"xmltest\" does not exist\n> LINE 1: INSERT INTO xmltest VALUES (3, '<wrong');\n> ^\n> test_pub=# SET XML OPTION DOCUMENT;\n> SET\n> test_pub=# INSERT INTO xmltest VALUES (3, '<wrong');\n> ERROR: relation \"xmltest\" does not exist\n> LINE 1: INSERT INTO xmltest VALUES (3, '<wrong');\n> ^\n>\n> ~~\n\nYes... a cfbot also complained about the same thing.\n\nSetting the VERBOSITY to terse might solve this issue:\n\npostgres=# \\set VERBOSITY terse\npostgres=# SELECT xmlformat('');\nERROR:  invalid XML document\n\npostgres=# \\set VERBOSITY default\npostgres=# SELECT xmlformat('');\nERROR:  invalid XML document\nDETAIL:  line 1: switching encoding : no input\n\n^\nline 1: Document is empty\n\n^\n\nv9 wraps the corner test cases with VERBOSITY terse to reduce the error \nmessage output.\n\nThanks!\n\nBest, Jim", "msg_date": "Wed, 15 Feb 2023 08:10:44 +0100", "msg_from": "Jim Jones <jim.jones@uni-muenster.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "On Wed, Feb 15, 2023 at 6:10 PM Jim Jones <jim.jones@uni-muenster.de> wrote:\n>\n> On 15.02.23 02:09, Peter Smith wrote:\n> > With v8, in my environment, in psql I see something slightly different:\n> >\n> >\n> > test_pub=# SET XML OPTION CONTENT;\n> > SET\n> > test_pub=# SELECT xmlformat('');\n> > ERROR: invalid XML 
document\n> > DETAIL: line 1: switching encoding : no input\n> > line 1: Document is empty\n> > test_pub=# SET XML OPTION DOCUMENT;\n> > SET\n> > test_pub=# SELECT xmlformat('');\n> > ERROR: invalid XML document\n> > LINE 1: SELECT xmlformat('');\n> > ^\n> > DETAIL: line 1: switching encoding : no input\n> > line 1: Document is empty\n> >\n> > ~~\n> >\n> > test_pub=# SET XML OPTION CONTENT;\n> > SET\n> > test_pub=# INSERT INTO xmltest VALUES (3, '<wrong');\n> > ERROR: relation \"xmltest\" does not exist\n> > LINE 1: INSERT INTO xmltest VALUES (3, '<wrong');\n> > ^\n> > test_pub=# SET XML OPTION DOCUMENT;\n> > SET\n> > test_pub=# INSERT INTO xmltest VALUES (3, '<wrong');\n> > ERROR: relation \"xmltest\" does not exist\n> > LINE 1: INSERT INTO xmltest VALUES (3, '<wrong');\n> > ^\n> >\n> > ~~\n>\n> Yes... a cfbot also complained about the same thing.\n>\n> Setting the VERBOSITY to terse might solve this issue:\n>\n> postgres=# \\set VERBOSITY terse\n> postgres=# SELECT xmlformat('');\n> ERROR: invalid XML document\n>\n> postgres=# \\set VERBOSITY default\n> postgres=# SELECT xmlformat('');\n> ERROR: invalid XML document\n> DETAIL: line 1: switching encoding : no input\n>\n> ^\n> line 1: Document is empty\n>\n> ^\n>\n> v9 wraps the corner test cases with VERBOSITY terse to reduce the error\n> message output.\n>\n\nBingo!! Your v9 patch now passes all 'make check' tests for me.\n\nBut I'll leave it to a committer to decide if this VERBOSITY toggle is\nthe best fix.\n\n(I don't understand, maybe someone can explain, how the patch managed\nto mess with the verbosity of the existing tests.)\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n", "msg_date": "Wed, 15 Feb 2023 20:06:49 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "On 15.02.23 10:06, Peter Smith wrote:\n> Bingo!! Your v9 patch now passes all 'make check' tests for me.\nNice! 
It also passed all tests in the patch tester.\n> But I'll leave it to a committer to decide if this VERBOSITY toggle is\n> the best fix.\n\nI see now other test cases in the xml.sql file that also use this \noption, so it might be a known \"issue\".\n\nShall we now set the patch to \"Ready for Committer\"?\n\nThanks a lot for the review!\n\nBest, Jim", "msg_date": "Wed, 15 Feb 2023 10:37:39 +0100", "msg_from": "Jim Jones <jim.jones@uni-muenster.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "On 2023-Feb-13, Peter Smith wrote:\n\n> Something is misbehaving.\n> \n> Using the latest HEAD, and before applying the v6 patch, 'make check'\n> is working OK.\n> \n> But after applying the v6 patch, some XML regression tests are failing\n> because the DETAIL part of the message indicating the wrong syntax\n> position is not getting displayed. Not only for your new tests -- but\n> for other XML tests too.\n\nNote that there's another file, xml_2.out, which does not contain the\nadditional part of the DETAIL message. 
I suspect in Peter's case it's\nxml_2.out that was originally being used as a comparison basis (not\nxml.out), but that one is not getting patched, and ultimately the diff\nis reported for him against xml.out because none of them matches.\n\nAn easy way forward might be to manually apply the patch from xml.out to\nxml_2.out, and edit it by hand to remove the additional lines.\n\nSee commit 085423e3e326 for a bit of background.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 15 Feb 2023 11:11:59 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "On 15.02.23 11:11, Alvaro Herrera wrote:\n> Note that there's another file, xml_2.out, which does not contain the\n> additional part of the DETAIL message. I suspect in Peter's case it's\n> xml_2.out that was originally being used as a comparison basis (not\n> xml.out), but that one is not getting patched, and ultimately the diff\n> is reported for him against xml.out because none of them matches.\n>\n> An easy way forward might be to manually apply the patch from xml.out to\n> xml_2.out, and edit it by hand to remove the additional lines.\n>\n> See commit 085423e3e326 for a bit of background.\n\nHi Álvaro,\n\nAs my test cases were not specifically about what the error message looks \nlike, I thought that suppressing part of the error messages by setting \nVERBOSITY terse would suffice.[1] Is this approach not recommended?\n\nThanks!\n\nBest, Jim\n\n1 - v9 patch: \nhttps://www.postgresql.org/message-id/CAHut%2BPtoH8zkBHxv44XyO%2Bo4kW_nZdGhNdVaJ_cpEjrckKDqtw%40mail.gmail.com", "msg_date": "Wed, 15 Feb 2023 11:33:50 +0100", "msg_from": "Jim Jones <jim.jones@uni-muenster.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "On 2023-Feb-15, Jim Jones wrote:\n\n> Hi Álvaro,\n> \n> As my test cases were not specifically about what the error message looks\n> like, I thought that suppressing part of the error messages by setting\n> VERBOSITY terse would suffice.[1] Is this approach not recommended?\n\nWell, I don't see why we would depart from what we've been doing in the\nrest of the XML tests. 
I did see that patch and I thought it was taking\nthe wrong approach.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Aprender sin pensar es inútil; pensar sin aprender, peligroso\" (Confucio)\n\n\n", "msg_date": "Wed, 15 Feb 2023 12:11:22 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "On 15.02.23 12:11, Alvaro Herrera wrote:\n> Well, I don't see why we would depart from what we've been doing in the\n> rest of the XML tests. I did see that patch and I thought it was taking\n> the wrong approach.\n\nFair point.\n\nv10 patches the xml.out to xml_2.out - manually removing the additional \nlines.\n\nThanks for the review!\n\nBest, Jim", "msg_date": "Wed, 15 Feb 2023 13:31:43 +0100", "msg_from": "Jim Jones <jim.jones@uni-muenster.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "Accidentally left the VERBOSE settings out -- sorry!\n\nNow it matches the approach used in a xpath test in xml.sql, xml.out, \nxml_1.out and xml_2.out\n\n-- Since different libxml versions emit slightly different\n-- error messages, we suppress the DETAIL in this test.\n\\set VERBOSITY terse\nSELECT xpath('/*', '<invalidns xmlns=''&lt;''/>');\nERROR:  could not parse XML document\n\\set VERBOSITY default\n\nv11 now correctly sets xml_2.out.\n\nBest, Jim", "msg_date": "Wed, 15 Feb 2023 14:49:30 +0100", "msg_from": "Jim Jones <jim.jones@uni-muenster.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> Note that there's another file, xml_2.out, which does not contain the\n> additional part of the DETAIL message. 
I suspect in Peter's case it's\n> xml_2.out that was originally being used as a comparison basis (not\n> xml.out), but that one is not getting patched, and ultimately the diff\n> is reported for him against xml.out because none of them matches.\n> See commit 085423e3e326 for a bit of background.\n\nYeah. That's kind of sad, because it means there are still broken\nlibxml2s out there in 2023. Nonetheless, since there are, it is not\noptional to fix all three expected-files.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 15 Feb 2023 10:07:43 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "On Thu, Feb 16, 2023 at 12:49 AM Jim Jones <jim.jones@uni-muenster.de> wrote:\n>\n> Accidentally left the VERBOSE settings out -- sorry!\n>\n> Now it matches the approach used in a xpath test in xml.sql, xml.out,\n> xml_1.out and xml_2.out\n>\n> -- Since different libxml versions emit slightly different\n> -- error messages, we suppress the DETAIL in this test.\n> \\set VERBOSITY terse\n> SELECT xpath('/*', '<invalidns xmlns=''&lt;''/>');\n> ERROR: could not parse XML document\n> \\set VERBOSITY default\n>\n> v11 now correctly sets xml_2.out.\n>\n> Best, Jim\n\n\nFirstly, sorry: it seems like I made a mistake and was premature\ncalling bingo above for v9.\n- today I repeated v9 'make check' and found it failing still.\n- the new xmlformat tests are OK, but some pre-existing xmlparse tests\nare broken.\n- see attached file pretty-v9-results\n\n----\n\nOTOH, the absence of xml_2.out from this patch appears to be the\ncorrect explanation for why my results have been differing.\n\n----\n\nToday I fetched and tried the latest v11.\n\nIt is failing too, but only just.\n- see attached file pretty-v11-results\n\nIt looks only due to a whitespace EOF issue in the xml_2.out\n\n@@ -1679,4 +1679,4 @@\n -- XML format: empty string\n SELECT xmlformat('');\n ERROR: invalid XML 
document\n-\\set VERBOSITY default\n\\ No newline at end of file\n+\\set VERBOSITY default\n\n------\n\nThe attached patch update (v12-0002) fixes the xml_2.out for me.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Thu, 16 Feb 2023 10:13:35 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "On Thu, Feb 9, 2023 at 2:31 AM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> I suggest we call it \"xmlformat\", which is an established term for this.\n>\n\nSome very-very old, rusted memory told me that there was something in\nstandard – and indeed, it seems it described an optional Feature X069,\n“XMLSerialize: INDENT” for XMLSERIALIZE. So probably pretty-printing should\ngo there, to XMLSERIALIZE, to follow the standard?\n\nOracle also has an option for it in XMLSERIALIZE, although in a slightly\ndifferent form, with ability to specify the number of spaces for\nindentation\nhttps://docs.oracle.com/database/121/SQLRF/functions268.htm#SQLRF06231.", "msg_date": "Wed, 15 Feb 2023 20:38:01 -0800", "msg_from": "Nikolay Samokhvalov <samokhvalov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "On 16.02.23 05:38, Nikolay Samokhvalov wrote:\n> On Thu, Feb 9, 2023 at 2:31 AM Peter Eisentraut \n> <peter.eisentraut@enterprisedb.com> wrote:\n>\n> I suggest we call it \"xmlformat\", which is an established term for\n> this.\n>\n>\n> Some very-very old, rusted memory told me that there was something in \n> standard – and indeed, it seems it described an optional Feature X069, \n> “XMLSerialize: INDENT” for XMLSERIALIZE. So probably pretty-printing \n> should go there, to XMLSERIALIZE, to follow the standard?\n>\n> Oracle also has an option for it in XMLSERIALIZE, although in a \n> slightly different form, with ability to specify the number of spaces \n> for indentation \n> https://docs.oracle.com/database/121/SQLRF/functions268.htm#SQLRF06231.\n\nHi Nikolay,\n\nMy first thought was to call it xmlpretty, to make it consistent with \nthe jsonb equivalent \"jsonb_pretty\". But yes, you make a good \nobservation .. 
xmlserialize seems to be a much better candidate.\n\nI would be willing to refactor my patch if we agree on xmlserialize.\n\nThanks for the suggestion!\n\nJim", "msg_date": "Thu, 16 Feb 2023 08:08:16 +0100", "msg_from": "Jim Jones <jim.jones@uni-muenster.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "On 16.02.23 00:13, Peter Smith wrote:\n> Today I fetched and tried the latest v11.\n>\n> It is failing too, but only just.\n> - see attached file pretty-v11-results\n>\n> It looks only due to a whitespace EOF issue in the xml_2.out\n>\n> @@ -1679,4 +1679,4 @@\n> -- XML format: empty string\n> SELECT xmlformat('');\n> ERROR: invalid XML document\n> -\\set VERBOSITY default\n> \\ No newline at end of file\n> +\\set VERBOSITY default\n>\n> ------\n>\n> The attached patch update (v12-0002) fixes the xml_2.out for me.\n\nIt works for me too.\n\nThanks for the quick fix!\n\nJim\n\n\n\n", "msg_date": "Thu, 16 Feb 2023 08:51:16 +0100", "msg_from": "Jim Jones <jim.jones@uni-muenster.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "On 16.02.23 00:13, Peter Smith wrote:\n> Today I fetched and tried the latest v11.\n> It is failing too, but only just.\n> - see attached file pretty-v11-results\n>\n> It looks only due to a whitespace EOF issue in the xml_2.out\n>\n> @@ -1679,4 +1679,4 @@\n> -- XML format: empty string\n> SELECT xmlformat('');\n> ERROR: invalid XML document\n> -\\set VERBOSITY default\n> \\ No newline at end of file\n> +\\set VERBOSITY default\n>\n> ------\n>\n> The attached patch update (v12-0002) fixes the xml_2.out for me.\n\nI'm squashing v12-0001 and v12-0002 (v13 attached). 
There is still an \nopen discussion on renaming the function to xmlserialize,[1] but it \nshouldn't be too difficult to change it later in case we reach a \nconsensus :)\n\nThanks!\n\nJim\n\n1- \nhttps://www.postgresql.org/message-id/CANNMO%2BKwb4_87G8qDeN%2BVk1B1vX3HvgoGW%2B13fJ-b6rj7qbAww%40mail.gmail.com", "msg_date": "Thu, 16 Feb 2023 23:12:09 +0100", "msg_from": "Jim Jones <jim.jones@uni-muenster.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "On Thu, Feb 16, 2023 at 2:12 PM Jim Jones <jim.jones@uni-muenster.de> wrote:\n>\n> I'm squashing v12-0001 and v12-0002 (v13 attached).\nI've looked into the patch. The code looks to conform to usual expectations.\nOne nit: this comment should have just one asterisk.\n+ /**\nAnd I have a dumb question: is this function protected from using\nexternal XML namespaces? What if the user passes some xmlns that will\nforce it to read namespace data from the server filesystem? Or is it\nnot possible? I see there are a lot of calls to xml_parse() anyway,\nbut still...\n\nBest regards, Andrey Borodin.\n\n\n", "msg_date": "Thu, 16 Feb 2023 16:08:21 -0800", "msg_from": "Andrey Borodin <amborodin86@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "On 16.02.23 05:38, Nikolay Samokhvalov wrote:\n> On Thu, Feb 9, 2023 at 2:31 AM Peter Eisentraut \n> <peter.eisentraut@enterprisedb.com> wrote:\n>\n> I suggest we call it \"xmlformat\", which is an established term for\n> this.\n>\n>\n> Some very-very old, rusted memory told me that there was something in \n> standard – and indeed, it seems it described an optional Feature X069, \n> “XMLSerialize: INDENT” for XMLSERIALIZE. 
So probably pretty-printing \n> should go there, to XMLSERIALIZE, to follow the standard?\n>\n> Oracle also has an option for it in XMLSERIALIZE, although in a \n> slightly different form, with ability to specify the number of spaces \n> for indentation \n> https://docs.oracle.com/database/121/SQLRF/functions268.htm#SQLRF06231.\n\nAfter your comment I'm studying the possibility to extend the existing \nxmlserialize function to add the indentation feature. If so, how do you \nthink it should look like? An extra parameter? e.g.\n\nSELECT xmlserialize(DOCUMENT '<foo><bar>42</bar></foo>'::XML AS text, \ntrue);\n\n.. or more or like Oracle does it\n\nSELECT XMLSERIALIZE(DOCUMENT xmltype('<foo><bar>42</bar></foo>') AS BLOB \nINDENT)\nFROM dual;\n\nThanks!\n\nBest, Jim", "msg_date": "Fri, 17 Feb 2023 10:14:46 +0100", "msg_from": "Jim Jones <jim.jones@uni-muenster.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "On 17.02.23 01:08, Andrey Borodin wrote:\n> On Thu, Feb 16, 2023 at 2:12 PM Jim Jones<jim.jones@uni-muenster.de> wrote:\n> I've looked into the patch. The code looks to conform to usual \n> expectations.\n> One nit: this comment should have just one asterisk.\n> + /**\n\nThanks for reviewing! Asterisk removed in v14.\n\n> And I have a dumb question: is this function protected from using\n> external XML namespaces? What if the user passes some xmlns that will\n> force it to read namespace data from the server filesystem? Or is it\n> not possible? I see there are a lot of calls to xml_parse() anyway,\n> but still...\n\nAccording to the documentation,[1] such validations are not supported.\n\n/\"The |xml| type does not validate input values against a document type \ndeclaration (DTD), even when the input value specifies a DTD. There is \nalso currently no built-in support for validating against other XML \nschema languages such as XML Schema.\"/\n\nBut I'll have a look at the code to be sure :)\n\nBest, Jim\n\n1- https://www.postgresql.org/docs/15/datatype-xml.html", "msg_date": "Fri, 17 Feb 2023 20:01:35 +0100", "msg_from": "Jim Jones <jim.jones@uni-muenster.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "On Fri, Feb 17, 2023 at 1:14 AM Jim Jones <jim.jones@uni-muenster.de> wrote:\n\n> After your comment I'm studying the possibility to extend the existing\n> xmlserialize function to add the indentation feature. If so, how do you\n> think it should look like? An extra parameter? 
e.g.\n>\n> SELECT xmlserialize(DOCUMENT '<foo><bar>42</bar></foo>'::XML AS text,\n> true);\n>\n> .. or more or like Oracle does it\n>\n> SELECT XMLSERIALIZE(DOCUMENT xmltype('<foo><bar>42</bar></foo>') AS BLOB\n> INDENT)\n> FROM dual;\n>\n\nMy idea was to follow the SQL standard (part 14, SQL/XML); unfortunately,\nthere is no free version, but there are drafts at\nhttp://www.wiscorp.com/SQLStandards.html.\n\n<XML character string serialization> ::=\n XMLSERIALIZE <left paren> [ <document or content> ]\n\n <XML value expression> AS <data type>\n [ <XML serialize bom> ]\n [ <XML serialize version> ]\n [ <XML declaration option> ]\n\n [ <XML serialize indent> ]\n <right paren>\n\n<XML serialize indent> ::=\n [ NO ] INDENT\n\n\nOracle's extension SIZE=n also seems interesting (including a special case\nSIZE=0, which means using new-line characters without spaces for each line).", "msg_date": "Fri, 17 Feb 2023 14:24:13 -0800", "msg_from": "Nikolay Samokhvalov <samokhvalov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "On 17.02.23 23:24, Nikolay Samokhvalov wrote:\n> \n> My idea was to follow the SQL standard (part 14, SQL/XML); \n> unfortunately, there is no free version, but there are drafts at \n> http://www.wiscorp.com/SQLStandards.html \n> <http://www.wiscorp.com/SQLStandards.html>.\n> \n> <XML character string serialization> ::= XMLSERIALIZE <left paren> [ \n> <document or content> ]\n> \n> <XML value expression> AS <data type> [ <XML serialize bom> ] [ <XML \n> serialize version> ] [ <XML declaration option> ]\n> \n> [ <XML serialize indent> ] <right paren>\n> \n> <XML serialize indent> ::= [ NO ] INDENT\n\nGood find. 
It would be better to use this standard syntax.\n\n\n", "msg_date": "Sat, 18 Feb 2023 19:09:34 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "On 18.02.23 19:09, Peter Eisentraut wrote:\n> On 17.02.23 23:24, Nikolay Samokhvalov wrote:\n>>\n>> My idea was to follow the SQL standard (part 14, SQL/XML); \n>> unfortunately, there is no free version, but there are drafts at \n>> http://www.wiscorp.com/SQLStandards.html \n>> <http://www.wiscorp.com/SQLStandards.html>.\n>>\n>> <XML character string serialization> ::= XMLSERIALIZE <left paren> [ \n>> <document or content> ]\n>>\n>> <XML value expression> AS <data type> [ <XML serialize bom> ] [ <XML \n>> serialize version> ] [ <XML declaration option> ]\n>>\n>> [ <XML serialize indent> ] <right paren>\n>>\n>> <XML serialize indent> ::= [ NO ] INDENT\n>\n> Good find.  It would be better to use this standard syntax.\n\nAs suggested by Peter and Nikolay, v15 now removes the xmlformat \nfunction from the catalog and adds the [NO] INDENT option to \nxmlserialize, as described in X069.\n\npostgres=# SELECT xmlserialize(DOCUMENT '<foo><bar><val \nx=\"y\">42</val></bar></foo>' AS text INDENT);\n              xmlserialize\n----------------------------------------\n <?xml version=\"1.0\" encoding=\"UTF-8\"?>+\n <foo>                                 +\n   <bar>                               +\n     <val x=\"y\">42</val>               +\n   </bar>                              +\n </foo>                                +\n\n(1 row)\n\npostgres=# SELECT xmlserialize(DOCUMENT '<foo><bar><val \nx=\"y\">42</val></bar></foo>' AS text NO INDENT);\n               xmlserialize\n-------------------------------------------\n <foo><bar><val x=\"y\">42</val></bar></foo>\n(1 row)\n\nAlthough the indent feature is designed to work with xml strings of type \nDOCUMENT, this implementation also allows the 
usage of CONTENT type \nstrings as long as it contains a well-formed xml. It will throw an error \notherwise.\n\nThanks!\n\nBest, Jim", "msg_date": "Tue, 21 Feb 2023 00:06:05 +0100", "msg_from": "Jim Jones <jim.jones@uni-muenster.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "On Mon, Feb 20, 2023 at 3:06 PM Jim Jones <jim.jones@uni-muenster.de> wrote:\n\n> As suggested by Peter and Nikolay, v15 now removes the xmlformat\n> function from the catalog and adds the [NO] INDENT option to\n> xmlserialize, as described in X069.\\\n>\n\nGreat. I'm checking this patch and it seems, indentation stops working if\nwe have a text node inside:\n\ngitpod=# select xmlserialize(document '<xml><more>13</more></xml>' as text\nindent);\n xmlserialize\n----------------------------------------\n <?xml version=\"1.0\" encoding=\"UTF-8\"?>+\n <xml> +\n <more>13</more> +\n </xml> +\n\n(1 row)\n\ngitpod=# select xmlserialize(document '<xml>text<more>13</more></xml>' as\ntext indent);\n xmlserialize\n----------------------------------------\n <?xml version=\"1.0\" encoding=\"UTF-8\"?>+\n <xml>text<more>13</more></xml> +\n\n(1 row)\n\nWorth to mention, Oracle behaves similarly -- indentation doesn't work:\nhttps://dbfiddle.uk/hRz5sXdM.\n\nBut is this as expected? Shouldn't it be like this:\n<xml>\n text\n <more>13</more>\n</xml>\n?", "msg_date": "Tue, 21 Feb 2023 23:05:38 -0800", "msg_from": "Nikolay Samokhvalov <samokhvalov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "Here are some review comments for patch v15-0001\n\nFYI, the patch applies clean and tests OK for me.\n\n======\ndoc/src/sgml/datatype.sgml\n\n1.\nXMLSERIALIZE ( { DOCUMENT | CONTENT } <replaceable>value</replaceable>\nAS <replaceable>type</replaceable> [ { NO INDENT | INDENT } ] )\n\n~\n\nAnother/shorter way to write that syntax is like below. For me, it is\neasier to read. YMMV.\n\nXMLSERIALIZE ( { DOCUMENT | CONTENT } <replaceable>value</replaceable>\nAS <replaceable>type</replaceable> [ [NO] INDENT ] )\n\n======\nsrc/backend/executor/execExprInterp.c\n\n2. 
ExecEvalXmlExpr\n\n@@ -3829,7 +3829,8 @@ ExecEvalXmlExpr(ExprState *state, ExprEvalStep *op)\n {\n Datum *argvalue = op->d.xmlexpr.argvalue;\n bool *argnull = op->d.xmlexpr.argnull;\n-\n+ bool indent = op->d.xmlexpr.xexpr->indent;\n+ text *data;\n /* argument type is known to be xml */\n Assert(list_length(xexpr->args) == 1);\nMissing whitespace after the variable declarations\n\n~~~\n\n3.\n+\n+ data = xmltotext_with_xmloption(DatumGetXmlP(value),\n+ xexpr->xmloption);\n+ if(indent)\n+ *op->resvalue = PointerGetDatum(xmlformat(data));\n+ else\n+ *op->resvalue = PointerGetDatum(data);\n+\n }\n\nUnnecessary blank line at the end.\n======\nsrc/backend/utils/adt/xml.\n\n4. xmlformat\n\n+#else\n+ NO_XML_SUPPORT();\n+return 0;\n+#endif\n\nWrong indentation (return 0) in the indentation function? ;-)\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Wed, 22 Feb 2023 18:20:04 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "On 22.02.23 08:05, Nikolay Samokhvalov wrote:\n>\n> But is this as expected? Shouldn't it be like this:\n> <xml>\n>   text\n>   <more>13</more>\n> </xml>\n> ?\n\nOracle and other parsers I know also do not work well with mixed \ncontents.[1,2] I believe libxml2's parser does not know where to put the \nnewline, as mixed values can contain more than one text node:\n\n<xml>text<more>13</more> text2 text3</xml> [3]\n\nAnd applying this logic the output could look like this ..\n\n<xml>text\n   <more>13</more>text2 text3\n</xml>\n\nor even this\n\n<xml>\n   text\n   <more>13</more>\n   text2 text3\n</xml>\n\n.. which doesn't seem right either. 
Perhaps a note about mixed contents \nin the docs would make things clearer?\n\nThanks for the review!\n\nJim\n\n1- https://xmlpretty.com/\n\n2- https://www.samltool.com/prettyprint.php\n\n3- https://dbfiddle.uk/_CcC8h3I", "msg_date": "Wed, 22 Feb 2023 10:15:50 +0100", "msg_from": "Jim Jones <jim.jones@uni-muenster.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "On 22.02.23 08:20, Peter Smith wrote:\n> Here are some review comments for patch v15-0001\n>\n> FYI, the patch applies clean and tests OK for me.\n>\n> ======\n> doc/src/sgml/datatype.sgml\n>\n> 1.\n> XMLSERIALIZE ( { DOCUMENT | CONTENT } <replaceable>value</replaceable>\n> AS <replaceable>type</replaceable> [ { NO INDENT | INDENT } ] )\n>\n> ~\n>\n> Another/shorter way to write that syntax is like below. For me, it is\n> easier to read. YMMV.\n>\n> XMLSERIALIZE ( { DOCUMENT | CONTENT } <replaceable>value</replaceable>\n> AS <replaceable>type</replaceable> [ [NO] INDENT ] )\nIndeed simpler to read.\n> ======\n> src/backend/executor/execExprInterp.c\n>\n> 2. ExecEvalXmlExpr\n>\n> @@ -3829,7 +3829,8 @@ ExecEvalXmlExpr(ExprState *state, ExprEvalStep *op)\n> {\n> Datum *argvalue = op->d.xmlexpr.argvalue;\n> bool *argnull = op->d.xmlexpr.argnull;\n> -\n> + bool indent = op->d.xmlexpr.xexpr->indent;\n> + text *data;\n> /* argument type is known to be xml */\n> Assert(list_length(xexpr->args) == 1);\n> Missing whitespace after the variable declarations\nWhitespace added.\n> ~~~\n>\n> 3.\n> +\n> + data = xmltotext_with_xmloption(DatumGetXmlP(value),\n> + xexpr->xmloption);\n> + if(indent)\n> + *op->resvalue = PointerGetDatum(xmlformat(data));\n> + else\n> + *op->resvalue = PointerGetDatum(data);\n> +\n> }\n>\n> Unnecessary blank line at the end.\nblank line removed.\n> ======\n> src/backend/utils/adt/xml.\n>\n> 4. xmlformat\n>\n> +#else\n> + NO_XML_SUPPORT();\n> +return 0;\n> +#endif\n>\n> Wrong indentation (return 0) in the indentation function? 
;-)\n\nindentation corrected.\n\nv16 attached.\n\nThanks a lot!\n\nJim", "msg_date": "Wed, 22 Feb 2023 10:35:59 +0100", "msg_from": "Jim Jones <jim.jones@uni-muenster.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "Here are some review comments for patch v16-0001.\n\n======\n> src/backend/executor/execExprInterp.c\n>\n> 2. ExecEvalXmlExpr\n>\n> @@ -3829,7 +3829,8 @@ ExecEvalXmlExpr(ExprState *state, ExprEvalStep *op)\n> {\n> Datum *argvalue = op->d.xmlexpr.argvalue;\n> bool *argnull = op->d.xmlexpr.argnull;\n> -\n> + bool indent = op->d.xmlexpr.xexpr->indent;\n> + text *data;\n> /* argument type is known to be xml */\n> Assert(list_length(xexpr->args) == 1);\n> Missing whitespace after the variable declarations\nWhitespace added.\n\n~\n\nOh, I meant something different to that fix. I meant there is a\nmissing blank line after the last ('data') variable declaration.\n\n======\nTest code.\n\nI wondered if there ought to be a test that demonstrates explicitly\nsaying NO INDENT will give the identical result to just omitting it.\n\nFor example:\n\ntest=# -- no indent is default\ntest=# SELECT xmlserialize(DOCUMENT '<foo><bar><val\nx=\"y\">42</val></bar></foo>' AS text) = xmlserialize(DOCUMENT\n'<foo><bar><val x=\"y\">42</val></bar></foo>' AS text NO INDENT);\n ?column?\n----------\n t\n(1 row)\n\ntest=# SELECT xmlserialize(CONTENT '<foo><bar><val\nx=\"y\">42</val></bar></foo>' AS text) = xmlserialize(CONTENT\n'<foo><bar><val x=\"y\">42</val></bar></foo>' AS text NO INDENT);\n ?column?\n----------\n t\n(1 row)\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 23 Feb 2023 09:45:02 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "On 22.02.23 23:45, Peter Smith wrote:\n> src/backend/executor/execExprInterp.c\n>> 2. 
ExecEvalXmlExpr\n>>\n>> @@ -3829,7 +3829,8 @@ ExecEvalXmlExpr(ExprState *state, ExprEvalStep *op)\n>> {\n>> Datum *argvalue = op->d.xmlexpr.argvalue;\n>> bool *argnull = op->d.xmlexpr.argnull;\n>> -\n>> + bool indent = op->d.xmlexpr.xexpr->indent;\n>> + text *data;\n>> /* argument type is known to be xml */\n>> Assert(list_length(xexpr->args) == 1);\n>> Missing whitespace after the variable declarations\n> Whitespace added.\n>\n> ~\n>\n> Oh, I meant something different to that fix. I meant there is a\n> missing blank line after the last ('data') variable declaration.\nI believe I see it now (it took me a while) :)\n> ======\n> Test code.\n>\n> I wondered if there ought to be a test that demonstrates explicitly\n> saying NO INDENT will give the identical result to just omitting it.\n>\n> For example:\n>\n> test=# -- no indent is default\n> test=# SELECT xmlserialize(DOCUMENT '<foo><bar><val\n> x=\"y\">42</val></bar></foo>' AS text) = xmlserialize(DOCUMENT\n> '<foo><bar><val x=\"y\">42</val></bar></foo>' AS text NO INDENT);\n> ?column?\n> ----------\n> t\n> (1 row)\n>\n> test=# SELECT xmlserialize(CONTENT '<foo><bar><val\n> x=\"y\">42</val></bar></foo>' AS text) = xmlserialize(CONTENT\n> '<foo><bar><val x=\"y\">42</val></bar></foo>' AS text NO INDENT);\n> ?column?\n> ----------\n> t\n> (1 row)\n\nActually NO INDENT just ignores this feature and doesn't call the \nfunction at all, so in this particular case the result sets will always \nbe identical. 
But yes, I totally agree that a test case for that is also \nimportant.\n\nv17 attached.\n\nThanks!\n\nBest, Jim", "msg_date": "Thu, 23 Feb 2023 00:41:53 +0100", "msg_from": "Jim Jones <jim.jones@uni-muenster.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "Here are my review comments for patch v17-0001.\n\n======\nsrc/test/regress/sql/xml.sql\n\nThe blank line(s) which previously separated the xmlserialize tests\nfrom the xml IS [NOT] DOCUMENT tests are now missing...\n\n\ne.g.\n\n-- indent different encoding (returns UTF-8)\nSELECT xmlserialize(DOCUMENT '<?xml version=\"1.0\"\nencoding=\"ISO-8859-1\"?><foo><bar><val>&#52;&#50;</val></bar></foo>' AS\ntext INDENT);\nSELECT xmlserialize(CONTENT '<?xml version=\"1.0\"\nencoding=\"ISO-8859-1\"?><foo><bar><val>&#52;&#50;</val></bar></foo>' AS\ntext INDENT);\n-- 'no indent' = not using 'no indent'\nSELECT xmlserialize(DOCUMENT '<foo><bar><val\nx=\"y\">42</val></bar></foo>' AS text) = xmlserialize(DOCUMENT\n'<foo><bar><val x=\"y\">42</val></bar></foo>' AS text NO INDENT);\nSELECT xmlserialize(CONTENT '<foo><bar><val\nx=\"y\">42</val></bar></foo>' AS text) = xmlserialize(CONTENT\n'<foo><bar><val x=\"y\">42</val></bar></foo>' AS text NO INDENT);\nSELECT xml '<foo>bar</foo>' IS DOCUMENT;\nSELECT xml '<foo>bar</foo><bar>foo</bar>' IS DOCUMENT;\nSELECT xml '<abc/>' IS NOT DOCUMENT;\nSELECT xml 'abc' IS NOT DOCUMENT;\nSELECT '<>' IS NOT DOCUMENT;\n\n~~\n\nApart from that, patch v17 LGTM.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 23 Feb 2023 12:52:03 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "On 23.02.23 02:52, Peter Smith wrote:\n> Here are my review comments for patch v17-0001.\n>\n> ======\n> src/test/regress/sql/xml.sql\n>\n> The blank line(s) which previously separated the xmlserialize 
tests\n> from the xml IS [NOT] DOCUMENT tests are now missing...\n\nv18 adds a new line in the xml.sql file to separate the xmlserialize \ntest cases from the rest.\n\nThanks!\n\nBest, Jim", "msg_date": "Thu, 23 Feb 2023 07:36:16 +0100", "msg_from": "Jim Jones <jim.jones@uni-muenster.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "On 23.02.23 07:36, Jim Jones wrote:\n> On 23.02.23 02:52, Peter Smith wrote:\n>> Here are my review comments for patch v17-0001.\n>>\n>> ======\n>> src/test/regress/sql/xml.sql\n>>\n>> The blank line(s) which previously separated the xmlserialize tests\n>> from the xml IS [NOT] DOCUMENT tests are now missing...\n> \n> v18 adds a new line in the xml.sql file to separate the xmlserialize \n> test cases from the rest.\n\nIn kwlist.h you have\n\n PG_KEYWORD(\"indent\", INDENT, UNRESERVED_KEYWORD, AS_LABEL)\n\nbut you can actually make it BARE_LABEL, which is preferable.\n\nMore importantly, you need to add the new keyword to the \nbare_label_keyword production in gram.y. I thought we had some tooling \nin the build system to catch this kind of omission, but it's apparently \nnot working right now.\n\nElsewhere, let's rename the xmlformat() C function to xmlserialize() (or \nmaybe something like xmlserialize_indent()), so the association is clearer.\n\n\n\n", "msg_date": "Thu, 23 Feb 2023 08:51:47 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "On 23.02.23 08:51, Peter Eisentraut wrote:\n> In kwlist.h you have\n>\n>     PG_KEYWORD(\"indent\", INDENT, UNRESERVED_KEYWORD, AS_LABEL)\n>\n> but you can actually make it BARE_LABEL, which is preferable.\n>\n> More importantly, you need to add the new keyword to the \n> bare_label_keyword production in gram.y.  
I thought we had some \n> tooling in the build system to catch this kind of omission, but it's \n> apparently not working right now.\nEntry in kwlist.h changed to BARE_LABEL.\n>\n> Elsewhere, let's rename the xmlformat() C function to xmlserialize() \n> (or maybe something like xmlserialize_indent()), so the association is \n> clearer.\n>\nxmlserialize_indent sounds much better and makes the association indeed \nclearer. Changed in v19.\n\nv19 attached.\n\nThanks for the review!\n\nBest, Jim", "msg_date": "Thu, 23 Feb 2023 09:20:00 +0100", "msg_from": "Jim Jones <jim.jones@uni-muenster.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "The patch v19 LGTM.\n\n- v19 applies cleanly for me\n- Full clean build OK\n- HTML docs build and render OK\n- The 'make check' tests all pass for me\n- Also cfbot reports latest patch has no errors -- http://cfbot.cputube.org/\n\nSo, I marked it a \"Ready for Committer\" in the CF --\nhttps://commitfest.postgresql.org/42/4162/\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 24 Feb 2023 09:30:59 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "While reviewing this patch, I started to wonder why we don't eliminate\nthe maintenance hassle of xml_1.out by putting in a short-circuit\nat the top of the test, similar to those in some other scripts:\n\n/* skip test if XML support not compiled in */\nSELECT '<value>one</value>'::xml;\n\\if :ERROR\n\\quit\n\\endif\n\n(and I guess xmlmap.sql could get the same treatment).\n\nThe only argument I can think of against it is that the current\napproach ensures we produce a clean error (and not, say, a crash)\nfor all xml.c entry points not just xml_in. I'm not sure how much\nthat's worth though. 
The compiler/linker would tell us if we miss\ncompiling out every reference to libxml2.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 09 Mar 2023 12:38:07 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "On 09.03.23 18:38, Tom Lane wrote:\n> While reviewing this patch, I started to wonder why we don't eliminate\n> the maintenance hassle of xml_1.out by putting in a short-circuit\n> at the top of the test, similar to those in some other scripts:\n>\n> /* skip test if XML support not compiled in */\n> SELECT '<value>one</value>'::xml;\n> \\if :ERROR\n> \\quit\n> \\endif\n>\n> (and I guess xmlmap.sql could get the same treatment).\n>\n> The only argument I can think of against it is that the current\n> approach ensures we produce a clean error (and not, say, a crash)\n> for all xml.c entry points not just xml_in. I'm not sure how much\n> that's worth though. The compiler/linker would tell us if we miss\n> compiling out every reference to libxml2.\n>\n> Thoughts?\n>\n> \t\t\tregards, tom lane\n\nHi Tom,\n\nI agree it would make things easier and it could indeed save some time \n(and some CI runs ;)).\n\nHowever, checking in the absence of libxml2 if an error message is \nraised, and checking if this error message is the one we expect, is IMHO \nalso a very nice test. But I guess I could also live with skipping the \nwhole thing.\n\nBest, Jim", "msg_date": "Thu, 9 Mar 2023 19:41:29 +0100", "msg_from": "Jim Jones <jim.jones@uni-muenster.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "Peter Smith <smithpb2250@gmail.com> writes:\n> The patch v19 LGTM.\n\nI've looked through this now, and have some minor complaints and a major\none. The major one is that it doesn't work for XML that doesn't satisfy\nIS DOCUMENT. 
For example,\n\nregression=# select '<bar><val x=\"y\">42</val></bar><foo></foo>'::xml is document;\n ?column? \n----------\n f\n(1 row)\n\nregression=# select xmlserialize (content '<bar><val x=\"y\">42</val></bar><foo></foo>' as text);\n xmlserialize \n-------------------------------------------\n <bar><val x=\"y\">42</val></bar><foo></foo>\n(1 row)\n\nregression=# select xmlserialize (content '<bar><val x=\"y\">42</val></bar><foo></foo>' as text indent);\nERROR: invalid XML document\nDETAIL: line 1: Extra content at the end of the document\n<bar><val x=\"y\">42</val></bar><foo></foo>\n ^\n\nThis is not what the documentation promises, and I don't think it's\ngood enough --- the SQL spec has no restriction saying you can't\nuse INDENT with CONTENT. I tried adjusting things so that we call\nxml_parse() with the appropriate DOCUMENT or CONTENT xmloption flag,\nbut all that got me was empty output (except for a document header).\nIt seems like xmlDocDumpFormatMemory is not the thing to use, at least\nnot in the CONTENT case. But libxml2 has a few other \"dump\"\nfunctions, so maybe we can use a different one? I see we are using\nxmlNodeDump elsewhere, and that has a format option, so maybe there's\na way forward there.\n\nA lesser issue is that INDENT tacks on a document header (XML declaration)\nwhether there was one or not. I'm not sure whether that's an appropriate\nthing to do in the DOCUMENT case, but it sure seems weird in the CONTENT\ncase. We have code that can strip off the header again, but we\nneed to figure out exactly when to apply it.\n\nI also suspect that it's outright broken to attach a header claiming\nthe data is now in UTF8 encoding. If the database encoding isn't\nUTF8, then either that's a lie or we now have an encoding violation.\n\nAnother thing that's mildly irking me is that the current\nfactorization of this code will result in xml_parse'ing the data\ntwice, if you have both DOCUMENT and INDENT specified. 
We could\nconsider avoiding that if we merged the indentation functionality\ninto xmltotext_with_xmloption, but it's probably premature to do so\nwhen we haven't figured out how to get the output right --- we might\nend up needing two xml_parse calls anyway with different parameters,\nperhaps.\n\nI also had a bunch of cosmetic complaints (mostly around this having\na bad case of add-at-the-end-itis), which I've cleaned up in the\nattached v20. This doesn't address any of the above, however.\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 09 Mar 2023 15:21:36 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "Thanks a lot for the review!\n\nOn 09.03.23 21:21, Tom Lane wrote:\n> I've looked through this now, and have some minor complaints and a major\n> one. The major one is that it doesn't work for XML that doesn't satisfy\n> IS DOCUMENT. For example,\n>\n> regression=# select '<bar><val x=\"y\">42</val></bar><foo></foo>'::xml is document;\n> ?column?\n> ----------\n> f\n> (1 row)\n>\n> regression=# select xmlserialize (content '<bar><val x=\"y\">42</val></bar><foo></foo>' as text);\n> xmlserialize\n> -------------------------------------------\n> <bar><val x=\"y\">42</val></bar><foo></foo>\n> (1 row)\n>\n> regression=# select xmlserialize (content '<bar><val x=\"y\">42</val></bar><foo></foo>' as text indent);\n> ERROR: invalid XML document\n> DETAIL: line 1: Extra content at the end of the document\n> <bar><val x=\"y\">42</val></bar><foo></foo>\n> ^\n\nI assumed it should fail because the XML string doesn't have a \nsingly-rooted XML. Oracle has this feature implemented and it does not \nseem to allow non singly-rooted strings either[1]. Also, some the tools \nI use also fail in this case[2,3]\n\nHow do you suggest the output should look like? Does the SQL spec also \ndefine it? 
I can't find it online :(\n\n> This is not what the documentation promises, and I don't think it's\n> good enough --- the SQL spec has no restriction saying you can't\n> use INDENT with CONTENT. I tried adjusting things so that we call\n> xml_parse() with the appropriate DOCUMENT or CONTENT xmloption flag,\n> but all that got me was empty output (except for a document header).\n> It seems like xmlDocDumpFormatMemory is not the thing to use, at least\n> not in the CONTENT case. But libxml2 has a few other \"dump\"\n> functions, so maybe we can use a different one? I see we are using\n> xmlNodeDump elsewhere, and that has a format option, so maybe there's\n> a way forward there.\n>\n> A lesser issue is that INDENT tacks on a document header (XML declaration)\n> whether there was one or not. I'm not sure whether that's an appropriate\n> thing to do in the DOCUMENT case, but it sure seems weird in the CONTENT\n> case. We have code that can strip off the header again, but we\n> need to figure out exactly when to apply it.\nI replaced xmlDocDumpFormatMemory with xmlSaveToBuffer and used to \noption XML_SAVE_NO_DECL for input docs with XML declaration. It no \nlonger returns a XML declaration if the input doc does not contain it.\n> I also suspect that it's outright broken to attach a header claiming\n> the data is now in UTF8 encoding. If the database encoding isn't\n> UTF8, then either that's a lie or we now have an encoding violation.\nI was mistakenly calling xml_parse with GetDatabaseEncoding(). It now \nuses the encoding of the given doc and UTF8 if not provided.\n> Another thing that's mildly irking me is that the current\n> factorization of this code will result in xml_parse'ing the data\n> twice, if you have both DOCUMENT and INDENT specified. 
We could\n> consider avoiding that if we merged the indentation functionality\n> into xmltotext_with_xmloption, but it's probably premature to do so\n> when we haven't figured out how to get the output right --- we might\n> end up needing two xml_parse calls anyway with different parameters,\n> perhaps.\n>\n> I also had a bunch of cosmetic complaints (mostly around this having\n> a bad case of add-at-the-end-itis), which I've cleaned up in the\n> attached v20. This doesn't address any of the above, however.\nI swear to god I have no idea what \"add-at-the-end-itis\" means :)\n> \t\t\tregards, tom lane\n\nThanks a lot!\n\nBest, Jim\n\n1 - https://dbfiddle.uk/WUOWtjBU\n\n2 - https://www.samltool.com/prettyprint.php\n\n3 - https://xmlpretty.com/xmlpretty", "msg_date": "Fri, 10 Mar 2023 14:30:07 +0100", "msg_from": "Jim Jones <jim.jones@uni-muenster.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "Jim Jones <jim.jones@uni-muenster.de> writes:\n> On 09.03.23 21:21, Tom Lane wrote:\n>> I've looked through this now, and have some minor complaints and a major\n>> one. The major one is that it doesn't work for XML that doesn't satisfy\n>> IS DOCUMENT. For example,\n\n> How do you suggest the output should look like?\n\nI'd say a series of node trees, each starting on a separate line.\n\n>> I also suspect that it's outright broken to attach a header claiming\n>> the data is now in UTF8 encoding. If the database encoding isn't\n>> UTF8, then either that's a lie or we now have an encoding violation.\n\n> I was mistakenly calling xml_parse with GetDatabaseEncoding(). It now \n> uses the encoding of the given doc and UTF8 if not provided.\n\nMmmm .... doing this differently from what we do elsewhere does not\nsound like the right path forward. 
The input *is* (or had better be)\nin the database encoding.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 10 Mar 2023 09:32:56 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "On 10.03.23 15:32, Tom Lane wrote:\n> Jim Jones<jim.jones@uni-muenster.de> writes:\n>> On 09.03.23 21:21, Tom Lane wrote:\n>>> I've looked through this now, and have some minor complaints and a major\n>>> one. The major one is that it doesn't work for XML that doesn't satisfy\n>>> IS DOCUMENT. For example,\n>> How do you suggest the output should look like?\n> I'd say a series of node trees, each starting on a separate line.\n\nv22 attached enables the usage of INDENT with non singly-rooted xml.\n\npostgres=# SELECT xmlserialize (CONTENT '<bar><val \nx=\"y\">42</val></bar><foo>73</foo>' AS text INDENT);\n      xmlserialize\n-----------------------\n  <bar>                +\n    <val x=\"y\">42</val>+\n  </bar>               +\n  <foo>73</foo>\n(1 row)\n\nI tried several libxml2 dump functions and none of them could cope very \nwell with an xml string without a root node. So added them into a \ntemporary root node, so that I could iterate over its children and add \nthem one by one (formatted) into the output buffer.\n\nI slightly modified the existing xml_parse() function to return the list \nof nodes parsed by xmlParseBalancedChunkMemory:\n\nxml_parse(text *data, XmlOptionType xmloption_arg, bool preserve_whitespace,\n           int encoding, Node *escontext, *xmlNodePtr *parsed_nodes*)\n\nres_code = xmlParseBalancedChunkMemory(doc, NULL, NULL, 0,\nutf8string + count, *parsed_nodes*);\n\n>> I was mistakenly calling xml_parse with GetDatabaseEncoding(). It now\n>> uses the encoding of the given doc and UTF8 if not provided.\n> Mmmm .... doing this differently from what we do elsewhere does not\n> sound like the right path forward. 
The input *is* (or had better be)\n> in the database encoding.\nI changed that behavior. It now uses GetDatabaseEncoding();\n\nThanks!\n\nBest, Jim", "msg_date": "Mon, 13 Mar 2023 13:08:11 +0100", "msg_from": "Jim Jones <jim.jones@uni-muenster.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "On 09.03.23 21:21, Tom Lane wrote:\n> Peter Smith <smithpb2250@gmail.com> writes:\n>> The patch v19 LGTM.\n> Another thing that's mildly irking me is that the current\n> factorization of this code will result in xml_parse'ing the data\n> twice, if you have both DOCUMENT and INDENT specified. We could\n> consider avoiding that if we merged the indentation functionality\n> into xmltotext_with_xmloption, but it's probably premature to do so\n> when we haven't figured out how to get the output right --- we might\n> end up needing two xml_parse calls anyway with different parameters,\n> perhaps.\n\nJust a thought: since xmlserialize_indent also calls xml_parse() to \nbuild the xmlDocPtr, couldn't we simply bypass \nxmltotext_with_xmloption() in case of INDENT is specified?\n\nSomething like this:\n\ndiff --git a/src/backend/executor/execExprInterp.c \nb/src/backend/executor/execExprInterp.c\nindex 19351fe..ea808dd 100644\n--- a/src/backend/executor/execExprInterp.c\n+++ b/src/backend/executor/execExprInterp.c\n@@ -3829,6 +3829,7 @@ ExecEvalXmlExpr(ExprState *state, ExprEvalStep *op)\n         {\n                 Datum      *argvalue = op->d.xmlexpr.argvalue;\n                 bool       *argnull = op->d.xmlexpr.argnull;\n+                               text       *result;\n\n                 /* argument type is known to be xml */\n                 Assert(list_length(xexpr->args) == 1);\n@@ -3837,8 +3838,14 @@ ExecEvalXmlExpr(ExprState *state, ExprEvalStep *op)\n                         return;\n                 value = argvalue[0];\n\n-                               *op->resvalue = 
\nPointerGetDatum(xmltotext_with_xmloption(DatumGetXmlP(value),\n- xexpr->xmloption));\n+                               if (xexpr->indent)\n+                                       result = \nxmlserialize_indent(DatumGetXmlP(value),\n+ xexpr->xmloption);\n+                               else\n+                                       result = \nxmltotext_with_xmloption(DatumGetXmlP(value),\n+ xexpr->xmloption);\n+\n+                               *op->resvalue = PointerGetDatum(result);\n                 *op->resnull = false;\n         }\n\n         break;\n\n\n\n\n", "msg_date": "Tue, 14 Mar 2023 13:58:28 +0100", "msg_from": "Jim Jones <jim.jones@uni-muenster.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "Jim Jones <jim.jones@uni-muenster.de> writes:\n> [ v22-0001-Add-pretty-printed-XML-output-option.patch ]\n\nI poked at this for awhile and ran into a problem that I'm not sure\nhow to solve: it misbehaves for input with embedded DOCTYPE.\n\nregression=# SELECT xmlserialize(DOCUMENT '<!DOCTYPE a><a/>' as text indent);\n xmlserialize \n--------------\n <!DOCTYPE a>+\n <a></a> +\n \n(1 row)\n\nregression=# SELECT xmlserialize(CONTENT '<!DOCTYPE a><a/>' as text indent);\n xmlserialize \n--------------\n \n(1 row)\n\nThe bad result for CONTENT is because xml_parse() decides to\nparse_as_document, but xmlserialize_indent has no idea that happened\nand tries to use the content_nodes list anyway. I don't especially\ncare for the laissez faire \"maybe we'll set *content_nodes and maybe\nwe won't\" API you adopted for xml_parse, which seems to be contributing\nto the mess. We could pass back more info so that xmlserialize_indent\nknows what really happened. However, that won't fix the bogus output\nfor the DOCUMENT case. 
Are we perhaps passing incorrect flags to\nxmlSaveToBuffer?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 14 Mar 2023 13:40:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "On 14.03.23 18:40, Tom Lane wrote:\n> Jim Jones <jim.jones@uni-muenster.de> writes:\n>> [ v22-0001-Add-pretty-printed-XML-output-option.patch ]\n> I poked at this for awhile and ran into a problem that I'm not sure\n> how to solve: it misbehaves for input with embedded DOCTYPE.\n>\n> regression=# SELECT xmlserialize(DOCUMENT '<!DOCTYPE a><a/>' as text indent);\n> xmlserialize\n> --------------\n> <!DOCTYPE a>+\n> <a></a> +\n> \n> (1 row)\n\nThe issue was the flag XML_SAVE_NO_EMPTY. It was forcing empty elements \nto be serialized with start-end tag pairs. Removing it did the trick ...\n\npostgres=# SELECT xmlserialize(DOCUMENT '<!DOCTYPE a><a/>' AS text INDENT);\n  xmlserialize\n--------------\n  <!DOCTYPE a>+\n  <a/>        +\n\n(1 row)\n\n... but as a side effect empty start-end tags will be now serialized as \nempty elements\n\npostgres=# SELECT xmlserialize(CONTENT '<foo><bar></bar></foo>' AS text \nINDENT);\n  xmlserialize\n--------------\n  <foo>       +\n    <bar/>    +\n  </foo>\n(1 row)\n\nIt seems to be the standard behavior of other xml indent tools \n(including Oracle)\n\n> regression=# SELECT xmlserialize(CONTENT '<!DOCTYPE a><a/>' as text indent);\n> xmlserialize\n> --------------\n> \n> (1 row)\n>\n> The bad result for CONTENT is because xml_parse() decides to\n> parse_as_document, but xmlserialize_indent has no idea that happened\n> and tries to use the content_nodes list anyway. I don't especially\n> care for the laissez faire \"maybe we'll set *content_nodes and maybe\n> we won't\" API you adopted for xml_parse, which seems to be contributing\n> to the mess. 
We could pass back more info so that xmlserialize_indent\n> knows what really happened.\n\nI added a new (nullable) parameter to the xml_parse function that will \nreturn the actual XmlOptionType used to parse the xml data. Now \nxmlserialize_indent knows how the data was really parsed:\n\npostgres=# SELECT xmlserialize(CONTENT '<!DOCTYPE a><a/>' AS text INDENT);\n  xmlserialize\n--------------\n  <!DOCTYPE a>+\n  <a/>        +\n\n(1 row)\n\nI added test cases for these queries.\n\nv23 attached.\n\nThanks!\n\nBest, Jim", "msg_date": "Tue, 14 Mar 2023 23:57:22 +0100", "msg_from": "Jim Jones <jim.jones@uni-muenster.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "Jim Jones <jim.jones@uni-muenster.de> writes:\n> On 14.03.23 18:40, Tom Lane wrote:\n>> I poked at this for awhile and ran into a problem that I'm not sure\n>> how to solve: it misbehaves for input with embedded DOCTYPE.\n\n> The issue was the flag XML_SAVE_NO_EMPTY. It was forcing empty elements \n> to be serialized with start-end tag pairs. Removing it did the trick ...\n> ... but as a side effect empty start-end tags will be now serialized as \n> empty elements\n\n> postgres=# SELECT xmlserialize(CONTENT '<foo><bar></bar></foo>' AS text \n> INDENT);\n>  xmlserialize\n> --------------\n>  <foo>       +\n>    <bar/>    +\n>  </foo>\n> (1 row)\n\nHuh, interesting. That is a legitimate pretty-fication of the input,\nI suppose, but some people might think it goes beyond the charter of\n\"indentation\". I'm okay with it personally; anyone want to object?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 14 Mar 2023 19:25:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "I wrote:\n> Huh, interesting. 
That is a legitimate pretty-fication of the input,\n> I suppose, but some people might think it goes beyond the charter of\n> \"indentation\". I'm okay with it personally; anyone want to object?\n\nHearing no objections to that, I moved ahead with this.\n\nIt occurred to me to test v23 for memory leaks, and it had bad ones:\n\n* the \"newline\" node used in the CONTENT case never got freed.\nMaking another one for each line wasn't helping, either.\n\n* libxml, at least in the 2.9.7 version I have here, turns out to\nleak memory if you pass a non-null encoding to xmlSaveToBuffer.\nBut AFAICS we don't really need to do that, because the last thing\nwe want is for libxml to try to do any encoding conversion.\n\nAfter cleaning that up, I saw that we were indeed doing essentially\nduplicative xml_parse calls for the DOCUMENT check and the indentation\nwork, so I refactored to allow just one call to serve.\n\nPushed with those changes and some other cosmetic cleanup.\nThanks for working so hard on this!\n\n(Now to keep an eye on the buildfarm, to see if other versions of\nlibxml work like mine ...)\n\nBTW, the libxml leak problem seems to extend to other cases too.\nI tested with code like\n\ndo $$\ndeclare x xml; t text;\nbegin\nx := '<?xml version=\"1.0\" encoding=\"utf8\"?><foo><bar><val>73</val></bar></foo>';\nfor i in 1..10000000 loop\n t := xmlserialize(document x as text);\nend loop;\nraise notice 't = %', t;\nend;\n$$;\n\nThat case is fine, but if you change the encoding spec to \"latin1\",\nit leaks like mad. That problem is not the fault of this patch,\nI don't think. 
I wonder if we need to do something to prevent\nlibxml from seeing encoding declarations other than utf8?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 15 Mar 2023 17:13:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" }, { "msg_contents": "I wrote:\n> BTW, the libxml leak problem seems to extend to other cases too.\n> I tested with code like\n\n> do $$\n> declare x xml; t text;\n> begin\n> x := '<?xml version=\"1.0\" encoding=\"utf8\"?><foo><bar><val>73</val></bar></foo>';\n> for i in 1..10000000 loop\n> t := xmlserialize(document x as text);\n> end loop;\n> raise notice 't = %', t;\n> end;\n> $$;\n\n> That case is fine, but if you change the encoding spec to \"latin1\",\n> it leaks like mad. That problem is not the fault of this patch,\n> I don't think. I wonder if we need to do something to prevent\n> libxml from seeing encoding declarations other than utf8?\n\nAfter a bit of further testing: the leak is present in libxml2 2.9.7\nwhich is what I have on this RHEL8 box, but it seems not to occur\nin libxml2 2.10.3 (tested on Fedora 37, and I verified that Fedora\nisn't carrying any relevant local patch).\n\nSo maybe it's worth working around that, or maybe it isn't.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 15 Mar 2023 17:38:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Memory leak in libxml2 (was Re: [PATCH] Add pretty-printed XML output\n option)" }, { "msg_contents": "> On 15 Mar 2023, at 22:38, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> After a bit of further testing: the leak is present in libxml2 2.9.7\n> which is what I have on this RHEL8 box, but it seems not to occur\n> in libxml2 2.10.3 (tested on Fedora 37, and I verified that Fedora\n> isn't carrying any relevant local patch).\n> \n> So maybe it's worth working around that, or maybe it isn't.\n\n2.9.7 is from November 2017 and 2.10.3 is from 
October 2022, so depending on\nwhen in that timespan the issue was fixed it might be in a release which will\nbe with us for quite some time. The lack of reports (that I was able to find)\nindicate that it might be rare in production though?\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Wed, 15 Mar 2023 23:17:12 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Memory leak in libxml2 (was Re: [PATCH] Add pretty-printed XML\n output option)" }, { "msg_contents": "On 15.03.23 22:13, Tom Lane wrote:\n> I wrote:\n> It occurred to me to test v23 for memory leaks, and it had bad ones:\n> * the \"newline\" node used in the CONTENT case never got freed.\n> Making another one for each line wasn't helping, either.\nOh, I did really miss that one. Thanks!\n> Pushed with those changes and some other cosmetic cleanup.\n> Thanks for working so hard on this!\nGreat! Thank you, Peter and Andrey for the very nice reviews.\n> BTW, the libxml leak problem seems to extend to other cases too.\n> I tested with code like\n>\n> do $$\n> declare x xml; t text;\n> begin\n> x := '<?xml version=\"1.0\" encoding=\"utf8\"?><foo><bar><val>73</val></bar></foo>';\n> for i in 1..10000000 loop\n> t := xmlserialize(document x as text);\n> end loop;\n> raise notice 't = %', t;\n> end;\n> $$;\n>\n> That case is fine, but if you change the encoding spec to \"latin1\",\n> it leaks like mad. That problem is not the fault of this patch,\n> I don't think. I wonder if we need to do something to prevent\n> libxml from seeing encoding declarations other than utf8?\n\nIn my environment (libxml2 v2.9.10 and Ubuntu 22.04) I couldn't \nreproduce this memory leak. It's been most likely fixed in further \nlibxml2 versions. 
Unfortunately their gitlab page has no release notes \nfrom versions prior to 2.9.13 :(\n\nBest, Jim\n\n\n\n", "msg_date": "Thu, 16 Mar 2023 20:07:06 +0100", "msg_from": "Jim Jones <jim.jones@uni-muenster.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add pretty-printed XML output option" } ]
[ { "msg_contents": "Here is a patch that removes some unused leftovers from commit \ncfd9be939e9c516243c5b6a49ad1e1a9a38f1052 (old).", "msg_date": "Thu, 2 Feb 2023 22:58:32 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Remove unused code related to unknown type" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> Here is a patch that removes some unused leftovers from commit \n> cfd9be939e9c516243c5b6a49ad1e1a9a38f1052 (old).\n\nUgh. Those are outright wrong now, aren't they? Better nuke 'em.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 02 Feb 2023 18:12:46 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Remove unused code related to unknown type" }, { "msg_contents": "On 03.02.23 00:12, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> Here is a patch that removes some unused leftovers from commit\n>> cfd9be939e9c516243c5b6a49ad1e1a9a38f1052 (old).\n> \n> Ugh. Those are outright wrong now, aren't they? Better nuke 'em.\n\ndone\n\n\n\n", "msg_date": "Sat, 4 Feb 2023 08:02:47 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Remove unused code related to unknown type" } ]
[ { "msg_contents": "I have found that in some corners of the code some calls to standard C \nfunctions are decorated with casts to (void *) for no reason, and this \ncode pattern then gets copied around. I have gone through and cleaned \nthis up a bit, in the attached patches.\n\nThe involved functions are: repalloc, memcpy, memset, memmove, memcmp, \nqsort, bsearch\n\nAlso hash_search(), for which there was a historical reason (the \nargument used to be char *), but not anymore.", "msg_date": "Thu, 2 Feb 2023 23:22:27 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Remove some useless casts to (void *)" }, { "msg_contents": "On Thu, Feb 2, 2023 at 5:22 PM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> I have found that in some corners of the code some calls to standard C\n> functions are decorated with casts to (void *) for no reason, and this\n> code pattern then gets copied around. I have gone through and cleaned\n> this up a bit, in the attached patches.\n>\n> The involved functions are: repalloc, memcpy, memset, memmove, memcmp,\n> qsort, bsearch\n>\n> Also hash_search(), for which there was a historical reason (the\n> argument used to be char *), but not anymore.\n\n\n+1\n\nAll code is example code.\n\nApplies.\nPasses make check world.", "msg_date": "Thu, 2 Feb 2023 18:59:44 -0500", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove some useless casts to (void *)" }, { "msg_contents": "On 03.02.23 00:59, Corey Huinker wrote:\n> On Thu, Feb 2, 2023 at 5:22 PM Peter Eisentraut \n> <peter.eisentraut@enterprisedb.com \n> <mailto:peter.eisentraut@enterprisedb.com>> wrote:\n> \n> I have found that in some corners of the code some calls to standard C\n> functions are decorated with casts to (void *) for no reason, and this\n> code pattern then gets copied around.  I have gone through and cleaned\n> this up a bit, in the attached patches.\n> \n> The involved functions are: repalloc, memcpy, memset, memmove, memcmp,\n> qsort, bsearch\n> \n> Also hash_search(), for which there was a historical reason (the\n> argument used to be char *), but not anymore.\n> \n> \n> +1\n\ncommitted\n\n> All code is example code.\n\nI like that one!\n\n\n\n", "msg_date": "Tue, 7 Feb 2023 08:09:57 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Remove some useless casts to (void *)" } ]
[ { "msg_contents": "Hey All,\n\nI have a query like below\n\n\nSELECT * FROM data WHERE val=trunc(val) AND\nacc_id='kfd50ed6-0bc3-44a9-881f-ec89713fdd80'::uuid ORDER BY ct DESC LIMIT\n10;\n\ntable structure is\n\n data\n(id uuid,\n c_id uuid,\n acc_id uuid,\n val numeric,\n ct timestamptz);\n\n\nCan you please help me to write an index?\n\nor Can someone write an index and revert here..\n\nits very urgent for me.\n\nThanks\n-- \nThanks,\nChanukya SDS\n+918186868384\n\nSent From My iPhone", "msg_date": "Fri, 3 Feb 2023 09:03:03 +0530", "msg_from": "chanukya SDS <chanukyasds@gmail.com>", "msg_from_op": true, "msg_subject": "Index problem Need an urgent fix" }, { "msg_contents": "On Fri, Feb 3, 2023 at 9:03 AM chanukya SDS <chanukyasds@gmail.com> wrote:\n>\n> Hey All,\n>\n> I have a query like below\n>\n> SELECT * FROM data WHERE val=trunc(val) AND acc_id='kfd50ed6-0bc3-44a9-881f-ec89713fdd80'::uuid ORDER BY ct DESC LIMIT 10;\n>\n> table structure is\n>\n> data\n> (id uuid,\n> c_id uuid,\n> acc_id uuid,\n> val numeric,\n> ct timestamptz);\n>\n> Can you please help me to write an index?\n>\n> or Can someone write an index and revert here..\n>\n> its very urgent for me.\n\nThanks for reaching out. I think the question is more generic without\nthe problem being described. Is the SELECT query taking more time? If\nyes, why? Have you looked at EXPLAIN (ANALYZE...) output? 
Have you\ntried to tune/vary configuration settings to see if it helps?\n\nAlso, the best place to ask this question is pgsql-general or pgsql-sql.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 3 Feb 2023 10:12:10 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Index problem Need an urgent fix" }, { "msg_contents": "hi,\n\nLe ven. 3 févr. 2023 à 11:33, chanukya SDS <chanukyasds@gmail.com> a écrit :\n\n> Hey All,\n>\n> I have a query like below\n>\n>\n> SELECT * FROM data WHERE val=trunc(val) AND\n> acc_id='kfd50ed6-0bc3-44a9-881f-ec89713fdd80'::uuid ORDER BY ct DESC LIMIT\n> 10;\n>\n> table structure is\n>\n> data\n> (id uuid,\n> c_id uuid,\n> acc_id uuid,\n> val numeric,\n> ct timestamptz);\n>\n>\n> Can you please help me to write an index?\n>\n> or Can someone write an index and revert here..\n>\n> its very urgent for me.\n>\n\npgsql-hackers isn't a suitable mailing list for this question, you should\nuse psql-general or psql-performance. please also look at\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions for details to\nprovide.\n\nnote that if you're not really sure of what index would help and don't want\nto disturb your database too much (and assuming you don't have another\nenvironment to test things on), you can use hypopg to create hypothetical\nindexes and see how many index definitions would behave without actually\ncreating them: https://github.com/HypoPG/hypopg\n\n>\n", "msg_date": "Fri, 3 Feb 2023 12:43:43 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Index problem Need an urgent fix" } ]
[ { "msg_contents": "When I use 'create table t(a int);'; suppose that this table t's oid is 1200,\nthen postgres will create a file named 1200 in the $PGDATA/base, So where\nis the logic code in the internal?\n\n\n--------------\n\n\n\njacktby@gmail.com\n\n\n", "msg_date": "Fri, 3 Feb 2023 17:18:50 +0800", "msg_from": "\"jacktby@gmail.com\" <jacktby@gmail.com>", "msg_from_op": true, "msg_subject": "Where is the logig to create a table file?" }, { "msg_contents": "Hi, Jack\n\nOn Fri, 3 Feb 2023 at 13:19, jacktby@gmail.com <jacktby@gmail.com> wrote:\n>\n> When I use 'create table t(a int);'; suppose that this table t's oid is 1200,\n> then postgres will create a file named 1200 in the $PGDATA/base, So where\n> is the logic code in the internal?\n>\nheapam_relation_set_new_filenode()->RelationCreateStorage()\n\nKind regards,\nPavel Borisov,\nSupabase\n\n\n", "msg_date": "Fri, 3 Feb 2023 13:44:46 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Where is the logig to create a table file?" }, { "msg_contents": "At Fri, 3 Feb 2023 13:44:46 +0400, Pavel Borisov <pashkin.elfe@gmail.com> wrote in \n> Hi, Jack\n> \n> On Fri, 3 Feb 2023 at 13:19, jacktby@gmail.com <jacktby@gmail.com> wrote:\n> >\n> > When I use 'create table t(a int);'; suppose that this table t's oid is 1200,\n> > then postgres will create a file named 1200 in the $PGDATA/base, So where\n> > is the logic code in the internal?\n> >\n> heapam_relation_set_new_filenode()->RelationCreateStorage()\n\nOr if you are searching for the logic to determin the file name, see\nGetNewRelFileNumber().\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 07 Feb 2023 13:56:59 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Where is the logig to create a table file?" } ]
[ { "msg_contents": "Hi\n\nWe can simply allow an access to backend process id thru psql variable. I\npropose the name \"BACKEND_PID\". The advantages of usage are simple\naccessibility by command \\set, and less typing then using function\npg_backend_pid, because psql variables are supported by tab complete\nroutine. Implementation is very simple, because we can use the function\nPQbackendPID.\n\nComments, notes?\n\nRegards\n\nPavel", "msg_date": "Fri, 3 Feb 2023 11:41:29 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "proposal: psql: psql variable BACKEND_PID" }, { "msg_contents": "On Fri, Feb 3, 2023 at 5:42 AM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n> Hi\n>\n> We can simply allow an access to backend process id thru psql variable. I\n> propose the name \"BACKEND_PID\". The advantages of usage are simple\n> accessibility by command \\set, and less typing then using function\n> pg_backend_pid, because psql variables are supported by tab complete\n> routine. Implementation is very simple, because we can use the function\n> PQbackendPID.\n>\n> Comments, notes?\n>\n> Regards\n>\n> Pavel\n>\n\nInteresting, and probably useful.\n\nIt needs a corresponding line in UnsyncVariables():\n\nSetVariable(pset.vars, \"BACKEND_PID\", NULL);\n\nThat will set the variable back to null when the connection goes away.\n\nOn Fri, Feb 3, 2023 at 5:42 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:HiWe can simply allow an access to backend process id thru psql variable. I propose the name \"BACKEND_PID\". The advantages of usage are simple accessibility by command \\set, and less typing then using function pg_backend_pid, because psql variables are supported by tab complete routine. 
Implementation is very simple, because we can use the function PQbackendPID.Comments, notes?RegardsPavelInteresting, and probably useful.It needs a corresponding line in UnsyncVariables():SetVariable(pset.vars, \"BACKEND_PID\", NULL);That will set the variable back to null when the connection goes away.", "msg_date": "Fri, 3 Feb 2023 14:27:23 -0500", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal: psql: psql variable BACKEND_PID" }, { "msg_contents": "Hi\n\n\npá 3. 2. 2023 v 20:27 odesílatel Corey Huinker <corey.huinker@gmail.com>\nnapsal:\n\n>\n>\n> On Fri, Feb 3, 2023 at 5:42 AM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n>\n>> Hi\n>>\n>> We can simply allow an access to backend process id thru psql variable. I\n>> propose the name \"BACKEND_PID\". The advantages of usage are simple\n>> accessibility by command \\set, and less typing then using function\n>> pg_backend_pid, because psql variables are supported by tab complete\n>> routine. 
Implementation is very simple, because we can use the function\n>> PQbackendPID.\n>>\n>> Comments, notes?\n>>\n>> Regards\n>>\n>> Pavel\n>>\n>\n> Interesting, and probably useful.\n>\n> It needs a corresponding line in UnsyncVariables():\n>\n> SetVariable(pset.vars, \"BACKEND_PID\", NULL);\n>\n> That will set the variable back to null when the connection goes away.\n>\n\nwith doc and unsetting variable\n\nRegards\n\nPavel\n\n\n>\n>\n>\n>\n>", "msg_date": "Sat, 4 Feb 2023 19:08:09 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: psql: psql variable BACKEND_PID" }, { "msg_contents": ">\n> with doc and unsetting variable\n>\n> Regards\n>\n> Pavel\n>\n>\nPatch applies.\n\nManually testing confirms that it works, at least for the connected state.\nI don't actually know how get psql to invoke DISCONNECT, so I killed the\ndev server and can confirm\n\n[222281:14:57:01 EST] corey=# \\echo :BACKEND_PID\n222281\n[222281:14:57:04 EST] corey=# select 1;\nFATAL: terminating connection due to administrator command\nserver closed the connection unexpectedly\nThis probably means the server terminated abnormally\nbefore or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\nThe connection to the server was lost. Attempting reset: Failed.\nTime: 1.554 ms\n[:15:02:31 EST] !> \\echo :BACKEND_PID\n:BACKEND_PID\n\nClearly, it is hard to write a regression test for an externally volatile\nvalue. 
`SELECT sign(:BACKEND_PID)` would technically do the job, if we're\nstriving for completeness.\n\nThe inability to easily DISCONNECT via psql, and the deleterious effect\nthat would have on other regression tests tells me that we can leave that\none untested.\n\nNotes:\n\nThis effectively makes the %p prompt (which I use in the example above) the\nsame as %:BACKEND_PID: and we may want to note that in the documentation.\n\nDo we want to change %p to pull from this variable and save the snprintf()?\nNot a measurable savings, more or a don't-repeat-yourself thing.\n\nIn the varlistentry, I suggest we add \"This variable is unset when the\nconnection is lost.\" after \"but can be changed or unset.", "msg_date": "Sat, 4 Feb 2023 15:35:58 -0500", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal: psql: psql variable BACKEND_PID" }, { "msg_contents": "Hi\n\n\nso 4. 2. 2023 v 21:36 odesílatel Corey Huinker <corey.huinker@gmail.com>\nnapsal:\n\n> with doc and unsetting variable\n>>\n>> Regards\n>>\n>> Pavel\n>>\n>>\n> Patch applies.\n>\n> Manually testing confirms that it works, at least for the connected state.\n> I don't actually know how get psql to invoke DISCONNECT, so I killed the\n> dev server and can confirm\n>\n> [222281:14:57:01 EST] corey=# \\echo :BACKEND_PID\n> 222281\n> [222281:14:57:04 EST] corey=# select 1;\n> FATAL: terminating connection due to administrator command\n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> The connection to the server was lost. Attempting reset: Failed.\n> The connection to the server was lost. Attempting reset: Failed.\n> Time: 1.554 ms\n> [:15:02:31 EST] !> \\echo :BACKEND_PID\n> :BACKEND_PID\n>\n> Clearly, it is hard to write a regression test for an externally volatile\n> value. 
`SELECT sign(:BACKEND_PID)` would technically do the job, if we're\n> striving for completeness.\n>\n\nI did simple test - :BACKEND_PID should be equal pg_backend_pid()\n\n\n>\n> The inability to easily DISCONNECT via psql, and the deleterious effect\n> that would have on other regression tests tells me that we can leave that\n> one untested.\n>\n\nI agree\n\n\n>\n> Notes:\n>\n> This effectively makes the %p prompt (which I use in the example above)\n> the same as %:BACKEND_PID: and we may want to note that in the\n> documentation.\n>\n\ndone\n\n\n> Do we want to change %p to pull from this variable and save the\n> snprintf()? Not a measurable savings, more or a don't-repeat-yourself thing.\n>\n\nI checked the code, and I don't think so. Current state is safer (I think).\nThe psql's variables are not protected, and I think, so is safer, better to\nread the value for prompt directly by usage of the libpq API instead read\nthe possibly \"customized\" variable. I see possible inconsistency,\nbut again, the same inconsistency can be for variables USER and DBNAME too,\nand I am not able to say what is significantly better. Just prompt shows\nreal value, and the related variable is +/- copy in connection time.\n\nI am not 100% sure in this area what is better, but the change requires\nwider and more general discussion, and I don't think the benefits of\npossible\nchange are enough. There are just two possible solutions - we can protect\nsome psql's variables (and it can do some compatibility issues), or we\nneed to accept, so the value in prompt can be fake. 
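The simple equality test mentioned earlier in this message can be spelled as a single query (a sketch; it assumes the patched psql, where :BACKEND_PID is interpolated client-side before the statement is sent):

```sql
-- compare the psql variable against the server's own report;
-- :BACKEND_PID is substituted by psql, pg_backend_pid() runs server-side
SELECT :BACKEND_PID = pg_backend_pid() AS backend_pid_matches;
```

A regression test can then assert that this returns true, without depending on the volatile PID value itself.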
It is better to not\ntouch it :-).\n\n\n>\n> In the varlistentry, I suggest we add \"This variable is unset when the\n> connection is lost.\" after \"but can be changed or unset.\n>\n\ndone\n\nRegards\n\nPavel", "msg_date": "Sun, 5 Feb 2023 06:17:39 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: psql: psql variable BACKEND_PID" }, { "msg_contents": ">\n>\n>>\n>> Clearly, it is hard to write a regression test for an externally volatile\n>> value. `SELECT sign(:BACKEND_PID)` would technically do the job, if we're\n>> striving for completeness.\n>>\n>\n> I did simple test - :BACKEND_PID should be equal pg_backend_pid()\n>\n>\n\nEven better.\n\n\n>\n>>\n>> Do we want to change %p to pull from this variable and save the\n>> snprintf()? Not a measurable savings, more or a don't-repeat-yourself thing.\n>>\n>\n> I checked the code, and I don't think so. Current state is safer (I\n> think). The psql's variables are not protected, and I think, so is safer,\n> better to\n> read the value for prompt directly by usage of the libpq API instead read\n> the possibly \"customized\" variable. I see possible inconsistency,\n> but again, the same inconsistency can be for variables USER and DBNAME\n> too, and I am not able to say what is significantly better. Just prompt\n> shows\n> real value, and the related variable is +/- copy in connection time.\n>\n> I am not 100% sure in this area what is better, but the change requires\n> wider and more general discussion, and I don't think the benefits of\n> possible\n> change are enough. There are just two possible solutions - we can protect\n> some psql's variables (and it can do some compatibility issues), or we\n> need to accept, so the value in prompt can be fake. 
It is better to not\n> touch it :-).\n>\n\nI agree it is out of scope of this patch, but I like the idea of protected\npsql variables, and I doubt users are intentionally re-using these vars to\nany positive effect. The more likely case is that newer psql vars just\nhappen to use the names chosen by somebody's script in the past.\n\n\n>\n>\n>>\n>> done\n>>\n>>\n>\nEverything passes. Docs look good. Test looks good.\n", "msg_date": "Sun, 5 Feb 2023 18:25:02 -0500", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal: psql: psql variable BACKEND_PID" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: tested, passed\n\nA small but helpful feature.\n\nThe new status of this patch is: Ready for Committer\n", "msg_date": "Sun, 05 Feb 2023 23:34:28 +0000", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal: psql: psql variable BACKEND_PID" }, { "msg_contents": "po 6. 2. 2023 v 0:25 odesílatel Corey Huinker <corey.huinker@gmail.com>\nnapsal:\n\n>\n>>>\n>>> Clearly, it is hard to write a regression test for an externally\n>>> volatile value. `SELECT sign(:BACKEND_PID)` would technically do the job,\n>>> if we're striving for completeness.\n>>>\n>>\n>> I did simple test - :BACKEND_PID should be equal pg_backend_pid()\n>>\n>>\n>\n> Even better.\n>\n>\n>>\n>>>\n>>> Do we want to change %p to pull from this variable and save the\n>>> snprintf()? Not a measurable savings, more or a don't-repeat-yourself thing.\n>>>\n>>\n>> I checked the code, and I don't think so. Current state is safer (I\n>> think). The psql's variables are not protected, and I think, so is safer,\n>> better to\n>> read the value for prompt directly by usage of the libpq API instead read\n>> the possibly \"customized\" variable. I see possible inconsistency,\n>> but again, the same inconsistency can be for variables USER and DBNAME\n>> too, and I am not able to say what is significantly better. 
Just prompt\n>> shows\n>> real value, and the related variable is +/- copy in connection time.\n>>\n>> I am not 100% sure in this area what is better, but the change requires\n>> wider and more general discussion, and I don't think the benefits of\n>> possible\n>> change are enough. There are just two possible solutions - we can protect\n>> some psql's variables (and it can do some compatibility issues), or we\n>> need to accept, so the value in prompt can be fake. It is better to not\n>> touch it :-).\n>>\n>\n> I agree it is out of scope of this patch, but I like the idea of protected\n> psql variables, and I doubt users are intentionally re-using these vars to\n> any positive effect. The more likely case is that newer psql vars just\n> happen to use the names chosen by somebody's script in the past.\n>\n\nbash variables are not protected too. I know this is in a different\ncontext, and different architecture. It can be a very simple patch, but it\nneeds wider discussion. Probably it should be immutable, it is safer, and\nnow I do not have any useful use case for mutability of these variables.\n\nRegards\n\nPavel\n\n\n\n>\n>\n>>\n>> done\n>>\n>>\n> Everything passes. Docs look good. Test looks good.\n>\n", "msg_date": "Mon, 6 Feb 2023 08:24:50 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: psql: psql variable BACKEND_PID" }, { "msg_contents": "po 6. 2. 2023 v 0:35 odesílatel Corey Huinker <corey.huinker@gmail.com>\nnapsal:\n\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, passed\n> Implements feature: tested, passed\n> Spec compliant: tested, passed\n> Documentation: tested, passed\n>\n> A small but helpful feature.\n>\n> The new status of this patch is: Ready for Committer\n>\n\nThank you very much\n\nPavel\n", "msg_date": "Mon, 6 Feb 2023 08:30:45 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: psql: psql variable BACKEND_PID" }, { "msg_contents": "\tCorey Huinker wrote:\n\n> Manually testing confirms that it works, at least for the connected state. I\n> don't actually know how get psql to invoke DISCONNECT, so I killed the dev\n> server and can confirm\n\nMaybe something like this could be used, with no external action:\n\n postgres=# \\echo :BACKEND_PID \n 10805\n postgres=# create user tester superuser;\n CREATE ROLE\n postgres=# \\c postgres tester\n You are now connected to database \"postgres\" as user \"tester\".\n postgres=# alter user tester nosuperuser connection limit 0;\n ALTER ROLE\n postgres=# select pg_terminate_backend(pg_backend_pid());\n FATAL: terminating connection due to administrator command\n server closed the connection unexpectedly\n\t This probably means the server terminated abnormally\n\t before or while processing the request.\n The connection to the server was lost. Attempting reset: Failed.\n The connection to the server was lost. Attempting reset: Failed.\n\n !?> \\echo :BACKEND_PID\n :BACKEND_PID\n\n\n> In the varlistentry, I suggest we add \"This variable is unset when the\n> connection is lost.\" after \"but can be changed or unset.\n\nPersonally I'd much rather have BACKEND_PID set to 0 rather than being unset\nwhen not connected. 
For one thing it allows safely using \\if :BACKEND_PID.\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n", "msg_date": "Mon, 06 Feb 2023 12:16:13 +0100", "msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>", "msg_from_op": false, "msg_subject": "Re: proposal: psql: psql variable BACKEND_PID" }, { "msg_contents": "\tI wrote:\n\n> > In the varlistentry, I suggest we add \"This variable is unset when the\n> > connection is lost.\" after \"but can be changed or unset.\n> \n> Personally I'd much rather have BACKEND_PID set to 0 rather than being unset\n> when not connected. For one thing it allows safely using \\if :BACKEND_PID.\n\nOops it turns out that was wishful thinking from me.\n\\if does not interpret a non-zero integer as true, except for the\nvalue \"1\". \nI'd still prefer BACKEND_PID being 0 when not connected, though.\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n", "msg_date": "Mon, 06 Feb 2023 13:03:13 +0100", "msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>", "msg_from_op": false, "msg_subject": "Re: proposal: psql: psql variable BACKEND_PID" }, { "msg_contents": "po 6. 2. 2023 v 13:03 odesílatel Daniel Verite <daniel@manitou-mail.org>\nnapsal:\n\n> I wrote:\n>\n> > > In the varlistentry, I suggest we add \"This variable is unset when the\n> > > connection is lost.\" after \"but can be changed or unset.\n> >\n> > Personally I'd much rather have BACKEND_PID set to 0 rather than being\n> unset\n> > when not connected. 
For one thing it allows safely using \\if\n> :BACKEND_PID.\n>\n> Oops it turns out that was wishful thinking from me.\n> \\if does not interpret a non-zero integer as true, except for the\n> value \"1\".\n> I'd still prefer BACKEND_PID being 0 when not connected, though.\n>\n\nI think psql never tries to execute a query if the engine is not connected,\nso for usage in queries undefined state is not important - it will be\nalways defined.\n\nfor using in \\if is unset may be a better state, because you can try to use\n{? varname} syntax.\n\n0 is theoretically valid process id number, so I am not sure if 0 is ok. I\ndon't know if some numbers can be used like invalid process id?\n\n\n\n\n>\n> Best regards,\n> --\n> Daniel Vérité\n> https://postgresql.verite.pro/\n> Twitter: @DanielVerite\n>\n", "msg_date": "Mon, 6 Feb 2023 15:30:14 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: psql: psql variable BACKEND_PID" }, { "msg_contents": "On 03.02.23 11:41, Pavel Stehule wrote:\n> We can simply allow an access to backend process id thru psql variable. \n> I propose the name \"BACKEND_PID\". The advantages of usage are simple \n> accessibility by command \\set, and less typing then using function \n> pg_backend_pid, because psql variables are supported by tab complete \n> routine. Implementation is very simple, because we can use the function \n> PQbackendPID.\n\nWhat would this be useful for?\n\nYou can mostly do this using\n\n select pg_backend_pid() AS \"BACKEND_PID\" \\gset\n\n\n\n", "msg_date": "Thu, 9 Feb 2023 09:57:07 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: proposal: psql: psql variable BACKEND_PID" }, { "msg_contents": "čt 9. 2. 2023 v 9:57 odesílatel Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> napsal:\n\n> On 03.02.23 11:41, Pavel Stehule wrote:\n> > We can simply allow an access to backend process id thru psql variable.\n> > I propose the name \"BACKEND_PID\". The advantages of usage are simple\n> > accessibility by command \\set, and less typing then using function\n> > pg_backend_pid, because psql variables are supported by tab complete\n> > routine. 
Implementation is very simple, because we can use the function\n> > PQbackendPID.\n>\n> What would this be useful for?\n>\n> You can mostly do this using\n>\n> select pg_backend_pid() AS \"BACKEND_PID\" \\gset\n>\n\nthere are 2 (3) my motivations\n\nfirst and main (for me) - I can use psql variables tab complete - just\n:B<tab> - it is significantly faster\nsecond - I can see all connection related information by \\set\nthird - there is not hook on reconnect in psql - so if you implement\nBACKEND_PID by self, you ensure to run query with pg_backend_pid() after\nany reconnect or connection change.\n\nIt is clean so you can run \"select pg_backend_pid() AS \"BACKEND_PID\" \\gset\"\nand you can store it to .psqlrc. But most of the time I am in customer's\nenvironment, and I have the time, possibility to do a complete setup of\n.psqlrc. It looks (for me) like a generally useful feature to be\neverywhere.\n\nRegards\n\nPavel\n", "msg_date": "Thu, 9 Feb 2023 10:11:21 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: psql: psql variable BACKEND_PID" }, { "msg_contents": "Hi,\n\nOn 2023-02-09 10:11:21 +0100, Pavel Stehule wrote:\n> first and main (for me) - I can use psql variables tab complete - just\n> :B<tab> - it is significantly faster\n> second - I can see all connection related information by \\set\n> third - there is not hook on reconnect in psql - so if you implement\n> BACKEND_PID by self, you ensure to run query with pg_backend_pid() after\n> any reconnect or connection change.\n> \n> It is clean so you can run \"select pg_backend_pid() AS \"BACKEND_PID\" \\gset\"\n> and you can store it to .psqlrc. But most of the time I am in customer's\n> environment, and I have the time, possibility to do a complete setup of\n> .psqlrc. It looks (for me) like a generally useful feature to be\n> everywhere.\n\nI personally just solved this by using %p in PROMPT*. 
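For reference, that %p idiom looks like this in a .psqlrc (a sketch built from the documented prompt escapes; the bracketed PID mirrors the prompts shown earlier in the thread):

```
\set PROMPT1 '[%p] %n@%/%R%# '
```

Here %p is the backend PID, %n the session user, and %/ the current database, so the PID stays visible even while a long query is running.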
Not that that serves\nquite the same niche.\n\nI guess the fact that we have %p is a minor precedent of psql special casing\nbackend pid in psql.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 11 Feb 2023 13:03:01 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: proposal: psql: psql variable BACKEND_PID" }, { "msg_contents": "On 2023-02-04 15:35:58 -0500, Corey Huinker wrote:\n> This effectively makes the %p prompt (which I use in the example above) the\n> same as %:BACKEND_PID: and we may want to note that in the documentation.\n\nI don't really see much of a point in noting this in the doc. I don't know in\nwhat situation a user would be helped by reading\n\n+ This substitution is almost equal to using <literal>%:BACKEND_PID:</literal>,\n+ but it is safer, because psql variable can be overwriten or unset.\n\nor just about any reformulation of that?\n\n\n", "msg_date": "Sat, 11 Feb 2023 13:05:03 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: proposal: psql: psql variable BACKEND_PID" }, { "msg_contents": "On 09.02.23 10:11, Pavel Stehule wrote:\n> first and main (for me) - I can use psql variables tab complete - just \n> :B<tab> - it is significantly faster\n> second - I can see all connection related information by \\set\n> third - there is not hook on reconnect in psql - so if you implement \n> BACKEND_PID by self, you ensure to run query with pg_backend_pid() after \n> any reconnect or connection change.\n> \n> It is clean so you can run \"select pg_backend_pid() AS \"BACKEND_PID\" \n> \\gset\" and you can store it to .psqlrc. But most of the time I am in \n> customer's environment, and I have the time, possibility to do a \n> complete setup of .psqlrc. 
It looks (for me) like a generally useful \n> feature to be everywhere.\n\nBut what do you need the backend PID for in the first place?\n\nOf course, you might want to use it to find your own session in \npg_stat_activity or something like that, but then you're already in a \nquery and can use pg_backend_pid(). What do you need the backend PID \nfor outside of such a query?\n\n\n\n", "msg_date": "Mon, 13 Feb 2023 18:06:23 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: proposal: psql: psql variable BACKEND_PID" }, { "msg_contents": "po 13. 2. 2023 v 18:06 odesílatel Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> napsal:\n\n> On 09.02.23 10:11, Pavel Stehule wrote:\n> > first and main (for me) - I can use psql variables tab complete - just\n> > :B<tab> - it is significantly faster\n> > second - I can see all connection related information by \\set\n> > third - there is not hook on reconnect in psql - so if you implement\n> > BACKEND_PID by self, you ensure to run query with pg_backend_pid() after\n> > any reconnect or connection change.\n> >\n> > It is clean so you can run \"select pg_backend_pid() AS \"BACKEND_PID\"\n> > \\gset\" and you can store it to .psqlrc. But most of the time I am in\n> > customer's environment, and I have the time, possibility to do a\n> > complete setup of .psqlrc. It looks (for me) like a generally useful\n> > feature to be everywhere.\n>\n> But what do you need the backend PID for in the first place?\n>\n> Of course, you might want to use it to find your own session in\n> pg_stat_activity or something like that, but then you're already in a\n> query and can use pg_backend_pid(). 
What do you need the backend PID\n> for outside of such a query?\n>\n\nIn every real use case you can use pg_backend_pid(), but you need to write\na complete name without tab complete, and you need to know so this function\nis available.\n\nBACKEND_PID is supported by tab complete, and it is displayed in \\set list\nand \\? variables. Nothing less, nothing more, Custom psql variable can have\nsome obsolete value.\n\nI can imagine using :BACKEND_PID in \\echo command - and it just saves you\none step with its own custom variable.\n\nIt is just some more comfort with almost zero cost.\n\nRegards\n\nPavel\n", "msg_date": "Mon, 13 Feb 2023 18:33:33 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: psql: psql variable BACKEND_PID" }, { "msg_contents": "Hi,\n\nOn 2023-02-13 18:06:23 +0100, Peter Eisentraut wrote:\n> On 09.02.23 10:11, Pavel Stehule wrote:\n> > first and main (for me) - I can use psql variables tab complete - just\n> > :B<tab> - it is significantly faster\n> > second - I can see all connection related information by \\set\n> > third - there is not hook on reconnect in psql - so if you implement\n> > BACKEND_PID by self, you ensure to run query with pg_backend_pid() after\n> > any reconnect or connection change.\n> > \n> > It is clean so you can run \"select pg_backend_pid() AS \"BACKEND_PID\" \\gset\"\n> > and you can store it to .psqlrc. But most of the time I am in customer's\n> > environment, and I have the time, possibility to do a complete setup of\n> > .psqlrc. It looks (for me) like a generally useful feature to be\n> > everywhere.\n> \n> But what do you need the backend PID for in the first place?\n\nFor me it's using gdb, pidstat, strace, perf, ...\n\nBut for those %p in the PROMPTs is more useful.\n\n\n> Of course, you might want to use it to find your own session in\n> pg_stat_activity or something like that, but then you're already in a query\n> and can use pg_backend_pid(). What do you need the backend PID for outside\n> of such a query?\n\nE.g. I fire of a query, it's slower than I'd like, I want to attach perf. Of\ncourse I can establish a separate connection, query pg_stat_activity there,\nand then perf. 
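Concretely, that second-session lookup might be something like this (an illustrative query over the stock pg_stat_activity columns; the WHERE clause is just one plausible way to narrow the list down):

```sql
-- list active backends other than this monitoring session, longest-running first
SELECT pid, usename, state, now() - query_start AS runtime, query
FROM pg_stat_activity
WHERE state = 'active'
  AND pid <> pg_backend_pid()
ORDER BY runtime DESC NULLS LAST;
```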
But that requires manually filtering pg_stat_activity to find\nthe query.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 13 Feb 2023 09:42:51 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: proposal: psql: psql variable BACKEND_PID" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2023-02-13 18:06:23 +0100, Peter Eisentraut wrote:\n>> But what do you need the backend PID for in the first place?\n\n> For me it's using gdb, pidstat, strace, perf, ...\n> But for those %p in the PROMPTs is more useful.\n\nIndeed, because ...\n\n> E.g. I fire of a query, it's slower than I'd like, I want to attach perf. Of\n> course I can establish a separate connection, query pg_stat_activity there,\n> and then perf. But that requires manually filtering pg_stat_activity to find\n> the query.\n\n... in this case, the problem is that the session is tied up doing the\nslow query. You can't run \"select pg_backend_pid()\", but you can't\nextract a psql variable value either. If you had the foresight to\nset up a PROMPT, or to collect the PID earlier, you're good. But I'm\nstill not seeing where a psql variable makes that easier.\n\nI don't buy Pavel's argument that adding Yet Another built-in variable\nadds ease of use. I think what it mostly adds is clutter. I realize\nthat \"psql --help=variables | wc\" is already 160+ lines, but that\ndoesn't mean that making it longer and longer is a net improvement.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 13 Feb 2023 12:52:23 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: proposal: psql: psql variable BACKEND_PID" }, { "msg_contents": "Hi,\n\nOn 2023-02-13 12:52:23 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > E.g. I fire of a query, it's slower than I'd like, I want to attach perf. 
Of\n> > course I can establish a separate connection, query pg_stat_activity there,\n> > and then perf. But that requires manually filtering pg_stat_activity to find\n> > the query.\n> \n> ... in this case, the problem is that the session is tied up doing the\n> slow query. You can't run \"select pg_backend_pid()\", but you can't\n> extract a psql variable value either. If you had the foresight to\n> set up a PROMPT, or to collect the PID earlier, you're good. But I'm\n> still not seeing where a psql variable makes that easier.\n\nI guess you could argue that referencing BACKEND_PID in PROMPT would be more\nreadable. But that's about it.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 13 Feb 2023 09:58:29 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: proposal: psql: psql variable BACKEND_PID" }, { "msg_contents": "po 13. 2. 2023 v 18:52 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2023-02-13 18:06:23 +0100, Peter Eisentraut wrote:\n> >> But what do you need the backend PID for in the first place?\n>\n> > For me it's using gdb, pidstat, strace, perf, ...\n> > But for those %p in the PROMPTs is more useful.\n>\n> Indeed, because ...\n>\n> > E.g. I fire of a query, it's slower than I'd like, I want to attach\n> perf. Of\n> > course I can establish a separate connection, query pg_stat_activity\n> there,\n> > and then perf. But that requires manually filtering pg_stat_activity to\n> find\n> > the query.\n>\n> ... in this case, the problem is that the session is tied up doing the\n> slow query. You can't run \"select pg_backend_pid()\", but you can't\n> extract a psql variable value either. If you had the foresight to\n> set up a PROMPT, or to collect the PID earlier, you're good. But I'm\n> still not seeing where a psql variable makes that easier.\n>\n> I don't buy Pavel's argument that adding Yet Another built-in variable\n> adds ease of use. 
I think what it mostly adds is clutter. I realize\n> that \"psql --help=variables | wc\" is already 160+ lines, but that\n> doesn't mean that making it longer and longer is a net improvement.\n>\n\nThere are three kinds of variables - there are about 40 psql variables.\n\nI can be mistaken - I thought so somebody if needed filtering in\npg_stat_activity, they can run just \"\\set\"\n\nand he can see\n\n\n(2023-02-13 19:09:10) postgres=# \\set\nAUTOCOMMIT = 'on'\nBACKEND_PID = 10102\nCOMP_KEYWORD_CASE = 'preserve-upper'\nDBNAME = 'postgres'\nECHO = 'none'\nECHO_HIDDEN = 'off'\nENCODING = 'UTF8'\nERROR = 'false'\nFETCH_COUNT = '0'\nHIDE_TABLEAM = 'off'\nHIDE_TOAST_COMPRESSION = 'off'\nHISTCONTROL = 'none'\nHISTSIZE = '500'\nHOST = '/tmp'\nIGNOREEOF = '0'\nLAST_ERROR_MESSAGE = ''\n...\n\nhe don't need to search more\n\nTo find and use pg_backend_pid is not rocket science. But use :BACKEND_PID\nis simpler.\n\nIt is true, so this information is redundant - I see some benefit in the\npossibility to see \"by using \\set\" a little bit more complete view on\nsession, but surely - this is in \"nice to have\" category (from my\nperspective), and if others has different opinion, than we don't need to\nspend with this patch more time. This is not an important feature.\n\nRegards\n\nPavel\n\n\n> regards, tom lane\n>\n", "msg_date": "Mon, 13 Feb 2023 19:24:35 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: psql: psql variable BACKEND_PID" }, { "msg_contents": "On 13.02.23 18:33, Pavel Stehule wrote:\n> In every real use case you can use pg_backend_pid(), but you need to \n> write a complete name without tab complete, and you need to know so this \n> function is available.\n> \n> BACKEND_PID is supported by  tab complete, and it is displayed in \\set \n> list and \\? variables. Nothing less, nothing more, Custom psql variable \n> can have some obsolete value.\n> \n> I can imagine using :BACKEND_PID in \\echo command - and it just saves \n> you one step with its own custom variable.\n> \n> It is just some more comfort with almost zero cost.\n\nThis line of argument would open us up to copying just about every bit \nof session state into psql just to make it slightly easier to use.\n\n\n", "msg_date": "Mon, 13 Feb 2023 22:05:04 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: proposal: psql: psql variable BACKEND_PID" }, { "msg_contents": "On Thu, 16 Feb 2023 at 12:44, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> To find and use pg_backend_pid is not rocket science. But use :BACKEND_PID is simpler.\n\nI wanted to call out that if there's a connection pooler (e.g.\nPgBouncer) in the middle, then BACKEND_PID (and %p) are incorrect, but\npg_backend_pid() would work for the query. This might be an edge case,\nbut if BACKEND_PID is added it might be worth listing this edge case\nin the docs somewhere.\n\n\n", "msg_date": "Thu, 16 Feb 2023 12:49:38 +0100", "msg_from": "Jelte Fennema <me@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: proposal: psql: psql variable BACKEND_PID" }, { "msg_contents": "čt 16. 2. 2023 v 12:49 odesílatel Jelte Fennema <me@jeltef.nl> napsal:\n\n> On Thu, 16 Feb 2023 at 12:44, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > To find and use pg_backend_pid is not rocket science. But use\n> :BACKEND_PID is simpler.\n>\n> I wanted to call out that if there's a connection pooler (e.g.\n> PgBouncer) in the middle, then BACKEND_PID (and %p) are incorrect, but\n> pg_backend_pid() would work for the query. This might be an edge case,\n> but if BACKEND_PID is added it might be worth listing this edge case\n> in the docs somewhere.\n>\n\ngood note\n\nRegards\n\nPavel\n\n", "msg_date": "Fri, 17 Feb 2023 05:04:31 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: psql: psql variable BACKEND_PID" }, { "msg_contents": "On 2023-02-16 Th 23:04, Pavel Stehule wrote:\n>\n>\n> čt 16. 2. 2023 v 12:49 odesílatel Jelte Fennema <me@jeltef.nl> napsal:\n>\n> On Thu, 16 Feb 2023 at 12:44, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > To find and use pg_backend_pid is not rocket science. But use\n> :BACKEND_PID is simpler.\n>\n> I wanted to call out that if there's a connection pooler (e.g.\n> PgBouncer) in the middle, then BACKEND_PID (and %p) are incorrect, but\n> pg_backend_pid() would work for the query. This might be an edge case,\n> but if BACKEND_PID is added it might be worth listing this edge case\n> in the docs somewhere.\n>\n>\n> good note\n>\n>\n\nThis patch is marked RFC, but given the comments upthread from Tom, \nAndres and Peter, I think it should actually be Rejected.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 18 Mar 2023 11:24:09 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: proposal: psql: psql variable BACKEND_PID" }, { "msg_contents": "so 18. 3. 2023 v 16:24 odesílatel Andrew Dunstan <andrew@dunslane.net>\nnapsal:\n\n>\n> On 2023-02-16 Th 23:04, Pavel Stehule wrote:\n>\n>\n>\n> čt 16. 2. 2023 v 12:49 odesílatel Jelte Fennema <me@jeltef.nl> napsal:\n>\n>> On Thu, 16 Feb 2023 at 12:44, Pavel Stehule <pavel.stehule@gmail.com>\n>> wrote:\n>> > To find and use pg_backend_pid is not rocket science. But use\n>> :BACKEND_PID is simpler.\n>>\n>> I wanted to call out that if there's a connection pooler (e.g.\n>> PgBouncer) in the middle, then BACKEND_PID (and %p) are incorrect, but\n>> pg_backend_pid() would work for the query. This might be an edge case,\n>> but if BACKEND_PID is added it might be worth listing this edge case\n>> in the docs somewhere.\n>>\n>\n> good note\n>\n>\n>\n> This patch is marked RFC, but given the comments upthread from Tom, Andres\n> and Peter, I think it should actually be Rejected.\n>\n\nok\n\nregards\n\nPavel\n\n>\n> cheers\n>\n>\n> andrew\n>\n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n>\n>\n", "msg_date": "Sat, 18 Mar 2023 17:57:32 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: psql: psql variable BACKEND_PID" } ]
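The BACKEND_PID thread above comes down to two existing ways of getting the backend's PID at the client: psql's documented `%p` prompt escape, and the server-side `pg_backend_pid()` function. A minimal illustrative `.psqlrc` fragment combining both is sketched below — the prompt string itself is only an example, not anything proposed in the thread:

```psql
-- example ~/.psqlrc fragment: keep the backend PID visible in the prompt,
-- so gdb/perf/pidstat can attach without querying pg_stat_activity
\set PROMPT1 '%n@%/ [%p] %R%# '

-- behind a pooler (e.g. PgBouncer) %p reports the pooler-facing server
-- connection, so when in doubt ask the server directly:
SELECT pg_backend_pid();
```

As Jelte notes above, the two can disagree when a connection pooler sits between psql and the server.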
[ { "msg_contents": "Hi\n\none visitor of p2d2 (Prague PostgreSQL Developer Day) asked if it is\npossible to show the current role in psql's prompt. I think it is not\npossible, but fortunately (with some limits) almost all necessary work is\ndone, and the patch is short.\n\nIn the assigned patch I implemented a new prompt placeholder %N, that shows\nthe current role name.\n\n(2023-02-03 15:52:28) postgres=# \\set PROMPT1 '%n as %N at '%/%=%#\npavel as pavel at postgres=#set role to admin;\nSET\npavel as admin at postgres=>set role to default;\nSET\npavel as pavel at postgres=#\n\nComments, notes are welcome.\n\nRegards\n\nPavel", "msg_date": "Fri, 3 Feb 2023 15:56:06 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "proposal: psql: show current user in prompt" }, { "msg_contents": "On Fri, Feb 3, 2023 at 9:56 AM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n> Hi\n>\n> one visitor of p2d2 (Prague PostgreSQL Developer Day) asked if it is\n> possible to show the current role in psql's prompt. I think it is not\n> possible, but fortunately (with some limits) almost all necessary work is\n> done, and the patch is short.\n>\n> In the assigned patch I implemented a new prompt placeholder %N, that\n> shows the current role name.\n>\n> (2023-02-03 15:52:28) postgres=# \\set PROMPT1 '%n as %N at '%/%=%#\n> pavel as pavel at postgres=#set role to admin;\n> SET\n> pavel as admin at postgres=>set role to default;\n> SET\n> pavel as pavel at postgres=#\n>\n> Comments, notes are welcome.\n>\n> Regards\n>\n> Pavel\n>\n\nThis patch is cluttered with the BACKEND_PID patch and some guc_tables.c\nstuff that I don't think is related.\n\nWe'd have to document the %N.\n\nI think there is some value here for people who have to connect as several\ndifferent users (tech support), and need a reminder-at-a-glance whether\nthey are operating in the desired role. 
It may be helpful in audit\ndocumentation as well.\n\n", "msg_date": "Fri, 3 Feb 2023 14:41:56 -0500", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "pá 3. 2. 2023 v 20:42 odesílatel Corey Huinker <corey.huinker@gmail.com>\nnapsal:\n\n>\n>\n> On Fri, Feb 3, 2023 at 9:56 AM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n>\n>> Hi\n>>\n>> one visitor of p2d2 (Prague PostgreSQL Developer Day) asked if it is\n>> possible to show the current role in psql's prompt. I think it is not\n>> possible, but fortunately (with some limits) almost all necessary work is\n>> done, and the patch is short.\n>>\n>> In the assigned patch I implemented a new prompt placeholder %N, that\n>> shows the current role name.\n>>\n>> (2023-02-03 15:52:28) postgres=# \\set PROMPT1 '%n as %N at '%/%=%#\n>> pavel as pavel at postgres=#set role to admin;\n>> SET\n>> pavel as admin at postgres=>set role to default;\n>> SET\n>> pavel as pavel at postgres=#\n>>\n>> Comments, notes are welcome.\n>>\n>> Regards\n>>\n>> Pavel\n>>\n>\n> This patch is cluttered with the BACKEND_PID patch and some guc_tables.c\n> stuff that I don't think is related.\n>\n\nI was little bit lazy, I am sorry. I did it in one my experimental branch.\nBoth patches are PoC, and there are not documentation yet. I will separate\nit.\n\n\n> We'd have to document the %N.\n>\n> I think there is some value here for people who have to connect as several\n> different users (tech support), and need a reminder-at-a-glance whether\n> they are operating in the desired role. It may be helpful in audit\n> documentation as well.\n>\n\n yes, I agree so it can be useful - it is not my idea - and it is maybe\npartially deduced from some other databases.\n\nBoth patches are very simple - and they use almost already prepared\ninfrastructure.\n\nRegards\n\nPavel\n\n", "msg_date": "Fri, 3 Feb 2023 20:57:40 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> Both patches are very simple - and they use almost already prepared\n> infrastructure.\n\nIt's not simple at all to make the psql feature depend on marking\n\"role\" as GUC_REPORT when it never has been before. That will\ncause the feature to misbehave when using older servers. I'm\neven less impressed by having it fall back on PQuser(), which\nwould be misleading at exactly the times when it matters.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 03 Feb 2023 15:21:08 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "pá 3. 2. 2023 v 21:21 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > Both patches are very simple - and they use almost already prepared\n> > infrastructure.\n>\n> It's not simple at all to make the psql feature depend on marking\n> \"role\" as GUC_REPORT when it never has been before. That will\n> cause the feature to misbehave when using older servers. I'm\n> even less impressed by having it fall back on PQuser(), which\n> would be misleading at exactly the times when it matters.\n>\n\nIt is a good note. This can be disabled for older servers, and maybe it\ncan introduce its own GUC (and again - it can be disallowed for older\nservers).\n\nMy goal at this moment is to get some prototype. We can talk if this\nfeature request is valid or not, and we can talk about implementation.\n\nThere is another possibility to directly execute \"select current_user()\"\ninstead of looking at status parameters inside prompt processing. It can\nwork too.\n\nRegards\n\nPavel\n\n\n\n\n\n> regards, tom lane\n>\n", "msg_date": "Fri, 3 Feb 2023 21:43:41 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "Hi\n\npá 3. 2. 2023 v 21:43 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n>\n>\n> pá 3. 2. 2023 v 21:21 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n>\n>> Pavel Stehule <pavel.stehule@gmail.com> writes:\n>> > Both patches are very simple - and they use almost already prepared\n>> > infrastructure.\n>>\n>> It's not simple at all to make the psql feature depend on marking\n>> \"role\" as GUC_REPORT when it never has been before. That will\n>> cause the feature to misbehave when using older servers. I'm\n>> even less impressed by having it fall back on PQuser(), which\n>> would be misleading at exactly the times when it matters.\n>>\n>\n> It is a good note. This can be disabled for older servers, and maybe it\n> can introduce its own GUC (and again - it can be disallowed for older\n> servers).\n>\n\nHere is another version. For older servers it shows the string ERR0A000.\nThat is ERR code of \"feature is not supported\"\n\n\n> My goal at this moment is to get some prototype. We can talk if this\n> feature request is valid or not, and we can talk about implementation.\n>\n> There is another possibility to directly execute \"select current_user()\"\n> instead of looking at status parameters inside prompt processing. It can\n> work too.\n>\n\nI tested using the query SELECT CURRENT_USER, but I don't think it is\nusable now, because it doesn't work in the broken transaction.\n\nRegards\n\nPavel\n\n\n\n>\n> Regards\n>\n> Pavel\n>\n>\n>\n>\n>\n>> regards, tom lane\n>>\n>", "msg_date": "Sat, 4 Feb 2023 21:32:45 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "On Sat, Feb 4, 2023 at 3:33 PM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n> Hi\n>\n> pá 3. 2. 2023 v 21:43 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\n> napsal:\n>\n>>\n>>\n>> pá 3. 2. 2023 v 21:21 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n>>\n>>> Pavel Stehule <pavel.stehule@gmail.com> writes:\n>>> > Both patches are very simple - and they use almost already prepared\n>>> > infrastructure.\n>>>\n>>> It's not simple at all to make the psql feature depend on marking\n>>> \"role\" as GUC_REPORT when it never has been before. That will\n>>> cause the feature to misbehave when using older servers. I'm\n>>> even less impressed by having it fall back on PQuser(), which\n>>> would be misleading at exactly the times when it matters.\n>>>\n>>\n>> It is a good note. This can be disabled for older servers, and maybe it\n>> can introduce its own GUC (and again - it can be disallowed for older\n>> servers).\n>>\n>\n> Here is another version. For older servers it shows the string ERR0A000.\n> That is ERR code of \"feature is not supported\"\n>\n>\n>> My goal at this moment is to get some prototype. We can talk if this\n>> feature request is valid or not, and we can talk about implementation.\n>>\n>> There is another possibility to directly execute \"select current_user()\"\n>> instead of looking at status parameters inside prompt processing. It can\n>> work too.\n>>\n>\n> I tested using the query SELECT CURRENT_USER, but I don't think it is\n> usable now, because it doesn't work in the broken transaction.\n>\n> Regards\n>\n> Pavel\n>\n>\n>\n>>\n>> Regards\n>>\n>> Pavel\n>>\n>>\n>>\n>>\n>>\n>>> regards, tom lane\n>>>\n>>\nI've tested this w/regards to psql. Latest commit.\nIt works as described. 'none' is displayed for the default role. (SET ROLE\nDEFAULT), otherwise the specific ROLE is displayed.\n\nI tried this patch on 15.2, but guc_files.c does not exist in that version,\nso it did not install.\nI also tried applying the %T patch, but since they touch the same file, it\nwould not install with it, without rebasing, repatching.\n\nThe Docs are updated, and it's a relatively contained patch.\n\nChanged status to Ready for Committer. (100% Guessing here...)\n\nKirk\n\n", "msg_date": "Thu, 2 Mar 2023 15:45:10 -0500", "msg_from": "Kirk Wolak <wolakk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "Kirk Wolak <wolakk@gmail.com> writes:\n> Changed status to Ready for Committer. (100% Guessing here...)\n\nBasically, I want to reject this on the grounds that it's not\nuseful enough to justify the overhead of marking the \"role\" GUC\nas GUC_REPORT. The problems with it not going to work properly\nwith old servers are an additional reason not to like it.\n\nBut, if I lose the argument and we do commit this, I think it\nshould just print an empty string when dealing with an old server.\n\"ERR02000\" is an awful idea, not least because it could be a\nreal role name.\n\nBTW, we should probably get rid of the PQuser() fallback in\n%n (session_username()) as well. It's unlikely that there are\n
It's unlikely that there are\nstill servers in the wild that don't report \"session_authorization\",\nbut if we did find one then the output is potentially misleading.\nI'd rather print nothing than something that might not be your\nactual session authorization setting.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 04 Apr 2023 12:42:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "út 4. 4. 2023 v 18:42 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Kirk Wolak <wolakk@gmail.com> writes:\n> > Changed status to Ready for Committer. (100% Guessing here...)\n>\n> Basically, I want to reject this on the grounds that it's not\n> useful enough to justify the overhead of marking the \"role\" GUC\n> as GUC_REPORT. The problems with it not going to work properly\n> with old servers are an additional reason not to like it.\n>\n\nIf I understand to next comment correctly, the overhead should not be too\nbig\n\n/*\n * ReportChangedGUCOptions: report recently-changed GUC_REPORT variables\n *\n * This is called just before we wait for a new client query.\n *\n * By handling things this way, we ensure that a ParameterStatus message\n * is sent at most once per variable per query, even if the variable\n * changed multiple times within the query. That's quite possible when\n * using features such as function SET clauses. 
Function SET clauses\n * also tend to cause values to change intraquery but eventually revert\n * to their prevailing values; ReportGUCOption is responsible for avoiding\n * redundant reports in such cases.\n */\n\n\n\n>\n> But, if I lose the argument and we do commit this, I think it\n> should just print an empty string when dealing with an old server.\n> \"ERR02000\" is an awful idea, not least because it could be a\n> real role name.\n>\n\nok\n\n\n>\n> BTW, we should probably get rid of the PQuser() fallback in\n> %n (session_username()) as well. It's unlikely that there are\n> still servers in the wild that don't report \"session_authorization\",\n> but if we did find one then the output is potentially misleading.\n> I'd rather print nothing than something that might not be your\n> actual session authorization setting.\n>\n\nIt should be a separate patch?\n\nUpdated patch attached\n\nRegards\n\nPavel\n\n\n\n>\n> regards, tom lane\n>", "msg_date": "Tue, 4 Apr 2023 19:26:41 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> út 4. 4. 2023 v 18:42 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n>> Basically, I want to reject this on the grounds that it's not\n>> useful enough to justify the overhead of marking the \"role\" GUC\n>> as GUC_REPORT. The problems with it not going to work properly\n>> with old servers are an additional reason not to like it.\n\n> If I understand to next comment correctly, the overhead should not be too\n> big\n\nYeah, but how big is the use-case? 
The reason I'm skeptical is that\nhalf the time what you're going to get is \"none\":\n\n$ psql\npsql (16devel)\nType \"help\" for help.\n\nregression=# show role;\n role \n------\n none\n(1 row)\n\nThat's required by SQL spec I believe, but that doesn't make it useful\ndata to keep in one's prompt.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 04 Apr 2023 13:55:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "út 4. 4. 2023 v 19:55 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > út 4. 4. 2023 v 18:42 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n> >> Basically, I want to reject this on the grounds that it's not\n> >> useful enough to justify the overhead of marking the \"role\" GUC\n> >> as GUC_REPORT. The problems with it not going to work properly\n> >> with old servers are an additional reason not to like it.\n>\n> > If I understand to next comment correctly, the overhead should not be too\n> > big\n>\n> Yeah, but how big is the use-case? The reason I'm skeptical is that\n> half the time what you're going to get is \"none\":\n>\n> $ psql\n> psql (16devel)\n> Type \"help\" for help.\n>\n> regression=# show role;\n> role\n> ------\n> none\n> (1 row)\n>\n> That's required by SQL spec I believe, but that doesn't make it useful\n> data to keep in one's prompt.\n>\n\nWho needs it, and who uses different roles, then very quickly uses SET ROLE\nTO command.\n\nBut I fully agree so current behavior can be a little bit messy. I like\nthis feature, and I think it can have some benefits. Proposed\nimplementation is minimalistic.\n\nOne hard problem is translation of the oid of current_user to name. It\nrequires an opened transaction, and then it cannot be postponed to the end\nof the statement. 
On the other hand, when the change of role is done inside\na nested command, then it should not be visible from the client side.\n\nCan you accept the introduction of a new invisible GUC, that can be\nmodified only by SET ROLE TO command when it is executed as top command?\n\nRegards\n\nPavel\n\n\n\n\n\n\n>\n> regards, tom lane\n>\n\nút 4. 4. 2023 v 19:55 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:Pavel Stehule <pavel.stehule@gmail.com> writes:\n> út 4. 4. 2023 v 18:42 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n>> Basically, I want to reject this on the grounds that it's not\n>> useful enough to justify the overhead of marking the \"role\" GUC\n>> as GUC_REPORT.  The problems with it not going to work properly\n>> with old servers are an additional reason not to like it.\n\n> If I understand to next comment correctly, the overhead should not be too\n> big\n\nYeah, but how big is the use-case?  The reason I'm skeptical is that\nhalf the time what you're going to get is \"none\":\n\n$ psql\npsql (16devel)\nType \"help\" for help.\n\nregression=# show role;\n role \n------\n none\n(1 row)\n\nThat's required by SQL spec I believe, but that doesn't make it useful\ndata to keep in one's prompt.Who needs it, and who uses different roles, then very quickly uses SET ROLE TO command.But I fully agree so current behavior can be a little bit messy. I like this feature, and I think it can have some benefits. Proposed implementation is minimalistic.  One hard problem is translation of the oid of current_user to name. It requires an opened transaction, and then it cannot be postponed to the end of the statement. On the other hand, when the change of role is done inside a nested command, then it should not be visible from the client side. 
Can you accept the introduction of a new invisible GUC, that can be modified only by SET ROLE TO command when it is executed as top command?RegardsPavel \n\n                        regards, tom lane", "msg_date": "Tue, 4 Apr 2023 20:50:53 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "út 4. 4. 2023 v 20:50 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n>\n>\n> út 4. 4. 2023 v 19:55 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n>\n>> Pavel Stehule <pavel.stehule@gmail.com> writes:\n>> > út 4. 4. 2023 v 18:42 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n>> >> Basically, I want to reject this on the grounds that it's not\n>> >> useful enough to justify the overhead of marking the \"role\" GUC\n>> >> as GUC_REPORT. The problems with it not going to work properly\n>> >> with old servers are an additional reason not to like it.\n>>\n>> > If I understand to next comment correctly, the overhead should not be\n>> too\n>> > big\n>>\n>> Yeah, but how big is the use-case? The reason I'm skeptical is that\n>> half the time what you're going to get is \"none\":\n>>\n>> $ psql\n>> psql (16devel)\n>> Type \"help\" for help.\n>>\n>> regression=# show role;\n>> role\n>> ------\n>> none\n>> (1 row)\n>>\n>> That's required by SQL spec I believe, but that doesn't make it useful\n>> data to keep in one's prompt.\n>>\n>\n> Who needs it, and who uses different roles, then very quickly uses SET\n> ROLE TO command.\n>\n> But I fully agree so current behavior can be a little bit messy. I like\n> this feature, and I think it can have some benefits. Proposed\n> implementation is minimalistic.\n>\n> One hard problem is translation of the oid of current_user to name. It\n> requires an opened transaction, and then it cannot be postponed to the end\n> of the statement. 
On the other hand, when the change of role is done inside\n> a nested command, then it should not be visible from the client side.\n>\n> Can you accept the introduction of a new invisible GUC, that can be\n> modified only by SET ROLE TO command when it is executed as top command?\n>\n\nIt was stupid idea.\n\nThere can be implemented fallbacks. When the role is \"none\", then the :USER\ncan be displayed instead.\n\nIt can work, because the custom role \"none\" is not allowed\n\n(2023-04-04 21:10:25) postgres=# create role none;\nERROR: role name \"none\" is reserved\nLINE 1: create role none;\n\n?\n\n\n\n\n>\n> Regards\n>\n> Pavel\n>\n>\n>\n>\n>\n>\n>>\n>> regards, tom lane\n>>\n>\n\nút 4. 4. 2023 v 20:50 odesílatel Pavel Stehule <pavel.stehule@gmail.com> napsal:út 4. 4. 2023 v 19:55 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:Pavel Stehule <pavel.stehule@gmail.com> writes:\n> út 4. 4. 2023 v 18:42 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n>> Basically, I want to reject this on the grounds that it's not\n>> useful enough to justify the overhead of marking the \"role\" GUC\n>> as GUC_REPORT.  The problems with it not going to work properly\n>> with old servers are an additional reason not to like it.\n\n> If I understand to next comment correctly, the overhead should not be too\n> big\n\nYeah, but how big is the use-case?  The reason I'm skeptical is that\nhalf the time what you're going to get is \"none\":\n\n$ psql\npsql (16devel)\nType \"help\" for help.\n\nregression=# show role;\n role \n------\n none\n(1 row)\n\nThat's required by SQL spec I believe, but that doesn't make it useful\ndata to keep in one's prompt.Who needs it, and who uses different roles, then very quickly uses SET ROLE TO command.But I fully agree so current behavior can be a little bit messy. I like this feature, and I think it can have some benefits. Proposed implementation is minimalistic.  One hard problem is translation of the oid of current_user to name. 
It requires an opened transaction, and then it cannot be postponed to the end of the statement. On the other hand, when the change of role is done inside a nested command, then it should not be visible from the client side. Can you accept the introduction of a new invisible GUC, that can be modified only by SET ROLE TO command when it is executed as top command?It was stupid idea. There can be implemented fallbacks. When the role is \"none\", then the :USER can be displayed instead. It can work, because the custom role \"none\" is not allowed(2023-04-04 21:10:25) postgres=# create role none;ERROR:  role name \"none\" is reservedLINE 1: create role none;? RegardsPavel \n\n                        regards, tom lane", "msg_date": "Tue, 4 Apr 2023 21:11:46 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "út 4. 4. 2023 v 21:11 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n>\n>\n> út 4. 4. 2023 v 20:50 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\n> napsal:\n>\n>>\n>>\n>> út 4. 4. 2023 v 19:55 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n>>\n>>> Pavel Stehule <pavel.stehule@gmail.com> writes:\n>>> > út 4. 4. 2023 v 18:42 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n>>> >> Basically, I want to reject this on the grounds that it's not\n>>> >> useful enough to justify the overhead of marking the \"role\" GUC\n>>> >> as GUC_REPORT. The problems with it not going to work properly\n>>> >> with old servers are an additional reason not to like it.\n>>>\n>>> > If I understand to next comment correctly, the overhead should not be\n>>> too\n>>> > big\n>>>\n>>> Yeah, but how big is the use-case? 
The reason I'm skeptical is that\n>>> half the time what you're going to get is \"none\":\n>>>\n>>> $ psql\n>>> psql (16devel)\n>>> Type \"help\" for help.\n>>>\n>>> regression=# show role;\n>>> role\n>>> ------\n>>> none\n>>> (1 row)\n>>>\n>>> That's required by SQL spec I believe, but that doesn't make it useful\n>>> data to keep in one's prompt.\n>>>\n>>\n>> Who needs it, and who uses different roles, then very quickly uses SET\n>> ROLE TO command.\n>>\n>> But I fully agree so current behavior can be a little bit messy. I like\n>> this feature, and I think it can have some benefits. Proposed\n>> implementation is minimalistic.\n>>\n>> One hard problem is translation of the oid of current_user to name. It\n>> requires an opened transaction, and then it cannot be postponed to the end\n>> of the statement. On the other hand, when the change of role is done inside\n>> a nested command, then it should not be visible from the client side.\n>>\n>> Can you accept the introduction of a new invisible GUC, that can be\n>> modified only by SET ROLE TO command when it is executed as top command?\n>>\n>\n> It was stupid idea.\n>\n> There can be implemented fallbacks. 
When the role is \"none\", then the\n> :USER can be displayed instead.\n>\n> It can work, because the custom role \"none\" is not allowed\n>\n> (2023-04-04 21:10:25) postgres=# create role none;\n> ERROR: role name \"none\" is reserved\n> LINE 1: create role none;\n>\n> ?\n>\n>\nattached updated patch\n\nRegards\n\nPavel\n\n\n\n>\n>\n>\n>>\n>> Regards\n>>\n>> Pavel\n>>\n>>\n>>\n>>\n>>\n>>\n>>>\n>>> regards, tom lane\n>>>\n>>", "msg_date": "Wed, 5 Apr 2023 06:43:16 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "On Tue, Apr 4, 2023 at 12:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Basically, I want to reject this on the grounds that it's not\n> useful enough to justify the overhead of marking the \"role\" GUC\n> as GUC_REPORT.\n\nI agree with that. I think we need some method for optionally\nreporting values, so that stuff like this can be handled without\nadding it to the wire protocol for everyone. I don't think we can just\nkeep adding stuff to the set of things that gets reported for\neveryone. It doesn't scale. We need a really good reason to enlarge\nthe set of values reported for all users, and I don't think this meets\nthat bar.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 5 Apr 2023 09:28:36 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, Apr 4, 2023 at 12:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Basically, I want to reject this on the grounds that it's not\n>> useful enough to justify the overhead of marking the \"role\" GUC\n>> as GUC_REPORT.\n\n> I agree with that. 
I think we need some method for optionally\n> reporting values, so that stuff like this can be handled without\n> adding it to the wire protocol for everyone.\n\nIt could probably be possible to provide some mechanism for setting\nGUC_REPORT on specific variables locally within sessions. I don't\nthink this'd be much of a protocol-break problem, because clients\nshould already be coded to deal gracefully with ParameterStatus messages\nfor variables they don't know. However, connecting that up to something\nlike a psql prompt feature would still be annoying. I doubt I'd want\nto go as far as having psql try to turn on GUC_REPORT automatically\nif it sees %N in the prompt ...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 05 Apr 2023 09:56:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "On Wed, Apr 5, 2023 at 9:56 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Tue, Apr 4, 2023 at 12:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Basically, I want to reject this on the grounds that it's not\n> >> useful enough to justify the overhead of marking the \"role\" GUC\n> >> as GUC_REPORT.\n>\n> > I agree with that. I think we need some method for optionally\n> > reporting values, so that stuff like this can be handled without\n> > adding it to the wire protocol for everyone.\n>\n> It could probably be possible to provide some mechanism for setting\n> GUC_REPORT on specific variables locally within sessions. I don't\n> think this'd be much of a protocol-break problem, because clients\n> should already be coded to deal gracefully with ParameterStatus messages\n> for variables they don't know. However, connecting that up to something\n> like a psql prompt feature would still be annoying. 
I doubt I'd want\n> to go as far as having psql try to turn on GUC_REPORT automatically\n> if it sees %N in the prompt ...\n\nOh, I had it in mind that it would do exactly that. And I think that\nshould be mediated by a wire protocol message, not a GUC, so that\nusers don't mess things up for psql or other clients -- in either\ndirection -- via SET commands.\n\nMaybe there's a better way, that just seemed like the obvious design.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 5 Apr 2023 10:01:53 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "st 5. 4. 2023 v 15:56 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Tue, Apr 4, 2023 at 12:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Basically, I want to reject this on the grounds that it's not\n> >> useful enough to justify the overhead of marking the \"role\" GUC\n> >> as GUC_REPORT.\n>\n> > I agree with that. I think we need some method for optionally\n> > reporting values, so that stuff like this can be handled without\n> > adding it to the wire protocol for everyone.\n>\n> It could probably be possible to provide some mechanism for setting\n> GUC_REPORT on specific variables locally within sessions. I don't\n> think this'd be much of a protocol-break problem, because clients\n> should already be coded to deal gracefully with ParameterStatus messages\n> for variables they don't know. However, connecting that up to something\n> like a psql prompt feature would still be annoying. I doubt I'd want\n> to go as far as having psql try to turn on GUC_REPORT automatically\n> if it sees %N in the prompt ...\n>\n\nI agree with this analysis\n\nRegards\n\nPavel\n\n\n>\n> regards, tom lane\n>\n", "msg_date": "Wed, 5 Apr 2023 16:58:51 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "st 5. 4. 2023 v 16:02 odesílatel Robert Haas <robertmhaas@gmail.com> napsal:\n\n> On Wed, Apr 5, 2023 at 9:56 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Robert Haas <robertmhaas@gmail.com> writes:\n> > > On Tue, Apr 4, 2023 at 12:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >> Basically, I want to reject this on the grounds that it's not\n> > >> useful enough to justify the overhead of marking the \"role\" GUC\n> > >> as GUC_REPORT.\n> >\n> > > I agree with that. 
I think we need some method for optionally\n> > > reporting values, so that stuff like this can be handled without\n> > > adding it to the wire protocol for everyone.\n> >\n> > It could probably be possible to provide some mechanism for setting\n> > GUC_REPORT on specific variables locally within sessions. I don't\n> > think this'd be much of a protocol-break problem, because clients\n> > should already be coded to deal gracefully with ParameterStatus messages\n> > for variables they don't know. However, connecting that up to something\n> > like a psql prompt feature would still be annoying. I doubt I'd want\n> > to go as far as having psql try to turn on GUC_REPORT automatically\n> > if it sees %N in the prompt ...\n>\n> Oh, I had it in mind that it would do exactly that. And I think that\n> should be mediated by a wire protocol message, not a GUC, so that\n> users don't mess things up for psql or other clients -- in either\n> direction -- via SET commands.\n>\n>\nIf the GUC_REPORT should not be used, then only one possibility is\nenhancing the protocol, about the possibility to read some predefined\nserver's features from the client.\nIt can be much cheaper than SQL query, and it can be used when the current\ntransaction is aborted. I can imagine a possibility to read server time or\na server session role from a prompt processing routine.\n\nBut for this specific case, you need to cache the role name somewhere. You\ncan simply get oid everytime, but for role name you need to access to\nsystem catalogue, and it is not possible in aborted transactions. So at the\nend, you probably should read \"role\" GUC.\n\nCan this design be acceptable?\n\nRegards\n\nPavel\n\n\n\n\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n>\n\nst 5. 4. 
2023 v 16:02 odesílatel Robert Haas <robertmhaas@gmail.com> napsal:On Wed, Apr 5, 2023 at 9:56 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Tue, Apr 4, 2023 at 12:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Basically, I want to reject this on the grounds that it's not\n> >> useful enough to justify the overhead of marking the \"role\" GUC\n> >> as GUC_REPORT.\n>\n> > I agree with that. I think we need some method for optionally\n> > reporting values, so that stuff like this can be handled without\n> > adding it to the wire protocol for everyone.\n>\n> It could probably be possible to provide some mechanism for setting\n> GUC_REPORT on specific variables locally within sessions.  I don't\n> think this'd be much of a protocol-break problem, because clients\n> should already be coded to deal gracefully with ParameterStatus messages\n> for variables they don't know.  However, connecting that up to something\n> like a psql prompt feature would still be annoying.  I doubt I'd want\n> to go as far as having psql try to turn on GUC_REPORT automatically\n> if it sees %N in the prompt ...\n\nOh, I had it in mind that it would do exactly that. And I think that\nshould be mediated by a wire protocol message, not a GUC, so that\nusers don't mess things up for psql or other clients -- in either\ndirection -- via SET commands.\nIf the GUC_REPORT should not  be used, then only one possibility is enhancing the protocol, about the possibility to read some predefined server's features from the client.   It can be much cheaper than SQL query, and it can be used when the current transaction is aborted. I can imagine a possibility to read server time or a server session role from a prompt processing routine.But for this specific case, you need to cache the role name somewhere. You can simply get oid everytime, but for role name you need to access to system catalogue, and it is not possible in aborted transactions. 
So at the end, you probably should read \"role\" GUC. Can this design be  acceptable?RegardsPavel \n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 5 Apr 2023 17:33:52 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "On Wed, Apr 5, 2023 at 11:34 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> If the GUC_REPORT should not be used, then only one possibility is enhancing the protocol, about the possibility to read some predefined server's features from the client.\n> It can be much cheaper than SQL query, and it can be used when the current transaction is aborted. I can imagine a possibility to read server time or a server session role from a prompt processing routine.\n>\n> But for this specific case, you need to cache the role name somewhere. You can simply get oid everytime, but for role name you need to access to system catalogue, and it is not possible in aborted transactions. So at the end, you probably should read \"role\" GUC.\n>\n> Can this design be acceptable?\n\nI don't think we want to add a dedicated protocol message that says\n\"send me the role GUC right now\". I mean, we could, but being able to\ntell the GUC mechanism \"please send me the role GUC after every\ncommand\" sounds a lot easier to use.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 5 Apr 2023 11:47:46 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "st 5. 4. 
2023 v 17:47 odesílatel Robert Haas <robertmhaas@gmail.com> napsal:\n\n> On Wed, Apr 5, 2023 at 11:34 AM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > If the GUC_REPORT should not be used, then only one possibility is\n> enhancing the protocol, about the possibility to read some predefined\n> server's features from the client.\n> > It can be much cheaper than SQL query, and it can be used when the\n> current transaction is aborted. I can imagine a possibility to read server\n> time or a server session role from a prompt processing routine.\n> >\n> > But for this specific case, you need to cache the role name somewhere.\n> You can simply get oid everytime, but for role name you need to access to\n> system catalogue, and it is not possible in aborted transactions. So at the\n> end, you probably should read \"role\" GUC.\n> >\n> > Can this design be acceptable?\n>\n> I don't think we want to add a dedicated protocol message that says\n> \"send me the role GUC right now\". I mean, we could, but being able to\n> tell the GUC mechanism \"please send me the role GUC after every\n> command\" sounds a lot easier to use.\n>\n\nI'll try it\n\nRegards\n\nPavel\n\n\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n>\n\nst 5. 4. 2023 v 17:47 odesílatel Robert Haas <robertmhaas@gmail.com> napsal:On Wed, Apr 5, 2023 at 11:34 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> If the GUC_REPORT should not  be used, then only one possibility is enhancing the protocol, about the possibility to read some predefined server's features from the client.\n> It can be much cheaper than SQL query, and it can be used when the current transaction is aborted. I can imagine a possibility to read server time or a server session role from a prompt processing routine.\n>\n> But for this specific case, you need to cache the role name somewhere. 
You can simply get oid everytime, but for role name you need to access to system catalogue, and it is not possible in aborted transactions. So at the end, you probably should read \"role\" GUC.\n>\n> Can this design be  acceptable?\n\nI don't think we want to add a dedicated protocol message that says\n\"send me the role GUC right now\". I mean, we could, but being able to\ntell the GUC mechanism \"please send me the role GUC after every\ncommand\" sounds a lot easier to use.I'll try itRegardsPavel \n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 5 Apr 2023 18:40:30 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "st 5. 4. 2023 v 18:40 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n>\n>\n> st 5. 4. 2023 v 17:47 odesílatel Robert Haas <robertmhaas@gmail.com>\n> napsal:\n>\n>> On Wed, Apr 5, 2023 at 11:34 AM Pavel Stehule <pavel.stehule@gmail.com>\n>> wrote:\n>> > If the GUC_REPORT should not be used, then only one possibility is\n>> enhancing the protocol, about the possibility to read some predefined\n>> server's features from the client.\n>> > It can be much cheaper than SQL query, and it can be used when the\n>> current transaction is aborted. I can imagine a possibility to read server\n>> time or a server session role from a prompt processing routine.\n>> >\n>> > But for this specific case, you need to cache the role name somewhere.\n>> You can simply get oid everytime, but for role name you need to access to\n>> system catalogue, and it is not possible in aborted transactions. So at the\n>> end, you probably should read \"role\" GUC.\n>> >\n>> > Can this design be acceptable?\n>>\n>> I don't think we want to add a dedicated protocol message that says\n>> \"send me the role GUC right now\". 
I mean, we could, but being able to\n>> tell the GUC mechanism \"please send me the role GUC after every\n>> command\" sounds a lot easier to use.\n>>\n>\n> I'll try it\n>\n\nhere is patch with setting GUC_REPORT on role only when it is required by\nprompt\n\nRegards\n\nPavel\n\n\n>\n> Regards\n>\n> Pavel\n>\n>\n>>\n>> --\n>> Robert Haas\n>> EDB: http://www.enterprisedb.com\n>>\n>", "msg_date": "Tue, 11 Apr 2023 11:37:28 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "Hi\n\nrebased version + fix warning possibly uninitialized variable\n\nRegards\n\nPavel", "msg_date": "Thu, 27 Apr 2023 07:39:05 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "čt 27. 4. 2023 v 7:39 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n> Hi\n>\n> rebased version + fix warning possibly uninitialized variable\n>\n\nfix not unique function id\n\nRegards\n\nPavel\n\n\n> Regards\n>\n> Pavel\n>\n>", "msg_date": "Fri, 28 Apr 2023 07:00:10 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "I'm very much in favor of adding a way to support reporting other GUC\nvariables than the current hardcoded list. This can be quite useful to\nsupport some amount of session state in connection poolers.\n\nSome general feedback on the patch:\n1. I don't think the synchronization mechanism that you added should\nbe part of PQlinkParameterStatus. It seems likely for people to want\nto turn on reporting for multiple GUCs in one go. Having to\nsynchronize for each would introduce unnecessary round trips. Maybe\nyou also don't care about syncing at all at this point in time.\n\n2. 
Support for this message should probably require a protocol\nextension. There is another recent thread that discusses about adding\nmessage types and protocol extensions:\nhttps://www.postgresql.org/message-id/flat/CA%2BTgmoaxfJ3whOqnxTjT-%2BHDgZYbEho7dVHHsuEU2sgRw17OEQ%40mail.gmail.com#acd99fde0c037cc6cb35d565329b6e00\n\n3. Some tests for this new libpq API should be added in\nsrc/test/modules/libpq_pipeline\n\n4. s/massages/messages\n\n\nFinally, I think this patch should be split into two commits:\n1. adding custom GUC_REPORT protocol support+libpq API\n2. using this libpq API in psql for the user prompt\n\nIf you have multiple commits (which are rebased on master), you can\nvery easily create multiple patch files using this command:\ngit format-patch master --base=master --reroll-count={version_number_here}\n\n\nOn Fri, 28 Apr 2023 at 07:00, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n>\n>\n> čt 27. 4. 2023 v 7:39 odesílatel Pavel Stehule <pavel.stehule@gmail.com> napsal:\n>>\n>> Hi\n>>\n>> rebased version + fix warning possibly uninitialized variable\n>\n>\n> fix not unique function id\n>\n> Regards\n>\n> Pavel\n>\n>>\n>> Regards\n>>\n>> Pavel\n>>\n\n\n", "msg_date": "Mon, 8 May 2023 14:21:50 +0200", "msg_from": "Jelte Fennema <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "Hi\n\npo 8. 5. 2023 v 14:22 odesílatel Jelte Fennema <postgres@jeltef.nl> napsal:\n\n> I'm very much in favor of adding a way to support reporting other GUC\n> variables than the current hardcoded list. This can be quite useful to\n> support some amount of session state in connection poolers.\n>\n> Some general feedback on the patch:\n> 1. I don't think the synchronization mechanism that you added should\n> be part of PQlinkParameterStatus. It seems likely for people to want\n> to turn on reporting for multiple GUCs in one go. Having to\n> synchronize for each would introduce unnecessary round trips. 
Maybe\n> you also don't care about syncing at all at this point in time.\n>\n> 2. Support for this message should probably require a protocol\n> extension. There is another recent thread that discusses about adding\n> message types and protocol extensions:\n>\n> https://www.postgresql.org/message-id/flat/CA%2BTgmoaxfJ3whOqnxTjT-%2BHDgZYbEho7dVHHsuEU2sgRw17OEQ%40mail.gmail.com#acd99fde0c037cc6cb35d565329b6e00\n>\n> 3. Some tests for this new libpq API should be added in\n> src/test/modules/libpq_pipeline\n>\n> 4. s/massages/messages\n>\n>\n> Finally, I think this patch should be split into two commits:\n> 1. adding custom GUC_REPORT protocol support+libpq API\n> 2. using this libpq API in psql for the user prompt\n>\n> If you have multiple commits (which are rebased on master), you can\n> very easily create multiple patch files using this command:\n> git format-patch master --base=master --reroll-count={version_number_here}\n>\n>\nThank you for your comments, I'll finish refactoring plpgsql_check, and\nI'll start refactoring this patch with your comments.\n\nRegards\n\nPavel\n\n\n>\n> On Fri, 28 Apr 2023 at 07:00, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> >\n> >\n> >\n> > čt 27. 4. 2023 v 7:39 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\n> napsal:\n> >>\n> >> Hi\n> >>\n> >> rebased version + fix warning possibly uninitialized variable\n> >\n> >\n> > fix not unique function id\n> >\n> > Regards\n> >\n> > Pavel\n> >\n> >>\n> >> Regards\n> >>\n> >> Pavel\n> >>\n>\n\nHipo 8. 5. 2023 v 14:22 odesílatel Jelte Fennema <postgres@jeltef.nl> napsal:I'm very much in favor of adding a way to support reporting other GUC\nvariables than the current hardcoded list. This can be quite useful to\nsupport some amount of session state in connection poolers.\n\nSome general feedback on the patch:\n1. I don't think the synchronization mechanism that you added should\nbe part of PQlinkParameterStatus. 
It seems likely for people to want\nto turn on reporting for multiple GUCs in one go. Having to\nsynchronize for each would introduce unnecessary round trips. Maybe\nyou also don't care about syncing at all at this point in time.\n\n2. Support for this message should probably require a protocol\nextension. There is another recent thread that discusses about adding\nmessage types and protocol extensions:\nhttps://www.postgresql.org/message-id/flat/CA%2BTgmoaxfJ3whOqnxTjT-%2BHDgZYbEho7dVHHsuEU2sgRw17OEQ%40mail.gmail.com#acd99fde0c037cc6cb35d565329b6e00\n\n3. Some tests for this new libpq API should be added in\nsrc/test/modules/libpq_pipeline\n\n4. s/massages/messages\n\n\nFinally, I think this patch should be split into two commits:\n1. adding custom GUC_REPORT protocol support+libpq API\n2. using this libpq API in psql for the user prompt\n\nIf you have multiple commits (which are rebased on master), you can\nvery easily create multiple patch files using this command:\ngit format-patch master --base=master --reroll-count={version_number_here}\nThank you for your comments, I'll finish refactoring plpgsql_check, and I'll start refactoring this patch with your comments.RegardsPavel \n\nOn Fri, 28 Apr 2023 at 07:00, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n>\n>\n> čt 27. 4. 2023 v 7:39 odesílatel Pavel Stehule <pavel.stehule@gmail.com> napsal:\n>>\n>> Hi\n>>\n>> rebased version + fix warning possibly uninitialized variable\n>\n>\n> fix not unique function id\n>\n> Regards\n>\n> Pavel\n>\n>>\n>> Regards\n>>\n>> Pavel\n>>", "msg_date": "Sat, 15 Jul 2023 18:10:51 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "Hi\n\npá 28. 4. 2023 v 7:00 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n>\n>\n> čt 27. 4. 
2023 v 7:39 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\n> napsal:\n>\n>> Hi\n>>\n>> rebased version + fix warning possibly uninitialized variable\n>>\n>\n> fix not unique function id\n>\n> Regards\n>\n> Pavel\n>\n\nonly rebase\n\n\n\n>\n>\n>> Regards\n>>\n>> Pavel\n>>\n>>", "msg_date": "Sat, 15 Jul 2023 18:11:27 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "Hi\n\npo 8. 5. 2023 v 14:22 odesílatel Jelte Fennema <postgres@jeltef.nl> napsal:\n\n> I'm very much in favor of adding a way to support reporting other GUC\n> variables than the current hardcoded list. This can be quite useful to\n> support some amount of session state in connection poolers.\n>\n> Some general feedback on the patch:\n> 1. I don't think the synchronization mechanism that you added should\n> be part of PQlinkParameterStatus. It seems likely for people to want\n> to turn on reporting for multiple GUCs in one go. Having to\n> synchronize for each would introduce unnecessary round trips. Maybe\n> you also don't care about syncing at all at this point in time.\n>\n\nI don't understand how it can be possible to do it without. I need to\nprocess possible errors, and then I need to read and synchronize protocol.\nI didn't inject\nthis feature to some oher flow, so I need to implement a complete process.\nFor example, PQsetClientEncoding does a PQexec call, which is much more\nexpensive. Unfortunately, for this feature, I cannot change some local\nstate variables, but I need to change the state of the server. Own message\nis necessary, because we don't want to be limited by the current\ntransaction state, and then we cannot reuse PQexec.\n\nThe API can be changed from PQlinkParameterStatus(PGconn *conn, const char\n*paramName)\n\nto\n\nPQlinkParameterStatus(PGconn *conn, int nParamNames, const char *paramNames)\n\nWhat do you think?\n\nRegards\n\nPavel\n\n\n>\n> 2. 
Support for this message should probably require a protocol\n> extension. There is another recent thread that discusses about adding\n> message types and protocol extensions:\n>\n> https://www.postgresql.org/message-id/flat/CA%2BTgmoaxfJ3whOqnxTjT-%2BHDgZYbEho7dVHHsuEU2sgRw17OEQ%40mail.gmail.com#acd99fde0c037cc6cb35d565329b6e00\n>\n> 3. Some tests for this new libpq API should be added in\n> src/test/modules/libpq_pipeline\n>\n> 4. s/massages/messages\n>\n>\n> Finally, I think this patch should be split into two commits:\n> 1. adding custom GUC_REPORT protocol support+libpq API\n> 2. using this libpq API in psql for the user prompt\n>\n> If you have multiple commits (which are rebased on master), you can\n> very easily create multiple patch files using this command:\n> git format-patch master --base=master --reroll-count={version_number_here}\n>\n>\n> On Fri, 28 Apr 2023 at 07:00, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> >\n> >\n> >\n> > čt 27. 4. 2023 v 7:39 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\n> napsal:\n> >>\n> >> Hi\n> >>\n> >> rebased version + fix warning possibly uninitialized variable\n> >\n> >\n> > fix not unique function id\n> >\n> > Regards\n> >\n> > Pavel\n> >\n> >>\n> >> Regards\n> >>\n> >> Pavel\n> >>\n>\n\nHipo 8. 5. 2023 v 14:22 odesílatel Jelte Fennema <postgres@jeltef.nl> napsal:I'm very much in favor of adding a way to support reporting other GUC\nvariables than the current hardcoded list. This can be quite useful to\nsupport some amount of session state in connection poolers.\n\nSome general feedback on the patch:\n1. I don't think the synchronization mechanism that you added should\nbe part of PQlinkParameterStatus. It seems likely for people to want\nto turn on reporting for multiple GUCs in one go. Having to\nsynchronize for each would introduce unnecessary round trips. Maybe\nyou also don't care about syncing at all at this point in time.I don't understand how it can be possible to do it without.  
I need to process possible errors, and then I need to read and synchronize protocol. I didn't injectthis feature to some oher flow, so I need to implement a complete process. For example, PQsetClientEncoding does a PQexec call, which is much more expensive. Unfortunately, for this feature, I cannot change some local state variables, but I need to change the state of the server. Own message is necessary, because we don't want to be limited by the current transaction state, and then we cannot reuse PQexec.The API can be changed from PQlinkParameterStatus(PGconn *conn, const char *paramName)toPQlinkParameterStatus(PGconn *conn, int nParamNames, const char *paramNames)What do you think?RegardsPavel \n\n2. Support for this message should probably require a protocol\nextension. There is another recent thread that discusses about adding\nmessage types and protocol extensions:\nhttps://www.postgresql.org/message-id/flat/CA%2BTgmoaxfJ3whOqnxTjT-%2BHDgZYbEho7dVHHsuEU2sgRw17OEQ%40mail.gmail.com#acd99fde0c037cc6cb35d565329b6e00\n\n3. Some tests for this new libpq API should be added in\nsrc/test/modules/libpq_pipeline\n\n4. s/massages/messages\n\n\nFinally, I think this patch should be split into two commits:\n1. adding custom GUC_REPORT protocol support+libpq API\n2. using this libpq API in psql for the user prompt\n\nIf you have multiple commits (which are rebased on master), you can\nvery easily create multiple patch files using this command:\ngit format-patch master --base=master --reroll-count={version_number_here}\n\n\nOn Fri, 28 Apr 2023 at 07:00, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n>\n>\n> čt 27. 4. 
2023 v 7:39 odesílatel Pavel Stehule <pavel.stehule@gmail.com> napsal:\n>>\n>> Hi\n>>\n>> rebased version + fix warning possibly uninitialized variable\n>\n>\n> fix not unique function id\n>\n> Regards\n>\n> Pavel\n>\n>>\n>> Regards\n>>\n>> Pavel\n>>", "msg_date": "Mon, 24 Jul 2023 21:15:43 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "On Mon, 24 Jul 2023 at 21:16, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> I don't understand how it can be possible to do it without. I need to process possible errors, and then I need to read and synchronize protocol. I didn't inject\n> this feature to some oher flow, so I need to implement a complete process.\n\nI think I might be missing the reason for this then. Could you explain\na bit more why you didn't inject the feature into another flow?\nBecause it seems like it would be better to inserting the logic for\nhandling the new response packet in pqParseInput3(), and then wait on\nthe result with PQexecFinish(). This would allow sending these\nmessages in a pipelined mode.\n\n> For example, PQsetClientEncoding does a PQexec call, which is much more expensive.\n\nYeah, but you'd usually only call that once for the life of the\nconnection. But honestly it would still be good if there was a\npipelined version of that function.\n\n> Unfortunately, for this feature, I cannot change some local state variables, but I need to change the state of the server. Own message is necessary, because we don't want to be limited by the current transaction state, and then we cannot reuse PQexec.\n\nI guess this is your reasoning for why it needs its own state machine,\nbut I don't think I understand the problem. Could you expand a bit\nmore on what you mean? Note that different message types use\nPQexecFinish to wait for their result, e.g. 
PQdescribePrepared and\nPQclosePrepared use PQexecFinish too and those wait for a\nRowDescription and a Close message respectively. I added the logic for\nPQclosePerpared recently, that patch might be some helpful example\ncode: https://github.com/postgres/postgres/commit/28b5726561841556dc3e00ffe26b01a8107ee654\n\n> The API can be changed from PQlinkParameterStatus(PGconn *conn, const char *paramName)\n>\n> to\n>\n> PQlinkParameterStatus(PGconn *conn, int nParamNames, const char *paramNames)\n\nThat would definitely address the issue with the many round trips\nbeing needed. But it would still leave the issue of introducing a\nsecond state machine in the libpq code. So if it's possible to combine\nthis new code into the existing state machine, then that seems a lot\nbetter.\n\n\n", "msg_date": "Mon, 31 Jul 2023 17:46:25 +0200", "msg_from": "Jelte Fennema <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "po 31. 7. 2023 v 17:46 odesílatel Jelte Fennema <postgres@jeltef.nl> napsal:\n\n> On Mon, 24 Jul 2023 at 21:16, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > I don't understand how it can be possible to do it without. I need to\n> process possible errors, and then I need to read and synchronize protocol.\n> I didn't inject\n> > this feature to some oher flow, so I need to implement a complete\n> process.\n>\n> I think I might be missing the reason for this then. Could you explain\n> a bit more why you didn't inject the feature into another flow?\n> Because it seems like it would be better to inserting the logic for\n> handling the new response packet in pqParseInput3(), and then wait on\n> the result with PQexecFinish(). This would allow sending these\n> messages in a pipelined mode.\n>\n> > For example, PQsetClientEncoding does a PQexec call, which is much more\n> expensive.\n>\n> Yeah, but you'd usually only call that once for the life of the\n> connection. 
But honestly it would still be good if there was a\n> pipelined version of that function.\n>\n> > Unfortunately, for this feature, I cannot change some local state\n> variables, but I need to change the state of the server. Own message is\n> necessary, because we don't want to be limited by the current transaction\n> state, and then we cannot reuse PQexec.\n>\n> I guess this is your reasoning for why it needs its own state machine,\n> but I don't think I understand the problem. Could you expand a bit\n> more on what you mean? Note that different message types use\n> PQexecFinish to wait for their result, e.g. PQdescribePrepared and\n> PQclosePrepared use PQexecFinish too and those wait for a\n> RowDescription and a Close message respectively. I added the logic for\n> PQclosePerpared recently, that patch might be some helpful example\n> code:\n> https://github.com/postgres/postgres/commit/28b5726561841556dc3e00ffe26b01a8107ee654\n\n\nThe reason why I implemented separate flow is usage from psql and\nindependence of transaction state. It is used for the \\set command, that\nis non-transactional, not SQL. If I inject this message to some other flow,\nI lose this independence. Proposed message can be injected to other flows\ntoo, I think, but for the proposed psql feature it is not practical.\nWithout independence on transaction state and SQL, I can just implement\nsome SQL function that sets reporting for any GUC, and it is more simple\nthan extending protocol.\n\nRegards\n\nPavel\n\n\n\n>\n> > The API can be changed from PQlinkParameterStatus(PGconn *conn, const\n> char *paramName)\n> >\n> > to\n> >\n> > PQlinkParameterStatus(PGconn *conn, int nParamNames, const char\n> *paramNames)\n>\n> That would definitely address the issue with the many round trips\n> being needed. But it would still leave the issue of introducing a\n> second state machine in the libpq code. 
So if it's possible to combine\n> this new code into the existing state machine, then that seems a lot\n> better.\n>\n\npo 31. 7. 2023 v 17:46 odesílatel Jelte Fennema <postgres@jeltef.nl> napsal:On Mon, 24 Jul 2023 at 21:16, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> I don't understand how it can be possible to do it without.  I need to process possible errors, and then I need to read and synchronize protocol. I didn't inject\n> this feature to some oher flow, so I need to implement a complete process.\n\nI think I might be missing the reason for this then. Could you explain\na bit more why you didn't inject the feature into another flow?\nBecause it seems like it would be better to inserting the logic for\nhandling the new response packet in pqParseInput3(), and then wait on\nthe result with PQexecFinish(). This would allow sending these\nmessages in a pipelined mode.\n\n> For example, PQsetClientEncoding does a PQexec call, which is much more expensive.\n\nYeah, but you'd usually only call that once for the life of the\nconnection. But honestly it would still be good if there was a\npipelined version of that function.\n\n> Unfortunately, for this feature, I cannot change some local state variables, but I need to change the state of the server. Own message is necessary, because we don't want to be limited by the current transaction state, and then we cannot reuse PQexec.\n\nI guess this is your reasoning for why it needs its own state machine,\nbut I don't think I understand the problem. Could you expand a bit\nmore on what you mean? Note that different message types use\nPQexecFinish to wait for their result, e.g. PQdescribePrepared and\nPQclosePrepared use PQexecFinish too and those wait for a\nRowDescription and a Close message respectively. 
I added the logic for\nPQclosePerpared recently, that patch might be some helpful example\ncode: https://github.com/postgres/postgres/commit/28b5726561841556dc3e00ffe26b01a8107ee654The reason why I implemented separate flow is usage from psql and independence of transaction state.  It is used for the \\set command, that is non-transactional, not SQL. If I inject this message to some other flow, I lose this independence. Proposed message can be injected to other flows too, I think, but for the proposed psql feature it is not practical. Without independence on transaction state and SQL, I can just implement some SQL function that sets reporting for any GUC, and it is more simple than extending protocol. RegardsPavel\n\n> The API can be changed from PQlinkParameterStatus(PGconn *conn, const char *paramName)\n>\n> to\n>\n> PQlinkParameterStatus(PGconn *conn, int nParamNames, const char *paramNames)\n\nThat would definitely address the issue with the many round trips\nbeing needed. But it would still leave the issue of introducing a\nsecond state machine in the libpq code. So if it's possible to combine\nthis new code into the existing state machine, then that seems a lot\nbetter.", "msg_date": "Tue, 8 Aug 2023 07:20:07 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "On Tue, 8 Aug 2023 at 07:20, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> The reason why I implemented separate flow is usage from psql and independence of transaction state. It is used for the \\set command, that is non-transactional, not SQL. If I inject this message to some other flow, I lose this independence.\n\nI still don't understand the issue that you're trying to solve by\nintroducing a new flow for handling the response. What do you mean\nwith independence of the transaction state? 
That it is not rolled-back\nin a case like this?\n\nBEGIN;\n\\set PROMPT '%N'\nROLLBACK;\n\nThat seems more like a Postgres server concern, i.e. don't revert the\nchange back on ROLLBACK. (I think your current server-side\nimplementation already does that)\n\nI guess one reason that I don't understand what you mean is that libpq\ndoesn't really care about \"transaction state\" at all. It's really a\nwrapper around a socket with easy functions to send messages in the\npostgres protocol over it. So it cares about the \"connection state\",\nbut not really about a \"transaction state\". (it does track the current\nconnection state, but it doesn't actually use the value except when\nreporting the value when PQtransactionStatus is called by the user of\nlibpq)\n\n> Without independence on transaction state and SQL, I can just implement some SQL function that sets reporting for any GUC, and it is more simple than extending protocol.\n\nI don't understand why this is not possible. As far as I can tell this\nshould work fine for the usecase of psql. I still prefer the protocol\nmessage approach though, because that makes it possible for connection\npoolers to intercept the message and handle it accordingly. And I see\nsome use cases for this reporting feature for PgBouncer as well.\nHowever, I think this is probably the key thing that I don't\nunderstand about the problem you're describing: So, could you explain\nin some more detail why implementing a SQL function would not work for\npsql?\n\n\n", "msg_date": "Thu, 10 Aug 2023 14:05:44 +0200", "msg_from": "Jelte Fennema <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "čt 10. 8. 2023 v 14:05 odesílatel Jelte Fennema <postgres@jeltef.nl> napsal:\n\n> On Tue, 8 Aug 2023 at 07:20, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > The reason why I implemented separate flow is usage from psql and\n> independence of transaction state. 
It is used for the \\set command, that\n> is non-transactional, not SQL. If I inject this message to some other flow,\n> I lose this independence.\n>\n> I still don't understand the issue that you're trying to solve by\n> introducing a new flow for handling the response. What do you mean\n> with independence of the transaction state? That it is not rolled-back\n> in a case like this?\n>\n> BEGIN;\n> \\set PROMPT '%N'\n> ROLLBACK;\n>\n\nsurely not.\n\n\\set is client side setting, and it is not transactional. Attention -\n\"\\set\" and \"set\" commands are absolutely different creatures.\n\n\n> That seems more like a Postgres server concern, i.e. don't revert the\n> change back on ROLLBACK. (I think your current server-side\n> implementation already does that)\n>\n\nPostgres does it, but not on the client side. What I know, almost all\nenvironments don't support transactions on the client side. Postgres is not\nspecial in this direction.\n\n\n>\n> I guess one reason that I don't understand what you mean is that libpq\n> doesn't really care about \"transaction state\" at all. It's really a\n> wrapper around a socket with easy functions to send messages in the\n> postgres protocol over it. So it cares about the \"connection state\",\n> but not really about a \"transaction state\". (it does track the current\n> connection state, but it doesn't actually use the value except when\n> reporting the value when PQtransactionStatus is called by the user of\n> libpq)\n>\n> > Without independence on transaction state and SQL, I can just implement\n> some SQL function that sets reporting for any GUC, and it is more simple\n> than extending protocol.\n>\n> I don't understand why this is not possible. As far as I can tell this\n> should work fine for the usecase of psql. I still prefer the protocol\n> message approach though, because that makes it possible for connection\n> poolers to intercept the message and handle it accordingly. 
And I see\n> some use cases for this reporting feature for PgBouncer as well.\n>\n\nMaybe we are talking about different features. Now, I have no idea how the\nproposed feature can be useful for pgbouncer?\n\nSure If I implement a new flow, then pgbouncer cannot forward. But it is\nnot too hard to implement redirection of new flow to pgbouncer.\n\n\n> However, I think this is probably the key thing that I don't\n> understand about the problem you're describing: So, could you explain\n> in some more detail why implementing a SQL function would not work for\n> psql?\n>\n\nI try to get some consistency. psql setting and some features like\nformatting doesn't depend on transactional state. It depends just on\nconnection. This is the reason why I don't want to inject dependency on\ntransaction state. Without this dependency, I don't need to check\ntransaction state, and I can execute prompt settings immediately. If I\nimplement this feature as transactional, then I need to wait to idle or to\nmake a new transaction (and this I have not under my control). I try to be\nconsistent with current psql behaviour. It Would be strange (can be very\nmessy) if I had a message like \"cannot set a prompt, because you should do\nROLLBACK first\"\n\nWhen this feature should be implemented as an injected message, then I have\nanother problem. Which SQL command I have to send to the server, when the\nuser wants to set a prompt? And then I don't need to implement a new\nmessage, and I can just implement the SQL function\npg_catalog.report_config(...).\n\nčt 10. 8. 2023 v 14:05 odesílatel Jelte Fennema <postgres@jeltef.nl> napsal:On Tue, 8 Aug 2023 at 07:20, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> The reason why I implemented separate flow is usage from psql and independence of transaction state.  It is used for the \\set command, that is non-transactional, not SQL. 
If I inject this message to some other flow, I lose this independence.\n\nI still don't understand the issue that you're trying to solve by\nintroducing a new flow for handling the response. What do you mean\nwith independence of the transaction state? That it is not rolled-back\nin a case like this?\n\nBEGIN;\n\\set PROMPT '%N'\nROLLBACK;surely not.\\set is client side setting, and it is not transactional. Attention - \"\\set\" and \"set\" commands are absolutely different creatures.\n\nThat seems more like a Postgres server concern, i.e. don't revert the\nchange back on ROLLBACK. (I think your current server-side\nimplementation already does that)Postgres does it, but not on the client side. What I know, almost all environments don't support transactions on the client side. Postgres is not special in this direction. \n\nI guess one reason that I don't understand what you mean is that libpq\ndoesn't really care about \"transaction state\" at all. It's really a\nwrapper around a socket with easy functions to send messages in the\npostgres protocol over it. So it cares about the \"connection state\",\nbut not really about a \"transaction state\". (it does track the current\nconnection state, but it doesn't actually use the value except when\nreporting the value when PQtransactionStatus is called by the user of\nlibpq)\n\n> Without independence on transaction state and SQL, I can just implement some SQL function that sets reporting for any GUC, and it is more simple than extending protocol.\n\nI don't understand why this is not possible. As far as I can tell this\nshould work fine for the usecase of psql. I still prefer the protocol\nmessage approach though, because that makes it possible for connection\npoolers to intercept the message and handle it accordingly. And I see\nsome use cases for this reporting feature for PgBouncer as well.Maybe we are talking about different features. Now, I have no idea how the proposed feature can be useful for pgbouncer? 
Sure If I implement a new flow, then pgbouncer cannot forward. But it is not too hard to implement redirection of new flow to pgbouncer. \nHowever, I think this is probably the key thing that I don't\nunderstand about the problem you're describing: So, could you explain\nin some more detail why implementing a SQL function would not work for\npsql?I try to get some consistency. psql setting and some features like formatting doesn't depend on transactional state. It depends just on connection.  This is the reason why I don't want to inject dependency on transaction state. Without this dependency, I don't need to check transaction state, and I can execute prompt settings immediately. If I implement this feature as transactional, then I need to wait to idle or to make a new transaction (and this I have not under my control). I try to be consistent with current psql behaviour. It Would be strange (can be very messy) if I had a message like \"cannot set a prompt, because you should do ROLLBACK first\"When this feature should be implemented as an injected message, then I have another problem. Which SQL command I have to send to the server, when the user wants to set a prompt? And then I don't need to implement a new message, and I can just implement the SQL function pg_catalog.report_config(...).", "msg_date": "Thu, 10 Aug 2023 14:43:50 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "On Thu, 10 Aug 2023 at 14:44, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> čt 10. 8. 2023 v 14:05 odesílatel Jelte Fennema <postgres@jeltef.nl> napsal:\n>> That it is not rolled-back\n>> in a case like this?\n>>\n>> BEGIN;\n>> \\set PROMPT '%N'\n>> ROLLBACK;\n>\n>\n> surely not.\n>\n> \\set is client side setting, and it is not transactional. 
Attention - \"\\set\" and \"set\" commands are absolutely different creatures.\n\nTo clarify: I agree it's the desired behavior that \\set is not rolled back.\n\n> It Would be strange (can be very messy) if I had a message like \"cannot set a prompt, because you should do ROLLBACK first\"\n\nThis was a very helpful sentence for my understanding. To double check\nthat I'm understanding you correctly. This is the kind of case that\nyou're talking about.\n\npostgres=# BEGIN;\npostgres=# SELECT some syntax error;\nERROR: 42601: syntax error at or near \"some\"\npostgres=# \\set PROMPT '%N'\nERROR: 25P02: current transaction is aborted, commands ignored until\nend of transaction block\n\nI agree that it should not throw an error like that. So indeed a\ndedicated message type is needed for psql too. Because any query will\ncause that error.\n\nBut afaict there's no problem with using pqParseInput3() and\nPQexecFinish() even if the message isn't handled as part of the\ntransaction. Some other messages that pqParseInput3 handles which are\nnot part of the transaction are 'N' (Notice) and 'K' (secret key).\n\n\n", "msg_date": "Thu, 10 Aug 2023 16:31:27 +0200", "msg_from": "Jelte Fennema <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "čt 10. 8. 2023 v 16:31 odesílatel Jelte Fennema <postgres@jeltef.nl> napsal:\n\n> On Thu, 10 Aug 2023 at 14:44, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > čt 10. 8. 2023 v 14:05 odesílatel Jelte Fennema <postgres@jeltef.nl>\n> napsal:\n> >> That it is not rolled-back\n> >> in a case like this?\n> >>\n> >> BEGIN;\n> >> \\set PROMPT '%N'\n> >> ROLLBACK;\n> >\n> >\n> > surely not.\n> >\n> > \\set is client side setting, and it is not transactional. 
Attention -\n> \"\\set\" and \"set\" commands are absolutely different creatures.\n>\n> To clarify: I agree it's the desired behavior that \\set is not rolled back.\n>\n> > It Would be strange (can be very messy) if I had a message like \"cannot\n> set a prompt, because you should do ROLLBACK first\"\n>\n> This was a very helpful sentence for my understanding. To double check\n> that I'm understanding you correctly. This is the kind of case that\n> you're talking about.\n>\n> postgres=# BEGIN;\n> postgres=# SELECT some syntax error;\n> ERROR: 42601: syntax error at or near \"some\"\n> postgres=# \\set PROMPT '%N'\n> ERROR: 25P02: current transaction is aborted, commands ignored until\n> end of transaction block\n>\n\nyes\n\n>\n> I agree that it should not throw an error like that. So indeed a\n> dedicated message type is needed for psql too. Because any query will\n> cause that error.\n>\n\n\n>\n> But afaict there's no problem with using pqParseInput3() and\n> PQexecFinish() even if the message isn't handled as part of the\n> transaction. Some other messages that pqParseInput3 handles which are\n> not part of the transaction are 'N' (Notice) and 'K' (secret key).\n>\n\nI have to recheck it\n\nRegards\n\nPavel\n\nčt 10. 8. 2023 v 16:31 odesílatel Jelte Fennema <postgres@jeltef.nl> napsal:On Thu, 10 Aug 2023 at 14:44, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> čt 10. 8. 2023 v 14:05 odesílatel Jelte Fennema <postgres@jeltef.nl> napsal:\n>> That it is not rolled-back\n>> in a case like this?\n>>\n>> BEGIN;\n>> \\set PROMPT '%N'\n>> ROLLBACK;\n>\n>\n> surely not.\n>\n> \\set is client side setting, and it is not transactional. 
Attention - \"\\set\" and \"set\" commands are absolutely different creatures.\n\nTo clarify: I agree it's the desired behavior that \\set is not rolled back.\n\n> It Would be strange (can be very messy) if I had a message like \"cannot set a prompt, because you should do ROLLBACK first\"\n\nThis was a very helpful sentence for my understanding. To double check\nthat I'm understanding you correctly. This is the kind of case that\nyou're talking about.\n\npostgres=# BEGIN;\npostgres=# SELECT some syntax error;\nERROR:  42601: syntax error at or near \"some\"\npostgres=# \\set PROMPT '%N'\nERROR:  25P02: current transaction is aborted, commands ignored until\nend of transaction blockyes \n\nI agree that it should not throw an error like that. So indeed a\ndedicated message type is needed for psql too. Because any query will\ncause that error. \n\nBut afaict there's no problem with using pqParseInput3() and\nPQexecFinish() even if the message isn't handled as part of the\ntransaction. Some other messages that pqParseInput3 handles which are\nnot part of the transaction are 'N' (Notice) and 'K' (secret key).I have to recheck itRegardsPavel", "msg_date": "Fri, 11 Aug 2023 08:34:09 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "Hi\n\n\n>>\n>>\n>> But afaict there's no problem with using pqParseInput3() and\n>> PQexecFinish() even if the message isn't handled as part of the\n>> transaction. 
Some other messages that pqParseInput3 handles which are\n>> not part of the transaction are 'N' (Notice) and 'K' (secret key).\n>>\n>\n> I have to recheck it\n>\n\nhere is new version based on usage of PQexecFinish\n\nRegards\n\nPavel\n\n\n>\n> Regards\n>\n> Pavel\n>\n>", "msg_date": "Mon, 28 Aug 2023 13:58:55 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "po 28. 8. 2023 v 13:58 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n> Hi\n>\n>\n>>>\n>>>\n>>> But afaict there's no problem with using pqParseInput3() and\n>>> PQexecFinish() even if the message isn't handled as part of the\n>>> transaction. Some other messages that pqParseInput3 handles which are\n>>> not part of the transaction are 'N' (Notice) and 'K' (secret key).\n>>>\n>>\n>> I have to recheck it\n>>\n>\n> here is new version based on usage of PQexecFinish\n>\n\nwith protocol test\n\nRegards\n\nPavel\n\n\n>\n> Regards\n>\n> Pavel\n>\n>\n>>\n>> Regards\n>>\n>> Pavel\n>>\n>>\n>", "msg_date": "Mon, 28 Aug 2023 15:00:05 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "On Mon, 28 Aug 2023 at 15:00, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n+ minServerMajor = 1600;\n+ serverMajor = PQserverVersion(pset.db) / 100;\n\nInstead of using the server version, we should instead use the\nprotocol version negotiation that's provided by the\nNegotiateProtocolVersion message type. We should bump the requested\nprotocol version from 3.0 to 3.1 and check that the server supports\n3.1. 
Otherwise proxies or connection poolers might get this new\nmessage type, without knowing what to do with them.\n\n+ <varlistentry id=\"protocol-message-formats-ReportGUC\">\n+ <term>ReportGUC (F)</term>\n\nWe'll need some docs on the \"Message Flow\" page too:\nhttps://www.postgresql.org/docs/current/protocol-flow.html\nSpecifically what response is expected, if any.\n\nAnother thing that should be described there is that this falls\noutside of the transaction flow, i.e. it's changes are not reverted on\nROLLBACK. But that leaves an important consideration: What happens\nwhen an error occurs on the server during handling of this message\n(e.g. the GUC does not exist or an OOM is triggered). Is any open\ntransaction aborted in that case? If not, we should have a test for\nthat.\n\n+ if (PQresultStatus(res) != PGRES_COMMAND_OK)\n+ pg_fatal(\"failed to link custom variable: %s\", PQerrorMessage(conn));\n+ PQclear(res);\n\nThe tests should also change the config after running\nPQlinkParameterStatus/PQunlinkParameterStatus to show that the guc is\nreported then or not reported then.\n\n+ if (!PQsendTypedCommand(conn, PqMsg_ReportGUC, 't', paramName))\n+ return NULL;\n\n\nI think we'll need some bikeshedding on what the protocol message\nshould look like exactly. I'm not entirely sure what is the most\nsensible here, so please treat everything I write next as\nsuggestions/discussion:\nI see that you're piggy-backing on PQsendTypedCommand, which seems\nnice to avoid code duplication. It has one downside though: not every\ntype, is valid for each command anymore.\nOne way to avoid that would be to not introduce a new command, but\nonly add a new type that is understood by Describe and Close, e.g. a\n'G' (for GUC). 
Then PqMsg_Describe, G would become the equivalent of\nwhat'the current patch its PqMsg_ReportGUC, 't' and PqMsg_Close, G\nwould be the same as PqMsg_ReportGUC, 'f'.\n\nThe rest of this email assumes that we're keeping your current\nproposal for the protocol message, so it might not make sense to\naddress all of this feedback, in case we're still going to change the\nprotocol:\n\n+ if (is_set == 't')\n+ {\n+ SetGUCOptionFlag(name, GUC_REPORT);\n+ status = \"SET REPORT_GUC\";\n+ }\n+ else\n+ {\n+ UnsetGUCOptionFlag(name, GUC_REPORT);\n+ status = \"UNSET REPORT_GUC\";\n+ }\n\nI think we should be strict about what we accept here. Any other value\nthan 'f'/'t' for is_set should result in an error imho.\n\n\n", "msg_date": "Tue, 29 Aug 2023 14:10:54 +0200", "msg_from": "Jelte Fennema <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "út 29. 8. 2023 v 14:11 odesílatel Jelte Fennema <postgres@jeltef.nl> napsal:\n\n> On Mon, 28 Aug 2023 at 15:00, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> + minServerMajor = 1600;\n> + serverMajor = PQserverVersion(pset.db) / 100;\n>\n> Instead of using the server version, we should instead use the\n> protocol version negotiation that's provided by the\n> NegotiateProtocolVersion message type. We should bump the requested\n> protocol version from 3.0 to 3.1 and check that the server supports\n> 3.1. Otherwise proxies or connection poolers might get this new\n> message type, without knowing what to do with them.\n>\n\nI checked this part and this part of the code, and it looks like current\nlibpq doesn't allow optional requirements of higher protocol version number.\n\nWith the current state of NegotiateProtocolVersion handling I am not able\nto connect to any older servers, because there is no fallback\nimplementation.\n\nMy personal feeling from this area is that the protocol design is done, but\nit is not implemented on libpq level. 
My feelings can be wrong. The\nprotocol number is hardcoded in libpq, so I cannot change it from the\nclient side.\n\nBut maybe I don't see some possibility?\n\nRegards\n\nPavel", "msg_date": "Sun, 3 Sep 2023 08:23:27 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "On Sun, 3 Sept 2023 at 08:24, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> My personal feeling from this area is that the protocol design is done, but it is not implemented on libpq level. 
The protocol number is hardcoded in libpq, so I cannot change it from the client side.\n\n\nNo, I agree you're right the client side code to fall back to older\nversions is not implemented. But that seems fairly simple to do. We\ncan change pqGetNegotiateProtocolVersion3 its behaviour. That function\nshould change conn->pversion to the server provided version if it's\nlower than the client version (as long as the server provided version\nis 3.0 or larger). And then we should return an error when calling\nPQlinkParameterStatus/PQunlinkParameterStatus if the pversion is not\nhigh enough.\n\n\n", "msg_date": "Sun, 3 Sep 2023 09:59:33 +0200", "msg_from": "Jelte Fennema <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "Hi\n\nne 3. 9. 2023 v 9:59 odesílatel Jelte Fennema <postgres@jeltef.nl> napsal:\n\n> On Sun, 3 Sept 2023 at 08:24, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > My personal feeling from this area is that the protocol design is done,\n> but it is not implemented on libpq level. My feelings can be wrong. The\n> protocol number is hardcoded in libpq, so I cannot change it from the\n> client side.\n>\n>\n> No, I agree you're right the client side code to fall back to older\n> versions is not implemented. But that seems fairly simple to do. We\n> can change pqGetNegotiateProtocolVersion3 its behaviour. That function\n> should change conn->pversion to the server provided version if it's\n> lower than the client version (as long as the server provided version\n> is 3.0 or larger). 
And then we should return an error when calling\n> PQlinkParameterStatus/PQunlinkParameterStatus if the pversion is not\n> high enough.\n>\n\nok\n\nhere is an try\n\nRegards\n\nPavel", "msg_date": "Sun, 3 Sep 2023 20:58:09 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "On Sun, 3 Sept 2023 at 20:58, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> here is an try\n\nOverall it does what I had in mind. Below a few suggestions:\n\n+int\n+PQprotocolSubversion(const PGconn *conn)\n\nUgh, it's quite annoying that the original PQprotocolVersion only\nreturns the major version and thus we need this new function. It\nseems like it would be much nicer if it returned a number similar to\nPQserverVersion. I think it might be nicer to change PQprotocolVersion\nto do that than to add another function. We could do:\n\nreturn PG_PROTOCOL_MAJOR(conn->pversion) * 100 +\nPG_PROTOCOL_MINOR(conn->pversion);\n\nor even:\n\nif (PG_PROTOCOL_MAJOR(conn->pversion) == 3 && PG_PROTOCOL_MINOR(conn->pversion))\n return 3;\nelse\n return PG_PROTOCOL_MAJOR(conn->pversion) * 100 +\nPG_PROTOCOL_MINOR(conn->pversion);\n\nThe second option would be safest backwards compatibility wise, but in\npractice you would only get another value than 3 (or 0) when\nconnecting to pre 7.4 servers. That seems old enough that I don't\nthink anyone is actually calling this function. 
**I'd like some\nfeedback from others on this though.**\n\n+ /* The protocol 3.0 is required */\n+ if (PG_PROTOCOL_MAJOR(their_version) == 3)\n+ conn->pversion = their_version;\n\nLet's compare against the actual PG_PROTOCOL_EARLIEST and\nPG_PROTOCOL_LATEST to determine if the version is supported or not.\n\n\n", "msg_date": "Mon, 4 Sep 2023 14:24:05 +0200", "msg_from": "Jelte Fennema <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "po 4. 9. 2023 v 14:24 odesílatel Jelte Fennema <postgres@jeltef.nl> napsal:\n\n> On Sun, 3 Sept 2023 at 20:58, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > here is an try\n>\n> Overall it does what I had in mind. Below a few suggestions:\n>\n> +int\n> +PQprotocolSubversion(const PGconn *conn)\n>\n> Ugh, it's quite annoying that the original PQprotocolVersion only\n> returns the major version and thus we need this new function. It\n> seems like it would be much nicer if it returned a number similar to\n> PQserverVersion. I think it might be nicer to change PQprotocolVersion\n> to do that than to add another function. We could do:\n>\n> return PG_PROTOCOL_MAJOR(conn->pversion) * 100 +\n> PG_PROTOCOL_MINOR(conn->pversion);\n>\n> or even:\n>\n> if (PG_PROTOCOL_MAJOR(conn->pversion) == 3 &&\n> PG_PROTOCOL_MINOR(conn->pversion))\n> return 3;\n> else\n> return PG_PROTOCOL_MAJOR(conn->pversion) * 100 +\n> PG_PROTOCOL_MINOR(conn->pversion);\n>\n> The second option would be safest backwards compatibility wise, but in\n> practice you would only get another value than 3 (or 0) when\n> connecting to pre 7.4 servers. That seems old enough that I don't\n> think anyone is actually calling this function. **I'd like some\n> feedback from others on this though.**\n>\n\nBoth versions look a little bit strange to me. 
I don't have a strong opinion\nabout it, but I am not sure it is best to change a contract from 20 years ago\n\ncommit from Jun 2003 efc3a25bb02ada63158fe7006673518b005261ba\n\nI prefer to introduce a new function - it is ten lines of code. The form is\nnot important - it can be a full number or minor number. It doesn't matter\nI think. But my opinion in this area is not strong, and I like to see\nfeedback from others too. It is true that this feature and interface is not\nfully complete.\n\nRegards\n\nPavel\n\n\n> + /* The protocol 3.0 is required */\n> + if (PG_PROTOCOL_MAJOR(their_version) == 3)\n> + conn->pversion = their_version;\n>\n> Let's compare against the actual PG_PROTOCOL_EARLIEST and\n> PG_PROTOCOL_LATEST to determine if the version is supported or not.\n>", "msg_date": "Mon, 4 Sep 2023 20:03:27 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "po 4. 9. 2023 v 14:24 odesílatel Jelte Fennema <postgres@jeltef.nl> napsal:\n\n> On Sun, 3 Sept 2023 at 20:58, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > here is an try\n>\n> Overall it does what I had in mind. Below a few suggestions:\n>\n> +int\n> +PQprotocolSubversion(const PGconn *conn)\n>\n> Ugh, it's quite annoying that the original PQprotocolVersion only\n> returns the major version and thus we need this new function. It\n> seems like it would be much nicer if it returned a number similar to\n> PQserverVersion. I think it might be nicer to change PQprotocolVersion\n> to do that than to add another function. 
We could do:\n>\n> return PG_PROTOCOL_MAJOR(conn->pversion) * 100 +\n> PG_PROTOCOL_MINOR(conn->pversion);\n>\n> or even:\n>\n> if (PG_PROTOCOL_MAJOR(conn->pversion) == 3 &&\n> PG_PROTOCOL_MINOR(conn->pversion))\n> return 3;\n> else\n> return PG_PROTOCOL_MAJOR(conn->pversion) * 100 +\n> PG_PROTOCOL_MINOR(conn->pversion);\n>\n> The second option would be safest backwards compatibility wise, but in\n> practice you would only get another value than 3 (or 0) when\n> connecting to pre 7.4 servers. That seems old enough that I don't\n> think anyone is actually calling this function. **I'd like some\n> feedback from others on this though.**\n>\n\nThis point is open. I'll wait for a reply from others.\n\n\n>\n> + /* The protocol 3.0 is required */\n> + if (PG_PROTOCOL_MAJOR(their_version) == 3)\n> + conn->pversion = their_version;\n>\n> Let's compare against the actual PG_PROTOCOL_EARLIEST and\n> PG_PROTOCOL_LATEST to determine if the version is supported or not.\n>\n\nchanged", "msg_date": "Tue, 5 Sep 2023 05:49:33 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "On Tue, 5 Sept 2023 at 05:50, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n\n> I prefer to introduce a new function - it is ten lines of code. The form is not important - it can be a full number or minor number. It doesn't matter I think. But my opinion in this area is not strong, and I like to see feedback from others too. It is true that this feature and interface is not fully complete.\n\nI think when using the API it is nicest to have a single function that\nreturns the full version number. i.e. if we're introducing a new\nfunction I think it should be PQprotocolFullVersion instead of\nPQprotocolSubversion. 
Then instead of doing a version check like this:\n\nif (PQprotocolVersion(pset.db) == 3 && PQprotocolSubversion(pset.db) >= 1)\n\nYou can do the simpler:\n\nif (PQprotocolFullVersion(pset.db) >= 301)\n\nThis is also in line with how you do version checks for postgres versions.\n\nSo I think this patch should at least add that one instead of\nPQprotocolSubversion. If we then decide to replace PQprotocolVersion\nwith this new implementation, that would be a trivial change.\n\n>> + /* The protocol 3.0 is required */\n>> + if (PG_PROTOCOL_MAJOR(their_version) == 3)\n>> + conn->pversion = their_version;\n>>\n>> Let's compare against the actual PG_PROTOCOL_EARLIEST and\n>> PG_PROTOCOL_LATEST to determine if the version is supported or not.\n>\n>\n> changed\n\nI think we should also check the minor version part. So like this instead\n+ if (their_version < PG_PROTOCOL_EARLIEST || their_version >\nPG_PROTOCOL_LATEST)\n\n\nPS. If you use the -v flag of git format-patch a version number is\nprepended to your patches. That makes it easier to reference them.\n\n\n", "msg_date": "Tue, 5 Sep 2023 13:29:36 +0200", "msg_from": "Jelte Fennema <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "út 5. 9. 2023 v 13:29 odesílatel Jelte Fennema <postgres@jeltef.nl> napsal:\n\n> On Tue, 5 Sept 2023 at 05:50, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n>\n> > I prefer to introduce a new function - it is ten lines of code. The form\n> is not important - it can be a full number or minor number. It doesn't\n> matter I think. But my opinion in this area is not strong, and I like to\n> see feedback from others too. It is true that this feature and interface is\n> not fully complete.\n>\n> I think when using the API it is nicest to have a single function that\n> returns the full version number. i.e. 
if we're introducing a new\n> function I think it should be PQprotocolFullVersion instead of\n> PQprotocolSubversion. Then instead of doing a version check like this:\n>\n> if (PQprotocolVersion(pset.db) == 3 && PQprotocolSubversion(pset.db) >= 1)\n>\n> You can do the simpler:\n>\n> if (PQprotocolFullVersion(pset.db) >= 301)\n>\n> This is also in line with how you do version checks for postgres versions.\n>\n> So I think this patch should at least add that one instead of\n> PQprotocolSubversion. If we then decide to replace PQprotocolVersion\n> with this new implementation, that would be a trivial change.\n>\n\nok changed - there is minor problem - almost all PQ functions are of int\ntype, but ProtocolVersion is uint\n\nUsing different mapping to int can be problematic - can be messy if we\ncannot to use PG_PROTOCOL macro.\n\n\n> >> + /* The protocol 3.0 is required */\n> >> + if (PG_PROTOCOL_MAJOR(their_version) == 3)\n> >> + conn->pversion = their_version;\n> >>\n> >> Let's compare against the actual PG_PROTOCOL_EARLIEST and\n> >> PG_PROTOCOL_LATEST to determine if the version is supported or not.\n> >\n> >\n> > changed\n>\n> I think we should also check the minor version part. So like this instead\n> + if (their_version < PG_PROTOCOL_EARLIEST || their_version >\n> PG_PROTOCOL_LATEST)\n>\n>\ndone\n\nRegards\n\nPavel\n\n\n>\n> PS. If you use the -v flag of git format-patch a version number is\n> prepended to your patches. That makes it easier to reference them.\n>", "msg_date": "Fri, 8 Sep 2023 21:07:23 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "Hi\n\nAnother thing that should be described there is that this falls\n> outside of the transaction flow, i.e. it's changes are not reverted on\n> ROLLBACK. But that leaves an important consideration: What happens\n> when an error occurs on the server during handling of this message\n> (e.g. 
the GUC does not exist or an OOM is triggered). Is any open\n> transaction aborted in that case? If not, we should have a test for\n> that.\n>\n\nI tested this scenario. I had to modify message handling to fix warning\n\"message type 0x5a arrived from server while idle\"\n\nBut if this is inside a transaction, the transaction is aborted.\n\n>\n> + if (PQresultStatus(res) != PGRES_COMMAND_OK)\n> + pg_fatal(\"failed to link custom variable: %s\",\n> PQerrorMessage(conn));\n> + PQclear(res);\n>\n\ndone\n\n\n>\n> The tests should also change the config after running\n> PQlinkParameterStatus/PQunlinkParameterStatus to show that the guc is\n> reported then or not reported then.\n>\n\ndone\n\n\n>\n> + if (!PQsendTypedCommand(conn, PqMsg_ReportGUC, 't', paramName))\n> + return NULL;\n>\n>\n> I think we'll need some bikeshedding on what the protocol message\n> should look like exactly. I'm not entirely sure what is the most\n> sensible here, so please treat everything I write next as\n> suggestions/discussion:\n> I see that you're piggy-backing on PQsendTypedCommand, which seems\n> nice to avoid code duplication. It has one downside though: not every\n> type, is valid for each command anymore.\n> One way to avoid that would be to not introduce a new command, but\n> only add a new type that is understood by Describe and Close, e.g. a\n> 'G' (for GUC). 
Then PqMsg_Describe, G would become the equivalent of\n> what'the current patch its PqMsg_ReportGUC, 't' and PqMsg_Close, G\n> would be the same as PqMsg_ReportGUC, 'f'.\n>\n\nI am sorry, I don't understand this idea?\n\n\n>\n> The rest of this email assumes that we're keeping your current\n> proposal for the protocol message, so it might not make sense to\n> address all of this feedback, in case we're still going to change the\n> protocol:\n>\n> + if (is_set == 't')\n> + {\n> + SetGUCOptionFlag(name, GUC_REPORT);\n> + status = \"SET REPORT_GUC\";\n> + }\n> + else\n> + {\n> + UnsetGUCOptionFlag(name, GUC_REPORT);\n> + status = \"UNSET REPORT_GUC\";\n> + }\n>\n> I think we should be strict about what we accept here. Any other value\n> than 'f'/'t' for is_set should result in an error imho.\n>\n\ndone", "msg_date": "Fri, 8 Sep 2023 21:07:35 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "pá 8. 9. 2023 v 21:07 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n> Hi\n>\n> Another thing that should be described there is that this falls\n>> outside of the transaction flow, i.e. it's changes are not reverted on\n>> ROLLBACK. 
But that leaves an important consideration: What happens\n>> when an error occurs on the server during handling of this message\n>> (e.g. the GUC does not exist or an OOM is triggered). Is any open\n>> transaction aborted in that case? If not, we should have a test for\n>> that.\n>>\n>\n> I tested this scenario. I had to modify message handling to fix warning\n> \"message type 0x5a arrived from server while idle\"\n>\n\nI fixed this issue. The problem was in the missing setting\n`doing_extended_query_message`.\n\n\n> But if this is inside a transaction, the transaction is aborted.\n>\n>>\n>> + if (PQresultStatus(res) != PGRES_COMMAND_OK)\n>> + pg_fatal(\"failed to link custom variable: %s\",\n>> PQerrorMessage(conn));\n>> + PQclear(res);\n>>\n>\n> done\n>\n>\n>>\n>> The tests should also change the config after running\n>> PQlinkParameterStatus/PQunlinkParameterStatus to show that the guc is\n>> reported then or not reported then.\n>>\n>\n> done\n>\n>\n>>\n>> + if (!PQsendTypedCommand(conn, PqMsg_ReportGUC, 't', paramName))\n>> + return NULL;\n>>\n>>\n>> I think we'll need some bikeshedding on what the protocol message\n>> should look like exactly. I'm not entirely sure what is the most\n>> sensible here, so please treat everything I write next as\n>> suggestions/discussion:\n>> I see that you're piggy-backing on PQsendTypedCommand, which seems\n>> nice to avoid code duplication. It has one downside though: not every\n>> type, is valid for each command anymore.\n>> One way to avoid that would be to not introduce a new command, but\n>> only add a new type that is understood by Describe and Close, e.g. a\n>> 'G' (for GUC). 
Then PqMsg_Describe, G would become the equivalent of\n>> what'the current patch its PqMsg_ReportGUC, 't' and PqMsg_Close, G\n>> would be the same as PqMsg_ReportGUC, 'f'.\n>>\n>\n> I am sorry, I don't understand this idea?\n>\n>\n>>\n>> The rest of this email assumes that we're keeping your current\n>> proposal for the protocol message, so it might not make sense to\n>> address all of this feedback, in case we're still going to change the\n>> protocol:\n>>\n>> + if (is_set == 't')\n>> + {\n>> + SetGUCOptionFlag(name, GUC_REPORT);\n>> + status = \"SET REPORT_GUC\";\n>> + }\n>> + else\n>> + {\n>> + UnsetGUCOptionFlag(name, GUC_REPORT);\n>> + status = \"UNSET REPORT_GUC\";\n>> + }\n>>\n>> I think we should be strict about what we accept here. Any other value\n>> than 'f'/'t' for is_set should result in an error imho.\n>>\n>\n> done\n>\n\nRegards\n\nPavel\n\n\n>\n>", "msg_date": "Sun, 10 Sep 2023 22:59:01 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "On Fri, 8 Sept 2023 at 21:08, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> ok changed - there is minor problem - almost all PQ functions are of int type, but ProtocolVersion is uint\n\nYour current implementation requires using the PG_PROTOCOL macros to\ncompare versions. But clients cannot currently use those macros since\nthey are not exported from libpq-fe.h, only from pqcomm.h which is\nthen imported by libpq-int.h. (psql/command.c imports pcomm.h\ndirectly, but I don't think we should expect clients to do that). We\ncould ofcourse export these macros from libpq-fe.h too. But then we'd\nneed to document those macros too.\n\n> Using different mapping to int can be problematic - can be messy if we cannot use PG_PROTOCOL macro.\n\nI see no big problems returning an unsigned or long from the new\nfunction (even if existing functions all returned int). But I don't\neven think that is necessary. 
Returning the following as an int from\nPQprotocolVersionFull would work fine afaict:\n\nreturn PG_PROTOCOL_MAJOR(version) * 1000 + PG_PROTOCOL_MINOR(version)\n\nThis would give us one thousand minor versions for each major version.\nThis seems fine for all practical purposes. Since postgres only\nreleases a version once every year, we'd need a protocol bump every\nyear for one thousand years for that to ever cause any problems. So\nI'd prefer this approach over making the PG_PROTOCOL macros a public\ninterface.\n\n\n", "msg_date": "Mon, 11 Sep 2023 13:23:54 +0200", "msg_from": "Jelte Fennema <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "@Tom and @Robert, since you originally suggested extending the\nprotocol for this, I think some input from you on the protocol design\nwould be quite helpful. BTW, this protocol extension is the main\nreason I personally care for this patch, because it would allow\nPgBouncer to ask for updates on certain GUCs so that it can preserve\nsession level SET commands even in transaction pooling mode.\nRight now PgBouncer can only do this for a handful of GUCs, but\nthere's quite a few others that are useful for PgBouncer to preserve\nby default:\n- search_path\n- statement_timeout\n- lock_timeout\n\nAnd users might have others that they want to preserve others too.\n\nOn Fri, 8 Sept 2023 at 21:08, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>> I think we'll need some bikeshedding on what the protocol message\n>> should look like exactly. I'm not entirely sure what is the most\n>> sensible here, so please treat everything I write next as\n>> suggestions/discussion:\n>> I see that you're piggy-backing on PQsendTypedCommand, which seems\n>> nice to avoid code duplication. 
It has one downside though: not every\n>> type, is valid for each command anymore.\n>> One way to avoid that would be to not introduce a new command, but\n>> only add a new type that is understood by Describe and Close, e.g. a\n>> 'G' (for GUC). Then PqMsg_Describe, G would become the equivalent of\n>> what'the current patch its PqMsg_ReportGUC, 't' and PqMsg_Close, G\n>> would be the same as PqMsg_ReportGUC, 'f'.\n>\n>\n> I am sorry, I don't understand this idea?\n\nTo clarify what I meant: I meant instead of introducing a new top\nlevel message type (i.e. your newly introduced ReportGUC message), I\nsuggested adding a new subtype that is understood by the Describe and\nClose messages. In addition to being able to use Describe and Close\nfor statements and portals with the S and P subtypes respectively,\nlike you currently can, we could add a new subtype G which would start\nand stop GUC reporting (with the Describe and Close message\nrespectively). I think that would make the client code even simpler than\nit is now.\n\n\n", "msg_date": "Mon, 11 Sep 2023 13:59:50 +0200", "msg_from": "Jelte Fennema <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "On Mon, 11 Sept 2023 at 13:59, Jelte Fennema <postgres@jeltef.nl> wrote:\n> I think that would make the client code even simpler than it is now.\n\nTo be clear, I'm not saying we should do this. There's benefits to\nusing a dedicated new message type too (e.g. clearer errors if a proxy\nlike pgbouncer does not understand it).\n\n\n", "msg_date": "Mon, 11 Sep 2023 14:03:22 +0200", "msg_from": "Jelte Fennema <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "po 11. 9. 
2023 v 13:24 odesílatel Jelte Fennema <postgres@jeltef.nl> napsal:\n\n> On Fri, 8 Sept 2023 at 21:08, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > ok changed - there is minor problem - almost all PQ functions are of int\n> type, but ProtocolVersion is uint\n>\n> Your current implementation requires using the PG_PROTOCOL macros to\n> compare versions. But clients cannot currently use those macros since\n> they are not exported from libpq-fe.h, only from pqcomm.h which is\n> then imported by libpq-int.h. (psql/command.c imports pcomm.h\n> directly, but I don't think we should expect clients to do that). We\n> could ofcourse export these macros from libpq-fe.h too. But then we'd\n> need to document those macros too.\n>\n> > Using different mapping to int can be problematic - can be messy if we\n> cannot use PG_PROTOCOL macro.\n>\n> I see no big problems returning an unsigned or long from the new\n> function (even if existing functions all returned int). But I don't\n> even think that is necessary. Returning the following as an int from\n> PQprotocolVersionFull would work fine afaict:\n>\n> return PG_PROTOCOL_MAJOR(version) * 1000 + PG_PROTOCOL_MINOR(version)\n>\n> This would give us one thousand minor versions for each major version.\n> This seems fine for all practical purposes. Since postgres only\n> releases a version once every year, we'd need a protocol bump every\n> year for one thousand years for that to ever cause any problems. 
So\n> I'd prefer this approach over making the PG_PROTOCOL macros a public\n> interface.\n>\n\nI made the proposed change.\n\nRegards\n\nPavel", "msg_date": "Mon, 11 Sep 2023 21:30:48 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "On 11.09.23 13:59, Jelte Fennema wrote:\n> @Tom and @Robert, since you originally suggested extending the\n> protocol for this, I think some input from you on the protocol design\n> would be quite helpful. BTW, this protocol extension is the main\n> reason I personally care for this patch, because it would allow\n> PgBouncer to ask for updates on certain GUCs so that it can preserve\n> session level SET commands even in transaction pooling mode.\n> Right now PgBouncer can only do this for a handful of GUCs, but\n> there's quite a few others that are useful for PgBouncer to preserve\n> by default:\n> - search_path\n> - statement_timeout\n> - lock_timeout\n\nISTM that for a purpose like pgbouncer, it would be simpler to add a new \nGUC \"report these variables\" and send that in the startup message? That \nmight not help with the psql use case, but it would be much simpler.\n\n\n", "msg_date": "Tue, 12 Sep 2023 09:46:31 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "On Tue, 12 Sept 2023 at 14:39, Peter Eisentraut <peter@eisentraut.org> wrote:\n>\n> On 11.09.23 13:59, Jelte Fennema wrote:\n> > @Tom and @Robert, since you originally suggested extending the\n> > protocol for this, I think some input from you on the protocol design\n> > would be quite helpful. 
BTW, this protocol extension is the main\n> > reason I personally care for this patch, because it would allow\n> > PgBouncer to ask for updates on certain GUCs so that it can preserve\n> > session level SET commands even in transaction pooling mode.\n> > Right now PgBouncer can only do this for a handful of GUCs, but\n> > there's quite a few others that are useful for PgBouncer to preserve\n> > by default:\n> > - search_path\n> > - statement_timeout\n> > - lock_timeout\n>\n> ISTM that for a purpose like pgbouncer, it would be simpler to add a new\n> GUC \"report these variables\" and send that in the startup message? That\n> might not help with the psql use case, but it would be much simpler.\n\nI have changed the status to \"Waiting on Author\" as Peter's comments\nhave not been followed up yet.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Mon, 8 Jan 2024 10:38:00 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "On Tue, 12 Sept 2023 at 09:46, Peter Eisentraut <peter@eisentraut.org> wrote:\n> ISTM that for a purpose like pgbouncer, it would be simpler to add a new\n> GUC \"report these variables\" and send that in the startup message? 
That\n> might not help with the psql use case, but it would be much simpler.\n\nFYI I implemented it that way yesterday on this other thread (patch\n0010 of that patchset):\nhttps://www.postgresql.org/message-id/flat/CAGECzQScQ3N-Ykv2j4NDyDtrPPc3FpRoa%3DLZ-2Uj2ocA4zr%3D4Q%40mail.gmail.com#cd9e8407820d492e8f677ee6a67c21ce\n\n\n", "msg_date": "Thu, 11 Jan 2024 12:12:11 +0100", "msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "Hi\n\nI am starting work on this patch - starting with a rebase\n\nRegards\n\nPavel", "msg_date": "Sun, 21 Jan 2024 16:05:34 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "Hi\n\nOn Thu, 11 Jan 2024 at 12:12, Jelte Fennema-Nio <postgres@jeltef.nl>\nwrote:\n\n> On Tue, 12 Sept 2023 at 09:46, Peter Eisentraut <peter@eisentraut.org>\n> wrote:\n> > ISTM that for a purpose like pgbouncer, it would be simpler to add a new\n> > GUC \"report these variables\" and send that in the startup message? That\n> > might not help with the psql use case, but it would be much simpler.\n>\n> FYI I implemented it that way yesterday on this other thread (patch\n> 0010 of that patchset):\n>\n> https://www.postgresql.org/message-id/flat/CAGECzQScQ3N-Ykv2j4NDyDtrPPc3FpRoa%3DLZ-2Uj2ocA4zr%3D4Q%40mail.gmail.com#cd9e8407820d492e8f677ee6a67c21ce\n\n\nI read your patch, and I see some advantages and some disadvantages.\n\n1. this doesn't need a new protocol API just for this feature, which is nice\n\n2. using GUC for all reported GUC looks not too readable. Maybe it should\nbe used just for customized reporting, not for default \n\n3. Another issue of your proposal is less friendly enabling disabling\nreporting of specific GUC. 
Maintaining a list needs more work than just\nenabling and disabling one specific GUC.\nI think this is the main disadvantage of your proposal. In my proposal I\ndon't need to know the state of any GUC. I just send PQlinkParameterStatus\nor PQunlinkParameterStatus. With your proposal, I need to read\n_pq_.report_parameters, parse it, modify it, and send it back. This\ndoesn't look too practical.\n\nPersonally I prefer usage of a more generic API than my\nPQlinkParameterStatus and PQunlinkParameterStatus. You talked about it with\nRobert, if I remember correctly.\n\nIt would be nice if I could write just\n\n/* a similar principle is used inside ncurses */\nset_report_guc_message_no = PQgetMessageNo(\"set_report_guc\");\n/* the result can be processed by server and by all proxies on the line */\n\nif (set_report_guc_message_no == -1)\n fprintf(stderr, \"feature is not supported\");\nresult = PQsendMessage(set_report_guc_message, \"user\");\nif (result == -1)\n fprintf(stderr, \"some error ...\");\n\nWith some API like this it can be easier to do some small protocol\nenhancement. Maybe this is overengineering. Enhancing the protocol is not too\ncommon, and with usage of PQsendTypedCommand enhancing the protocol is less work\ntoo.\n\nRegards\n\nPavel", "msg_date": "Thu, 25 Jan 2024 21:42:47 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "On Tue, 12 Sep 2023 at 9:46, Peter Eisentraut <peter@eisentraut.org>\nwrote:\n\n> On 11.09.23 13:59, Jelte Fennema wrote:\n> > @Tom and @Robert, since you originally suggested extending the\n> > protocol for this, I think some input from you on the protocol design\n> > would be quite helpful. 
BTW, this protocol extension is the main\n> > reason I personally care for this patch, because it would allow\n> > PgBouncer to ask for updates on certain GUCs so that it can preserve\n> > session level SET commands even in transaction pooling mode.\n> > Right now PgBouncer can only do this for a handful of GUCs, but\n> > there's quite a few others that are useful for PgBouncer to preserve\n> > by default:\n> > - search_path\n> > - statement_timeout\n> > - lock_timeout\n>\n> ISTM that for a purpose like pgbouncer, it would be simpler to add a new\n> GUC \"report these variables\" and send that in the startup message? That\n> might not help with the psql use case, but it would be much simpler.\n>\n\nIntroducing this GUC, mainly when the usage will be limited to connection\nstring, makes sense to me. Unfortunately for usage in psql it is not\npractical.\n\n* For secure usage this GUC should be session immutable - probably you\ndon't want to disable reporting for search_path, ... inside session\n* Enhancing list requires more work - reading current state, parsing\n(theoretically the GUC \"report these variables\" can be marked as\nGUC_REPORT, so I can see this value on client side, but still there is\nparsing. I can imagine enhancing the SET command to style SET GUC += value\nor SET GUC -= value\n* SET statement is transactional - that means it cannot be used when a\ntransaction is broken, but it can be solved by some protocol based command\nfor setting GUC without dependency on state of transaction (if it is\npossible, my functions changes just flag of some existing GUC, so there is\nnot necessary memory allocation, changing the value can be different story).\n\nI can imagine both access - special GUC allowed only in connection string -\nthat can work as protection against unwanted remove of GUC_REPORT too, and\ndedicated functions for enabling, disabling GUC_REPORT. It can very nicely\nwork together. Then we don't need any other changes. 
There is no necessity\nfor protocol SET, there is no necessity for more user friendly work with\nlists in SET statements. And with connect settings for reporting, proxies\ncan easily detect and get the values of GUC.\n\nRegards\n\nPavel", "msg_date": "Fri, 26 Jan 2024 09:11:13 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "po 8. 1. 2024 v 6:08 odesílatel vignesh C <vignesh21@gmail.com> napsal:\n\n> On Tue, 12 Sept 2023 at 14:39, Peter Eisentraut <peter@eisentraut.org>\n> wrote:\n> >\n> > On 11.09.23 13:59, Jelte Fennema wrote:\n> > > @Tom and @Robert, since you originally suggested extending the\n> > > protocol for this, I think some input from you on the protocol design\n> > > would be quite helpful. 
BTW, this protocol extension is the main\n> > > reason I personally care for this patch, because it would allow\n> > > PgBouncer to ask for updates on certain GUCs so that it can preserve\n> > > session level SET commands even in transaction pooling mode.\n> > > Right now PgBouncer can only do this for a handful of GUCs, but\n> > > there's quite a few others that are useful for PgBouncer to preserve\n> > > by default:\n> > > - search_path\n> > > - statement_timeout\n> > > - lock_timeout\n> >\n> > ISTM that for a purpose like pgbouncer, it would be simpler to add a new\n> > GUC \"report these variables\" and send that in the startup message? That\n> > might not help with the psql use case, but it would be much simpler.\n>\n> I have changed the status to \"Waiting on Author\" as Peter's comments\n> have not been followed up yet.\n>\n\nPlease, see my reply to Peter. I think his comment is very valuable (and I\nam for it), but I don't think it is a practical API for this feature.\n\nRegards\n\nPavel\n\n\n\n>\n> Regards,\n> Vignesh\n>\n", "msg_date": "Fri, 26 Jan 2024 09:13:43 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "On Thu, 25 Jan 2024 at 21:43, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> 2. using GUC for all reported GUC looks not too readable. Maybe it should be used just for customized reporting, not for default\n\nI thought about this too, because indeed the default list is quite\nlong. But I decided against it because it seemed strange that you\ncannot disable reporting for the options that are currently reporting\nby default. Especially since the current default essentially boils\ndown to: \"whatever the psql client needed\"\n\n> 3. Another issue of your proposal is less friendly enabling disabling reporting of specific GUC. 
In my proposal I don't need to know the state of any GUC. Just I send PQlinkParameterStatus or PQunlinkParameterStatus. With your proposal, I need to read _pq_.report_parameters, parse it, and modify, and send it back. This doesn't look too practical.\n\nWhile I agree it's a little bit less friendly, I think you're\noverestimating the difficulty of using my proposed approach. Most\nimportantly there's no need to parse the current GUC value. A client\nalways knows what variables it wants to have reported. So anytime that\nchanges the client can simply regenerate the full list of gucs that it\nwants to report and send that. So something similar to the following\npseudo code (using += for string concatenation):\n\nchar *report_parameters = \"server_version,client_encoding\"\nif (show_user_in_prompt)\n report_parameters += \",user\"\nif (show_search_path_in_prompt)\n report_parameters += \",search_path\"\nPQsetParameter(\"_pq_.report_parameters\", report_parameters)\n\n> Personally I prefer usage of a more generic API than my PQlinkParameterStatus and PQunlinkParameterStatus. You talked about it with Robert If I remember.\n>\n> Can be nice if I can write just\n>\n> /* similar princip is used inside ncurses */\n> set_report_guc_message_no = PQgetMessageNo(\"set_report_guc\");\n> /* the result can be processed by server and by all proxies on the line */\n>\n> if (set_report_guc_message_no == -1)\n> fprintf(stderr, \"feature is not supported\");\n> result = PQsendMessage(set_report_guc_message, \"user\");\n> if (result == -1)\n> fprintf(stderr, \"some error ...\");\n>\n> With some API like this it can be easier to do some small protocol enhancement. Maybe this is overengineering. Enhancing protocol is not too common, and with usage PQsendTypedCommand enhancing protocol is less work too.\n\nHonestly, I don't see a benefit to this over protocol extension\nparameters using the ParameterSet message. Afaict this is also\nessentially a key + value message type. 
And protocol extension\nparameters have the benefit that they are already an established thing\nin the protocol, and that they can easily be integrated with the\ncurrent GUC system.\n\n\n", "msg_date": "Fri, 26 Jan 2024 11:40:00 +0100", "msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "pá 26. 1. 2024 v 11:40 odesílatel Jelte Fennema-Nio <postgres@jeltef.nl>\nnapsal:\n\n> On Thu, 25 Jan 2024 at 21:43, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > 2. using GUC for all reported GUC looks not too readable. Maybe it\n> should be used just for customized reporting, not for default\n>\n> I thought about this too, because indeed the default list is quite\n> long. But I decided against it because it seemed strange that you\n> cannot disable reporting for the options that are currently reporting\n> by default. Especially since the current default essentially boils\n> down to: \"whatever the psql client needed\"\n>\n\nI see a possibility of disabling reporting as possibly dangerous. Without\nreporting encoding you can break psql. So it should be limited just to few\nvalues where is known behave.\n\n\n> > 3. Another issue of your proposal is less friendly enabling disabling\n> reporting of specific GUC. Maintaining a list needs more work than just\n> enabling and disabling one specific GUC.\n> > I think this is the main disadvantage of your proposal. In my proposal I\n> don't need to know the state of any GUC. Just I send PQlinkParameterStatus\n> or PQunlinkParameterStatus. With your proposal, I need to read\n> _pq_.report_parameters, parse it, and modify, and send it back. This\n> doesn't look too practical.\n>\n> While I agree it's a little bit less friendly, I think you're\n> overestimating the difficulty of using my proposed approach. Most\n> importantly there's no need to parse the current GUC value. 
A client\n> always knows what variables it wants to have reported. So anytime that\n> changes the client can simply regenerate the full list of gucs that it\n> wants to report and send that. So something similar to the following\n> pseudo code (using += for string concatenation):\n>\n\nI disagree with this - I can imagine some proxies adding their own reported GUCs\nwhile the client knows nothing about them.\n\n\n>\n> char *report_parameters = \"server_version,client_encoding\"\n> if (show_user_in_prompt)\n> report_parameters += \",user\"\n> if (show_search_path_in_prompt)\n> report_parameters += \",search_path\"\n> PQsetParameter(\"_pq_.report_parameters\", report_parameters)\n>\n\nIt is simple only when the default value of report_parameters is constant, but\nwhen not it can be fragile.\n\n\n>\n> > Personally I prefer usage of a more generic API than my\n> PQlinkParameterStatus and PQunlinkParameterStatus. You talked about it with\n> Robert If I remember.\n> >\n> > Can be nice if I can write just\n> >\n> > /* similar princip is used inside ncurses */\n> > set_report_guc_message_no = PQgetMessageNo(\"set_report_guc\");\n> > /* the result can be processed by server and by all proxies on the line\n> */\n> >\n> > if (set_report_guc_message_no == -1)\n> > fprintf(stderr, \"feature is not supported\");\n> > result = PQsendMessage(set_report_guc_message, \"user\");\n> > if (result == -1)\n> > fprintf(stderr, \"some error ...\");\n> >\n> > With some API like this it can be easier to do some small protocol\n> enhancement. Maybe this is overengineering. Enhancing protocol is not too\n> common, and with usage PQsendTypedCommand enhancing protocol is less work\n> too.\n>\n> Honestly, I don't see a benefit to this over protocol extension\n> parameters using the ParameterSet message. Afaict this is also\n> essentially a key + value message type. 
And protocol extension\n> parameters have the benefit that they are already an established thing\n> in the protocol, and that they can easily be integrated with the\n> current GUC system.\n>\n\nIt was an idea.\n\nPersonally I like an idea that I described in mail to Peter. Using\ndedicated connection related GUC for immutably reported GUC, and using\npossibility to set dedicated function in protocol for enabling, disabling\nother GUC. It looks (to me) like the most robust solution.\n\nRegards\n\nPavel", "msg_date": "Fri, 26 Jan 2024 21:34:53 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "On Fri, 26 Jan 2024 at 21:35, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> I see a possibility of disabling reporting as possibly dangerous.  Without reporting encoding you can break psql. So it should be limited just to few values where is known behave.\n\nI agree that client_encoding is a GUC that likely all clients would\nwant to request reporting for, so I can see the argument for always\nsending it. But I wouldn't call it dangerous for a client to be able\nto disable reporting for it. Ultimately the client is the one that\ndecides. A client might just as well completely ignore the reported\nvalue.\n\n>> While I agree it's a little bit less friendly, I think you're\n>> overestimating the difficulty of using my proposed approach. Most\n>> importantly there's no need to parse the current GUC value. 
A proxy can quite simply handle this itself in the\nfollowing manner: Whenever a client sends a ParameterSet for\n_pq_.report_parameters, the proxy could forward to the server after\nprepending its own extra GUCs at the front. The proxy wouldn't even\nneed to parse the list from the client to be able to do that. An even\nbetter behaving proxy, should parse the list of GUCs though and would\nonly forward the ParameterStatus messages that it receives from the\nserver if the client requested ParameterStatus updates for them.\n\n\n", "msg_date": "Sat, 27 Jan 2024 00:04:28 +0100", "msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "so 27. 1. 2024 v 0:04 odesílatel Jelte Fennema-Nio <postgres@jeltef.nl>\nnapsal:\n\n> On Fri, 26 Jan 2024 at 21:35, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > I see a possibility of disabling reporting as possibly dangerous.\n> Without reporting encoding you can break psql. So it should be limited just\n> to few values where is known behave.\n>\n> I agree that client_encoding is a GUC that likely all clients would\n> want to request reporting for, so I can see the argument for always\n> sending it. But I wouldn't call it dangerous for a client to be able\n> to disable reporting for it. Ultimately the client is the one that\n> decides. A client might just as well completely ignore the reported\n> value.\n>\n\nUntil now, it is not possible to disable reporting. So clients can expect\nso reporting is workable.\n\nDo you have a use case, when disabling some of the defaultly reported GUC\nmakes sense?\n\nMy patch doesn't protect these GUC, and now I think it is a mistake.\n\n\n>\n> >> While I agree it's a little bit less friendly, I think you're\n> >> overestimating the difficulty of using my proposed approach. Most\n> >> importantly there's no need to parse the current GUC value. 
A client\n> >> always knows what variables it wants to have reported. So anytime that\n> >> changes the client can simply regenerate the full list of gucs that it\n> >> wants to report and send that. So something similar to the following\n> >> pseudo code (using += for string concatenation):\n> >\n> >\n> > I disagree with this - I can imagine some proxies add some own reported\n> GUC and the client can know nothing about it.\n>\n> I've definitely thought about this case, since it's the main case I\n> care about as maintainer of PgBouncer. And a client wouldn't need to\n> know about the extra GUCs that the proxy requires for the proxy to\n> work correctly. A proxy can quite simply handle this itself in the\n> following manner: Whenever a client sends a ParameterSet for\n> _pq_.report_parameters, the proxy could forward to the server after\n> prepending its own extra GUCs at the front. The proxy wouldn't even\n> need to parse the list from the client to be able to do that. An even\n> better behaving proxy, should parse the list of GUCs though and would\n> only forward the ParameterStatus messages that it receives from the\n> server if the client requested ParameterStatus updates for them.\n>\n\nyes, inside gradual connect you can enhance the list of custom reported GUC\neasily.\n\nbut for use cases like prompt in psql, I need to enable, disable reporting\n- but this use case should be independent of \"protected\" connection related\nGUC reporting.\n\nFor example - when I disable %N, I can disable reporting \"role\" and disable\nshowing role in prompt. But when \"role\" should be necessary for pgbouncer,\nthen surely the disabling reporting should be ignored. The user by setting\na prompt should not break communication. And it can be ignored without any\nissue, there is not performance issue, because \"role\" is still necessary\nfor pgbouncer that is used for connection. 
Without described behaviour we\nshould not implement controlling reporting to psql, because there can be a\nlot of unhappy side effects in dependency if the user set or unset custom\nprompt or some other future feature.", "msg_date": "Sat, 27 Jan 2024 08:34:31 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "On Sat, 27 Jan 2024 at 08:35, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> Until now, it is not possible to disable reporting. So clients can expect so reporting is workable.\n\nSure, so if we leave the default as is that's fine. 
It's not as if\nthis GUC would be changed without the client knowing, the client would\nbe the one changing the GUC and thus disabling the sending of\nreporting for the default GUCs. If it doesn't want to disable the\nreporting, than it simply should not send such a request.\n\n> Do you have a use case, when disabling some of the defaultly reported GUC makes sense?\n\nMostly if the client doesn't actually use them, e.g. I expect many\nclients don't care what the current application_name is. But I do\nagree this is not a very strong usecase, so I'd be okay with always\nsending the ones that we sent as default for now. But that does make\nthe implementation more difficult, since we'd have to special case\nthese GUCs instead of having the same consistent behaviour for all\nGUCs.\n\n> yes, inside gradual connect you can enhance the list of custom reported GUC easily.\n>\n> but for use cases like prompt in psql, I need to enable, disable reporting - but this use case should be independent of \"protected\" connection related GUC reporting.\n>\n> For example - when I disable %N, I can disable reporting \"role\" and disable showing role in prompt. But when \"role\" should be necessary for pgbouncer, then surely the disabling reporting should be ignored. The user by setting a prompt should not break communication. And it can be ignored without any issue, there is not performance issue, because \"role\" is still necessary for pgbouncer that is used for connection. Without described behaviour we should not implement controlling reporting to psql, because there can be a lot of unhappy side effects in dependency if the user set or unset custom prompt or some other future feature.\n\nMaybe I'm misunderstanding what you're saying, but it's not clear to\nme why you are seeing two different use cases here. To me if we have\nthe ParameterSet message then they are both the same. 
When you enable\n%N you would send a ParamaterSet message for _pq_.report_parameters\nand set it to a list of gucs including \"role\". And when you disable %N\nyou would set _pq_.report_parameters to a list of GUCs without \"role\".\nThen if any proxy still needed \"role\" even if the list it receives in\n_pq_.report_parameters doesn't contain it, then this proxy would\nsimply prepend \"role\" to the list of GUCs before forwarding the\nParameterSet message.\n\nAlso to be clear, having a \"protected\" connection-start only GUC is\nproblematic for proxies. Because they need to update this setting\nwhile the connection is active when they hand of one server connection\nto another client.\n\n\n", "msg_date": "Sat, 27 Jan 2024 10:24:12 +0100", "msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "so 27. 1. 2024 v 10:24 odesílatel Jelte Fennema-Nio <postgres@jeltef.nl>\nnapsal:\n\n> On Sat, 27 Jan 2024 at 08:35, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > Until now, it is not possible to disable reporting. So clients can\n> expect so reporting is workable.\n>\n> Sure, so if we leave the default as is that's fine. It's not as if\n> this GUC would be changed without the client knowing, the client would\n> be the one changing the GUC and thus disabling the sending of\n> reporting for the default GUCs. If it doesn't want to disable the\n> reporting, than it simply should not send such a request.\n>\n> > Do you have a use case, when disabling some of the defaultly reported\n> GUC makes sense?\n>\n> Mostly if the client doesn't actually use them, e.g. I expect many\n> clients don't care what the current application_name is. But I do\n> agree this is not a very strong usecase, so I'd be okay with always\n> sending the ones that we sent as default for now. 
But that does make\n> the implementation more difficult, since we'd have to special case\n> these GUCs instead of having the same consistent behaviour for all\n> GUCs.\n>\n\nclient_encoding, standard_conforming_strings, server_version,\ndefault_transaction_read_only, in_hot_standby\nand scram_iterations\nare used by libpq directly, so it can be wrong to introduce the possibility\nto break it.\n\n\n\n> > yes, inside gradual connect you can enhance the list of custom reported\n> GUC easily.\n> >\n> > but for use cases like prompt in psql, I need to enable, disable\n> reporting - but this use case should be independent of \"protected\"\n> connection related GUC reporting.\n> >\n> > For example - when I disable %N, I can disable reporting \"role\" and\n> disable showing role in prompt. But when \"role\" should be necessary for\n> pgbouncer, then surely the disabling reporting should be ignored. The user\n> by setting a prompt should not break communication. And it can be ignored\n> without any issue, there is not performance issue, because \"role\" is still\n> necessary for pgbouncer that is used for connection. Without described\n> behaviour we should not implement controlling reporting to psql, because\n> there can be a lot of unhappy side effects in dependency if the user set or\n> unset custom prompt or some other future feature.\n>\n> Maybe I'm misunderstanding what you're saying, but it's not clear to\n> me why you are seeing two different use cases here. To me if we have\n> the ParameterSet message then they are both the same. When you enable\n> %N you would send a ParamaterSet message for _pq_.report_parameters\n> and set it to a list of gucs including \"role\". 
And when you disable %N\n> you would set _pq_.report_parameters to a list of GUCs without \"role\".\n> Then if any proxy still needed \"role\" even if the list it receives in\n> _pq_.report_parameters doesn't contain it, then this proxy would\n> simply prepend \"role\" to the list of GUCs before forwarding the\n> ParameterSet message.\n>\n\nYour scenario can work but looks too fragile. I checked - GUC now cannot\ncontain some special chars, so writing parser should not be hard work. But\nyour proposal means the proxy should be smart about it, and have to check\nany change of _pq_.report_parameters, and this point can be fragile and a\nsource of hardly searched bugs.\n\n\n>\n> Also to be clear, having a \"protected\" connection-start only GUC is\n> problematic for proxies. Because they need to update this setting\n> while the connection is active when they hand of one server connection\n> to another client.\n>\n\nThis is true, but how common is this situation? Probably every client that\nuses one proxy will use the same defaultly reported GUC. Reporting has no\nextra overhead. The notification is reduced. When there is a different set\nof reported GUC, then the proxy can try to find another connection with the\nsame set or can reconnect. I think so there is still opened question what\nshould be correct behaviour when client execute RESET ALL or DISCARD ALL.\nWithout special protection the communication with proxy can be broken - and\nwe use GUC for reported variables, then my case, prompt in psql will be\nbroken too. Inside psql I have not callback on change of reported GUC. So\nthis is reason why reporting based on mutable GUC is fragile :-/\n\nPavel\n\nso 27. 1. 2024 v 10:24 odesílatel Jelte Fennema-Nio <postgres@jeltef.nl> napsal:On Sat, 27 Jan 2024 at 08:35, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> Until now, it is not possible to disable reporting. So clients can expect so reporting is workable.\n\nSure, so if we leave the default as is that's fine. 
It's not as if\nthis GUC would be changed without the client knowing, the client would\nbe the one changing the GUC and thus disabling the sending of\nreporting for the default GUCs. If it doesn't want to disable the\nreporting, than it simply should not send such a request.\n\n> Do you have a use case, when disabling some of the defaultly reported GUC makes sense?\n\nMostly if the client doesn't actually use them, e.g. I expect many\nclients don't care what the current application_name is. But I do\nagree this is not a very strong usecase, so I'd be okay with always\nsending the ones that we sent as default for now. But that does make\nthe implementation more difficult, since we'd have to special case\nthese GUCs instead of having the same consistent behaviour for all\nGUCs.client_encoding, standard_conforming_strings, server_version, default_transaction_read_only, in_hot_standby and scram_iterations are used by libpq directly, so it can be wrong to introduce the possibility to break it.\n\n> yes, inside gradual connect you can enhance the list of custom reported GUC easily.\n>\n> but for use cases like prompt in psql, I need to enable, disable reporting - but this use case should be independent of \"protected\" connection related GUC reporting.\n>\n> For example - when I disable %N, I can disable reporting \"role\" and disable showing role in prompt. But when \"role\" should be necessary for pgbouncer, then surely the disabling reporting should be ignored. The user by setting a prompt should not break communication.  And it can be ignored without any issue, there is not performance issue, because \"role\" is still necessary for pgbouncer that is used for connection. 
Without described behaviour we should not implement controlling reporting to psql, because there can be a lot of unhappy side effects in dependency if the user set or unset custom prompt or some other future feature.\n\nMaybe I'm misunderstanding what you're saying, but it's not clear to\nme why you are seeing two different use cases here. To me if we have\nthe ParameterSet message then they are both the same. When you enable\n%N you would send a ParamaterSet message for _pq_.report_parameters\nand set it to a list of gucs including \"role\". And when you disable %N\nyou would set _pq_.report_parameters to a list of GUCs without \"role\".\nThen if any proxy still needed \"role\" even if the list it receives in\n_pq_.report_parameters doesn't contain it, then this proxy would\nsimply prepend \"role\" to the list of GUCs before forwarding the\nParameterSet message.Your scenario can work but looks too fragile. I checked - GUC now cannot contain some special chars, so writing parser should not be hard work. But your proposal means the proxy should be smart about it, and have to check any change of _pq_.report_parameters, and this point can be fragile and a source of hardly searched bugs. \n\nAlso to be clear, having a \"protected\" connection-start only GUC is\nproblematic for proxies. Because they need to update this setting\nwhile the connection is active when they hand of one server connection\nto another client.This is true, but how common is this situation? Probably every client  that uses one proxy will use the same defaultly reported GUC.  Reporting has no extra overhead. The notification is reduced. When there is a different set of reported GUC, then the proxy can try to find another connection with the same set or can reconnect.  I think so there is still opened question what should be correct behaviour when client execute RESET ALL or DISCARD ALL.  
Without special protection the communication with proxy can be broken - and we use GUC for reported variables, then my case, prompt in psql will be broken too. Inside psql I have not callback on change of reported GUC. So this is reason why reporting based on mutable GUC is fragile :-/Pavel", "msg_date": "Sat, 27 Jan 2024 20:44:13 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "On Sat, 27 Jan 2024 at 20:44, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> client_encoding, standard_conforming_strings, server_version, default_transaction_read_only, in_hot_standby and scram_iterations\n> are used by libpq directly, so it can be wrong to introduce the possibility to break it.\n\nlibpq could add these ones automatically to the list, just like a\nproxy. But I think you are probably right and always reporting our\ncurrent default set is probably easier.\n\n>> Maybe I'm misunderstanding what you're saying, but it's not clear to\n>> me why you are seeing two different use cases here. To me if we have\n>> the ParameterSet message then they are both the same. When you enable\n>> %N you would send a ParamaterSet message for _pq_.report_parameters\n>> and set it to a list of gucs including \"role\". And when you disable %N\n>> you would set _pq_.report_parameters to a list of GUCs without \"role\".\n>> Then if any proxy still needed \"role\" even if the list it receives in\n>> _pq_.report_parameters doesn't contain it, then this proxy would\n>> simply prepend \"role\" to the list of GUCs before forwarding the\n>> ParameterSet message.\n>\n>\n> Your scenario can work but looks too fragile. I checked - GUC now cannot contain some special chars, so writing parser should not be hard work. 
But your proposal means the proxy should be smart about it, and have to check any change of _pq_.report_parameters, and this point can be fragile and a source of hardly searched bugs.\n\nYes, proxies should be smart about it. But if there's new message\ntypes introduced specifically for this, then proxies need to be smart\nabout it too. Because they would need to remember which reporting was\nrequested by the client, to be able to correctly ask for reporting\nGUCs it after server connection . Using GUCs actually makes this\neasier to implement (and thus less error prone), because proxies\nalready have logic to re-sync GUCs after connection assignment.\n\nI think this is probably one of the core reasons why I would very much\nprefer GUCs over new message types to configure protocol extensions\nlike this: It means proxies would not to keep track of and re-sync a\nnew kind of connection state every time a protocol extension is added.\nThey can make their GUC tracking and re-syncing robust, and that's all\nthey would need.\n\n> This is true, but how common is this situation? Probably every client that uses one proxy will use the same defaultly reported GUC.\n\nIf you have different clients connecting to the same proxy, it seems\nquite likely that this will happen. This does not seem uncommon to me,\ne.g. actual application would need different things always reported\nthan some dev client. Or clients for different languages might ask to\nreport slightly different settings.\n\n> Reporting has no extra overhead. The notification is reduced. When there is a different set of reported GUC, then the proxy can try to find another connection with the same set or can reconnect.\n\nHonestly, this logic seems much more fragile to implement. And\nrequiring reconnection seems problematic from a performance point of\nview.\n\n> I think so there is still opened question what should be correct behaviour when client execute RESET ALL or DISCARD ALL. 
Without special protection the communication with proxy can be broken - and we use GUC for reported variables, then my case, prompt in psql will be broken too. Inside psql I have not callback on change of reported GUC. So this is reason why reporting based on mutable GUC is fragile :-/\n\nSpecifically for this reason, the current patchset in the other thread\nalready ignores RESET ALL and DISCARD ALL for protocol extension\nparameters (including _pq_.report_parameters). So this would be a\nnon-issue.\n\n\n", "msg_date": "Sun, 28 Jan 2024 10:42:13 +0100", "msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "ne 28. 1. 2024 v 10:42 odesílatel Jelte Fennema-Nio <postgres@jeltef.nl>\nnapsal:\n\n> On Sat, 27 Jan 2024 at 20:44, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > client_encoding, standard_conforming_strings, server_version,\n> default_transaction_read_only, in_hot_standby and scram_iterations\n> > are used by libpq directly, so it can be wrong to introduce the\n> possibility to break it.\n>\n> libpq could add these ones automatically to the list, just like a\n> proxy. But I think you are probably right and always reporting our\n> current default set is probably easier.\n>\n\nThere is another reason - compatibility with other drivers. We maintain\njust libpq, but there are JDBC, Npgsql, and some native Python drivers. I\ndidn't checked, but maybe they expect GUC with GUC_REPORT flag.\n\n\n> >> Maybe I'm misunderstanding what you're saying, but it's not clear to\n> >> me why you are seeing two different use cases here. To me if we have\n> >> the ParameterSet message then they are both the same. When you enable\n> >> %N you would send a ParamaterSet message for _pq_.report_parameters\n> >> and set it to a list of gucs including \"role\". 
And when you disable %N\n> >> you would set _pq_.report_parameters to a list of GUCs without \"role\".\n> >> Then if any proxy still needed \"role\" even if the list it receives in\n> >> _pq_.report_parameters doesn't contain it, then this proxy would\n> >> simply prepend \"role\" to the list of GUCs before forwarding the\n> >> ParameterSet message.\n> >\n> >\n> > Your scenario can work but looks too fragile. I checked - GUC now cannot\n> contain some special chars, so writing parser should not be hard work. But\n> your proposal means the proxy should be smart about it, and have to check\n> any change of _pq_.report_parameters, and this point can be fragile and a\n> source of hardly searched bugs.\n>\n> Yes, proxies should be smart about it. But if there's new message\n> types introduced specifically for this, then proxies need to be smart\n> about it too. Because they would need to remember which reporting was\n> requested by the client, to be able to correctly ask for reporting\n> GUCs it after server connection . Using GUCs actually makes this\n> easier to implement (and thus less error prone), because proxies\n> already have logic to re-sync GUCs after connection assignment.\n>\n> I think this is probably one of the core reasons why I would very much\n> prefer GUCs over new message types to configure protocol extensions\n> like this: It means proxies would not to keep track of and re-sync a\n> new kind of connection state every time a protocol extension is added.\n> They can make their GUC tracking and re-syncing robust, and that's all\n> they would need.\n>\n\nI am not against GUC based solutions. I think so for proxies it is probably\nthe best solution. But I just see only a GUC based solution not practical\nfor application.\n\nThings are more complex when we try to think about possibility so\nmaintaining a list of reported GUC is more than one application. But now, I\ndon't see any design without problems. 
Your look a little bit fragile to\nme, my proposal probably needs two independent lists of reported GUC, which\nis not nice too. From my perspective the situation can be better if I know\nso defaultly reported GUC are fixed, and cannot be broken. Then for almost\nall clients (without pgbouncer users), the CUSTOM_REPORT_GUC GUC will\ncontain just \"role\", and then the risk is minimal. But still there are\nproblems with handling of RESET ALL - so that means I need to do a recheck\nof the local state every time, when I will show a prompt with %N - that is\nnot nice, but probably with a short list it will not be a problem.\n\n\n\n> > This is true, but how common is this situation? Probably every client\n> that uses one proxy will use the same defaultly reported GUC.\n>\n> If you have different clients connecting to the same proxy, it seems\n> quite likely that this will happen. This does not seem uncommon to me,\n> e.g. actual application would need different things always reported\n> than some dev client. Or clients for different languages might ask to\n> report slightly different settings.\n>\n> > Reporting has no extra overhead. The notification is reduced. When there\n> is a different set of reported GUC, then the proxy can try to find another\n> connection with the same set or can reconnect.\n>\n> Honestly, this logic seems much more fragile to implement. And\n> requiring reconnection seems problematic from a performance point of\n> view.\n>\n> > I think so there is still opened question what should be correct\n> behaviour when client execute RESET ALL or DISCARD ALL. Without special\n> protection the communication with proxy can be broken - and we use GUC for\n> reported variables, then my case, prompt in psql will be broken too. Inside\n> psql I have not callback on change of reported GUC. 
So this is reason why\n> reporting based on mutable GUC is fragile :-/\n\nSpecifically for this reason, the current patchset in the other thread\nalready ignores RESET ALL and DISCARD ALL for protocol extension\nparameters (including _pq_.report_parameters). So this would be a\nnon-issue.\n>\n\nI see one problematic scenario (my patch doesn't handle it well too).\n\nWhen a user explicitly calls RESET ALL, or DISCARD ALL, and the connect -\nclient continues, then _pq_.report_parameters should not be changed.\n\nBut I can imagine a client crash, and then pgbouncer executes RESET ALL,\nand at this moment I would like to reset GUC_REPORT on \"role\" GUC. Maybe\nthe introduction of a new flag for DISCARD can solve it. But again there\ncan be a problem for which GUC the flag GUC_REPORT should be removed,\nbecause there are not two independent lists.", "msg_date": "Sun, 28 Jan 2024 20:00:57 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "On Sun, 28 Jan 2024 at 20:01, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> There is another reason - compatibility with other drivers. We maintain just libpq, but there are JDBC, Npgsql, and some native Python drivers. I didn't checked, but maybe they expect GUC with GUC_REPORT flag.\n\nThis doesn't matter, because these drivers themselves would only stop\nreceiving certain GUC report messages if they changed this the\n_pq_.report_paramers GUC. And at that point the other driver is\ndisabling the reporting on purpose. But like I said, I'm fine with\nforcing the currently default GUC_REPORT GUCs to be GUC_REPORT always\n(maybe excluding application_name).\n\n> But now, I don't see any design without problems. Your look a little bit fragile to me,\n\nCan you explain what still looks fragile to you about my design? 
Like\nI explained at least from a proxy perspective this is the least\nfragile imho, since it can reuse already existing and battle tested\ncode.\n\n> From my perspective the situation can be better if I know so defaultly reported GUC are fixed, and cannot be broken. Then for almost all clients (without pgbouncer users), the CUSTOM_REPORT_GUC GUC will contain just \"role\", and then the risk is minimal.\n\nWhich risk are you talking about here?\n\n> But still there are problems with handling of RESET ALL - so that means I need to do a recheck of the local state every time, when I will show a prompt with %N - that is not nice, but probably with a short list it will not be a problem.\n\nI'm not entirely sure what you mean here. Is this still a problem if\nRESET ALL is ignored for _pq_.report_parameters? If so, what problem\nare you talking about then?\n\n> But I can imagine a client crash, and then pgbouncer executes RESET ALL, and at this moment I would like to reset GUC_REPORT on \"role\" GUC. Maybe the introduction of a new flag for DISCARD can solve it. But again there can be a problem for which GUC the flag GUC_REPORT should be removed, because there are not two independent lists.\n\nI don't think this is a problem. PgBouncer wouldn't rely on RESET ALL\nto reset the state of _pq_.report_parameters. Before handing off the\nold connection to a new client, PgBouncer would simply change the\n_pq_.report_parameters GUC back to its default value by sending a\nParameterSet message. i.e. PgBouncer would use the same logic as it\ncurrently uses to correctly reset tracked GUCs (application_name,\nclient_encoding, etc).\n\n\n", "msg_date": "Sun, 28 Jan 2024 22:52:06 +0100", "msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "ne 28. 1. 
2024 v 22:52 odesílatel Jelte Fennema-Nio <postgres@jeltef.nl>\nnapsal:\n\n> On Sun, 28 Jan 2024 at 20:01, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > There is another reason - compatibility with other drivers. We maintain\n> just libpq, but there are JDBC, Npgsql, and some native Python drivers. I\n> didn't checked, but maybe they expect GUC with GUC_REPORT flag.\n>\n> This doesn't matter, because these drivers themselves would only stop\n> receiving certain GUC report messages if they changed this the\n> _pq_.report_paramers GUC. And at that point the other driver is\n> disabling the reporting on purpose. But like I said, I'm fine with\n> forcing the currently default GUC_REPORT GUCs to be GUC_REPORT always\n> (maybe excluding application_name).\n>\n\nok\n\n\n>\n> > But now, I don't see any design without problems. Your look a little bit\n> fragile to me,\n>\n> Can you explain what still looks fragile to you about my design? Like\n> I explained at least from a proxy perspective this is the least\n> fragile imho, since it can reuse already existing and battle tested\n> code.\n>\n\nbecause there is not 100% isolation of different layers, there is one\nresource that can be modified by network layer (proxy) and by application\nlayer (psql). Unfortunately, I don't see any better solution, how these\nlayes separated.\n\n\n\n>\n> > From my perspective the situation can be better if I know so defaultly\n> reported GUC are fixed, and cannot be broken. Then for almost all clients\n> (without pgbouncer users), the CUSTOM_REPORT_GUC GUC will contain just\n> \"role\", and then the risk is minimal.\n>\n> Which risk are you talking about here?\n>\n\nThe risk of not wanted reporting. I'll return to my psql prompt case. Who\nwill disable reporting of \"rule\", when psql crashes? 
pgbouncer will call\nDISCARD ALL before reuse connection, but it ignores\n_pq_.report_parameters/';.>\"\n\n\n\n>\n> > But still there are problems with handling of RESET ALL - so that means\n> I need to do a recheck of the local state every time, when I will show a\n> prompt with %N - that is not nice, but probably with a short list it will\n> not be a problem.\n>\n> I'm not entirely sure what you mean here. Is this still a problem if\n> RESET ALL is ignored for _pq_.report_parameters? If so, what problem\n> are you talking about then?\n>\n> > But I can imagine a client crash, and then pgbouncer executes RESET ALL,\n> and at this moment I would like to reset GUC_REPORT on \"role\" GUC. Maybe\n> the introduction of a new flag for DISCARD can solve it. But again there\n> can be a problem for which GUC the flag GUC_REPORT should be removed,\n> because there are not two independent lists.\n>\n> I don't think this is a problem. PgBouncer wouldn't rely on RESET ALL\n> to reset the state of _pq_.report_parameters. Before handing off the\n> old connection to a new client, PgBouncer would simply change the\n> _pq_.report_parameters GUC back to its default value by sending a\n> ParameterSet message. i.e. PgBouncer would use the same logic as it\n> currently uses to correctly reset tracked GUCs (application_name,\n> client_encoding, etc).\n>\n\nok, this can work, and this is the reply to my previous query.\n\nRegards\n\nPavel\n\n", "msg_date": "Mon, 29 Jan 2024 10:26:28 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: psql: show current user in prompt" }, { "msg_contents": "po 29. 1. 2024 v 10:26 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n>\n>\n> ne 28. 1. 2024 v 22:52 odesílatel Jelte Fennema-Nio <postgres@jeltef.nl>\n> napsal:\n>\n>> On Sun, 28 Jan 2024 at 20:01, Pavel Stehule <pavel.stehule@gmail.com>\n>> wrote:\n>> > There is another reason - compatibility with other drivers. We\n>> maintain just libpq, but there are JDBC, Npgsql, and some native Python\n>> drivers. I didn't checked, but maybe they expect GUC with GUC_REPORT flag.\n>>\n>> This doesn't matter, because these drivers themselves would only stop\n>> receiving certain GUC report messages if they changed this the\n>> _pq_.report_paramers GUC. And at that point the other driver is\n>> disabling the reporting on purpose. But like I said, I'm fine with\n>> forcing the currently default GUC_REPORT GUCs to be GUC_REPORT always\n>> (maybe excluding application_name).\n>>\n>\n> ok\n>\n>\n>>\n>> > But now, I don't see any design without problems. Your look a little\n>> bit fragile to me,\n>>\n>> Can you explain what still looks fragile to you about my design? 
Like\n>> I explained at least from a proxy perspective this is the least\n>> fragile imho, since it can reuse already existing and battle tested\n>> code.\n>>\n>\n> because there is not 100% isolation of different layers, there is one\n> resource that can be modified by network layer (proxy) and by application\n> layer (psql). Unfortunately, I don't see any better solution, how these\n> layes separated.\n>\n>\n>\n>>\n>> > From my perspective the situation can be better if I know so defaultly\n>> reported GUC are fixed, and cannot be broken. Then for almost all clients\n>> (without pgbouncer users), the CUSTOM_REPORT_GUC GUC will contain just\n>> \"role\", and then the risk is minimal.\n>>\n>> Which risk are you talking about here?\n>>\n>\n> The risk of not wanted reporting. I'll return to my psql prompt case. Who\n> will disable reporting of \"rule\", when psql crashes? pgbouncer will call\n> DISCARD ALL before reuse connection, but it ignores\n> _pq_.report_parameters/';.>\"\n>\n>\n>\n>>\n>> > But still there are problems with handling of RESET ALL - so that means\n>> I need to do a recheck of the local state every time, when I will show a\n>> prompt with %N - that is not nice, but probably with a short list it will\n>> not be a problem.\n>>\n>> I'm not entirely sure what you mean here. Is this still a problem if\n>> RESET ALL is ignored for _pq_.report_parameters? If so, what problem\n>> are you talking about then?\n>>\n>> > But I can imagine a client crash, and then pgbouncer executes RESET\n>> ALL, and at this moment I would like to reset GUC_REPORT on \"role\" GUC.\n>> Maybe the introduction of a new flag for DISCARD can solve it. But again\n>> there can be a problem for which GUC the flag GUC_REPORT should be removed,\n>> because there are not two independent lists.\n>>\n>> I don't think this is a problem. PgBouncer wouldn't rely on RESET ALL\n>> to reset the state of _pq_.report_parameters. 
Before handing off the\n>> old connection to a new client, PgBouncer would simply change the\n>> _pq_.report_parameters GUC back to its default value by sending a\n>> ParameterSet message. i.e. PgBouncer would use the same logic as it\n>> currently uses to correctly reset tracked GUCs (application_name,\n>> client_encoding, etc).\n>>\n>\n> ok, this can work, and this is the reply to my previous query.\n>\n\nI marked my patch as withdrawn. I'll resend it when your patch\n_pq_.report_parameters\nwill be committed.\n\nRegards\n\nPavel\n\n>\n> Regards\n>\n> Pavel\n>\n\n", "msg_date": "Tue, 30 Jan 2024 06:19:54 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: psql: show current user in prompt" } ]
[ { "msg_contents": "Hi hackers,\n\nRight now cost of index-only scan is using `random_page_cost`.\nCertainly for point selects we really have random access pattern, but \nqueries like \"select count(*) from hits\"  access pattern is more or less \nsequential:\nwe are iterating through subsequent leaf B-Tree pages.  As far as \ndefault value of `random_page_cost`  is 4 times larger than `seq_page_cost`\nit may force Postgres optimizer to choose sequential scan, while \nindex-only scan is usually much faster in this case.\nCan we do something here to provide more accurate cost estimation?\n\n\n\n\n", "msg_date": "Fri, 3 Feb 2023 19:55:37 +0200", "msg_from": "Konstantin Knizhnik <knizhnik@garret.ru>", "msg_from_op": true, "msg_subject": "Index-only scan and random_page_cost" } ]
[ { "msg_contents": "... at\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=f282b026787da69d88a35404cf62f1cc21cfbb7c\n\nAs usual, please send corrections/comments by Sunday.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 03 Feb 2023 14:32:39 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "First draft of back-branch release notes is done" }, { "msg_contents": "\n> On Feb 4, 2023, at 10:24 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> ... at\n> \n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=f282b026787da69d88a35404cf62f1cc21cfbb7c\n> \n> As usual, please send corrections/comments by Sunday.\n\nWhile reviewing for the release announcement, I noticed this (abbreviated as I’m on my mobile):\n\n“Prevent clobbering of cached parsetrees…Bad things could happen if…”\n\nWhile I chuckled over the phrasing, I’m left to wonder what the “bad things” are, in case I\nneed to check an older version to see if I’m susceptible to this bug.\n\nThanks,\n\nJonathan\n\n\n", "msg_date": "Sat, 4 Feb 2023 10:27:49 -0500", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: First draft of back-branch release notes is done" }, { "msg_contents": "On 2023-Feb-03, Tom Lane wrote:\n\n> ... at\n> \n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=f282b026787da69d88a35404cf62f1cc21cfbb7c\n> \n> As usual, please send corrections/comments by Sunday.\n\n Fix edge-case data corruption in shared tuplestores (Dmitry Astapov)\n\n If the final chunk of a large tuple being written out to disk was\n exactly 32760 bytes, it would be corrupted due to a fencepost bug.\n This is a hazard for parallelized plans that require a tuplestore,\n such as parallel hash join. 
The query would typically fail later\n with corrupted-data symptoms.\n\nI think this sounds really scary, because people are going to think that\ntheir stored data can get corrupted -- they don't necessarily know what\na \"shared tuplestore\" is. Maybe \"Avoid query failures in parallel hash\njoins\" as headline?\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"The Gord often wonders why people threaten never to come back after they've\nbeen told never to return\" (www.actsofgord.com)\n\n\n", "msg_date": "Sun, 5 Feb 2023 13:10:47 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: First draft of back-branch release notes is done" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2023-Feb-03, Tom Lane wrote:\n>> Fix edge-case data corruption in shared tuplestores (Dmitry Astapov)\n\n> I think this sounds really scary, because people are going to think that\n> their stored data can get corrupted -- they don't necessarily know what\n> a \"shared tuplestore\" is. Maybe \"Avoid query failures in parallel hash\n> joins\" as headline?\n\nHmmm ... are we sure it *can't* lead to corruption of stored data,\nif this happens during an INSERT or UPDATE plan? I'll grant that\nsuch a case seems pretty unlikely though, as the bogus data\nretrieved from the tuplestore would have to not cause a failure\nwithin the query before it can get written out.\n\nAlso, aren't shared tuplestores used in more places than just\nparallel hash join? I mentioned that as an example, not an\nexhaustive list of trouble spots.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 05 Feb 2023 14:57:39 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: First draft of back-branch release notes is done" }, { "msg_contents": "\"Jonathan S. 
Katz\" <jkatz@postgresql.org> writes:\n> On Feb 4, 2023, at 10:24 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> “Prevent clobbering of cached parsetrees…Bad things could happen if…”\n\n> While I chuckled over the phrasing, I’m left to wonder what the “bad things” are, in case I\n> need to check an older version to see if I’m susceptible to this bug.\n\nFair. I was trying to avoid committing to specific consequences.\nThe assertion failure seen in the original report (#17702) wouldn't\noccur for typical users, but they might see crashes or \"unexpected node\ntype\" failures. Maybe we can say that instead.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 05 Feb 2023 15:01:33 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: First draft of back-branch release notes is done" }, { "msg_contents": "I wrote:\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n>> I think this sounds really scary, because people are going to think that\n>> their stored data can get corrupted -- they don't necessarily know what\n>> a \"shared tuplestore\" is. Maybe \"Avoid query failures in parallel hash\n>> joins\" as headline?\n\nMaybe less scary if we make it clear we're talking about a temporary file?\n\n <para>\n Fix edge-case corruption of temporary data within shared tuplestores\n (Dmitry Astapov)\n </para>\n\n <para>\n If the final chunk of a large tuple being written out to a temporary\n file was exactly 32760 bytes, it would be corrupted due to a\n fencepost bug. This is a hazard for parallelized plans that require\n a tuplestore, such as parallel hash join. 
The query would typically\n fail later with corrupted-data symptoms.\n </para>\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 05 Feb 2023 15:09:23 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: First draft of back-branch release notes is done" }, { "msg_contents": "On Mon, Feb 6, 2023 at 8:57 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > On 2023-Feb-03, Tom Lane wrote:\n> >> Fix edge-case data corruption in shared tuplestores (Dmitry Astapov)\n>\n> > I think this sounds really scary, because people are going to think that\n> > their stored data can get corrupted -- they don't necessarily know what\n> > a \"shared tuplestore\" is. Maybe \"Avoid query failures in parallel hash\n> > joins\" as headline?\n>\n> Hmmm ... are we sure it *can't* lead to corruption of stored data,\n> if this happens during an INSERT or UPDATE plan? I'll grant that\n> such a case seems pretty unlikely though, as the bogus data\n> retrieved from the tuplestore would have to not cause a failure\n> within the query before it can get written out.\n\nAgreed. I think you have to be quite unlucky to hit this in the first\nplace (very large tuples with very particular alignment), and then\nyou'd be highly likely to fail in some way due to corrupted tuple\nsize, making permanent corruption extremely unlikely.\n\n> Also, aren't shared tuplestores used in more places than just\n> parallel hash join? 
I mentioned that as an example, not an\n> exhaustive list of trouble spots.\n\nShared file sets (= a directory of temp files with automatic cleanup)\nare used by more things, but shared tuplestores (= a shared file set\nwith a tuple-oriented interface on top, to support partial scan) are\ncurrently only used by PHJ.\n\n\n", "msg_date": "Mon, 6 Feb 2023 09:37:10 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: First draft of back-branch release notes is done" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Mon, Feb 6, 2023 at 8:57 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Also, aren't shared tuplestores used in more places than just\n>> parallel hash join? I mentioned that as an example, not an\n>> exhaustive list of trouble spots.\n\n> Shared file sets (= a directory of temp files with automatic cleanup)\n> are used by more things, but shared tuplestores (= a shared file set\n> with a tuple-oriented interface on top, to support partial scan) are\n> currently only used by PHJ.\n\nOh, okay. I'll change it to say \"corruption within parallel hash\njoin\", then, and not use the word \"tuplestore\" at all.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 05 Feb 2023 15:44:13 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: First draft of back-branch release notes is done" }, { "msg_contents": "On 2/5/23 3:01 PM, Tom Lane wrote:\r\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\r\n>> On Feb 4, 2023, at 10:24 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\r\n>>> “Prevent clobbering of cached parsetrees…Bad things could happen if…”\r\n> \r\n>> While I chuckled over the phrasing, I’m left to wonder what the “bad things” are, in case I\r\n>> need to check an older version to see if I’m susceptible to this bug.\r\n> \r\n> Fair. 
I was trying to avoid committing to specific consequences.\r\n> The assertion failure seen in the original report (#17702) wouldn't\r\n> occur for typical users, but they might see crashes or \"unexpected node\r\n> type\" failures. Maybe we can say that instead.\r\n\r\nI did a quick readthrough of #17702. Your proposal sounds reasonable.\r\n\r\nBased on that explanation and reading #17702, I'm still not sure if this \r\nwill make the cut in the release announcement itself, but +1 for \r\nmodifying it in the release notes.\r\n\r\nThanks,\r\n\r\nJonathan", "msg_date": "Sun, 5 Feb 2023 20:46:15 -0500", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: First draft of back-branch release notes is done" }, { "msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> On 2/5/23 3:01 PM, Tom Lane wrote:\n>> Fair. I was trying to avoid committing to specific consequences.\n>> The assertion failure seen in the original report (#17702) wouldn't\n>> occur for typical users, but they might see crashes or \"unexpected node\n>> type\" failures. Maybe we can say that instead.\n\n> I did a quick readthrough of #17702. 
Your proposal sounds reasonable.\n\n> Based on that explanation and reading #17702, I'm still not sure if this \n> will make the cut in the release announcement itself, but +1 for \n> modifying it in the release notes.\n\nThe notes now say\n\n <para>\n Prevent clobbering of cached parsetrees for utility statements in\n SQL functions (Tom Lane, Daniel Gustafsson)\n </para>\n\n <para>\n If a SQL-language function executes the same utility command more\n than once within a single calling query, it could crash or report\n strange errors such as <quote>unrecognized node type</quote>.\n </para>\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 05 Feb 2023 21:39:13 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: First draft of back-branch release notes is done" }, { "msg_contents": "On 2/5/23 9:39 PM, Tom Lane wrote:\r\n\r\n> \r\n> <para>\r\n> Prevent clobbering of cached parsetrees for utility statements in\r\n> SQL functions (Tom Lane, Daniel Gustafsson)\r\n> </para>\r\n> \r\n> <para>\r\n> If a SQL-language function executes the same utility command more\r\n> than once within a single calling query, it could crash or report\r\n> strange errors such as <quote>unrecognized node type</quote>.\r\n> </para>\r\n> \r\n> \t\t\tregards, tom lane\r\n\r\n+1. Thanks!\r\n\r\nJonathan", "msg_date": "Sun, 5 Feb 2023 22:08:44 -0500", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: First draft of back-branch release notes is done" }, { "msg_contents": "On Fri, Feb 03, 2023 at 02:32:39PM -0500, Tom Lane wrote:\n> ... 
at\n> \n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=f282b026787da69d88a35404cf62f1cc21cfbb7c\n> \n> As usual, please send corrections/comments by Sunday.\n\nIt's of no concern, but I was curious why this one wasn't included:\n\ncommit 72aea955d49712a17c08748aa9abcbcf98c32fc5\nAuthor: Thomas Munro <tmunro@postgresql.org>\nDate: Fri Jan 6 16:38:46 2023 +1300\n\n Fix pg_truncate() on Windows.\n \n Commit 57faaf376 added pg_truncate(const char *path, off_t length), but\n \"length\" was ignored under WIN32 and the file was unconditionally\n truncated to 0.\n \n There was no live bug, since the only caller passes 0.\n \n Fix, and back-patch to 14 where the function arrived.\n\n\n", "msg_date": "Wed, 8 Feb 2023 16:08:35 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: First draft of back-branch release notes is done" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> It's of no concern, but I was curious why this one wasn't included:\n\n> commit 72aea955d49712a17c08748aa9abcbcf98c32fc5\n> Author: Thomas Munro <tmunro@postgresql.org>\n> Date: Fri Jan 6 16:38:46 2023 +1300\n\n> Fix pg_truncate() on Windows.\n \n> Commit 57faaf376 added pg_truncate(const char *path, off_t length), but\n> \"length\" was ignored under WIN32 and the file was unconditionally\n> truncated to 0.\n \n> There was no live bug, since the only caller passes 0.\n\nI concluded that due to the lack of live bug, this would not be of\ninterest to end users. The back-patch was just for future-proofing.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 08 Feb 2023 17:22:04 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: First draft of back-branch release notes is done" } ]
[ { "msg_contents": "Respected Sir/Mam\n\n I am Shivam Ardeshna, A Computer Science Undergraduate 2nd\nYear. I was looking for a contribution and then I saw pgsql and wanted to\ncontribute to it even if I don't know about some languages I will try to\nlearn and solve the issues so can you allow me to do something?\nI am hoping to hear from you soon.\nRegards\nShivam\n\n", "msg_date": "Sat, 4 Feb 2023 02:02:10 +0530", "msg_from": "Shivam Ardeshna <ardeshnashivam12@gmail.com>", "msg_from_op": true, "msg_subject": "Hi i am Intrested to contribute" }, { "msg_contents": "Hi, Shivam!\n\n> Respected Sir/Mam\n>\n> I am Shivam Ardeshna, A Computer Science Undergraduate 2nd Year. I was looking for a contribution and then I saw pgsql and wanted to contribute to it even if I don't know about some languages I will try to learn and solve the issues so can you allow me to do something?\n> I am hoping to hear from you soon.\n\nYou may find useful the guide on how to contribute [1]. You can freely\nchoose what you want (from the list of TODOs linked or anything else)\nand work on it, no permission from anyone is necessary.\nThe downside is that it's not easy to detect what is useful for the\nfirst time, so I'd recommend first joining reviewing existing patches\nat commitfest page [2] and/or trying to do some bugfixes from\npgsql-bugs mailing list. 
Then over time, you can gather some context\nand you can choose more and more complicated things.\n\nI wish you succeed and enjoy this activity!\n\nKind regards,\nPavel Borisov\n\n[1] https://wiki.postgresql.org/wiki/So,_you_want_to_be_a_developer%3F\n[2] https://commitfest.postgresql.org/42/\n\n\n", "msg_date": "Tue, 7 Feb 2023 15:54:07 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Hi i am Intrested to contribute" }, { "msg_contents": "Hi, Shivam!\n\n> You may find useful the guide on how to contribute [1]. You can freely\n> choose what you want (from the list of TODOs linked or anything else)\n> and work on it, no permission from anyone is necessary.\n> The downside is that it's not easy to detect what is useful for the\n> first time, so I'd recommend first joining reviewing existing patches\n> at commitfest page [2] and/or trying to do some bugfixes from\n> pgsql-bugs mailing list. Then over time, you can gather some context\n> and you can choose more and more complicated things.\n\nAdditionally, take a look at several recent discussions [1][2] of the subject.\n\n[1]: https://postgr.es/m/48279D7D-F780-4F79-B820-4336D2EA10BE%40u.nus.edu\n[2]: https://postgr.es/m/jVE8e0yCYML-PtkT9EkRu7L31k05D2PptAmrjx2CMP2CE0v4kFI1rysuo4lAuYmcPUXsUD-0UXISJ62GZC2P5Ktf9KukCxjDfADCHOaorfY%3D%40pm.me\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Wed, 8 Feb 2023 16:54:26 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Hi i am Intrested to contribute" } ]
[ { "msg_contents": "Hi,\nin src/test/modules/test_regex/test_regex.c\n\n\t/* Report individual info bit states */\n\tfor (inf = infonames; inf->bit != 0; inf++)\n\t{\n\t\tif (cpattern->re_info & inf->bit)\n\t\t{\n\t\t\tif (flags->info & inf->bit)\n\t\t\t\telems[nresults++] = PointerGetDatum(cstring_to_text(inf->text));\n\t\t\telse\n\t\t\t{\n\t\t\t\tsnprintf(buf, sizeof(buf), \"unexpected %s!\", inf->text);\n\t\t\t\telems[nresults++] = PointerGetDatum(cstring_to_text(buf));\n\t\t\t}\n\t\t}\n\t\telse\n\t\t{\n\t\t\tif (flags->info & inf->bit)\n\t\t\t{\n\t\t\t\tsnprintf(buf, sizeof(buf), \"missing %s!\", inf->text);\n\t\t\t\telems[nresults++] = PointerGetDatum(cstring_to_text(buf));\n\t\t\t}\n\t\t}\n\t}\n\nI think \"expected\" should be replaced with \"missing\".\nthe \"missing\" should be replaced with \"expected\".\n\n", "msg_date": "Sat, 4 Feb 2023 13:22:23 +0530", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": true, "msg_subject": "src/test/modules/test_regex/test_regex.c typo issue" }, { "msg_contents": "jian he <jian.universality@gmail.com> writes:\n> in src/test/modules/test_regex/test_regex.c\n> ...\n> I think \"expected\" should be replaced with \"missing\".\n> 
the \"missing\" should be replaced with \"expected\".\n\nIt looks correct to me as-is. cpattern->re_info is the\ndata-under-test, flags->info is the presumed-correct\nreference data.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 04 Feb 2023 03:08:42 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: src/test/modules/test_regex/test_regex.c typo issue" } ]
[ { "msg_contents": "\nHi, I'm trying to implement my own access method. But I find the functions about read are difficult.\nCan you give me an existing easy extension that implements the tableamroutine to reference? I hope\nit's not complicated like heap_am, and it supports insert sqls and select sqls. Thanks\n\n--------------\n\n\n\njacktby@gmail.com\n\n\n", "msg_date": "Sat, 4 Feb 2023 16:59:24 +0800", "msg_from": "\"jacktby@gmail.com\" <jacktby@gmail.com>", "msg_from_op": true, "msg_subject": "How to implement read operations for my own access method?" }, { "msg_contents": "On Sat, Feb 4, 2023 at 12:59 AM jacktby@gmail.com <jacktby@gmail.com> wrote:\n>\n>\n> Hi, I'm trying to implement my own access method. But I find the functions about read are difficult.\n> Can you give me an existing easy extension that implements the tableamroutine to reference? I hope\n> it's not complicated like heap_am, and it supports insert sqls and select sqls. Thanks\n>\n\nHi Jack,\n\nI'd recommend first to start from official documentation on index\naccess methods [0].\nPlease also check the contrib/bloom module [1]. It is designed to\nshowcase index-as-extension technology.\nAlso, you can see my free lectures about details of implementation of\nbuilt-in access methods [2]. Also, I can offer you my PGCon talk\n\"Index DIY\" about forking GiST into extension [3].\n\nThank you!\n\nBest regards, Andrey Borodin.\n[0] https://www.postgresql.org/docs/current/xindex.html\n[1] https://github.com/postgres/postgres/tree/REL_15_STABLE/contrib/bloom\n[2] https://www.youtube.com/watch?v=UgSeSo973lA&list=PLzrhBdcKTjLTyCIdDO1ig8qCZFZYYi3VH\n[3] https://github.com/x4m/index_diy\n\n\n", "msg_date": "Sat, 4 Feb 2023 12:44:26 -0800", "msg_from": "Andrey Borodin <amborodin86@gmail.com>", "msg_from_op": false, "msg_subject": "Re: How to implement read operations for my own access method?" } ]
[ { "msg_contents": "Hi,\n\ngcc warns about code like this:\n\ntypedef union foo\n{\n int i;\n long long l;\n} foo;\n\nfoo * assign(int i) {\n foo *p = (foo *) __builtin_malloc(sizeof(int));\n p->i = i;\n\n return p;\n}\n\n\n<source>: In function 'assign':\n<source>:9:6: warning: array subscript 'foo[0]' is partly outside array bounds of 'unsigned char[4]' [-Warray-bounds=]\n 9 | p->i = i;\n | ^~\n<source>:8:22: note: object of size 4 allocated by '__builtin_malloc'\n 8 | foo *p = (foo *) __builtin_malloc(sizeof(int));\n | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nCompiler returned: 0\n\n\n\nI can't really tell if gcc is right or wrong to warn about\nthis. On the one hand it's a union, and we only access the element that\nis actually backed by memory, on the other hand, the standard does say\nthat the size of a union is the largest element, so we are pointing to\nsomething undersized.\n\n\nWe actually have a fair amount of code like that, but currently are\nescaping most of the warnings, because gcc doesn't know that palloc() is\nan allocator. With more optimizations (particularly with LTO), we end up\nwith more of such warnings. I'd like to annotate palloc so gcc\nunderstands the size, as that does help to catch bugs when confusing the\ntype. 
It also helps static analyzers.\n\n\nAn example of such code in postgres:\n\n../../home/andres/src/postgresql/src/backend/utils/adt/numeric.c: In function 'make_result_opt_error':\n../../home/andres/src/postgresql/src/backend/utils/adt/numeric.c:7628:23: warning: array subscript 'struct NumericData[0]' is partly outside array bounds of 'unsigned char[6]' [-Warray-bounds=]\n 7628 | result->choice.n_header = sign;\n | ^~\n../../home/andres/src/postgresql/src/backend/utils/adt/numeric.c:7625:36: note: object of size 6 allocated by 'palloc'\n 7625 | result = (Numeric) palloc(NUMERIC_HDRSZ_SHORT);\n | ^~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nGiven that Numeric is defined as:\n\nstruct NumericData\n{\n\tint32\t\tvl_len_;\t\t/* varlena header (do not touch directly!) */\n\tunion NumericChoice choice; /* choice of format */\n};\n\nand\n#define NUMERIC_HDRSZ_SHORT (VARHDRSZ + sizeof(uint16))\n\nHere I can blame gcc even less - result is indeed not a valid pointer to\nstruct NumericData, because sizeof(NumericData) is 8, not 6. I suspect\nit's actually undefined behaviour to ever dereference a Numeric pointer,\nwhen the pointer points to something smaller than sizeof(NumericData).\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 4 Feb 2023 05:07:08 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "undersized unions" }, { "msg_contents": "On Sat, Feb 04, 2023 at 05:07:08AM -0800, Andres Freund wrote:\n> <source>: In function 'assign':\n> <source>:9:6: warning: array subscript 'foo[0]' is partly outside array bounds of 'unsigned char[4]' [-Warray-bounds=]\n> 9 | p->i = i;\n> | ^~\n> <source>:8:22: note: object of size 4 allocated by '__builtin_malloc'\n> 8 | foo *p = (foo *) __builtin_malloc(sizeof(int));\n> | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n> Compiler returned: 0\n> \n> I can't really tell if gcc is right or wrong wrong to warn about\n> this. 
On the one hand it's a union, and we only access the element that\n> is actually backed by memory, on the other hand, the standard does say\n> that the size of a union is the largest element, so we are pointing to\n> something undersized.\n\nSomething I have noticed, related to that.. meson reports a set of\nwarnings here, not ./configure, still I apply the same set of CFLAGS\nto both. What's the difference in the meson setup that creates that,\nif I may ask? There is a link to the way -Warray-bound is handled?\n\n> We actually have a fair amount of code like that, but currently are\n> escaping most of the warnings, because gcc doesn't know that palloc() is\n> an allocator. With more optimizations (particularly with LTO), we end up\n> with more of such warnings. I'd like to annotate palloc so gcc\n> understands the size, as that does help to catch bugs when confusing the\n> type. It also helps static analyzers.\n\nAh, that seems like a good idea in the long run.\n--\nMichael", "msg_date": "Sun, 5 Feb 2023 10:18:14 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: undersized unions" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Sat, Feb 04, 2023 at 05:07:08AM -0800, Andres Freund wrote:\n>> We actually have a fair amount of code like that, but currently are\n>> escaping most of the warnings, because gcc doesn't know that palloc() is\n>> an allocator. With more optimizations (particularly with LTO), we end up\n>> with more of such warnings. I'd like to annotate palloc so gcc\n>> understands the size, as that does help to catch bugs when confusing the\n>> type. 
It also helps static analyzers.\n\n> Ah, that seems like a good idea in the long run.\n\nI'm kind of skeptical about whether we'll be able to get rid of all\nthe resulting warnings without extremely invasive (and ugly) changes.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 05 Feb 2023 00:16:55 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: undersized unions" }, { "msg_contents": "Hi, \n\nOn February 5, 2023 6:16:55 AM GMT+01:00, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>Michael Paquier <michael@paquier.xyz> writes:\n>> On Sat, Feb 04, 2023 at 05:07:08AM -0800, Andres Freund wrote:\n>>> We actually have a fair amount of code like that, but currently are\n>>> escaping most of the warnings, because gcc doesn't know that palloc() is\n>>> an allocator. With more optimizations (particularly with LTO), we end up\n>>> with more of such warnings. I'd like to annotate palloc so gcc\n>>> understands the size, as that does help to catch bugs when confusing the\n>>> type. It also helps static analyzers.\n>\n>> Ah, that seems like a good idea in the long run.\n>\n>I'm kind of skeptical about whether we'll be able to get rid of all\n>the resulting warnings without extremely invasive (and ugly) changes.\n\nIt's not that many sources of warnings, fwiw.\n\nBut the concrete reason for posting here was that I'm wondering whether the \"undersized\" allocations could cause problems as-is. \n\nOn the one hand there's compiler optimizations that could end up being a problem - imagine two branches of an if allocating something containing a union and one assigning to 32 the other to a 64bit integer union member. It'd imo be reasonable for the compiler to move that register->memory move outside of the if.\n\nOn the other hand, it also just seems risky from a code writing perspective. It's not immediately obvious that it'd be unsafe to create an on-stack Numeric by assigning *ptr. 
But it is.\n\nAndres\n\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n", "msg_date": "Sun, 05 Feb 2023 12:27:28 +0100", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: undersized unions" }, { "msg_contents": "Hi,\n\nOn 2023-02-05 10:18:14 +0900, Michael Paquier wrote:\n> On Sat, Feb 04, 2023 at 05:07:08AM -0800, Andres Freund wrote:\n> > <source>: In function 'assign':\n> > <source>:9:6: warning: array subscript 'foo[0]' is partly outside array bounds of 'unsigned char[4]' [-Warray-bounds=]\n> > 9 | p->i = i;\n> > | ^~\n> > <source>:8:22: note: object of size 4 allocated by '__builtin_malloc'\n> > 8 | foo *p = (foo *) __builtin_malloc(sizeof(int));\n> > | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n> > Compiler returned: 0\n> >\n> > I can't really tell if gcc is right or wrong wrong to warn about\n> > this. On the one hand it's a union, and we only access the element that\n> > is actually backed by memory, on the other hand, the standard does say\n> > that the size of a union is the largest element, so we are pointing to\n> > something undersized.\n>\n> Something I have noticed, related to that.. meson reports a set of\n> warnings here, not ./configure, still I apply the same set of CFLAGS\n> to both. What's the difference in the meson setup that creates that,\n> if I may ask? There is a link to the way -Warray-bound is handled?\n\nIt's possibly related to the optimization level used. Need a bit more\ninformation to provide a more educated guess. What warnings, what CFLAGS\netc.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 5 Feb 2023 04:44:15 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: undersized unions" }, { "msg_contents": "On Sun, Feb 5, 2023 at 6:28 AM Andres Freund <andres@anarazel.de> wrote:\n> On the other hand, it also just seems risky from a code writing perspective. 
It's not immediate obvious that it'd be unsafe to create an on-stack Numeric by assigning *ptr. But it is.\n\nWell, I think that is pretty obvious: we have lots of things that are\nessentially variable-length types, and you can't put any of them on\nthe stack.\n\nBut I do also think that the Numeric situation is messier than some\nothers we have got, and that's partly my fault, and it would be nice\nto make it better.\n\nI do not really know exactly how to do that, though. Our usual pattern\nis to just have a struct and end with a variable-length array, or\nalternatively add a comment that says \"other stuff follows!\" at the end of\nthe struct definition, without doing anything that C knows about at\nall. But here it's more complicated: there's a uint16 value for sure,\nand then maybe an int16 value, and then some number of NumericDigit\nvalues. That \"maybe an int16 value\" part is not something that C has a\nbuilt-in way of representing, to my knowledge, which is why we end up\nwith this hackish thing.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 6 Feb 2023 11:42:57 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: undersized unions" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I do not really know exactly how to do that, though. Our usual pattern\n> is to just have a struct and end with a variable-length array, or\n> alternatively add a comment says \"other stuff follows!\" at the end of\n> the struct definition, without doing anything that C knows about at\n> all. But here it's more complicated: there's a uint16 value for sure,\n> and then maybe an int16 value, and then some number of NumericDigit\n> values. 
That \"maybe an int16 value\" part is not something that C has a\n> built-in way of representing, to my knowledge, which is why we end up\n> with this hackish thing.\n\nIf we were willing to blow off the optimizations for NBASE < 10000,\nand say that NumericDigit is always int16, then it would be possible\nto represent all of these variants as plain array-of-int16, with\nsome conventions about which indexes are what (and some casting\nbetween int16 and uint16).\n\nI am, however, very dubious that Andres is correct that there's a\nproblem here. Given that two of the variants of union NumericChoice\nare structs ending with a flexible array, any compiler that thinks\nit knows the size of the union precisely is broken.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 06 Feb 2023 11:55:40 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: undersized unions" }, { "msg_contents": "Hi\n\nOn 2023-02-06 11:42:57 -0500, Robert Haas wrote:\n> On Sun, Feb 5, 2023 at 6:28 AM Andres Freund <andres@anarazel.de> wrote:\n> > On the other hand, it also just seems risky from a code writing perspective. It's not immediate obvious that it'd be unsafe to create an on-stack Numeric by assigning *ptr. But it is.\n>\n> Well, I think that is pretty obvious: we have lots of things that are\n> essentially variable-length types, and you can't put any of them on\n> the stack.\n\nThere's a difference between a stack copy not containing all the data, and a\nstack copy triggering undefined behaviour. But I agree, it's not something\nthat'll commonly endanger us.\n\n\n> But I do also think that the Numeric situation is messier than some\n> others we have got, and that's partly my fault, and it would be nice\n> to make it better.\n>\n> I do not really know exactly how to do that, though. 
Our usual pattern\n> is to just have a struct and end with a variable-length array, or\n> alternatively add a comment says \"other stuff follows!\" at the end of\n> the struct definition, without doing anything that C knows about at\n> all. But here it's more complicated: there's a uint16 value for sure,\n> and then maybe an int16 value, and then some number of NumericDigit\n> values. That \"maybe an int16 value\" part is not something that C has a\n> built-in way of representing, to my knowledge, which is why we end up\n> with this hackish thing.\n\nPerhaps something like\n\ntypedef struct NumericBase\n{\n uint16\t\tn_header;\n} NumericBase;\n\ntypedef struct NumericData\n{\n\tint32\t\tvl_len_;\t\t/* varlena header (do not touch directly!) */\n\tNumericBase data;\n} NumericData;\n\n/* subclass of NumericBase, needs to start in a compatible way */\ntypedef struct NumericLong\n{\n\tuint16\t\tn_sign_dscale;\t/* Sign + display scale */\n\tint16\t\tn_weight;\t\t/* Weight of 1st digit\t*/\n} NumericLong;\n\n/* subclass of NumericBase, needs to start in a compatible way */\nstruct NumericShort\n{\n\tuint16\t\tn_header;\t\t/* Sign + display scale + weight */\n\tNumericDigit n_data[FLEXIBLE_ARRAY_MEMBER]; /* Digits */\n};\n\nMacros that e.g. access n_long would need to cast, before they're able to\naccess n_long. So we'd end up with something like\n\n#define NUMERIC_SHORT(n) ((NumericShort *)&((n)->data))\n#define NUMERIC_LONG(n) ((NumericLong *)&((n)->data))\n\n#define NUMERIC_WEIGHT(n)\t(NUMERIC_HEADER_IS_SHORT((n)) ? \\\n\t((NUMERIC_SHORT(n)->n_header & NUMERIC_SHORT_WEIGHT_SIGN_MASK ? 
\\\n\t\t~NUMERIC_SHORT_WEIGHT_MASK : 0) \\\n\t | (NUMERIC_SHORT(n)->n_header & NUMERIC_SHORT_WEIGHT_MASK)) \\\n\t: (NUMERIC_LONG(n)->n_weight))\n\nAlthough I'd actually be tempted to rip out all the casts but NUMERIC_LONG()\nin this case, because it's all just accessing the base n_header anyway.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 6 Feb 2023 10:28:30 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: undersized unions" }, { "msg_contents": "Hi,\n\nOn 2023-02-06 11:55:40 -0500, Tom Lane wrote:\n> I am, however, very dubious that Andres is correct that there's a\n> problem here. Given that two of the variants of union NumericChoice\n> are structs ending with a flexible array, any compiler that thinks\n> it knows the size of the union precisely is broken.\n\nThe compiler just complains about the minimum size of the union, which is\n Max(offsetof(NumericShort, n_data), offsetof(NumericLong, n_data))\nIOW, our trickery with flexible arrays would allow us to allocate just 8 bytes\nfor a NumericData, but not just 6.\n\nFlexible arrays allow the compiler to understand the variable size, but we\ndon't use it for all variability. Hence the warnings.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 6 Feb 2023 10:36:32 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: undersized unions" }, { "msg_contents": "On Mon, Feb 6, 2023 at 1:28 PM Andres Freund <andres@anarazel.de> wrote:\n> Perhaps something like\n\nYeah, that'd work. You'd want a big ol' warning comment here:\n\n> typedef struct NumericData\n> {\n> int32 vl_len_; /* varlena header (do not touch directly!) 
*/\n> NumericBase data;\n> } NumericData;\n\nlike /* actually NumericShort or NumericLong */ or something\n\n> Although I'd actually be tempted to rip out all the casts but NUMERIC_LONG()\n> in this case, because it's all all just accessing the base n_header anyway.\n\nYeah.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 6 Feb 2023 13:51:33 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: undersized unions" } ]
[ { "msg_contents": "While investigating code paths that use system() and popen() and\ntrying to write latch-multiplexing replacements, which I'll write\nabout separately, leaks of $SUBJECT became obvious. On a FreeBSD box,\nyou can see the sockets, pipes and data and WAL files that the\nsubprocess inherits:\n\n create table t (x text);\n copy t from program 'procstat -f $$';\n select * from t;\n\n(On Linux, you might try something like 'ls -slap /proc/self/fd'. Not\nsure how to do it on a Mac but note that 'lsof' is no good for this\npurpose because the first thing it does is close all descriptors > 2;\nmaybe 'lsof -p $PPID' inside a shell wrapper so you're analysing the\nshell process that sits between postgres and lsof, rather than lsof\nitself?)\n\nSince we've started assuming a few other bits of SUSv3 (POSIX 2008)\nfunctionality, the standard that specified O_CLOEXEC, and since in\npractice Linux, *BSD, macOS, AIX, Solaris, illumos all have it, I\nthink we can unconditionally just use it on all files we open. That\nis, if we were to make fallback code, it would be untested, and if we\nwere to do it with fcntl() always it would be a frequent extra system\ncall that we don't need to support a computer that doesn't exist. For\nsockets and pipes, much more rarely created, some systems have\nnon-standard extensions along the same lines, but I guess we should\nstick with standards and call fcntl(FD_CLOEXEC) for now.\n\nThere is a place in fd.c that already referenced O_CLOEXEC (it wasn't\nreally using it, just making an assertion that flags don't collide),\nwith #ifdef around it, but that was only conditional because at the\ntime of commit 04cad8f7 we had a macOS 10.4 system (released 2005) in\nthe 'farm which obviously didn't know about POSIX 2008 interfaces. We\ncan just remove that #ifdef. 
(It's probably OK to remove the test of\nO_DSYNC too but I'll think about that another time.)\n\nOn Windows, handles, at least as we create them, are not inherited so\nthe problem doesn't come up AFAICS. I *think* if we were to use\nWindows' own open(), that would be an issue, but we have our own\nCreateFile() thing and it doesn't turn on inheritance IIUC. So I just\ngave O_CLOEXEC a zero definition there. It would be interesting to\nknow what handles a subprocess sees. If someone who knows how to\ndrive Windows could run a subprogram that just does the equivalent of\n'sleep 60' they might be able to see that in one of those handle spy\ntools, to visually check the above. (If I'm wrong about that, it\nmight be possible for a subprocess to interfere with a\nProcSignalBarrier command to close all files, so I'd love to know\nabout it.)\n\nWe were already doing FD_CLOEXEC on the latch self-pipe with comment\n\"we surely do not want any child processes messing with them\", so it's\nnot like this wasn't a well-known issue before, but I guess it just\nnever bothered anyone enough to do anything about the more general\nproblem.\n\nWith the attached, the test at the top of this email shows only in,\nout, error, and one thing that procstat opened itself.\n\nAre there any more descriptors we need to think about?", "msg_date": "Sun, 5 Feb 2023 13:00:50 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "File descriptors in exec'd subprocesses" }, { "msg_contents": "On Sun, Feb 5, 2023 at 1:00 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> SUSv3 (POSIX 2008)\n\nOh, oops, 2008 actually corresponds to SUSv4. 
Hmm.\n\n\n", "msg_date": "Sun, 5 Feb 2023 13:03:17 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: File descriptors in exec'd subprocesses" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Sun, Feb 5, 2023 at 1:00 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> SUSv3 (POSIX 2008)\n\n> Oh, oops, 2008 actually corresponds to SUSv4. Hmm.\n\nWorst case, if we come across some allegedly-supported platform without\nO_CLOEXEC, we #define that to zero. Said platform is no worse off\nthan it was before.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 05 Feb 2023 00:15:01 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: File descriptors in exec'd subprocesses" }, { "msg_contents": "Hi, \n\nUnsurprisingly I'm in favor of this. \n\nOn February 5, 2023 1:00:50 AM GMT+01:00, Thomas Munro <thomas.munro@gmail.com> wrote:\n>Are there any more descriptors we need to think about?\n\nPostmaster's listen sockets? Saves a bunch of syscalls, at least. Logging collector pipe write end, in backends?\n\nGreetings,\n\nAndres\n\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n", "msg_date": "Sun, 05 Feb 2023 15:29:33 +0100", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: File descriptors in exec'd subprocesses" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On February 5, 2023 1:00:50 AM GMT+01:00, Thomas Munro <thomas.munro@gmail.com> wrote:\n>> Are there any more descriptors we need to think about?\n\n> Postmaster's listen sockets?\n\nI wonder whether O_CLOEXEC on that would be inherited by the\nclient-communication sockets, though. That's fine ... 
unless you\nare doing EXEC_BACKEND.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 05 Feb 2023 11:06:13 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: File descriptors in exec'd subprocesses" }, { "msg_contents": "Hi,\n\nOn 2023-02-05 11:06:13 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On February 5, 2023 1:00:50 AM GMT+01:00, Thomas Munro <thomas.munro@gmail.com> wrote:\n> >> Are there any more descriptors we need to think about?\n> \n> > Postmaster's listen sockets?\n> \n> I wonder whether O_CLOEXEC on that would be inherited by the\n> client-communication sockets, though.\n\nI'd be very surprised if it were.\n\n<hack>\n\nNope, at least not on linux. Verified by looking at /proc/*/fdinfo/n\nafter adding SOCK_CLOEXEC to just the socket() call. 'flags' changes\nfrom 02 -> 02000002 for the listen socket, but stays at 04002 for the\nclient socket. If I add SOCK_CLOEXEC to accept() (well, accept4()), it\ndoes change from 04002 to 02004002.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 5 Feb 2023 08:40:30 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: File descriptors in exec'd subprocesses" }, { "msg_contents": "On Mon, Feb 6, 2023 at 3:29 AM Andres Freund <andres@anarazel.de> wrote:\n> On February 5, 2023 1:00:50 AM GMT+01:00, Thomas Munro <thomas.munro@gmail.com> wrote:\n> >Are there any more descriptors we need to think about?\n>\n> Postmaster's listen sockets? Saves a bunch of syscalls, at least.\n\nAssuming you mean accepted sockets, yeah, I see how to save two\nsyscalls there, and since you nerd-sniped me into looking into the\nSOCK_CLOEXEC landscape, I like it even more now that I've understood\nthat accept4() is rubber-stamped for the next revision of POSIX[1] and\nis already accepted almost everywhere. 
It's not just window dressing,\nyou need it to write multi-threaded programs that fork/exec without\nworrying about the window between fd creation and fcntl(FD_CLOEXEC) in\nanother thread; hopefully one day we will care about that sort of\nthing in some places too! Here's a separate patch for that.\n\nI *guess* we need HAVE_DECL_ACCEPT4 for the guarded availability\nsystem (cf pwritev) when Apple gets the memo, but see below. Hard to\nsay if AIX is still receiving memos (cf recent speculation in the\nRegister). All other target OSes seem to have had this stuff for a\nwhile.\n\nSince client connections already do fcntl(FD_CLOEXEC), postgres_fdw\nconnections didn't have this problem. It seems reasonable to want to\nskip a couple of system calls there too; also, client programs might\nalso be interested in future-POSIX's atomic race-free close-on-exec\nsocket fabrication. So here also is a patch to use SOCK_CLOEXEC on\nthat end too, if available.\n\nBut ... hmph, all we can do here is test for the existence of\nSOCK_NONBLOCK and SOCK_CLOEXEC, since there is no new function to test\nfor. Maybe we should just assume accept4() also exists if these exist\n(it's hard to imagine that Apple or IBM would address atomicity on one\nend but not the other of a socket), but predictions are so difficult,\nespecially about the future! Anyone want to guess if it's better to\nleave the meson/configure probe in for the accept4 end or just roll\nwith the macros?\n\n> Logging collector pipe write end, in backends?\n\nThe write end of the logging pipe is already closed, after dup2'ing it\nto STDOUT_FILENO to STDERR_FILENO, so archive commands and suchlike do\nreceive the handle, but they want them. It's the intended and\ndocumented behaviour that anything written to that will finish up in\nthe log.\n\nAs for pipe2(O_CLOEXEC), I see the point of it in a multi-threaded\napplication. 
It's not terribly useful for us though, because we\nusually want to close only one end, except in the case of the\nself-pipe. But the self-pipe is no longer used on the systems that\nhave pipe2()-from-the-future.\n\nI haven't tested this under EXEC_BACKEND yet.\n\n[1] https://www.austingroupbugs.net/view.php?id=411", "msg_date": "Mon, 6 Feb 2023 15:30:15 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: File descriptors in exec'd subprocesses" }, { "msg_contents": "I had missed one: the \"watch\" end of the postmaster pipe also needs FD_CLOEXEC.", "msg_date": "Tue, 21 Feb 2023 01:13:57 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: File descriptors in exec'd subprocesses" }, { "msg_contents": "Something bothered me about the previous versions: Which layer should\nadd O_CLOEXEC, given that we have md.c -> PathNameOpenXXX() ->\nBasicOpenFile() -> open()? Previously I had md.c adding it, but on\nreflection, it makes no sense to open a \"File\" (virtual file\ndescriptor) that is *not* O_CLOEXEC. The client doesn't really know\nwhen the raw descriptor is currently open and shouldn't really access\nit; it would be strange to want it to survive a call to exec*(). I\nthink the 'highest' level API that we could consider requiring\nO_CLOEXEC to be passed in explicitly, or not, is BasicOpenFile().\nDone like that in this version. This is the version I'm thinking of\ncommitting, unless someone wants to argue for another level.\n\nAnother good choice would be to do it inside BasicOpenFile(), and then\nthe patch would be smaller again (xlog.c wouldn't need to mention it,\nand there would perhaps be less risk that some long-lived descriptor\nsomewhere else has failed to request it), but perhaps that would be\npresumptuous. 
That function returns raw descriptors, and the caller,\nperhaps an extension, might legitimately want to make an inheritable\ndescriptor for some reason, I guess? Does anyone think I should move\nit in there instead?\n\nI realised that if we're going to use accept4() to cut down on\nsyscalls, we could also do the same for the postmaster pipe with\npipe2().\n\nHere also is a tiny archeological cleanup to avoid creating\ncontradictory claims about whether all computers have O_CLOEXEC.\n\nI toyed with the idea of a tiny Linux-only regression test using \"COPY\nfds FROM PROGRAM 'ls /proc/self/fd'\" expecting 0, 1, 2, 3 (3 being\nls's opendir()), but that's probably a little too cute; and also\nshowed me that pg_regress.c leaks its log file, the fix for which is\nto add \"e\" to its fdopen(), but that's another POSIX-next feature[1]\nthat seems a little harder to detect, and I gave up on that.\n\n[1] https://wiki.freebsd.org/AtomicCloseOnExec", "msg_date": "Tue, 21 Feb 2023 16:59:59 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: File descriptors in exec'd subprocesses" }, { "msg_contents": "On Mon, 20 Feb 2023 at 23:04, Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> Done like that in this version. This is the version I'm thinking of\n> committing, unless someone wants to argue for another level.\n\nFWIW the cfbot doesn't understand this patch series. 
I'm not sure why\nbut it's only trying to apply the first (the MacOS one) and it's\nfailing to apply even that.\n\n\n-- \nGregory Stark\nAs Commitfest Manager\n\n\n", "msg_date": "Wed, 1 Mar 2023 15:48:42 -0500", "msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>", "msg_from_op": false, "msg_subject": "Re: File descriptors in exec'd subprocesses" }, { "msg_contents": "On Thu, Mar 2, 2023 at 9:49 AM Gregory Stark (as CFM)\n<stark.cfm@gmail.com> wrote:\n> On Mon, 20 Feb 2023 at 23:04, Thomas Munro <thomas.munro@gmail.com> wrote:\n> >\n> > Done like that in this version. This is the version I'm thinking of\n> > committing, unless someone wants to argue for another level.\n>\n> FWIW the cfbot doesn't understand this patch series. I'm not sure why\n> but it's only trying to apply the first (the MacOS one) and it's\n> failing to apply even that.\n\nAh, it's because I committed one patch in the series. I'll commit one\nmore, and then repost the rest, shortly.\n\n\n", "msg_date": "Thu, 2 Mar 2023 09:57:33 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: File descriptors in exec'd subprocesses" }, { "msg_contents": "On Thu, Mar 2, 2023 at 9:57 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Thu, Mar 2, 2023 at 9:49 AM Gregory Stark (as CFM)\n> <stark.cfm@gmail.com> wrote:\n> > On Mon, 20 Feb 2023 at 23:04, Thomas Munro <thomas.munro@gmail.com> wrote:\n> > > Done like that in this version. This is the version I'm thinking of\n> > > committing, unless someone wants to argue for another level.\n> >\n> > FWIW the cfbot doesn't understand this patch series. I'm not sure why\n> > but it's only trying to apply the first (the MacOS one) and it's\n> > failing to apply even that.\n>\n> Ah, it's because I committed one patch in the series. 
I'll commit one\n> more, and then repost the rest, shortly.\n\nI pushed the main patch, \"Don't leak descriptors into subprograms.\".\nHere's a rebase of the POSIX-next stuff, but I'll sit on these for a\nbit longer to see if the build farm agrees with my claim about the\nubiquity of O_CLOEXEC, and if anyone has comments on this stuff.", "msg_date": "Fri, 3 Mar 2023 11:04:21 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: File descriptors in exec'd subprocesses" }, { "msg_contents": "I pushed the libpq changes. I'll leave the pipe2 and accept4 changes\non ice for now, maybe for a later cycle (unlike the committed patches,\nthey don't currently fix a known problem, they just avoid some\nsyscalls that are already fairly rare). For the libpq change, the\nbuild farm seems happy so far. I was a little worried that there\ncould be ways that #ifdef SOCK_CLOEXEC could be true for a build that\nmight encounter a too-old kernel and break, but it looks like you'd\nhave to go so far back into EOL'd releases that even our zombie build\nfarm animals have it. Only macOS and AIX don't have it yet, and this\nshould be fine with Apple's availability guards, which leaves just\nAIX. (AFAIK headers and kernels are always in sync at the major\nversion level on AIX, but if not presumably there'd have to be some\nsimilar guard system?)\n\n\n", "msg_date": "Sat, 18 Mar 2023 10:53:21 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: File descriptors in exec'd subprocesses" } ]
[ { "msg_contents": "I'm doing research on heap_am, and for heap_beginscan func, I find\r\nout that there is a arg called nkeys, I use some sqls as examples like \r\n'select * from t;' and 'select * from t where a = 1', but it is always zero,\r\ncan you give me some descriptions for this? what's it used for? \r\n\r\n\r\njacktby@gmail.com\r\n", "msg_date": "Sun, 5 Feb 2023 12:09:10 +0800", "msg_from": "\"jacktby@gmail.com\" <jacktby@gmail.com>", "msg_from_op": true, "msg_subject": "what's the meaning of key?" }, { "msg_contents": "On 05/02/2023 06:09, jacktby@gmail.com wrote:\n> I'm doing research on heap_am, and for heap_beginscan func, I find\n> out that there is a arg called nkeys, I use some sqls as examples like\n> 'select * from t;' and 'select * from t where a  = 1', but it is always \n> zero,\n> can you give me some descriptions for this? what's it used for?\n\nThe executor evaluates table scan quals in the SeqScan node itself, in \nExecScan function. It doesn't use the heap_beginscan scankeys.\n\nThere has been some discussion on changing that, as some table access \nmethods might be able to filter rows more efficiently using the scan \nkeys than the executor node. But that's how it currently works.\n\nI think the heap scankeys are used by catalog accesses, though, so it's \nnot completely dead code.\n\n- Heikki\n\n\n\n", "msg_date": "Sun, 5 Feb 2023 10:22:05 +0100", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: what's the meaning of key?" } ]
[ { "msg_contents": "When I use 'select * from t where a = 1'; And I debug to find where the 'a = 1' is used,\r\nwhen I arrive ExecScan in src/backend/executor/execScan.c, line 158, where this 'a = 1' is\r\nstored in?\r\n\r\n\r\njacktby@gmail.com\r\n", "msg_date": "Sun, 5 Feb 2023 12:29:40 +0800", "msg_from": "\"jacktby@gmail.com\" <jacktby@gmail.com>", "msg_from_op": true, "msg_subject": "Where is the filter?" }, { "msg_contents": "On Sat, Feb 4, 2023 at 11:29 PM jacktby@gmail.com <jacktby@gmail.com> wrote:\n> When I use 'select * from t where a = 1'; And I debug to find where the 'a = 1' is used,\n> when I arrive ExecScan in src/backend/executor/execScan.c, line 158, where this 'a = 1' is\n> stored in?\n\nIt depends somewhat on what query plan you got. For instance if it was\na Seq Scan then it will be a filter-condition, or \"qual\", and the call\nto ExecQual() later in ExecScan() will be responsible for evaluating\nit. But if you are using an index scan, then it will probably become\nan index qual, and those are passed down into the index machinery and\ninternally handled by the index AM. It's a good idea when you're\ndebugging this sort of thing to start by looking at the EXPLAIN or\nEXPLAIN ANALYZE output, and perhaps also the output with\ndebug_print_plan = true.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 6 Feb 2023 11:15:42 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Where is the filter?" } ]
[ { "msg_contents": "hi,\n\nI noticed that the pg_rules system view (all PG versions) does not include a\n\"status\" field (like in pg_trigger with tgenabled column)\n\nthe official view (from 15.1 sources) is :\n\nCREATE VIEW pg_rules AS\n SELECT\n N.nspname AS schemaname,\n C.relname AS tablename,\n R.rulename AS rulename,\n pg_get_ruledef(R.oid) AS definition\n FROM (pg_rewrite R JOIN pg_class C ON (C.oid = R.ev_class))\n LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)\n WHERE R.rulename != '_RETURN';\n\ni propose to add a new field \"rule_enabled\" to get (easilly and officially)\nthe rule status to all PG version\n\nCREATE VIEW pg_rules AS\n SELECT\n N.nspname AS schemaname,\n C.relname AS tablename,\n R.rulename AS rulename,\n R.ev_enabled as rule_enabled,\n pg_get_ruledef(R.oid) AS definition\n FROM (pg_rewrite R JOIN pg_class C ON (C.oid = R.ev_class))\n LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)\n WHERE R.rulename != '_RETURN';\n\n\nWhat do u think about that ?\n\nThx\n\nAlbin", "msg_date": "Sun, 5 Feb 2023 19:16:04 +0100", "msg_from": "Albin Hermange <albin.hermange@gmail.com>", "msg_from_op": true, "msg_subject": "add a \"status\" column to the pg_rules system view" } ]
[ { "msg_contents": "Instead of defining the same set of macros several times, define it once \nin an appropriate header file. In passing, convert to inline functions.", "msg_date": "Mon, 6 Feb 2023 10:54:07 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Consolidate ItemPointer to Datum conversion functions" }, { "msg_contents": "On 06/02/2023 11:54, Peter Eisentraut wrote:\n> Instead of defining the same set of macros several times, define it once\n> in an appropriate header file. In passing, convert to inline functions.\n\nLooks good to me. Did you consider moving PG_GETARG_ITEMPOINTER and \nPG_RETURN_ITEMPOINTER, too? They're only used in tid.c, but for most \ndatatypes, we define the PG_GETARG and PG_RETURN macros in the same \nheader file as the the Datum conversion functions.\n\n- Heikki\n\n\n\n", "msg_date": "Mon, 6 Feb 2023 11:11:54 +0100", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Consolidate ItemPointer to Datum conversion functions" }, { "msg_contents": "On 06.02.23 11:11, Heikki Linnakangas wrote:\n> On 06/02/2023 11:54, Peter Eisentraut wrote:\n>> Instead of defining the same set of macros several times, define it once\n>> in an appropriate header file.  In passing, convert to inline functions.\n> \n> Looks good to me. Did you consider moving PG_GETARG_ITEMPOINTER and \n> PG_RETURN_ITEMPOINTER, too? They're only used in tid.c, but for most \n> datatypes, we define the PG_GETARG and PG_RETURN macros in the same \n> header file as the the Datum conversion functions.\n\nYeah that makes sense. 
Here is an updated patch for that.", "msg_date": "Thu, 9 Feb 2023 09:33:16 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Consolidate ItemPointer to Datum conversion functions" }, { "msg_contents": "On 09.02.23 09:33, Peter Eisentraut wrote:\n> On 06.02.23 11:11, Heikki Linnakangas wrote:\n>> On 06/02/2023 11:54, Peter Eisentraut wrote:\n>>> Instead of defining the same set of macros several times, define it once\n>>> in an appropriate header file.  In passing, convert to inline functions.\n>>\n>> Looks good to me. Did you consider moving PG_GETARG_ITEMPOINTER and \n>> PG_RETURN_ITEMPOINTER, too? They're only used in tid.c, but for most \n>> datatypes, we define the PG_GETARG and PG_RETURN macros in the same \n>> header file as the the Datum conversion functions.\n> \n> Yeah that makes sense.  Here is an updated patch for that.\n\ncommitted\n\n\n", "msg_date": "Mon, 13 Feb 2023 10:26:58 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Consolidate ItemPointer to Datum conversion functions" } ]
[ { "msg_contents": "I was just playing with some random timestamps for a week, for a month, for\na year ...\n\nselect distinct current_date+((random()::numeric)||'month')::interval from\ngenerate_series(1,100) order by 1;\nIt´s with distinct clause because if you change that 'month' for a 'year'\nit´ll return only 12 rows, instead of 100. So, why years part of interval\nworks differently than any other ?\n\nselect '1.01 week'::interval; --> 0 years 0 mons 7 days 1 hours 40 mins\n48.00 secs\nselect '1.01 month'::interval; --> 0 years 1 mons 0 days 7 hours 12 mins\n0.00 secs\nselect '1.01 year'::interval; --> 1 years 0 mons 0 days 0 hours 0 mins 0.00\nsecs\n\nthanks\nMarcos", "msg_date": "Mon, 6 Feb 2023 08:20:17 -0300", "msg_from": "Marcos Pegoraro <marcos@f10.com.br>", "msg_from_op": true, "msg_subject": "Understanding years part of Interval" }, { "msg_contents": "> On 06/02/2023 12:20 CET Marcos Pegoraro <marcos@f10.com.br> wrote:\n>\n> I was just playing with some random timestamps for a week, for a month,\n> for a year ...\n>\n> select distinct current_date+((random()::numeric)||'month')::interval from generate_series(1,100) order by 1;\n> It´s with distinct clause because if you change that 'month' for a 'year'\n> it´ll return only 12 rows, instead of 100. 
So, why years part of interval\n> works differently than any other ?\n>\n> select '1.01 week'::interval; --> 0 years 0 mons 7 days 1 hours 40 mins 48.00 secs\n> select '1.01 month'::interval; --> 0 years 1 mons 0 days 7 hours 12 mins 0.00 secs\n> select '1.01 year'::interval; --> 1 years 0 mons 0 days 0 hours 0 mins 0.00 secs\n\nExplained in https://www.postgresql.org/docs/15/datatype-datetime.html#DATATYPE-INTERVAL-INPUT:\n\n\tField values can have fractional parts: for example, '1.5 weeks' or\n\t'01:02:03.45'. However, because interval internally stores only\n\tthree integer units (months, days, microseconds), fractional units\n\tmust be spilled to smaller units. Fractional parts of units greater\n\tthan months are rounded to be an integer number of months, e.g.\n\t'1.5 years' becomes '1 year 6 mons'. Fractional parts of weeks and\n\tdays are computed to be an integer number of days and microseconds,\n\tassuming 30 days per month and 24 hours per day, e.g., '1.75 months'\n\tbecomes 1 mon 22 days 12:00:00. Only seconds will ever be shown as\n\tfractional on output.\n\n\tInternally interval values are stored as months, days, and\n\tmicroseconds. This is done because the number of days in a month\n\tvaries, and a day can have 23 or 25 hours if a daylight savings time\n\tadjustment is involved.\n\n--\nErik\n\n\n", "msg_date": "Mon, 6 Feb 2023 14:59:11 +0100 (CET)", "msg_from": "Erik Wienhold <ewie@ewie.name>", "msg_from_op": false, "msg_subject": "Re: Understanding years part of Interval" }, { "msg_contents": "Em seg., 6 de fev. 
de 2023 às 10:59, Erik Wienhold <ewie@ewie.name>\nescreveu:\n\n> > On 06/02/2023 12:20 CET Marcos Pegoraro <marcos@f10.com.br> wrote:\n> >\n> > I was just playing with some random timestamps for a week, for a month,\n> > for a year ...\n> >\n> > select distinct current_date+((random()::numeric)||'month')::interval\n> from generate_series(1,100) order by 1;\n> > It´s with distinct clause because if you change that 'month' for a 'year'\n> > it´ll return only 12 rows, instead of 100. So, why years part of interval\n> > works differently than any other ?\n> >\n> > select '1.01 week'::interval; --> 0 years 0 mons 7 days 1 hours 40 mins\n> 48.00 secs\n> > select '1.01 month'::interval; --> 0 years 1 mons 0 days 7 hours 12 mins\n> 0.00 secs\n> > select '1.01 year'::interval; --> 1 years 0 mons 0 days 0 hours 0 mins\n> 0.00 secs\n>\n> Explained in\n> https://www.postgresql.org/docs/15/datatype-datetime.html#DATATYPE-INTERVAL-INPUT\n> :\n>\n> Field values can have fractional parts: for example, '1.5 weeks' or\n> '01:02:03.45'. However, because interval internally stores only\n> three integer units (months, days, microseconds), fractional units\n> must be spilled to smaller units. Fractional parts of units greater\n> than months are rounded to be an integer number of months, e.g.\n> '1.5 years' becomes '1 year 6 mons'. Fractional parts of weeks and\n> days are computed to be an integer number of days and microseconds,\n> assuming 30 days per month and 24 hours per day, e.g., '1.75\n> months'\n> becomes 1 mon 22 days 12:00:00. Only seconds will ever be shown as\n> fractional on output.\n>\n> Internally interval values are stored as months, days, and\n> microseconds. 
This is done because the number of days in a month\n> varies, and a day can have 23 or 25 hours if a daylight savings\n> time\n> adjustment is involved.\n>\n> I´ve sent this message initially to general and Erik told me it's\ndocumented, so it's better to hackers help me if this has an explaining why\nit's done that way.\n\nselect '1 year'::interval = '1.05 year'::interval -->true ?\nI cannot agree that this select returns true.", "msg_date": "Mon, 6 Feb 2023 14:33:01 -0300", "msg_from": "Marcos Pegoraro <marcos@f10.com.br>", "msg_from_op": true, "msg_subject": "Re: Understanding years part of Interval" }, { "msg_contents": "> On 06/02/2023 18:33 CET Marcos Pegoraro <marcos@f10.com.br> wrote:\n>\n> Em seg., 6 de fev. de 2023 às 10:59, Erik Wienhold <ewie@ewie.name> escreveu:\n> > > On 06/02/2023 12:20 CET Marcos Pegoraro <marcos@f10.com.br> wrote:\n> > >\n> > > I was just playing with some random timestamps for a week, for a month,\n> > > for a year ...\n> > >\n> > > select distinct current_date+((random()::numeric)||'month')::interval from generate_series(1,100) order by 1;\n> > > It´s with distinct clause because if you change that 'month' for a 'year'\n> > > it´ll return only 12 rows, instead of 100. So, why years part of interval\n> > > works differently than any other ?\n> > >\n> > > select '1.01 week'::interval; --> 0 years 0 mons 7 days 1 hours 40 mins 48.00 secs\n> > > select '1.01 month'::interval; --> 0 years 1 mons 0 days 7 hours 12 mins 0.00 secs\n> > > select '1.01 year'::interval; --> 1 years 0 mons 0 days 0 hours 0 mins 0.00 secs\n> >\n> > Explained in https://www.postgresql.org/docs/15/datatype-datetime.html#DATATYPE-INTERVAL-INPUT:\n> >\n> > Field values can have fractional parts: for example, '1.5 weeks' or\n> > '01:02:03.45'. 
However, because interval internally stores only\n> > three integer units (months, days, microseconds), fractional units\n> > must be spilled to smaller units. Fractional parts of units greater\n> > than months are rounded to be an integer number of months, e.g.\n> > '1.5 years' becomes '1 year 6 mons'. Fractional parts of weeks and\n> > days are computed to be an integer number of days and microseconds,\n> > assuming 30 days per month and 24 hours per day, e.g., '1.75 months'\n> > becomes 1 mon 22 days 12:00:00. Only seconds will ever be shown as\n> > fractional on output.\n> >\n> > Internally interval values are stored as months, days, and\n> > microseconds. This is done because the number of days in a month\n> > varies, and a day can have 23 or 25 hours if a daylight savings time\n> > adjustment is involved.\n> >\n> I´ve sent this message initially to general and Erik told me it's documented,\n> so it's better to hackers help me if this has an explaining why it's done that way.\n>\n> select '1 year'::interval = '1.05 year'::interval -->true ?\n> I cannot agree that this select returns true.\n\nThe years are converted to months and the fractional month is rounded half up:\n\n\t1.05 year = 12.6 month\n\t=> 1 year 0.6 month\n\t=> 1 year 1 month (after rounding)\n\nCompare that to 12.5 months to see when the rounding occurs:\n\n\t12.5 month / 12 month\n\t=> 1.0416... 
years\n\nPlug 1.0416 and 1.0417 into the interval to observe the rounding:\n\n\t=# select '1.0416 year'::interval, '1.0417 year'::interval;\n\t interval | interval\n\t----------+--------------\n\t 1 year | 1 year 1 mon\n\n--\nErik\n\n\n", "msg_date": "Mon, 6 Feb 2023 20:29:58 +0100 (CET)", "msg_from": "Erik Wienhold <ewie@ewie.name>", "msg_from_op": false, "msg_subject": "Re: Understanding years part of Interval" }, { "msg_contents": ">\n> The years are converted to months and the fractional month is rounded half\n>> up:\n>>\n>> 1.05 year = 12.6 month\n>> => 1 year 0.6 month\n>> => 1 year 1 month (after rounding)\n>>\n>> Compare that to 12.5 months to see when the rounding occurs:\n>>\n>> 12.5 month / 12 month\n>> => 1.0416... years\n>>\n>> Plug 1.0416 and 1.0417 into the interval to observe the rounding:\n>>\n>> =# select '1.0416 year'::interval, '1.0417 year'::interval;\n>> interval | interval\n>> ----------+--------------\n>> 1 year | 1 year 1 mon\n>>\n>> I understood what you explained, but cannot agree that it's correct.\n> Run these and you'll see the first and second select are fine, the third\n> ... why ?\n>\n> select distinct current_date + ((random()::numeric) * '1 year'::interval)\n> from generate_series(1,100) order by 1;\n> select distinct current_date + ((random()::numeric) * '12\n> month'::interval) from generate_series(1,100) order by 1;\n> select distinct current_date + ((random()::numeric) || 'year')::interval\n> from generate_series(1,100) order by 1;\n>\n> So, I have to think ... 
never use fractional parts on years, right ?\n>\n\nOnly to be written, if somebody has to work with fractional parts of years.\n\nThis way works\nselect distinct (random()::numeric) * ('1 year'::interval) from\ngenerate_series(1,100) order by 1;\n\nThis way doesn´t\nselect distinct ((random()::numeric) || 'year')::interval from\ngenerate_series(1,100) order by 1;", "msg_date": "Tue, 7 Feb 2023 09:00:03 -0300", "msg_from": "Marcos Pegoraro <marcos@f10.com.br>", "msg_from_op": true, "msg_subject": "Re: Understanding years part of Interval" } ]
[ { "msg_contents": "I recently moved crake to a new machine running Fedora 36, which has \nOpenSSL 3.0.0. This causes the SSL tests to fail on branches earlier \nthan release 13, so I propose to backpatch commit f0d2c65f17 to the \nrelease 11 and 12 branches.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com", "msg_date": "Mon, 6 Feb 2023 10:56:11 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "OpenSSL 3.0.0 vs old branches" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> I recently moved crake to a new machine running Fedora 36, which has \n> OpenSSL 3.0.0. This causes the SSL tests to fail on branches earlier \n> than release 13, so I propose to backpatch commit f0d2c65f17 to the \n> release 11 and 12 branches.\n\nHmm ... according to that commit message,\n\n Note that the minimum supported OpenSSL version is 1.0.1 as of\n 7b283d0e1d1d79bf1c962d790c94d2a53f3bb38a, so this does not introduce\n any new version requirements.\n\nSo presumably, changing this test would break it for OpenSSL 0.9.8,\nwhich is still nominally supported in those branches. 
On the other\nhand, this test isn't run by default, so users would likely never\nnotice anyway.\n\nOn the whole, +1 for doing this (after the release freeze lifts).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 06 Feb 2023 11:13:57 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 vs old branches" }, { "msg_contents": "On 2023-02-06 Mo 11:13, Tom Lane wrote:\n> Andrew Dunstan<andrew@dunslane.net> writes:\n>> I recently moved crake to a new machine running Fedora 36, which has\n>> OpenSSL 3.0.0. This causes the SSL tests to fail on branches earlier\n>> than release 13, so I propose to backpatch commit f0d2c65f17 to the\n>> release 11 and 12 branches.\n> Hmm ... according to that commit message,\n>\n> Note that the minimum supported OpenSSL version is 1.0.1 as of\n> 7b283d0e1d1d79bf1c962d790c94d2a53f3bb38a, so this does not introduce\n> any new version requirements.\n>\n> So presumably, changing this test would break it for OpenSSL 0.9.8,\n> which is still nominally supported in those branches. On the other\n> hand, this test isn't run by default, so users would likely never\n> notice anyway.\n>\n> On the whole, +1 for doing this (after the release freeze lifts).\n>\n> \t\t\t\n\n\nPresumably we don't have any buildfarm animals running with such old \nversions of openssl, or they would be failing the same test on release \n >= 13.\n\n\nI'll push this in due course.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com", "msg_date": "Mon, 6 Feb 2023 16:19:30 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: OpenSSL 3.0.0 vs old branches" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2023-02-06 Mo 11:13, Tom Lane wrote:\n>> So presumably, changing this test would break it for OpenSSL 0.9.8,\n>> which is still nominally supported in those branches. On the other\n>> hand, this test isn't run by default, so users would likely never\n>> notice anyway.\n\n> Presumably we don't have any buildfarm animals running with such old \n> versions of openssl, or they would be failing the same test on release \n> >= 13.\n\nThat test isn't run by default in the buildfarm either, no?\n\nBut indeed, probably nobody in the community is testing such builds\nat all. I did have such setups on my old dinosaur BF animals, but\nthey bit the dust last year for unrelated reasons. 
I wonder how\nrealistic it is to claim that we still support those old OpenSSL\nversions.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 06 Feb 2023 17:01:08 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 vs old branches" }, { "msg_contents": "On 06.02.23 16:56, Andrew Dunstan wrote:\n> I recently moved crake to a new machine running Fedora 36, which has \n> OpenSSL 3.0.0. This causes the SSL tests to fail on branches earlier \n> than release 13, so I propose to backpatch commit f0d2c65f17 to the \n> release 11 and 12 branches.\n\nThis is not the only patch that we did to support OpenSSL 3.0.0. There \nwas a very lengthy discussion that resulted in various patches. Unless \nwe have a complete analysis of what was done and how it affects various \nbranches, I would not do this. Notably, we did actually consider what \nto backpatch, and the current state is the result of that. So let's not \nthrow that away without considering that carefully. Even if it gets it \nto compile, I personally would not *trust* it without that analysis. I \nthink we should just leave it alone and consider OpenSSL 3.0.0 \nunsupported in the branches were it is now unsupported. OpenSSL 1.1.1 \nis still supported upstream to serve those releases.\n\n\n\n", "msg_date": "Tue, 7 Feb 2023 08:18:53 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 vs old branches" }, { "msg_contents": "On 2023-02-07 Tu 02:18, Peter Eisentraut wrote:\n> On 06.02.23 16:56, Andrew Dunstan wrote:\n>> I recently moved crake to a new machine running Fedora 36, which has \n>> OpenSSL 3.0.0. This causes the SSL tests to fail on branches earlier \n>> than release 13, so I propose to backpatch commit f0d2c65f17 to the \n>> release 11 and 12 branches.\n>\n> This is not the only patch that we did to support OpenSSL 3.0.0. 
There \n> was a very lengthy discussion that resulted in various patches.  \n> Unless we have a complete analysis of what was done and how it affects \n> various branches, I would not do this.  Notably, we did actually \n> consider what to backpatch, and the current state is the result of \n> that.  So let's not throw that away without considering that \n> carefully.  Even if it gets it to compile, I personally would not \n> *trust* it without that analysis.  I think we should just leave it \n> alone and consider OpenSSL 3.0.0 unsupported in the branches were it \n> is now unsupported.  OpenSSL 1.1.1 is still supported upstream to \n> serve those releases.\n\n\nThe only thing this commit does is replace a DES encrypted key file with \none encrypted with AES-256. It doesn't affect compilation at all, and \nshouldn't affect tests run with 1.1.1.\n\nI guess the alternatives are a) disable the SSL tests on branches <= 12 \nor b) completely disable building with SSL for branches <= 12. I would \nprobably opt for a). I bet this crops up a few more times as OpenSSL \n3.0.0 becomes more widespread, until release 12 goes EOL.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com", "msg_date": "Tue, 7 Feb 2023 07:08:07 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: OpenSSL 3.0.0 vs old branches" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2023-02-07 Tu 02:18, Peter Eisentraut wrote:\n>> This is not the only patch that we did to support OpenSSL 3.0.0. There \n>> was a very lengthy discussion that resulted in various patches.  \n>> Unless we have a complete analysis of what was done and how it affects \n>> various branches, I would not do this.  Notably, we did actually \n>> consider what to backpatch, and the current state is the result of \n>> that.  So let's not throw that away without considering that \n>> carefully.  Even if it gets it to compile, I personally would not \n>> *trust* it without that analysis.  I think we should just leave it \n>> alone and consider OpenSSL 3.0.0 unsupported in the branches were it \n>> is now unsupported.  
OpenSSL 1.1.1 is still supported upstream to \n>> serve those releases.\n\nAFAICT we did back-patch those changes into the branches at issue.\nI find this in the 12.9 and 11.14 release notes, for example:\n\n <listitem>\n<!--\nAuthor: Peter Eisentraut <peter@eisentraut.org>\nBranch: master Release: REL_14_BR [22e1943f1] 2021-03-23 11:48:37 +0100\nBranch: REL_13_STABLE [a69e1506f] 2021-09-25 11:25:48 +0200\nBranch: REL_12_STABLE [90cfd269f] 2021-09-25 11:25:48 +0200\nBranch: REL_11_STABLE [0f28d267c] 2021-09-25 11:25:48 +0200\nBranch: REL_10_STABLE [841075a65] 2021-09-25 11:25:48 +0200\nAuthor: Daniel Gustafsson <dgustafsson@postgresql.org>\nBranch: master [318df8023] 2021-08-10 15:01:52 +0200\nBranch: REL_14_STABLE Release: REL_14_0 [4fa2b15e1] 2021-09-25 11:27:20 +0200\nBranch: REL_13_STABLE [135d8687a] 2021-09-25 11:27:20 +0200\nBranch: REL_12_STABLE [00c72da4a] 2021-09-25 11:27:20 +0200\nBranch: REL_11_STABLE [11901cd96] 2021-09-25 11:27:20 +0200\nBranch: REL_10_STABLE [e802b594e] 2021-09-25 11:27:20 +0200\nAuthor: Daniel Gustafsson <dgustafsson@postgresql.org>\nBranch: master [72bbff4cd] 2021-08-10 15:08:46 +0200\nBranch: REL_14_STABLE Release: REL_14_0 [6d0001aab] 2021-09-25 11:27:28 +0200\nBranch: REL_13_STABLE [8e7199453] 2021-09-25 11:27:28 +0200\nBranch: REL_12_STABLE [7b6ce36fb] 2021-09-25 11:27:28 +0200\nBranch: REL_11_STABLE [19e91a40b] 2021-09-25 11:27:28 +0200\nBranch: REL_10_STABLE [eb643536b] 2021-09-25 11:27:28 +0200\nAuthor: Michael Paquier <michael@paquier.xyz>\nBranch: master [41f30ecc2] 2021-10-20 16:48:24 +0900\nBranch: REL_14_STABLE [81aefaea8] 2021-10-20 16:48:57 +0900\nBranch: REL_13_STABLE [abb9ee92c] 2021-10-20 16:49:00 +0900\nBranch: REL_12_STABLE [1539e0ecd] 2021-10-20 16:49:03 +0900\nBranch: REL_11_STABLE [e00d45fea] 2021-10-20 16:49:06 +0900\nBranch: REL_10_STABLE [922e3c3b7] 2021-10-20 16:49:10 +0900\nBranch: REL9_6_STABLE [d581960df] 2021-10-20 16:49:14 +0900\n-->\n <para>\n Support OpenSSL 3.0.0\n (Peter Eisentraut, Daniel 
Gustafsson, Michael Paquier)\n </para>\n </listitem>\n\n> The only thing this commit does is replace a DES encrypted key file with \n> one encrypted with AES-256. It doesn't affect compilation at all, and \n> shouldn't affect tests run with 1.1.1.\n\nI double-checked this on Fedora 37 (openssl 3.0.5). v11 and v12\ndo build --with-openssl. There are an annoyingly large number of\n-Wdeprecated-declarations warnings, but those are there in v13 too.\nI confirm that back-patching f0d2c65f17 is required and sufficient\nto make the ssl test pass.\n\nI think Peter's misremembering the history, and OpenSSL 3 *is*\nsupported in these branches. There could be an argument for\nnot back-patching f0d2c65f17 on the grounds that pre-1.1.1 is\nalso supported there. On the whole though, it seems more useful\ntoday for that test to pass with 3.x than for it to pass with 0.9.8.\nAnd I can't see investing effort to make it do both (but if Peter\nwants to, I won't stand in the way).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 07 Feb 2023 13:28:26 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 vs old branches" }, { "msg_contents": "On Tue, Feb 07, 2023 at 01:28:26PM -0500, Tom Lane wrote:\n> I double-checked this on Fedora 37 (openssl 3.0.5). v11 and v12\n> do build --with-openssl. There are an annoyingly large number of\n> -Wdeprecated-declarations warnings, but those are there in v13 too.\n> I confirm that back-patching f0d2c65f17 is required and sufficient\n> to make the ssl test pass.\n\n+1. (I am annoyed by that for any backpatch that involves v11 and\nv12.)\n\n> I think Peter's misremembering the history, and OpenSSL 3 *is*\n> supported in these branches. There could be an argument for\n> not back-patching f0d2c65f17 on the grounds that pre-1.1.1 is\n> also supported there. 
On the whole though, it seems more useful\n> today for that test to pass with 3.x than for it to pass with 0.9.8.\n> And I can't see investing effort to make it do both (but if Peter\n> wants to, I won't stand in the way).\n\nCutting support for 0.9.8 in oldest branches would be a very risky\nmove, but as you say, if that only involves a failure in the SSL\ntests while still allowing anything we have to work, fine by me to\nlive with that.\n\nSaying that, not being able to test these when working on a\nSSL-specific patch adds an extra cost in back-patching. There are not\nmany of these lately, so that may be OK, still it would mean to apply\na reverse of f0d2c65. If things were to work for all the versions of\nOpenSSL supported on 11 and 12, would it mean that the tests need to\nstore both -des and -aes256 data, having the tests switch from one to\nthe other depending on the version of OpenSSL built with?\n--\nMichael", "msg_date": "Wed, 8 Feb 2023 13:24:48 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 vs old branches" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Tue, Feb 07, 2023 at 01:28:26PM -0500, Tom Lane wrote:\n>> I think Peter's misremembering the history, and OpenSSL 3 *is*\n>> supported in these branches. There could be an argument for\n>> not back-patching f0d2c65f17 on the grounds that pre-1.1.1 is\n>> also supported there. On the whole though, it seems more useful\n>> today for that test to pass with 3.x than for it to pass with 0.9.8.\n>> And I can't see investing effort to make it do both (but if Peter\n>> wants to, I won't stand in the way).\n\n> Cutting support for 0.9.8 in oldest branches would be a very risky\n> move, but as you say, if that only involves a failure in the SSL\n> tests while still allowing anything we have to work, fine by me to\n> live with that.\n\nQuestion: is anybody around here still testing with 0.9.8 (or 1.0.x)\nat all? 
The systems I had that had that version on them are dead.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 07 Feb 2023 23:37:54 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 vs old branches" }, { "msg_contents": "On 2023-02-07 Tu 23:37, Tom Lane wrote:\n> Michael Paquier<michael@paquier.xyz> writes:\n>> On Tue, Feb 07, 2023 at 01:28:26PM -0500, Tom Lane wrote:\n>>> I think Peter's misremembering the history, and OpenSSL 3 *is*\n>>> supported in these branches. There could be an argument for\n>>> not back-patching f0d2c65f17 on the grounds that pre-1.1.1 is\n>>> also supported there. On the whole though, it seems more useful\n>>> today for that test to pass with 3.x than for it to pass with 0.9.8.\n>>> And I can't see investing effort to make it do both (but if Peter\n>>> wants to, I won't stand in the way).\n>> Cutting support for 0.9.8 in oldest branches would be a very risky\n>> move, but as you say, if that only involves a failure in the SSL\n>> tests while still allowing anything we have to work, fine by me to\n>> live with that.\n> Question: is anybody around here still testing with 0.9.8 (or 1.0.x)\n> at all? The systems I had that had that version on them are dead.\n\n\nIn the last 30 days, only the following buildfarm animals have reported \nrunning the ssl checks on the relevant branches:\n\n  crake\n  eelpout\n  fairywren\n  gokiburi\n  hachi\n  longfin\n\nI don't think any of these runs openssl <= 1.0.x. If we want to preserve \ntestability for those very old versions we should actually be doing some \ntesting. Or we could just move on and backpatch this as I've suggested. 
\nI'll be pretty surprised if we get a single complaint.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com", "msg_date": "Wed, 8 Feb 2023 07:30:32 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: OpenSSL 3.0.0 vs old branches" }, { "msg_contents": "Op 08-02-2023 om 05:37 schreef Tom Lane:\n> Michael Paquier <michael@paquier.xyz> writes:\n>> On Tue, Feb 07, 2023 at 01:28:26PM -0500, Tom Lane wrote:\n>>> I think Peter's misremembering the history, and OpenSSL 3 *is*\n>>> supported in these branches. There could be an argument for\n>>> not back-patching f0d2c65f17 on the grounds that pre-1.1.1 is\n>>> also supported there. On the whole though, it seems more useful\n>>> today for that test to pass with 3.x than for it to pass with 0.9.8.\n>>> And I can't see investing effort to make it do both (but if Peter\n>>> wants to, I won't stand in the way).\n> \n>> Cutting support for 0.9.8 in oldest branches would be a very risky\n>> move, but as you say, if that only involves a failure in the SSL\n>> tests while still allowing anything we have to work, fine by me to\n>> live with that.\n> \n> Question: is anybody around here still testing with 0.9.8 (or 1.0.x)\n> at all? The systems I had that had that version on them are dead.\n> \n> \t\t\tregards, tom lane\n\nI've hoarded an old centos 6.1 system that I don't really use anymore \nbut sometimes (once every few weeks, I guess) start up and build master \non, for instance to test with postgres_fdw/replication. Such a build \nwould include a make check, and I think I would have noticed any fails.\n\nThat system says:\nOpenSSL> OpenSSL 1.0.1e-fips 11 Feb 2013\n\nFWIW, just now I built & ran check-world for 15 and 16 with \nPG_TEST_EXTRA=ssl (which I didn't use before). 
Both finished ok.\n\nErik Rijkers\n\n\n", "msg_date": "Wed, 8 Feb 2023 16:12:39 +0100", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 vs old branches" }, { "msg_contents": "On 07.02.23 19:28, Tom Lane wrote:\n> I think Peter's misremembering the history, and OpenSSL 3*is*\n> supported in these branches. There could be an argument for\n> not back-patching f0d2c65f17 on the grounds that pre-1.1.1 is\n> also supported there. On the whole though, it seems more useful\n> today for that test to pass with 3.x than for it to pass with 0.9.8.\n\nOk, let's do it.\n\n\n\n", "msg_date": "Wed, 8 Feb 2023 16:42:33 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 vs old branches" }, { "msg_contents": "Erik Rijkers <er@xs4all.nl> writes:\n> Op 08-02-2023 om 05:37 schreef Tom Lane:\n>> Question: is anybody around here still testing with 0.9.8 (or 1.0.x)\n>> at all? The systems I had that had that version on them are dead.\n\n> I've hoarded an old centos 6.1 system that I don't really use anymore \n> but sometimes (once every few weeks, I guess) start up and build master \n> on, for instance to test with postgres_fdw/replication. Such a build \n> would include a make check, and I think I would have noticed any fails.\n> That system says:\n> OpenSSL> OpenSSL 1.0.1e-fips 11 Feb 2013\n> FWIW, just now I built & ran check-world for 15 and 16 with \n> PG_TEST_EXTRA=ssl (which I didn't use before). Both finished ok.\n\nOh, that's good to know. That means that the newer form of this\ntest works with 1.0.1, which means that we'd only lose test\ncompatibility with 0.9.x OpenSSL. 
That bothers me not at all\nin 2023.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 08 Feb 2023 11:02:24 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 vs old branches" }, { "msg_contents": "On 2023-02-08 We 10:42, Peter Eisentraut wrote:\n> On 07.02.23 19:28, Tom Lane wrote:\n>> I think Peter's misremembering the history, and OpenSSL 3*is*\n>> supported in these branches.  There could be an argument for\n>> not back-patching f0d2c65f17 on the grounds that pre-1.1.1 is\n>> also supported there.  On the whole though, it seems more useful\n>> today for that test to pass with 3.x than for it to pass with 0.9.8.\n>\n> Ok, let's do it.\n\n\nDone\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com", "msg_date": "Wed, 8 Feb 2023 16:58:48 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: OpenSSL 3.0.0 vs old branches" }, { "msg_contents": "On Wed, Feb 08, 2023 at 07:30:32AM -0500, Andrew Dunstan wrote:\n> In the last 30 days, only the following buildfarm animals have reported\n> running the ssl checks on the relevant branches:\n> \n>  gokiburi\n>  hachi\n\nFWIW, these two ones are using OpenSSL 1.1.1, so that's fine.\n--\nMichael", "msg_date": "Thu, 9 Feb 2023 09:07:49 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 vs old branches" } ]
[ { "msg_contents": "Hi,\r\n\r\nAttached is a draft of the announcement for the 2023-02-09 update release.\r\n\r\nPlease review and provide corrections, notable omissions, and \r\nsuggestions no later than 2023-02-09 0:00 AoE.\r\n\r\nThanks!\r\n\r\nJonathan", "msg_date": "Mon, 6 Feb 2023 13:19:50 -0500", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "2023-02-09 release announcement draft" } ]
[ { "msg_contents": "I notice that Michael's new BF animal gokiburi is failing in\nall the pre-v15 branches, though it's fine in v15 and HEAD.\nIt's evidently dying from ASLR effects because it's trying\nto build with EXEC_BACKEND on Linux: there's lots of\n\n2023-02-06 06:07:02.131 GMT [1503972] FATAL: could not reattach to shared memory (key=813803, addr=0xffff8c3a5000): Invalid argument\n2023-02-06 06:07:02.131 GMT [1503971] FATAL: could not reattach to shared memory (key=813803, addr=0xffff8c3a5000): Invalid argument\n2023-02-06 06:07:02.132 GMT [1503976] FATAL: could not reattach to shared memory (key=813803, addr=0xffff8c3a5000): Invalid argument\n\nin its logs.\n\nThe reason it's okay in v15 and up is presumably this:\n\nAuthor: Thomas Munro <tmunro@postgresql.org>\nBranch: master Release: REL_15_BR [f3e78069d] 2022-01-11 00:04:33 +1300\n\n Make EXEC_BACKEND more convenient on Linux and FreeBSD.\n \n Try to disable ASLR when building in EXEC_BACKEND mode, to avoid random\n memory mapping failures while testing. For developer use only, no\n effect on regular builds.\n\nIs it time to back-patch that commit? The alternative would be\nto run the animal with an ASLR-disabling environment variable.\nOn the whole I think testing that f3e78069d works is more\nuseful than working around lack of it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 06 Feb 2023 18:27:50 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "gokiburi versus the back branches" }, { "msg_contents": "On Mon, Feb 06, 2023 at 06:27:50PM -0500, Tom Lane wrote:\n> Is it time to back-patch that commit? 
The alternative would be\n> to run the animal with an ASLR-disabling environment variable.\n> On the whole I think testing that f3e78069d works is more\n> useful than working around lack of it.\n\nYes, this is my intention as of this message from last week, once this\nweek's release is tagged:\nhttps://www.postgresql.org/message-id/Y9sMhxo51HRXAmtu@paquier.xyz\n\nThanks,\n--\nMichael", "msg_date": "Tue, 7 Feb 2023 08:35:09 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: gokiburi versus the back branches" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Mon, Feb 06, 2023 at 06:27:50PM -0500, Tom Lane wrote:\n>> Is it time to back-patch that commit?\n\n> Yes, this is my intention as of this message from last week, once this\n> week's release is tagged:\n> https://www.postgresql.org/message-id/Y9sMhxo51HRXAmtu@paquier.xyz\n\nD'oh, I'd totally forgotten that conversation already :-(\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 06 Feb 2023 18:46:11 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: gokiburi versus the back branches" } ]
[ { "msg_contents": "In deconstruct_distribute_oj_quals, when we've identified a commutable\nleft join which provides join clause with flexible semantics, we try to\ngenerate multiple versions of the join clause.  Here we have the logic\nthat puts back any ojrelids that were removed from its min_righthand.\n\n     /*\n      * Put any OJ relids that were removed from min_righthand back into\n      * ojscope, else distribute_qual_to_rels will complain.\n      */\n     ojscope = bms_join(ojscope, bms_intersect(sjinfo->commute_below,\n                                               sjinfo->syn_righthand));\n\nI doubt this is necessary.  It seems to me that all relids mentioned\nwithin the join clause have already been contained in ojscope, which is\nthe union of min_lefthand and min_righthand.\n\nI noticed this code because I came across a problem with a query as\nbelow.\n\ncreate table t (a int);\n\nselect t1.a from (t t1 left join t t2 on true) left join (t t3 left join t\nt4 on t3.a = t4.a) on t2.a = t3.a;\n\nWhen we deal with qual 't2.a = t3.a', deconstruct_distribute_oj_quals\nwould always add the OJ relid of t3/t4 into its required_relids, due to\nthe code above, which I think is wrong.  The direct consequence is that\nwe would miss the plan that joins t2 and t3 directly.\n\nIf we add unique constraint for 'a' and try the outer-join removal\nlogic, we would notice that the left join of t2/t3 cannot be removed\nbecause its join qual is treated as pushed down due to the fact that its\nrequired_relids exceed the scope of the join.  I think this is also not\ncorrect.\n\nSo is it safe we remove that code?\n\nThanks\nRichard\n\n", "msg_date": "Tue, 7 Feb 2023 11:07:08 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "A problem in deconstruct_distribute_oj_quals" }, { "msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> In deconstruct_distribute_oj_quals, when we've identified a commutable\n> left join which provides join clause with flexible semantics, we try to\n> generate multiple versions of the join clause.  Here we have the logic\n> that puts back any ojrelids that were removed from its min_righthand.\n\n>     /*\n>      * Put any OJ relids that were removed from min_righthand back into\n>      * ojscope, else distribute_qual_to_rels will complain.\n>      */\n>     ojscope = bms_join(ojscope, bms_intersect(sjinfo->commute_below,\n>                                               sjinfo->syn_righthand));\n\n> I doubt this is necessary. 
It seems to me that all relids mentioned\n> within the join clause have already been contained in ojscope, which is\n> the union of min_lefthand and min_righthand.\n\nHmm ... that was needed at some point in the development of that\nfunction, but maybe it isn't as the code stands now. It does look\nlike the \"this_ojscope\" manipulations within the loop cover this.\n\n> I noticed this code because I came across a problem with a query as\n> below.\n\n> create table t (a int);\n\n> select t1.a from (t t1 left join t t2 on true) left join (t t3 left join t\n> t4 on t3.a = t4.a) on t2.a = t3.a;\n\n> When we deal with qual 't2.a = t3.a', deconstruct_distribute_oj_quals\n> would always add the OJ relid of t3/t4 into its required_relids, due to\n> the code above, which I think is wrong. The direct consequence is that\n> we would miss the plan that joins t2 and t3 directly.\n\nI don't see any change in this query plan when I remove that code, so\nI'm not sure you're explaining your point very well.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 07 Feb 2023 01:12:51 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: A problem in deconstruct_distribute_oj_quals" }, { "msg_contents": "On Tue, Feb 7, 2023 at 2:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Richard Guo <guofenglinux@gmail.com> writes:\n> > I noticed this code because I came across a problem with a query as\n> > below.\n>\n> > create table t (a int);\n>\n> > select t1.a from (t t1 left join t t2 on true) left join (t t3 left join\n> t\n> > t4 on t3.a = t4.a) on t2.a = t3.a;\n>\n> > When we deal with qual 't2.a = t3.a', deconstruct_distribute_oj_quals\n> > would always add the OJ relid of t3/t4 into its required_relids, due to\n> > the code above, which I think is wrong. 
The direct consequence is that\n> > we would miss the plan that joins t2 and t3 directly.\n>\n> I don't see any change in this query plan when I remove that code, so\n> I'm not sure you're explaining your point very well.\n\n\nSorry I didn't make myself clear. The plan change may not be obvious\nexcept when the cheapest path happens to be joining t2 and t3 first and\nthen joining with t4 afterwards. Currently HEAD would not generate such\na path because the joinqual of t2/t3 always has the OJ relid of t3/t4 in\nits required_relids.\n\nTo observe an obvious plan change, we can add unique constraint for 'a'\nand look how outer-join removal works.\n\nalter table t add unique (a);\n\n-- with that code\n# explain (costs off)\nselect t1.a from (t t1 left join t t2 on true) left join (t t3 left join t\nt4 on t3.a = t4.a) on t2.a = t3.a;\n QUERY PLAN\n---------------------------------------------------\n Nested Loop Left Join\n -> Seq Scan on t t1\n -> Nested Loop Left Join\n -> Seq Scan on t t2\n -> Index Only Scan using t_a_key on t t3\n Index Cond: (a = t2.a)\n(6 rows)\n\n\n-- without that code\n# explain (costs off)\nselect t1.a from (t t1 left join t t2 on true) left join (t t3 left join t\nt4 on t3.a = t4.a) on t2.a = t3.a;\n QUERY PLAN\n------------------------------\n Nested Loop Left Join\n -> Seq Scan on t t1\n -> Materialize\n -> Seq Scan on t t2\n(4 rows)\n\nThis is another side-effect of that code. The joinqual of t2/t3 is\ntreated as being pushed down when we try to remove t2/t3, because its\nrequired_relids, which incorrectly includes the OJ relid of t3/t4,\nexceed the scope of the join. 
This is not right.\n\nThanks\nRichard\n\n", "msg_date": "Tue, 7 Feb 2023 16:08:21 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A problem in deconstruct_distribute_oj_quals" }, { "msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> On Tue, Feb 7, 2023 at 2:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I don't see any change in this query plan when I remove that code, so\n>> I'm not sure you're explaining your point very well.\n\n> To observe an obvious plan change, we can add unique constraint for 'a'\n> and look how outer-join removal works.\n\nAh.  Yeah, that's pretty convincing, especially since v15 manages to\nfind that optimization.  Pushed with a test case.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 07 Feb 2023 11:58:24 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: A problem in deconstruct_distribute_oj_quals" } ]
[ { "msg_contents": "Use appropriate wait event when sending data in the apply worker.\n\nCurrently, we reuse WAIT_EVENT_LOGICAL_PARALLEL_APPLY_STATE_CHANGE in the\napply worker while sending data to the parallel apply worker via a shared\nmemory queue. This is not appropriate as one won't be able to distinguish\nwhether the worker is waiting for sending data or for the state change.\n\nTo patch instead uses the wait event WAIT_EVENT_MQ_SEND which has been\nalready used in blocking mode while sending data via a shared memory\nqueue.\n\nAuthor: Hou Zhijie\nReviewed-by: Kuroda Hayato, Amit Kapila\nDiscussion: https://postgr.es/m/OS0PR01MB57161C680B22E4C591628EE994DA9@OS0PR01MB5716.jpnprd01.prod.outlook.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/d9d7fe68d35e1e10c7c8276d07f5abf9c477cb13\n\nModified Files\n--------------\nsrc/backend/replication/logical/applyparallelworker.c | 3 +--\n1 file changed, 1 insertion(+), 2 deletions(-)", "msg_date": "Tue, 07 Feb 2023 04:40:15 +0000", "msg_from": "Amit Kapila <akapila@postgresql.org>", "msg_from_op": true, "msg_subject": "pgsql: Use appropriate wait event when sending data in the apply\n worker" }, { "msg_contents": "On Mon, Feb 6, 2023 at 11:40 PM Amit Kapila <akapila@postgresql.org> wrote:\n> Use appropriate wait event when sending data in the apply worker.\n>\n> Currently, we reuse WAIT_EVENT_LOGICAL_PARALLEL_APPLY_STATE_CHANGE in the\n> apply worker while sending data to the parallel apply worker via a shared\n> memory queue. This is not appropriate as one won't be able to distinguish\n> whether the worker is waiting for sending data or for the state change.\n>\n> To patch instead uses the wait event WAIT_EVENT_MQ_SEND which has been\n> already used in blocking mode while sending data via a shared memory\n> queue.\n\nThis is not right at all. 
You should invent a new wait state if you're\nwaiting in a new place.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 9 Feb 2023 09:26:03 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Use appropriate wait event when sending data in the apply\n worker" }, { "msg_contents": "On Thu, Feb 9, 2023 at 7:56 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Feb 6, 2023 at 11:40 PM Amit Kapila <akapila@postgresql.org> wrote:\n> > Use appropriate wait event when sending data in the apply worker.\n> >\n> > Currently, we reuse WAIT_EVENT_LOGICAL_PARALLEL_APPLY_STATE_CHANGE in the\n> > apply worker while sending data to the parallel apply worker via a shared\n> > memory queue. This is not appropriate as one won't be able to distinguish\n> > whether the worker is waiting for sending data or for the state change.\n> >\n> > The patch instead uses the wait event WAIT_EVENT_MQ_SEND which has been\n> > already used in blocking mode while sending data via a shared memory\n> > queue.\n>\n> This is not right at all. You should invent a new wait state if you're\n> waiting in a new place.\n>\n\nThis is a misunderstanding on my part to reuse the wait_event for a\nsimilar kind of wait, but I got your point and will take care of this.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 10 Feb 2023 07:36:25 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Use appropriate wait event when sending data in the apply\n worker" } ]
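For readers following the wait-event discussion, the practical reason distinct wait events matter (an illustration, not part of the thread itself) is that they are what surfaces in pg_stat_activity: two different waits reported under one event cannot be told apart by an observer running a monitoring query such as the sketch below.

```sql
-- Sketch of a monitoring query: each backend's current wait event is
-- exposed in pg_stat_activity, which is why the apply worker's
-- queue-send wait and state-change wait need distinct names to be
-- distinguishable from the outside.
select pid, backend_type, wait_event_type, wait_event
from pg_stat_activity
where wait_event is not null;
```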
[ { "msg_contents": "In cases where we have any clauses between two outer joins, these\nclauses should be treated as degenerate clauses in the upper OJ, and\nthey may prevent us from re-ordering the two outer joins.  Previously we\nhave the flag 'delay_upper_joins' to help avoid the re-ordering in such\ncases.\n\nIn b448f1c8 remove_useless_result_rtes will remove useless FromExprs and\nmerge its quals up to parent.  This makes flag 'delay_upper_joins' not\nnecessary any more if the clauses between the two outer joins come from\nFromExprs.  However, if the clauses between the two outer joins come\nfrom JoinExpr of an inner join, it seems we have holes in preserving\nordering.  As an example, consider\n\ncreate table t (a int unique);\n\nselect * from t t1 left join (t t2 left join t t3 on t2.a = t3.a) inner\njoin t t4 on coalesce(t3.a,1) = t4.a on t1.a = t2.a;\n\nWhen building SpecialJoinInfo for outer join t1/t2, make_outerjoininfo\nthinks identity 3 applies between OJ t1/t2 and OJ t2/t3, which is wrong\nas there is an inner join between them.\n\nThis query will trigger the assertion for cross-checking on nullingrels\nin search_indexed_tlist_for_var.\n\nThanks\nRichard", "msg_date": "Tue, 7 Feb 2023 15:11:06 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "A bug in make_outerjoininfo" }, { "msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> In cases where we have any clauses between two outer joins, these\n> clauses should be treated as degenerate clauses in the upper OJ, and\n> they may prevent us from re-ordering the two outer joins.  Previously we\n> have the flag 'delay_upper_joins' to help avoid the re-ordering in such\n> cases.\n\n> In b448f1c8 remove_useless_result_rtes will remove useless FromExprs and\n> merge its quals up to parent.  This makes flag 'delay_upper_joins' not\n> necessary any more if the clauses between the two outer joins come from\n> FromExprs.  However, if the clauses between the two outer joins come\n> from JoinExpr of an inner join, it seems we have holes in preserving\n> ordering.\n\nHmm ... we'll preserve the ordering all right, but if we set commute_below\nor commute_above_x bits that don't match reality then we'll have trouble\nlater with mis-marked varnullingrels, the same as we saw in b2d0e13a0.\nI don't think you need a JoinExpr, an intermediate multi-member FromExpr\nshould have the same effect.\n\nThis possibility was bugging me a little bit while working on b2d0e13a0,\nbut I didn't have a test case showing that it was an issue, and it\ndoesn't seem that easy to incorporate into make_outerjoininfo's\nSpecialJoinInfo-based logic. 
I wonder if we could do something based on\ninsisting that the upper OJ's relevant \"min_xxxside\" relids exactly match\nthe lower OJ's min scope, thereby proving that there's no relevant join\nof any kind between them.\n\nThe main question there is whether it'd break optimization of any cases\nwhere we need to apply multiple OJ identities to get to the most favorable\nplan. I think not, as long as we compare the \"min\" relid sets not the\nsyntactic relid sets, but I've not done a careful analysis.\n\nIf that doesn't work, another idea could be to reformulate\nmake_outerjoininfo's loop as a re-traversal of the jointree, allowing\nit to see intermediate plain joins directly. However, that still leaves\nme wondering what we *do* about the intermediate joins. I don't think\nwe want to fail immediately on seeing one, because we could possibly\napply OJ identity 1 to get the inner join out of the way. That is:\n\n((A leftjoin B on (Pab)) innerjoin C on (Pac)) leftjoin D on (Pbd)\n\ninitially looks like identity 3 can't apply, but apply identity 1\nfirst:\n\n((A innerjoin C on (Pac)) leftjoin B on (Pab)) leftjoin D on (Pbd)\n\nand now it works (insert usual caveat about strictness):\n\n(A innerjoin C on (Pac)) leftjoin (B leftjoin D on (Pbd)) on (Pab)\n\nand you can even go back the other way:\n\n(A leftjoin (B leftjoin D on (Pbd)) on (Pab)) innerjoin C on (Pac)\n\nSo it's actually possible to push an innerjoin out of the identity-3\nnest in either direction, and we don't want to lose that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 07 Feb 2023 15:15:53 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: A bug in make_outerjoininfo" }, { "msg_contents": "I wrote:\n> Richard Guo <guofenglinux@gmail.com> writes:\n>> In b448f1c8 remove_useless_result_rtes will remove useless FromExprs and\n>> merge its quals up to parent. 
This makes flag 'delay_upper_joins' not\n>> necessary any more if the clauses between the two outer joins come from\n>> FromExprs. However, if the clauses between the two outer joins come\n>> from JoinExpr of an inner join, it seems we have holes in preserving\n>> ordering.\n\n> Hmm ... we'll preserve the ordering all right, but if we set commute_below\n> or commute_above_x bits that don't match reality then we'll have trouble\n> later with mis-marked varnullingrels, the same as we saw in b2d0e13a0.\n> I don't think you need a JoinExpr, an intermediate multi-member FromExpr\n> should have the same effect.\n\nBTW, the presented test case doesn't fail anymore after the fix\nfor bug #17781. That's because build_joinrel_tlist() doesn't use\ncommute_above_l anymore at all, and is a bit more wary in its use of\ncommute_above_r. I'm not sure that that completely eliminates this\nproblem, but it at least makes it a lot harder to reach.\n\nWe might want to see if we can devise a new example (or wait for\nRobins to break it ;-)) before expending a lot of effort on making\nthe commute_xxx bits more precise.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 07 Feb 2023 18:33:45 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: A bug in make_outerjoininfo" }, { "msg_contents": "On Wed, Feb 8, 2023 at 7:33 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> We might want to see if we can devise a new example (or wait for\n> Robins to break it ;-)) before expending a lot of effort on making\n> the commute_xxx bits more precise.\n\n\nHere is an example that can trigger the same assertion as in bug #17781\nwith HEAD. 
But I haven't got time to look into it, so not sure if it is\nthe same issue.\n\nselect\n  coalesce(ref_0.permissive, 'a') as c0\nfrom\n  (SELECT pol.polpermissive::text as permissive\n   FROM pg_policy pol JOIN pg_class c ON c.oid = pol.polrelid\n   LEFT JOIN pg_namespace n ON n.oid = c.relnamespace) as ref_0\n  right join pg_catalog.pg_amop as sample_0 on (true)\nwhere (select objsubid from pg_catalog.pg_shdepend) < 1;\n\nThanks\nRichard", "msg_date": "Wed, 8 Feb 2023 13:55:10 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A bug in make_outerjoininfo" }, { "msg_contents": "On Wed, Feb 8, 2023 at 1:55 PM Richard Guo <guofenglinux@gmail.com> wrote:\n\n>\n> On Wed, Feb 8, 2023 at 7:33 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n>> We might want to see if we can devise a new example (or wait for\n>> Robins to break it ;-)) before expending a lot of effort on making\n>> the commute_xxx bits more precise.\n>\n>\n> Here is an example that can trigger the same assertion as in bug #17781\n> with HEAD. 
I've looked at it a little bit and concluded my\nfindings there at [1].\n\n[1]\nhttps://www.postgresql.org/message-id/flat/17781-c0405c8b3cd5e072%40postgresql.org\n\nThanks\nRichard\n\nOn Wed, Feb 8, 2023 at 1:55 PM Richard Guo <guofenglinux@gmail.com> wrote:On Wed, Feb 8, 2023 at 7:33 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\nWe might want to see if we can devise a new example (or wait for\nRobins to break it ;-)) before expending a lot of effort on making\nthe commute_xxx bits more precise. Here is an example that can trigger the same assertion as in bug #17781with HEAD.  But I haven't got time to look into it, so not sure if it isthe same issue. Aha, Robins succeeds in breaking it at [1].  It should be the same issueas reported here.  I've looked at it a little bit and concluded myfindings there at [1].[1] https://www.postgresql.org/message-id/flat/17781-c0405c8b3cd5e072%40postgresql.orgThanksRichard", "msg_date": "Wed, 8 Feb 2023 15:52:58 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A bug in make_outerjoininfo" } ]
[ { "msg_contents": "Hi All,\n\nI want to do TPCC benchmarking of postgresql with streaming data\nNow I am guessing the COPY command can be used for this purpose or is there\nany other option for this?\n\nCan someone point me towards a better option to do it in the best way?\n\nRegards,\nChandan\n\nHi All,I want to do TPCC benchmarking of postgresql with streaming dataNow I am guessing the COPY command can be used for this purpose or is there any other option for this?Can someone point me towards a better option  to do it in the best way?Regards,Chandan", "msg_date": "Tue, 7 Feb 2023 14:47:07 +0530", "msg_from": "chandan kunal <ckkunal@gmail.com>", "msg_from_op": true, "msg_subject": "Regarding TPCC benchmarking of postgresql for streaming" } ]