[ { "msg_contents": "Hi,\n\nas I've just upgraded an instance which contained tables \"WITH OIDS\" I wonder if it would make sense if pg_upgrade directly creates a script to fix those. I know it is easy to that with e.g. sed over tables_with_oids.txt but it would be more convenient to have the script generated directly.\n\nThoughts?\n\nRegards\nDaniel\n\n", "msg_date": "Sun, 6 Nov 2022 08:48:03 +0000", "msg_from": "\"Daniel Westermann (DWE)\" <daniel.westermann@dbi-services.com>", "msg_from_op": true, "msg_subject": "pg_upgrade, tables_with_oids.txt -> tables_with_oids.sql?" }, { "msg_contents": "> On 6 Nov 2022, at 09:48, Daniel Westermann (DWE) <daniel.westermann@dbi-services.com> wrote:\n\n> as I've just upgraded an instance which contained tables \"WITH OIDS\" I wonder if it would make sense if pg_upgrade directly creates a script to fix those. I know it is easy to that with e.g. sed over tables_with_oids.txt but it would be more convenient to have the script generated directly.\n\nFor the checks on the old system we don't generate any scripts, only reports of\nproblems. I don't recall the reasoning but I would assume it stems from some\nchecks being up to the user to deal with, no one-size-fits-all script is\npossible. Having them all generate reports rather than scripts makes that\nconsistent across the old checks.\n\nIn this particular case we probably could safely make a script, but if we we'd\nneed to expand testing to validate it etc so I'm not sure it's worth it.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Tue, 22 Nov 2022 11:59:15 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade, tables_with_oids.txt -> tables_with_oids.sql?" } ]
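The fixup-script generation discussed in this thread could be built around a helper along these lines — a hedged sketch in C (pg_upgrade's implementation language), with an invented function name and an assumed one-table-per-line format for tables_with_oids.txt; this is not the actual pg_upgrade source:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/*
 * Hypothetical helper: format the statement that removes OIDs from one
 * table listed in tables_with_oids.txt (assumed here to contain one
 * schema-qualified table name per line).
 */
static int
format_without_oids(char *buf, size_t buflen, const char *table)
{
    return snprintf(buf, buflen,
                    "ALTER TABLE %s SET WITHOUT OIDS;\n", table);
}

/* Minimal self-check used below. */
static int
smoke_test(void)
{
    char buf[128];

    format_without_oids(buf, sizeof(buf), "public.t1");
    return strcmp(buf, "ALTER TABLE public.t1 SET WITHOUT OIDS;\n") == 0;
}
```

Writing such lines into a tables_with_oids.sql file instead of the plain-text report is the whole of the proposal; the open question in the thread is whether the extra testing burden for a generated script is worth it.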
[ { "msg_contents": "Hi hackers,\n\nDuring the [1] discussion it was suggested to constify the arguments\nof ilist.c/ilist.h functions. Bharath (cc'ed) pointed out that it's\nbetter to start a new thread in order to attract more hackers that may\nbe interested in this change, so I started one.\n\nThe patch is attached. Here are the reasons why we may want to do this:\n\n\"\"\"\nConst qualifiers ensure that we don't do something stupid in the function\nimplementation. Additionally they clarify the interface. As an example:\n\nvoid\nslist_delete(slist_head *head, const slist_node *node)\n\nHere one can instantly tell that node->next is not going to be set to NULL.\nFinally, const qualifiers potentially allow the compiler to do more\noptimizations. This being said no benchmarking was done for this patch.\n\"\"\"\n\nAdditionally Bharath pointed out that there are other pieces of code\nthat we may want to change in a similar fashion,\nproclist.h/proclist_types.h as one example. I didn't do this yet\nbecause I would like to know the community opinion first on whether we\nshould do this at all.\n\nThoughts?\n\n[1]: https://www.postgresql.org/message-id/flat/CAApHDvrtVxr+FXEX0VbViCFKDGxA3tWDgw9oFewNXCJMmwLjLg@mail.gmail.com\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Mon, 7 Nov 2022 12:03:23 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "[PATCH] Const'ify the arguments of ilist.c/ilist.h functions" }, { "msg_contents": "Hi,\n\nOn 2022-11-07 12:03:23 +0300, Aleksander Alekseev wrote:\n> During the [1] discussion it was suggested to constify the arguments\n> of ilist.c/ilist.h functions. 
Bharath (cc'ed) pointed out that it's\n> better to start a new thread in order to attract more hackers that may\n> be interested in this change, so I started one.\n\nI needed something similar in https://postgr.es/m/20221120055930.t6kl3tyivzhlrzu2%40awork3.anarazel.de\n\n\n\n> @@ -484,7 +484,7 @@ dlist_has_prev(dlist_head *head, dlist_node *node)\n> * Return the next node in the list (there must be one).\n> */\n> static inline dlist_node *\n> -dlist_next_node(dlist_head *head, dlist_node *node)\n> +dlist_next_node(const dlist_head *head, const dlist_node *node)\n> {\n> \tAssert(dlist_has_next(head, node));\n> \treturn node->next;\n> @@ -494,7 +494,7 @@ dlist_next_node(dlist_head *head, dlist_node *node)\n> * Return previous node in the list (there must be one).\n> */\n> static inline dlist_node *\n> -dlist_prev_node(dlist_head *head, dlist_node *node)\n> +dlist_prev_node(const dlist_head *head, const dlist_node *node)\n> {\n> \tAssert(dlist_has_prev(head, node));\n> \treturn node->prev;\n> @@ -502,7 +502,7 @@ dlist_prev_node(dlist_head *head, dlist_node *node)\n> \n> /* internal support function to get address of head element's struct */\n> static inline void *\n> -dlist_head_element_off(dlist_head *head, size_t off)\n> +dlist_head_element_off(const dlist_head *head, size_t off)\n> {\n> \tAssert(!dlist_is_empty(head));\n> \treturn (char *) head->head.next - off;\n> @@ -512,14 +512,14 @@ dlist_head_element_off(dlist_head *head, size_t off)\n> * Return the first node in the list (there must be one).\n> */\n> static inline dlist_node *\n> -dlist_head_node(dlist_head *head)\n> +dlist_head_node(const dlist_head *head)\n> {\n> \treturn (dlist_node *) dlist_head_element_off(head, 0);\n> }\n> \n> /* internal support function to get address of tail element's struct */\n> static inline void *\n> -dlist_tail_element_off(dlist_head *head, size_t off)\n> +dlist_tail_element_off(const dlist_head *head, size_t off)\n> {\n> \tAssert(!dlist_is_empty(head));\n> \treturn (char *) 
head->head.prev - off;\n> @@ -529,7 +529,7 @@ dlist_tail_element_off(dlist_head *head, size_t off)\n> * Return the last node in the list (there must be one).\n> */\n> static inline dlist_node *\n> -dlist_tail_node(dlist_head *head)\n> +dlist_tail_node(const dlist_head *head)\n> {\n> \treturn (dlist_node *) dlist_tail_element_off(head, 0);\n> }\n\nI don't think it is correct for any of these to add const. The only reason it\nworks is because of casting etc.\n\n\n> @@ -801,7 +801,7 @@ dclist_has_prev(dclist_head *head, dlist_node *node)\n> *\t\tReturn the next node in the list (there must be one).\n> */\n> static inline dlist_node *\n> -dclist_next_node(dclist_head *head, dlist_node *node)\n> +dclist_next_node(const dclist_head *head, const dlist_node *node)\n> {\n> \tAssert(head->count > 0);\n> \n> @@ -813,7 +813,7 @@ dclist_next_node(dclist_head *head, dlist_node *node)\n> *\t\tReturn the prev node in the list (there must be one).\n> */\n> static inline dlist_node *\n> -dclist_prev_node(dclist_head *head, dlist_node *node)\n> +dclist_prev_node(const dclist_head *head, const dlist_node *node)\n> {\n> \tAssert(head->count > 0);\n> \n> @@ -822,7 +822,7 @@ dclist_prev_node(dclist_head *head, dlist_node *node)\n> \n> /* internal support function to get address of head element's struct */\n> static inline void *\n> -dclist_head_element_off(dclist_head *head, size_t off)\n> +dclist_head_element_off(const dclist_head *head, size_t off)\n> {\n> \tAssert(!dclist_is_empty(head));\n> \n> @@ -834,7 +834,7 @@ dclist_head_element_off(dclist_head *head, size_t off)\n> *\t\tReturn the first node in the list (there must be one).\n> */\n> static inline dlist_node *\n> -dclist_head_node(dclist_head *head)\n> +dclist_head_node(const dclist_head *head)\n> {\n> \tAssert(head->count > 0);\n> \n> @@ -843,7 +843,7 @@ dclist_head_node(dclist_head *head)\n> \n> /* internal support function to get address of tail element's struct */\n> static inline void *\n> 
-dclist_tail_element_off(dclist_head *head, size_t off)\n> +dclist_tail_element_off(const dclist_head *head, size_t off)\n> {\n> \tAssert(!dclist_is_empty(head));\n> \n> @@ -854,7 +854,7 @@ dclist_tail_element_off(dclist_head *head, size_t off)\n> * Return the last node in the list (there must be one).\n> */\n> static inline dlist_node *\n> -dclist_tail_node(dclist_head *head)\n> +dclist_tail_node(const dclist_head *head)\n> {\n> \tAssert(head->count > 0);\n> \n\nDito.\n\n\n> @@ -988,7 +988,7 @@ slist_has_next(slist_head *head, slist_node *node)\n> * Return the next node in the list (there must be one).\n> */\n> static inline slist_node *\n> -slist_next_node(slist_head *head, slist_node *node)\n> +slist_next_node(const slist_head *head, const slist_node *node)\n> {\n> \tAssert(slist_has_next(head, node));\n> \treturn node->next;\n> @@ -996,7 +996,7 @@ slist_next_node(slist_head *head, slist_node *node)\n> \n> /* internal support function to get address of head element's struct */\n> static inline void *\n> -slist_head_element_off(slist_head *head, size_t off)\n> +slist_head_element_off(const slist_head *head, size_t off)\n> {\n> \tAssert(!slist_is_empty(head));\n> \treturn (char *) head->head.next - off;\n> @@ -1006,7 +1006,7 @@ slist_head_element_off(slist_head *head, size_t off)\n> * Return the first node in the list (there must be one).\n> */\n> static inline slist_node *\n> -slist_head_node(slist_head *head)\n> +slist_head_node(const slist_head *head)\n> {\n> \treturn (slist_node *) slist_head_element_off(head, 0);\n> }\n\nDito.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 22 Nov 2022 09:31:56 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Const'ify the arguments of ilist.c/ilist.h functions" }, { "msg_contents": "Hi Andres,\n\nThanks for the review!\n\n> I don't think it is correct for any of these to add const. The only reason it\n> works is because of casting etc.\n\nFair enough. 
PFA the corrected patch v2.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Wed, 23 Nov 2022 16:57:36 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Const'ify the arguments of ilist.c/ilist.h functions" }, { "msg_contents": "On 23.11.22 14:57, Aleksander Alekseev wrote:\n> Hi Andres,\n> \n> Thanks for the review!\n> \n>> I don't think it is correct for any of these to add const. The only reason it\n>> works is because of casting etc.\n> \n> Fair enough. PFA the corrected patch v2.\n\nThis patch version looks correct to me. It is almost the same as the \none that Andres had posted in his thread, except that yours also \nmodifies slist_delete() and dlist_member_check(). Both of these changes \nalso look correct to me.\n\n\n\n", "msg_date": "Sat, 7 Jan 2023 08:21:26 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Const'ify the arguments of ilist.c/ilist.h functions" }, { "msg_contents": "On 07.01.23 08:21, Peter Eisentraut wrote:\n> On 23.11.22 14:57, Aleksander Alekseev wrote:\n>> Hi Andres,\n>>\n>> Thanks for the review!\n>>\n>>> I don't think it is correct for any of these to add const. The only \n>>> reason it\n>>> works is because of casting etc.\n>>\n>> Fair enough. PFA the corrected patch v2.\n> \n> This patch version looks correct to me.  It is almost the same as the \n> one that Andres had posted in his thread, except that yours also \n> modifies slist_delete() and dlist_member_check().  
Both of these changes \n> also look correct to me.\n\ncommitted\n\n\n\n", "msg_date": "Thu, 12 Jan 2023 08:34:25 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Const'ify the arguments of ilist.c/ilist.h functions" }, { "msg_contents": "Hi,\n\nOn 2023-01-12 08:34:25 +0100, Peter Eisentraut wrote:\n> On 07.01.23 08:21, Peter Eisentraut wrote:\n> > On 23.11.22 14:57, Aleksander Alekseev wrote:\n> > > Hi Andres,\n> > > \n> > > Thanks for the review!\n> > > \n> > > > I don't think it is correct for any of these to add const. The\n> > > > only reason it\n> > > > works is because of casting etc.\n> > > \n> > > Fair enough. PFA the corrected patch v2.\n> > \n> > This patch version looks correct to me.� It is almost the same as the\n> > one that Andres had posted in his thread, except that yours also\n> > modifies slist_delete() and dlist_member_check().� Both of these changes\n> > also look correct to me.\n> \n> committed\n\nUnfortunately this causes a build failure with ILIST_DEBUG\nenabled. dlist_member_check() uses dlist_foreach(), which isn't set up to work\nwith const :(. I'll push a quick workaround.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 18 Jan 2023 10:22:14 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Const'ify the arguments of ilist.c/ilist.h functions" }, { "msg_contents": "Hi,\n\nOn 2023-01-18 10:22:14 -0800, Andres Freund wrote:\n> On 2023-01-12 08:34:25 +0100, Peter Eisentraut wrote:\n> > On 07.01.23 08:21, Peter Eisentraut wrote:\n> > > This patch version looks correct to me.� It is almost the same as the\n> > > one that Andres had posted in his thread, except that yours also\n> > > modifies slist_delete() and dlist_member_check().� Both of these changes\n> > > also look correct to me.\n> > \n> > committed\n> \n> Unfortunately this causes a build failure with ILIST_DEBUG\n> enabled. 
dlist_member_check() uses dlist_foreach(), which isn't set up to work\n> with const :(. I'll push a quick workaround.\n\nPushed.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 18 Jan 2023 10:30:33 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Const'ify the arguments of ilist.c/ilist.h functions" } ]
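Andres's objection — that const on these accessors only "works" because of casting — can be illustrated with a standalone sketch (hypothetical list types, not the real ilist.h definitions): because the interior pointers are themselves non-const, a const-qualified head still hands out mutable access to its nodes, so the qualifier promises more than it delivers:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical list types, analogous to (but not) ilist.h's. */
typedef struct node
{
    struct node *prev;
    struct node *next;
} node;

typedef struct head
{
    node h;
} head;

/*
 * Accepting const here compiles cleanly, yet promises nothing: the
 * stored pointers are non-const, so the caller receives mutable
 * access to the interior of a supposedly const list.
 */
static node *
head_node(const head *hd)
{
    return hd->h.next;
}

/* Minimal self-check used below. */
static int
smoke_test(void)
{
    head hd;
    node n1;

    hd.h.next = &n1;
    hd.h.prev = &n1;
    n1.next = &hd.h;
    n1.prev = &hd.h;

    /* Mutating through a pointer obtained from a const-qualified API. */
    head_node(&hd)->prev = NULL;
    return n1.prev == NULL;
}
```

This is presumably why the accessors that return interior pointers were left non-const in the committed version of the patch, while functions like slist_delete() that genuinely do not modify through the node argument kept the qualifier.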
[ { "msg_contents": "Hi,\n\nIn both TransactionGroupUpdateXidStatus and ProcArrayGroupClearXid\nglobal MyProc is used. for consistency, replaced with a function local variable.\n\n\nthanks\nRajesh", "msg_date": "Mon, 7 Nov 2022 15:16:49 +0530", "msg_from": "rajesh singarapu <rajesh.rs0541@gmail.com>", "msg_from_op": true, "msg_subject": "Use proc instead of MyProc in\n ProcArrayGroupClearXid()/TransactionGroupUpdateXidStatus()" }, { "msg_contents": "On Mon, Nov 7, 2022 at 3:17 PM rajesh singarapu <rajesh.rs0541@gmail.com> wrote:\n>\n> Hi,\n>\n> In both TransactionGroupUpdateXidStatus and ProcArrayGroupClearXid\n> global MyProc is used. for consistency, replaced with a function local variable.\n\n if (nextproc != MyProc)\n PGSemaphoreUnlock(nextproc->sem);\n\nThe intention of this wake up code in the two functions is to skip the\nleader process from waking itself up. Only the leader gets to execute\nthis code and all the followers don't hit this code at all as they\nreturn from the first loop in those functions.\n\nAll the callers of ProcArrayGroupClearXid() get MyProc as their proc\nand pass it down. 
And using the passed down function parameter proc\nmakes the function look consistent.\n\nAnd, in TransactionGroupUpdateXidStatus() proc is initialized with\nMyProc and using it instead of MyProc in the wake up loop also makes\nthe code consistent.\n\nWhile it does no harm with the existing way using MyProc, +1 for\nreplacing it with the local variable proc in both the functions for\nconsistency.\n\nAnother thing I noticed is an extra assertion in\nProcArrayGroupClearXid() Assert(TransactionIdIsValid(proc->xid));, the\ncaller already has the same assertion, I think we can also remove it.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 8 Nov 2022 11:56:13 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Use proc instead of MyProc in\n ProcArrayGroupClearXid()/TransactionGroupUpdateXidStatus()" }, { "msg_contents": "On Mon, Nov 7, 2022 at 3:17 PM rajesh singarapu <rajesh.rs0541@gmail.com> wrote:\n>\n> In both TransactionGroupUpdateXidStatus and ProcArrayGroupClearXid\n> global MyProc is used. for consistency, replaced with a function local variable.\n>\n\nIn ProcArrayGroupClearXid(), currently, we always pass MyProc as proc,\nso the change suggested by you will work but I think if in the future\nsomeone calls it with a different proc, then the change suggested by\nyou won't work. 
The change in TransactionGroupUpdateXidStatus() looks\ngood but If we don't want to change ProcArrayGroupClearXid() then I am\nnot sure if there is much value in making the change in\nTransactionGroupUpdateXidStatus().\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 8 Nov 2022 11:58:35 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Use proc instead of MyProc in\n ProcArrayGroupClearXid()/TransactionGroupUpdateXidStatus()" }, { "msg_contents": "Thanks Bharat and Amit for the review and explaining rationale.\n\nfor the TransactionGroupUpdateXidStatus() change, let me see if I can\npiggy back this change on something more valuable.\n\n\nthanks\nRajesh\n\nOn Tue, Nov 8, 2022 at 11:58 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Nov 7, 2022 at 3:17 PM rajesh singarapu <rajesh.rs0541@gmail.com> wrote:\n> >\n> > In both TransactionGroupUpdateXidStatus and ProcArrayGroupClearXid\n> > global MyProc is used. for consistency, replaced with a function local variable.\n> >\n>\n> In ProcArrayGroupClearXid(), currently, we always pass MyProc as proc,\n> so the change suggested by you will work but I think if in the future\n> someone calls it with a different proc, then the change suggested by\n> you won't work. 
The change in TransactionGroupUpdateXidStatus() looks\n> good but If we don't want to change ProcArrayGroupClearXid() then I am\n> not sure if there is much value in making the change in\n> TransactionGroupUpdateXidStatus().\n>\n> --\n> With Regards,\n> Amit Kapila.\n\n\n", "msg_date": "Tue, 8 Nov 2022 12:28:30 +0530", "msg_from": "rajesh singarapu <rajesh.rs0541@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Use proc instead of MyProc in\n ProcArrayGroupClearXid()/TransactionGroupUpdateXidStatus()" }, { "msg_contents": "On Tue, Nov 8, 2022 at 11:59 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Nov 7, 2022 at 3:17 PM rajesh singarapu <rajesh.rs0541@gmail.com> wrote:\n> >\n> > In both TransactionGroupUpdateXidStatus and ProcArrayGroupClearXid\n> > global MyProc is used. for consistency, replaced with a function local variable.\n> >\n>\n> In ProcArrayGroupClearXid(), currently, we always pass MyProc as proc,\n> so the change suggested by you will work but I think if in the future\n> someone calls it with a different proc, then the change suggested by\n> you won't work.\n\nWell, yes. Do you have any thoughts around such future usages of\nProcArrayGroupClearXid()?\n\n> The change in TransactionGroupUpdateXidStatus() looks\n> good but If we don't want to change ProcArrayGroupClearXid() then I am\n> not sure if there is much value in making the change in\n> TransactionGroupUpdateXidStatus().\n\nAFICS, there are many places in the code that use proc == MyProc (20\ninstances) or proc != MyProc (6 instances) sorts of things. 
I think\ndefining a macro, something like below, is better for readability.\nHowever, I'm concerned that we might have to use it in 26 places.\n\n#define IsPGPROCMine(proc) (proc != NULL && proc == MyProc)\nor just\n#define IsPGPROCMine(proc) (proc == MyProc)\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 8 Nov 2022 15:43:07 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Use proc instead of MyProc in\n ProcArrayGroupClearXid()/TransactionGroupUpdateXidStatus()" } ]
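The macro floated at the end of this message can be sketched as follows — PGPROC is stubbed out so the snippet stands alone; this is not PostgreSQL's actual proc.h:

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for PostgreSQL's PGPROC; only pointer identity matters here. */
typedef struct PGPROC
{
    int pgprocno;
} PGPROC;

static PGPROC *MyProc = NULL;   /* a global in the real backend */

/* The stricter of the two variants proposed above. */
#define IsPGPROCMine(proc) ((proc) != NULL && (proc) == MyProc)

/* Minimal self-check used below. */
static int
smoke_test(void)
{
    PGPROC a;
    PGPROC b;

    MyProc = &a;
    return IsPGPROCMine(&a) && !IsPGPROCMine(&b) && !IsPGPROCMine(NULL);
}
```

The NULL guard makes the macro safe to use unconditionally, at the cost of one extra comparison over the bare `proc == MyProc` form — which is the trade-off between the two definitions proposed in the email.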
[ { "msg_contents": "Hi,\n\nWe have a NMS application in cisco and using postgres as a database.\n\nWe have query related to disabling auto vacuum. We have below configuration in postgres.conf where the autovacuum=on is commented out.\n\n[Shape Description automatically generated]\n\nBut when checked in database we notice that it’s showing as on\n\n[Graphical user interface, timeline Description automatically generated]\n\nWhat would this mean? Does it mean that autovacuum is not disabled? Appreciate a response.\n\nRegards,\nKarthik", "msg_date": "Mon, 7 Nov 2022 11:27:06 +0000", "msg_from": "\"Karthik Jagadish (kjagadis)\" <kjagadis@cisco.com>", "msg_from_op": true, "msg_subject": "Postgres auto vacuum - Disable" }, { "msg_contents": "Hi\n\nOn Mon, 7 Nov 2022 at 11:42, Karthik Jagadish (kjagadis) <kjagadis@cisco.com>\nwrote:\n\n> Hi,\n>\n>\n>\n> We have a NMS application in cisco and using postgres as a database.\n>\n>\n>\n> We have query related to disabling auto vacuum. We have below\n> configuration in postgres.conf where the autovacuum=on is commented out.\n>\n>\n>\n> [image: Shape Description automatically generated]\n>\n>\n>\n> But when checked in database we notice that it’s showing as on\n>\n>\n>\n> [image: Graphical user interface, timeline Description automatically\n> generated]\n>\n>\n>\n> What would this mean? Does it mean that autovacuum is not disabled?\n> Appreciate a response.\n>\n\nRight. The default is for it to be enabled, so commenting out the option\ndoes nothing. You would need to set it explicitly to off.\n\nBUT... you almost certainly don't want to do that. Cases where it should be\ndisabled are *extremely* rare. 
Make sure you *really* know what you're\nletting yourself in for by disabling autovacuum, and don't rely on 10+ year\nold performance tuning advice from random places on the internet, if that's\nwhat you're doing.\n\n-- \nDave Page\nBlog: https://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: https://www.enterprisedb.com", "msg_date": "Mon, 7 Nov 2022 11:47:09 +0000", "msg_from": "Dave Page <dpage@pgadmin.org>", "msg_from_op": false, "msg_subject": "Re: Postgres auto vacuum - Disable" }, { "msg_contents": "Hi,\n\nThanks for the response.\n\nI have follow-up question where the vacuum process is waiting and not doing it’s job. When we grep on waiting process we see below output. Whenever we see this we notice that the vacuum is not happening and the system is running out of space.\n\n[root@zpah0031 ~]# ps -ef | grep 'waiting'\npostgres 8833 62646 0 Jul28 ? 00:00:00 postgres: postgres cgms [local] VACUUM waiting\npostgres 18437 62646 0 Jul27 ? 00:00:00 postgres: postgres cgms [local] VACUUM waiting\n\n\nWhat could be the reason as to why the vacuum is not happening? Is it because some lock is present in the table/db or any other reason?\n\nRegards,\nKarthik\n\nFrom: Dave Page <dpage@pgadmin.org>\nDate: Monday, 7 November 2022 at 5:17 PM\nTo: Karthik Jagadish (kjagadis) <kjagadis@cisco.com>\nCc: pgsql-hackers@postgresql.org <pgsql-hackers@postgresql.org>, Chandruganth Ayyavoo Selvam (chaayyav) <chaayyav@cisco.com>, Prasanna Satyanarayanan (prassaty) <prassaty@cisco.com>, Jaganbabu M (jmunusam) <jmunusam@cisco.com>, Joel Mariadasan (jomariad) <jomariad@cisco.com>\nSubject: Re: Postgres auto vacuum - Disable\nHi\n\nOn Mon, 7 Nov 2022 at 11:42, Karthik Jagadish (kjagadis) <kjagadis@cisco.com<mailto:kjagadis@cisco.com>> wrote:\nHi,\n\nWe have a NMS application in cisco and using postgres as a database.\n\nWe have query related to disabling auto vacuum. 
We have below configuration in postgres.conf where the autovacuum=on is commented out.\n\n[Shape Description automatically generated]\n\nBut when checked in database we notice that it’s showing as on\n\n[Graphical user interface, timeline Description automatically generated]\n\nWhat would this mean? Does it mean that autovacuum is not disabled? Appreciate a response.\n\nRight. The default is for it to be enabled, so commenting out the option does nothing. You would need to set it explicitly to off.\n\nBUT... you almost certainly don't want to do that. Cases where it should be disabled are *extremely* rare. Make sure you *really* know what you're letting yourself in for by disabling autovacuum, and don't rely on 10+ year old performance tuning advice from random places on the internet, if that's what you're doing.\n\n--\nDave Page\nBlog: https://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: https://www.enterprisedb.com", "msg_date": "Mon, 7 Nov 2022 12:12:14 +0000", "msg_from": "\"Karthik Jagadish (kjagadis)\" <kjagadis@cisco.com>", "msg_from_op": true, "msg_subject": "Re: Postgres auto vacuum - Disable" }, { "msg_contents": "On Mon, 2022-11-07 at 12:12 +0000, Karthik Jagadish (kjagadis) wrote:\n> I have follow-up question where the vacuum process is waiting and not doing it’s job.\n> When we grep on waiting process we see below output. Whenever we see this we notice\n> that the vacuum is not happening and the system is running out of space.\n>  \n> [root@zpah0031 ~]# ps -ef | grep 'waiting'\n> postgres  8833 62646  0 Jul28 ?        00:00:00 postgres: postgres cgms [local] VACUUM waiting\n> postgres 18437 62646  0 Jul27 ?        00:00:00 postgres: postgres cgms [local] VACUUM waiting\n>  \n>  \n> What could be the reason as to why the vacuum is not happening? Is it because some lock is\n> present in the table/db or any other reason?\n\nLook in \"pg_stat_activity\". 
I didn't check, but I'm sure it's the intentional break\nconfigured with \"autovacuum_vacuum_cost_delay\". Reduce that parameter for more\nautovacuum speed.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Mon, 07 Nov 2022 14:22:56 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Postgres auto vacuum - Disable" }, { "msg_contents": "Hi,\n\nOn Mon, Nov 07, 2022 at 02:22:56PM +0100, Laurenz Albe wrote:\n> On Mon, 2022-11-07 at 12:12 +0000, Karthik Jagadish (kjagadis) wrote:\n> > I have follow-up question where the vacuum process is waiting and not doing it’s job.\n> > When we grep on waiting process we see below output. Whenever we see this we notice\n> > that the vacuum is not happening and the system is running out of space.\n> >  \n> > [root@zpah0031 ~]# ps -ef | grep 'waiting'\n> > postgres  8833 62646  0 Jul28 ?        00:00:00 postgres: postgres cgms [local] VACUUM waiting\n> > postgres 18437 62646  0 Jul27 ?        00:00:00 postgres: postgres cgms [local] VACUUM waiting\n> >  \n> >  \n> > What could be the reason as to why the vacuum is not happening? Is it because some lock is\n> > present in the table/db or any other reason?\n>\n> Look in \"pg_stat_activity\". I didn't check, but I'm sure it's the intentional break\n> configured with \"autovacuum_vacuum_cost_delay\". Reduce that parameter for more\n> autovacuum speed.\n\nReally? 
An autovacuum should be displayed as \"autovacuum worker\", this looks\nlike plain backends to me, where an interactive VACUUM has been issued and is\nwaiting on a heavyweight lock.\n\n\n", "msg_date": "Mon, 7 Nov 2022 21:36:03 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Postgres auto vacuum - Disable" }, { "msg_contents": "Hi Again,\n\nIs there any difference in the way vacuum is handled in postgres9.6 and postgres12.9, We are noticing the below issue of waiting process only after upgrading to postgres12.5\n\n$ ps -ef | grep 'waiting'\npostgres 8833 62646 0 Jul28 ? 00:00:00 postgres: postgres cgms [local] VACUUM waiting\npostgres 18437 62646 0 Jul27 ? 00:00:00 postgres: postgres cgms [local] VACUUM waiting\n\nRegards,\nKarthik\n\nFrom: Julien Rouhaud <rjuju123@gmail.com>\nDate: Monday, 7 November 2022 at 7:06 PM\nTo: Laurenz Albe <laurenz.albe@cybertec.at>\nCc: Karthik Jagadish (kjagadis) <kjagadis@cisco.com>, Dave Page <dpage@pgadmin.org>, pgsql-hackers@postgresql.org <pgsql-hackers@postgresql.org>, Chandruganth Ayyavoo Selvam (chaayyav) <chaayyav@cisco.com>, Prasanna Satyanarayanan (prassaty) <prassaty@cisco.com>, Jaganbabu M (jmunusam) <jmunusam@cisco.com>, Joel Mariadasan (jomariad) <jomariad@cisco.com>\nSubject: Re: Postgres auto vacuum - Disable\nHi,\n\nOn Mon, Nov 07, 2022 at 02:22:56PM +0100, Laurenz Albe wrote:\n> On Mon, 2022-11-07 at 12:12 +0000, Karthik Jagadish (kjagadis) wrote:\n> > I have follow-up question where the vacuum process is waiting and not doing it’s job.\n> > When we grep on waiting process we see below output. Whenever we see this we notice\n> > that the vacuum is not happening and the system is running out of space.\n> >\n> > [root@zpah0031 ~]# ps -ef | grep 'waiting'\n> > postgres 8833 62646 0 Jul28 ? 00:00:00 postgres: postgres cgms [local] VACUUM waiting\n> > postgres 18437 62646 0 Jul27 ? 
00:00:00 postgres: postgres cgms [local] VACUUM waiting\n> >\n> >\n> > What could be the reason as to why the vacuum is not happening? Is it because some lock is\n> > present in the table/db or any other reason?\n>\n> Look in \"pg_stat_activity\". I didn't check, but I'm sure it's the intentional break\n> configured with \"autovacuum_vacuum_cost_delay\". Reduce that parameter for more\n> autovacuum speed.\n\nReally? An autovacuum should be displayed as \"autovacuum worker\", this looks\nlike plain backends to me, where an interactive VACUUM has been issued and is\nwaiting on a heavyweight lock.\n\n\n\n\n\n\n\n\n\nHi Again,\n \nIs there any difference in the way vacuum is handled in postgres9.6 and postgres12.9, We are noticing the below issue of waiting process only after upgrading to postgres12.5\n \n$ ps -ef | grep 'waiting'\npostgres  8833 62646  0 Jul28 ?        00:00:00 postgres: postgres cgms [local] VACUUM waiting\npostgres 18437 62646  0 Jul27 ?        00:00:00 postgres: postgres cgms [local] VACUUM waiting\n \nRegards,\nKarthik\n \n\nFrom:\nJulien Rouhaud <rjuju123@gmail.com>\nDate: Monday, 7 November 2022 at 7:06 PM\nTo: Laurenz Albe <laurenz.albe@cybertec.at>\nCc: Karthik Jagadish (kjagadis) <kjagadis@cisco.com>, Dave Page <dpage@pgadmin.org>, pgsql-hackers@postgresql.org <pgsql-hackers@postgresql.org>, Chandruganth Ayyavoo Selvam (chaayyav) <chaayyav@cisco.com>, Prasanna Satyanarayanan (prassaty) <prassaty@cisco.com>,\n Jaganbabu M (jmunusam) <jmunusam@cisco.com>, Joel Mariadasan (jomariad) <jomariad@cisco.com>\nSubject: Re: Postgres auto vacuum - Disable\n\n\nHi,\n\nOn Mon, Nov 07, 2022 at 02:22:56PM +0100, Laurenz Albe wrote:\n> On Mon, 2022-11-07 at 12:12 +0000, Karthik Jagadish (kjagadis) wrote:\n> > I have follow-up question where the vacuum process is waiting and not doing it’s job.\n> > When we grep on waiting process we see below output. 
Whenever we see this we notice\n> > that the vacuum is not happening and the system is running out of space.\n> >  \n> > [root@zpah0031 ~]# ps -ef | grep 'waiting'\n> > postgres  8833 62646  0 Jul28 ?        00:00:00 postgres: postgres cgms [local] VACUUM waiting\n> > postgres 18437 62646  0 Jul27 ?        00:00:00 postgres: postgres cgms [local] VACUUM waiting\n> >  \n> >  \n> > What could be the reason as to why the vacuum is not happening? Is it because some lock is\n> > present in the table/db or any other reason?\n>\n> Look in \"pg_stat_activity\".  I didn't check, but I'm sure it's the intentional break\n> configured with \"autovacuum_vacuum_cost_delay\".  Reduce that parameter for more\n> autovacuum speed.\n\nReally?  An autovacuum should be displayed as \"autovacuum worker\", this looks\nlike plain backends to me, where an interactive VACUUM has been issued and is\nwaiting on a heavyweight lock.", "msg_date": "Mon, 7 Nov 2022 14:46:18 +0000", "msg_from": "\"Karthik Jagadish (kjagadis)\" <kjagadis@cisco.com>", "msg_from_op": true, "msg_subject": "Re: Postgres auto vacuum - Disable" } ]
[ { "msg_contents": "Hi,\r\n\r\nAttached is a draft of the release announcement for the 2022-11-10 release.\r\n\r\nPlease provide feedback no later than 2022-11-10 0:00 AoE[1].\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] https://en.wikipedia.org/wiki/Anywhere_on_Earth", "msg_date": "Mon, 7 Nov 2022 10:51:45 -0500", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "2022-11-10 release announcement draft" }, { "msg_contents": "Op 07-11-2022 om 16:51 schreef Jonathan S. Katz:\n> Hi,\n> \n> Attached is a draft of the release announcement for the 2022-11-10 release.\n> \n> Please provide feedback no later than 2022-11-10 0:00 AoE[1].\n\n'now exists' should be (I think)\n'now exits'\n\n\nErik\n\n> \n> Thanks,\n> \n> Jonathan\n> \n> [1] https://en.wikipedia.org/wiki/Anywhere_on_Earth\n\n\n", "msg_date": "Mon, 7 Nov 2022 16:59:13 +0100", "msg_from": "Erikjan Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: 2022-11-10 release announcement draft" }, { "msg_contents": "On 11/7/22 10:59 AM, Erikjan Rijkers wrote:\r\n> Op 07-11-2022 om 16:51 schreef Jonathan S. Katz:\r\n>> Hi,\r\n>>\r\n>> Attached is a draft of the release announcement for the 2022-11-10 \r\n>> release.\r\n>>\r\n>> Please provide feedback no later than 2022-11-10 0:00 AoE[1].\r\n> \r\n> 'now exists'  should be (I think)\r\n> 'now exits'\r\n\r\nCorrect -- I have made that fix -- thanks!\r\n\r\nJonathan", "msg_date": "Mon, 7 Nov 2022 11:25:46 -0500", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: 2022-11-10 release announcement draft" } ]
[ { "msg_contents": "Hi,\n\nI was recently reminded of my previous desire to allow setting the segment\nsize to less than 1GB. It's pretty painful to test large amount of segments\nwith a segment size of 1GB, certainly our regression test don't cover anything\nwith multiple segments.\n\nThis likely wouldn't have detected the issue fixed in 0e758ae89a2, but it make\nit easier to validate that the fix doesn't break anything badly.\n\nIn the attached patch I renamed --with-segsize= to --with-segsize-mb= /\n-Dsegsize= to -Dsegsize_mb=, to avoid somebody building with --with-segsize=2\nor such suddenly ending up with an incompatible build.\n\nGreetings,\n\nAndres Freund", "msg_date": "Mon, 7 Nov 2022 09:13:55 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "allow segment size to be set to < 1GiB" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> In the attached patch I renamed --with-segsize= to --with-segsize-mb= /\n> -Dsegsize= to -Dsegsize_mb=, to avoid somebody building with --with-segsize=2\n> or such suddenly ending up with an incompatible build.\n\nFor the purpose of exercising these code paths with the standard\nregression tests, even a megabyte seems large -- we don't create\nvery many test tables that are that big. How about instead\nallowing the segment size to be set in pages?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 07 Nov 2022 12:52:25 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: allow segment size to be set to < 1GiB" }, { "msg_contents": "Hi,\n\nOn 2022-11-07 12:52:25 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > In the attached patch I renamed --with-segsize= to --with-segsize-mb= /\n> > -Dsegsize= to -Dsegsize_mb=, to avoid somebody building with --with-segsize=2\n> > or such suddenly ending up with an incompatible build.\n> \n> For the purpose of exercising these code paths with the standard\n> regression tests, even a megabyte seems large -- we don't create\n> very many test tables that are that big.\n\nGood point.\n\n\n> How about instead allowing the segment size to be set in pages?\n\nIn addition or instead of --with-segsize/-Dsegsize? Just offering the number\nof pages seems like a not great UI.\n\nI guess we could add support for units or such? But that seems messy as well.\n\nGreetings,\n\nAndres Freund", "msg_date": "Mon, 7 Nov 2022 18:29:12 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: allow segment size to be set to < 1GiB" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-11-07 12:52:25 -0500, Tom Lane wrote:\n>> How about instead allowing the segment size to be set in pages?\n\n> In addition or instead of --with-segsize/-Dsegsize?\n\nIn addition to. What I meant by \"instead\" was to replace\nyour proposal of --with-segsize-mb.\n\n> Just offering the number of pages seems like a not great UI.\n\nWell, it's a developer/debug focused API. I think regular users\nwould only care for the existing --with-segsize = so-many-GB API.\nBut for testing, I think --with-segsize-pages = so-many-pages\nis actually a pretty good UI.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 07 Nov 2022 21:36:33 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: allow segment size to be set to < 1GiB" }, { "msg_contents": "On Tue, Nov 8, 2022 at 8:06 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-11-07 12:52:25 -0500, Tom Lane wrote:\n> >> How about instead allowing the segment size to be set in pages?\n>\n> > In addition or instead of --with-segsize/-Dsegsize?\n>\n> In addition to. What I meant by \"instead\" was to replace\n> your proposal of --with-segsize-mb.\n>\n> > Just offering the number of pages seems like a not great UI.\n>\n> Well, it's a developer/debug focused API. I think regular users\n> would only care for the existing --with-segsize = so-many-GB API.\n> But for testing, I think --with-segsize-pages = so-many-pages\n> is actually a pretty good UI.\n\nPerhaps --with-segsize-blocks is a better name here as we use block\ninstead of page for --with-blocksize and --with-wal-blocksize.\n\nIf this option is for dev/debug purposes only, do we want to put a\nmechanism to disallow it in release builds or something like that,\njust in case? Or at least, add a note in the documentation?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 8 Nov 2022 11:06:45 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: allow segment size to be set to < 1GiB" }, { "msg_contents": "Hi,\n\nOn 2022-11-07 21:36:33 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-11-07 12:52:25 -0500, Tom Lane wrote:\n> >> How about instead allowing the segment size to be set in pages?\n> \n> > In addition or instead of --with-segsize/-Dsegsize?\n> \n> In addition to. What I meant by \"instead\" was to replace\n> your proposal of --with-segsize-mb.\n\nWorking on updating the patch.\n\nOne semi-interesting bit is that <= 5 blocks per segment fails, because\ncorrupt_page_checksum() doesn't know about segments and\nsrc/bin/pg_basebackup/t/010_pg_basebackup.pl does\n\n# induce further corruption in 5 more blocks\n$node->stop;\nfor my $i (1 .. 5)\n{\n\t$node->corrupt_page_checksum($file_corrupt1, $i * $block_size);\n}\n$node->start;\n\nI'd be content with not dealing with that given the use case of the\nfunctionality? A buildfarm animal setting it to 10 seem to\nsuffice. Alternatively we could add segment support to\ncorrupt_page_checksum().\n\nOpinions?\n\nFWIW, with HEAD, all tests pass with -Dsegsize_blocks=6 on HEAD.\n\nGreetings,\n\nAndres Freund", "msg_date": "Tue, 8 Nov 2022 18:28:08 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: allow segment size to be set to < 1GiB" }, { "msg_contents": "On Tue, Nov 08, 2022 at 06:28:08PM -0800, Andres Freund wrote:\n> FWIW, with HEAD, all tests pass with -Dsegsize_blocks=6 on HEAD.\n\nWow. The relation page size influences some of the plans in the\nmain regression test suite, but this is nice to hear. +1 from me for\nmore flexibility with this option at compile-time.\n--\nMichael", "msg_date": "Wed, 9 Nov 2022 13:52:15 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: allow segment size to be set to < 1GiB" }, { "msg_contents": "On 2022-11-08 18:28:08 -0800, Andres Freund wrote:\n> On 2022-11-07 21:36:33 -0500, Tom Lane wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> > > On 2022-11-07 12:52:25 -0500, Tom Lane wrote:\n> > >> How about instead allowing the segment size to be set in pages?\n> > \n> > > In addition or instead of --with-segsize/-Dsegsize?\n> > \n> > In addition to. What I meant by \"instead\" was to replace\n> > your proposal of --with-segsize-mb.\n> \n> Working on updating the patch.\n> \n> One semi-interesting bit is that <= 5 blocks per segment fails, because\n> corrupt_page_checksum() doesn't know about segments and\n> src/bin/pg_basebackup/t/010_pg_basebackup.pl does\n\nA second question: Both autoconf and meson print the segment size as GB right\nnow. Obviously that'll print out a size of 0 for a segsize < 1GB.\n\nThe easiest way to would be to just display the number of blocks, but that's\nnot particularly nice. We could show kB, but that ends up being large. Or we\ncan have some code to adjust the unit, but that seems a bit overkill.\n\nOpinions?\n\n\n", "msg_date": "Wed, 9 Nov 2022 11:42:17 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: allow segment size to be set to < 1GiB" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> A second question: Both autoconf and meson print the segment size as GB right\n> now. Obviously that'll print out a size of 0 for a segsize < 1GB.\n\n> The easiest way to would be to just display the number of blocks, but that's\n> not particularly nice.\n\nWell, it would be fine if you'd written --with-segsize-blocks, wouldn't\nit? Can we make the printout format depend on which switch was used?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 09 Nov 2022 14:44:42 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: allow segment size to be set to < 1GiB" }, { "msg_contents": "Hi,\n\nOn 2022-11-09 14:44:42 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > A second question: Both autoconf and meson print the segment size as GB right\n> > now. Obviously that'll print out a size of 0 for a segsize < 1GB.\n> \n> > The easiest way to would be to just display the number of blocks, but that's\n> > not particularly nice.\n> \n> Well, it would be fine if you'd written --with-segsize-blocks, wouldn't\n> it? Can we make the printout format depend on which switch was used?\n\nNot sure why I didn't think of that...\n\nUpdated patch attached.\n\nI made one autoconf and one meson CI task use a small block size, but just to\nensure it work on both. I'd probably leave it set on one, so we keep the\ncoverage for cfbot?\n\nGreetings,\n\nAndres Freund", "msg_date": "Wed, 9 Nov 2022 12:25:09 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: allow segment size to be set to < 1GiB" }, { "msg_contents": "\nOn 2022-11-09 We 15:25, Andres Freund wrote:\n> Hi,\n>\n> On 2022-11-09 14:44:42 -0500, Tom Lane wrote:\n>> Andres Freund <andres@anarazel.de> writes:\n>>> A second question: Both autoconf and meson print the segment size as GB right\n>>> now. Obviously that'll print out a size of 0 for a segsize < 1GB.\n>>> The easiest way to would be to just display the number of blocks, but that's\n>>> not particularly nice.\n>> Well, it would be fine if you'd written --with-segsize-blocks, wouldn't\n>> it? Can we make the printout format depend on which switch was used?\n> Not sure why I didn't think of that...\n>\n> Updated patch attached.\n>\n> I made one autoconf and one meson CI task use a small block size, but just to\n> ensure it work on both. I'd probably leave it set on one, so we keep the\n> coverage for cfbot?\n>\n\nAre we going to impose some sane minimum, or leave it up to developers\nto discover that for themselves?\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 17 Nov 2022 09:58:48 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: allow segment size to be set to < 1GiB" }, { "msg_contents": "Hi,\n\nOn 2022-11-17 09:58:48 -0500, Andrew Dunstan wrote:\n> On 2022-11-09 We 15:25, Andres Freund wrote:\n> > Hi,\n> >\n> > On 2022-11-09 14:44:42 -0500, Tom Lane wrote:\n> >> Andres Freund <andres@anarazel.de> writes:\n> >>> A second question: Both autoconf and meson print the segment size as GB right\n> >>> now. Obviously that'll print out a size of 0 for a segsize < 1GB.\n> >>> The easiest way to would be to just display the number of blocks, but that's\n> >>> not particularly nice.\n> >> Well, it would be fine if you'd written --with-segsize-blocks, wouldn't\n> >> it? Can we make the printout format depend on which switch was used?\n> > Not sure why I didn't think of that...\n> >\n> > Updated patch attached.\n> >\n> > I made one autoconf and one meson CI task use a small block size, but just to\n> > ensure it work on both. I'd probably leave it set on one, so we keep the\n> > coverage for cfbot?\n> >\n> \n> Are we going to impose some sane minimum, or leave it up to developers\n> to discover that for themselves?\n\nI don't think we should. It's actually useful to e.g. use 1 page sized\nsegments for testing, and with one exceptions the tests pass with it too. Do\nyou see a reason to impose one?\n\nGreetings,\n\nAndres Freund", "msg_date": "Thu, 17 Nov 2022 07:39:10 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: allow segment size to be set to < 1GiB" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-11-17 09:58:48 -0500, Andrew Dunstan wrote:\n>> Are we going to impose some sane minimum, or leave it up to developers\n>> to discover that for themselves?\n\n> I don't think we should. It's actually useful to e.g. use 1 page sized\n> segments for testing, and with one exceptions the tests pass with it too. Do\n> you see a reason to impose one?\n\nYeah, I think we should allow setting it to 1 block. This switch is\nonly for testing purposes (I hope the docs make that clear).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 17 Nov 2022 10:48:52 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: allow segment size to be set to < 1GiB" }, { "msg_contents": "On 2022-11-17 10:48:52 -0500, Tom Lane wrote:\n> Yeah, I think we should allow setting it to 1 block. This switch is\n> only for testing purposes (I hope the docs make that clear).\n\n\"This option is only for developers, to test segment related code.\"\n\n\n", "msg_date": "Thu, 17 Nov 2022 08:27:12 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: allow segment size to be set to < 1GiB" }, { "msg_contents": "\nOn 2022-11-17 Th 10:48, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> On 2022-11-17 09:58:48 -0500, Andrew Dunstan wrote:\n>>> Are we going to impose some sane minimum, or leave it up to developers\n>>> to discover that for themselves?\n>> I don't think we should. It's actually useful to e.g. use 1 page sized\n>> segments for testing, and with one exceptions the tests pass with it too. Do\n>> you see a reason to impose one?\n> Yeah, I think we should allow setting it to 1 block. This switch is\n> only for testing purposes (I hope the docs make that clear).\n>\n> \t\t\t\n\n\nYeah clearly if 1 is useful there's no point in limiting it.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 17 Nov 2022 17:24:04 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: allow segment size to be set to < 1GiB" }, { "msg_contents": "Hi,\n\nOn 2022-11-09 12:25:09 -0800, Andres Freund wrote:\n> Updated patch attached.\n\nI pushed it now.\n\n\n> I made one autoconf and one meson CI task use a small block size, but just to\n> ensure it work on both. I'd probably leave it set on one, so we keep the\n> coverage for cfbot?\n\nIt doesn't seem to cost that much, so I left it set in those two tasks for\nnow.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 7 Dec 2022 19:37:47 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: allow segment size to be set to < 1GiB" } ]
[ { "msg_contents": "Hi,\n\nWe have a NMS application where we are using postgres as database, what we are noticing is that vacuuming is not happening for certain tables for 2-3 days and eventually the table bloats and disk space is running out.\n\nWhat could be the reason for auto vacuuming not happening for certain tables?\n\nAutovacuum is enabled\n\nRegards,\nKarthik", "msg_date": "Tue, 8 Nov 2022 11:30:44 +0000", "msg_from": "\"Karthik Jagadish (kjagadis)\" <kjagadis@cisco.com>", "msg_from_op": true, "msg_subject": "Tables not getting vacuumed in postgres " }, { "msg_contents": "On Tue, Nov 8, 2022 at 5:00 PM Karthik Jagadish (kjagadis)\n<kjagadis@cisco.com> wrote:\n>\n> Hi,\n>\n> We have a NMS application where we are using postgres as database, what we are noticing is that vacuuming is not happening for certain tables for 2-3 days and eventually the table bloats and disk space is running out.\n>\n> What could be the reason for auto vacuuming not happening for certain tables?\n>\n\nCheck if there is any long-running or prepared transaction.\n\nRegards,\nAmul", "msg_date": "Tue, 8 Nov 2022 17:37:20 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Tables not getting vacuumed in postgres" }, { "msg_contents": "Hi,\n\nThanks for the response.\n\nBut what I understand that insert update and delete would still work and will not interfere with vacuuming process. Yes we do perform a lot of updates on that particular table which is not vacuuming. Does it mean that it waiting for the lock to be released?\n\nRegards,\nKarthik\n\nFrom: Amul Sul <sulamul@gmail.com>\nDate: Tuesday, 8 November 2022 at 5:38 PM\nTo: Karthik Jagadish (kjagadis) <kjagadis@cisco.com>\nCc: pgsql-hackers@postgresql.org <pgsql-hackers@postgresql.org>, Prasanna Satyanarayanan (prassaty) <prassaty@cisco.com>, Chandruganth Ayyavoo Selvam (chaayyav) <chaayyav@cisco.com>, Jaganbabu M (jmunusam) <jmunusam@cisco.com>\nSubject: Re: Tables not getting vacuumed in postgres\nOn Tue, Nov 8, 2022 at 5:00 PM Karthik Jagadish (kjagadis)\n<kjagadis@cisco.com> wrote:\n>\n> Hi,\n>\n> We have a NMS application where we are using postgres as database, what we are noticing is that vacuuming is not happening for certain tables for 2-3 days and eventually the table bloats and disk space is running out.\n>\n> What could be the reason for auto vacuuming not happening for certain tables?\n>\n\nCheck if there is any long-running or prepared transaction.\n\nRegards,\nAmul", "msg_date": "Tue, 8 Nov 2022 12:41:09 +0000", "msg_from": "\"Karthik Jagadish (kjagadis)\" <kjagadis@cisco.com>", "msg_from_op": true, "msg_subject": "Re: Tables not getting vacuumed in postgres" }, { "msg_contents": "On Tue, Nov 8, 2022 at 6:11 PM Karthik Jagadish (kjagadis)\n<kjagadis@cisco.com> wrote:\n>\n> Hi,\n>\n>\n>\n> Thanks for the response.\n>\n>\n>\n> But what I understand that insert update and delete would still work and will not interfere with vacuuming process. Yes we do perform a lot of updates on that particular table which is not vacuuming. Does it mean that it waiting for the lock to be released?\n>\n\nWell, yes, that won't interfere but the primary job of autovacuum is\nto remove the bloat, if the dead tuple(s) is visible to any\ntransaction, then not going to remove that.\n\n\n>\n>\n> Regards,\n>\n> Karthik\n>\n>\n>\n> From: Amul Sul <sulamul@gmail.com>\n> Date: Tuesday, 8 November 2022 at 5:38 PM\n> To: Karthik Jagadish (kjagadis) <kjagadis@cisco.com>\n> Cc: pgsql-hackers@postgresql.org <pgsql-hackers@postgresql.org>, Prasanna Satyanarayanan (prassaty) <prassaty@cisco.com>, Chandruganth Ayyavoo Selvam (chaayyav) <chaayyav@cisco.com>, Jaganbabu M (jmunusam) <jmunusam@cisco.com>\n> Subject: Re: Tables not getting vacuumed in postgres\n>\n> On Tue, Nov 8, 2022 at 5:00 PM Karthik Jagadish (kjagadis)\n> <kjagadis@cisco.com> wrote:\n> >\n> > Hi,\n> >\n> > We have a NMS application where we are using postgres as database, what we are noticing is that vacuuming is not happening for certain tables for 2-3 days and eventually the table bloats and disk space is running out.\n> >\n> > What could be the reason for auto vacuuming not happening for certain tables?\n> >\n>\n> Check if there is any long-running or prepared transaction.\n>\n> Regards,\n> Amul", "msg_date": "Tue, 8 Nov 2022 18:38:50 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Tables not getting vacuumed in postgres" }, { "msg_contents": "I didn’t get your point dead tuples are visible to transaction means? Vacuuming job is to remove dead tuples right?\n\nRegards,\nKarthik\n\nFrom: Amul Sul <sulamul@gmail.com>\nDate: Tuesday, 8 November 2022 at 6:39 PM\nTo: Karthik Jagadish (kjagadis) <kjagadis@cisco.com>\nCc: pgsql-hackers@postgresql.org <pgsql-hackers@postgresql.org>, Prasanna Satyanarayanan (prassaty) <prassaty@cisco.com>, Chandruganth Ayyavoo Selvam (chaayyav) <chaayyav@cisco.com>, Jaganbabu M (jmunusam) <jmunusam@cisco.com>\nSubject: Re: Tables not getting vacuumed in postgres\nOn Tue, Nov 8, 2022 at 6:11 PM Karthik Jagadish (kjagadis)\n<kjagadis@cisco.com> wrote:\n>\n> Hi,\n>\n>\n>\n> Thanks for the response.\n>\n>\n>\n> But what I understand that insert update and delete would still work and will not interfere with vacuuming process. Yes we do perform a lot of updates on that particular table which is not vacuuming. Does it mean that it waiting for the lock to be released?\n>\n\nWell, yes, that won't interfere but the primary job of autovacuum is\nto remove the bloat, if the dead tuple(s) is visible to any\ntransaction, then not going to remove that.\n\n\n>\n>\n> Regards,\n>\n> Karthik\n>\n>\n>\n> From: Amul Sul <sulamul@gmail.com>\n> Date: Tuesday, 8 November 2022 at 5:38 PM\n> To: Karthik Jagadish (kjagadis) <kjagadis@cisco.com>\n> Cc: pgsql-hackers@postgresql.org <pgsql-hackers@postgresql.org>, Prasanna Satyanarayanan (prassaty) <prassaty@cisco.com>, Chandruganth Ayyavoo Selvam (chaayyav) <chaayyav@cisco.com>, Jaganbabu M (jmunusam) <jmunusam@cisco.com>\n> Subject: Re: Tables not getting vacuumed in postgres\n>\n> On Tue, Nov 8, 2022 at 5:00 PM Karthik Jagadish (kjagadis)\n> <kjagadis@cisco.com> wrote:\n> >\n> > Hi,\n> >\n> > We have a NMS application where we are using postgres as database, what we are noticing is that vacuuming is not happening for certain tables for 2-3 days and eventually the table bloats and disk space is running out.\n> >\n> > What could be the reason for auto vacuuming not happening for certain tables?\n> >\n>\n> Check if there is any long-running or prepared transaction.\n>\n> Regards,\n> Amul", "msg_date": "Tue, 8 Nov 2022 13:21:41 +0000", "msg_from": "\"Karthik Jagadish (kjagadis)\" <kjagadis@cisco.com>", "msg_from_op": true, "msg_subject": "Re: Tables not getting vacuumed in postgres" }, { "msg_contents": "\n\n> On Nov 8, 2022, at 5:21 AM, Karthik Jagadish (kjagadis) <kjagadis@cisco.com> wrote:\n> \n> I didn’t get your point dead tuples are visible to transaction means? Vacuuming job is to remove dead tuples right?\n\nPlease see https://www.2ndquadrant.com/en/blog/when-autovacuum-does-not-vacuum/ for more information about your question. Specifically, you might look at the third section down, \"Long transactions\", which starts with \"So, if the table is vacuumed regularly, surely it can’t accumulate a lot of dead rows, right?\" You might benefit from reading the entire article rather than skipping down to that section.\n\nI hope it helps....\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Tue, 8 Nov 2022 06:24:13 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Tables not getting vacuumed in postgres" } ]
[ { "msg_contents": "In the release team's discussion leading up to commit 0e758ae89,\nAndres opined that what commit 4ab5dae94 had done to mdunlinkfork\nwas a mess, and I concur. It invented an entirely new code path\nthrough that function, and required two different behaviors from the\nsegment-deletion loop. I think a very straight line can be drawn\nbetween that extra complexity and the introduction of a nasty bug.\nIt's all unnecessary too, because AFAICS all we really need is to\napply the pre-existing behavior for temp tables and REDO mode\nto binary-upgrade mode as well.\n\nHence, the attached reverts everything 4ab5dae94 did to this function,\nand most of 0e758ae89 too, and instead makes IsBinaryUpgrade an\nadditional reason to take the immediate-unlink path.\n\nBarring objections, I'll push this after the release freeze lifts.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 08 Nov 2022 11:28:08 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Non-emergency patch for bug #17679" }, { "msg_contents": "Hi,\n\nOn 2022-11-08 11:28:08 -0500, Tom Lane wrote:\n> In the release team's discussion leading up to commit 0e758ae89,\n> Andres opined that what commit 4ab5dae94 had done to mdunlinkfork\n> was a mess, and I concur. It invented an entirely new code path\n> through that function, and required two different behaviors from the\n> segment-deletion loop. I think a very straight line can be drawn\n> between that extra complexity and the introduction of a nasty bug.\n> It's all unnecessary too, because AFAICS all we really need is to\n> apply the pre-existing behavior for temp tables and REDO mode\n> to binary-upgrade mode as well.\n\nI'm not sure I understand the current code. In the binary upgrade case we\ncurrently *do* truncate the file in the else of \"Delete or truncate the first\nsegment.\", then again truncate it in the loop and then unlink it, right?\n\n\n> Hence, the attached reverts everything 4ab5dae94 did to this function,\n> and most of 0e758ae89 too, and instead makes IsBinaryUpgrade an\n> additional reason to take the immediate-unlink path.\n> \n> Barring objections, I'll push this after the release freeze lifts.\n\nI wonder if it's worth aiming slightly higher. There's plenty duplicated code\nbetween the first segment handling and the loop body. Perhaps the if at the\ntop just should decide whether to unlink the first segment or not, and we then\ncheck that in the body of the loop for segno == 0?\n\nGreetings,\n\nAndres Freund", "msg_date": "Tue, 8 Nov 2022 12:31:17 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Non-emergency patch for bug #17679" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-11-08 11:28:08 -0500, Tom Lane wrote:\n>> Hence, the attached reverts everything 4ab5dae94 did to this function,\n>> and most of 0e758ae89 too, and instead makes IsBinaryUpgrade an\n>> additional reason to take the immediate-unlink path.\n\n> I wonder if it's worth aiming slightly higher. There's plenty duplicated code\n> between the first segment handling and the loop body. Perhaps the if at the\n> top just should decide whether to unlink the first segment or not, and we then\n> check that in the body of the loop for segno == 0?\n\nI don't care for that. I think the point here is precisely that\nwe want behavior A for the first segment and behavior B for the\nremaining ones, and so I'd prefer to keep the code that does A\nand the code that does B distinct. It was a misguided attempt to\nshare that code that got us into trouble here in the first place.\nMoreover, any future changes to either behavior will be that much\nharder if we combine the implementations.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 08 Nov 2022 15:40:43 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Non-emergency patch for bug #17679" } ]
[ { "msg_contents": "I happened to notice that these lists of supported versions haven't\nbeen updated in a good long time:\n\n PGAC_PATH_PROGS(LLVM_CONFIG, llvm-config llvm-config-7 llvm-config-6.0 llvm-config-5.0 llvm-config-4.0 llvm-config-3.9)\n\n PGAC_PATH_PROGS(CLANG, clang clang-7 clang-6.0 clang-5.0 clang-4.0 clang-3.9)\n\nGiven the lack of complaints, it seems likely that nobody is relying\non these. Maybe we should just nuke them? If not, I suppose we\nbetter add 8 through 15.\n\nI may be missing it, but it doesn't look like meson.build has any\nequivalent lists. So that might be an argument for getting rid\nof the lists here?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 08 Nov 2022 13:08:45 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Out-of-date clang/llvm version lists in PGAC_LLVM_SUPPORT" }, { "msg_contents": "Hi,\n\nOn 2022-11-08 13:08:45 -0500, Tom Lane wrote:\n> I happened to notice that these lists of supported versions haven't\n> been updated in a good long time:\n> \n> PGAC_PATH_PROGS(LLVM_CONFIG, llvm-config llvm-config-7 llvm-config-6.0 llvm-config-5.0 llvm-config-4.0 llvm-config-3.9)\n> \n> PGAC_PATH_PROGS(CLANG, clang clang-7 clang-6.0 clang-5.0 clang-4.0 clang-3.9)\n> \n> Given the lack of complaints, it seems likely that nobody is relying\n> on these. Maybe we should just nuke them?\n\nYea, that's probably a good idea. Pretty clear that that should happen only in\nHEAD?\n\n\n> I may be missing it, but it doesn't look like meson.build has any\n> equivalent lists. So that might be an argument for getting rid\n> of the lists here?\n\nThe list is just in meson, it has a builtin helper for depending on llvm. So\nthat's not quite an argument unfortunately.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 9 Nov 2022 16:25:48 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Out-of-date clang/llvm version lists in PGAC_LLVM_SUPPORT" } ]
[ { "msg_contents": "Hi, hackers\n\nGetRunningTransactionData requires holding both ProcArrayLock and\nXidGenLock (in that order). Then LogStandbySnapshot releases those\nlocks in that order. However, to reduce the frequency of having to\nwait for XidGenLock while holding ProcArrayLock, ProcArrayAdd releases\nthem in reversed acquisition order.\n\nThe comments of LogStandbySnapshot says:\n\n> GetRunningTransactionData() acquired ProcArrayLock, we must release it.\n> For Hot Standby this can be done before inserting the WAL record\n> because ProcArrayApplyRecoveryInfo() rechecks the commit status using\n> the clog. For logical decoding, though, the lock can't be released\n> early because the clog might be \"in the future\" from the POV of the\n> historic snapshot. This would allow for situations where we're waiting\n> for the end of a transaction listed in the xl_running_xacts record\n> which, according to the WAL, has committed before the xl_running_xacts\n> record. Fortunately this routine isn't executed frequently, and it's\n> only a shared lock.\n\nThis comment is only for ProcArrayLock, not for XidGenLock. IIUC,\nLogCurrentRunningXacts doesn't need holding XidGenLock, right?\n\nDoes there any sense to release them in reversed acquisition order in\nLogStandbySnapshot like ProcArrayRemove?\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.", "msg_date": "Wed, 09 Nov 2022 11:03:04 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": true, "msg_subject": "Locks release order in LogStandbySnapshot" }, { "msg_contents": "Hi,\n\nOn 2022-11-09 11:03:04 +0800, Japin Li wrote:\n> GetRunningTransactionData requires holding both ProcArrayLock and\n> XidGenLock (in that order). Then LogStandbySnapshot releases those\n> locks in that order. 
However, to reduce the frequency of having to\n> wait for XidGenLock while holding ProcArrayLock, ProcArrayAdd releases\n> them in reversed acquisition order.\n>\n> The comments of LogStandbySnapshot says:\n> \n> > GetRunningTransactionData() acquired ProcArrayLock, we must release it.\n> > For Hot Standby this can be done before inserting the WAL record\n> > because ProcArrayApplyRecoveryInfo() rechecks the commit status using\n> > the clog. For logical decoding, though, the lock can't be released\n> > early because the clog might be \"in the future\" from the POV of the\n> > historic snapshot. This would allow for situations where we're waiting\n> > for the end of a transaction listed in the xl_running_xacts record\n> > which, according to the WAL, has committed before the xl_running_xacts\n> > record. Fortunately this routine isn't executed frequently, and it's\n> > only a shared lock.\n> \n> This comment is only for ProcArrayLock, not for XidGenLock. IIUC,\n> LogCurrentRunningXacts doesn't need holding XidGenLock, right?\n\nI think it does. If we allow xid assignment before LogCurrentRunningXacts() is\ndone, those new xids would not have been mentioned in the xl_running_xacts\nrecord, despite already running. Which I think result in corrupted snapshots\nduring hot standby and logical decoding.\n\n\n> Does there any sense to release them in reversed acquisition order in\n> LogStandbySnapshot like ProcArrayRemove?\n\nI don't think it's worth optimizing for, this happens at a low frequency\n(whereas connection establishment can be very frequent). And due to the above,\nwe can sometimes release ProcArrayLock earlier.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 8 Nov 2022 19:21:43 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Locks release order in LogStandbySnapshot" }, { "msg_contents": "\nOn Wed, 09 Nov 2022 at 11:21, Andres Freund <andres@anarazel.de> wrote:\n> I think it does. 
If we allow xid assignment before LogCurrentRunningXacts() is\n> done, those new xids would not have been mentioned in the xl_running_xacts\n> record, despite already running. Which I think result in corrupted snapshots\n> during hot standby and logical decoding.\n>\n>\n>> Does there any sense to release them in reversed acquisition order in\n>> LogStandbySnapshot like ProcArrayRemove?\n>\n> I don't think it's worth optimizing for, this happens at a low frequency\n> (whereas connection establishment can be very frequent). And due to the above,\n> we can sometimes release ProcArrayLock earlier.\n>\n\nThanks for the explanation! Got it.\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Wed, 09 Nov 2022 13:10:41 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": true, "msg_subject": "Re: Locks release order in LogStandbySnapshot" } ]
[ { "msg_contents": "Commit b7eda3e0e3 moves XidINMVCCSnapshot into snapmgr.{c,h},\nhowever, it forgets the declaration of XidINMVCCSnapshot in\nheapam.h.\n\nAttached removes the redundant declaration in heapam.h.\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.", "msg_date": "Wed, 09 Nov 2022 18:50:53 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": true, "msg_subject": "Remove redundant declaration for XidInMVCCSnapshot" }, { "msg_contents": "On 2022-Nov-09, Japin Li wrote:\n\n> Commit b7eda3e0e3 moves XidINMVCCSnapshot into snapmgr.{c,h},\n> however, it forgets the declaration of XidINMVCCSnapshot in\n> heapam.h.\n\nTrue. Pushed, thanks.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n#error \"Operator lives in the wrong universe\"\n (\"Use of cookies in real-time system development\", M. Gleixner, M. Mc Guire)\n\n\n", "msg_date": "Wed, 9 Nov 2022 18:33:33 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Remove redundant declaration for XidInMVCCSnapshot" } ]
[ { "msg_contents": "This arose during the review of another patch.\n\nWe often omit the default case of a switch statement to allow the \ncompiler to complain if an enum case has been missed. I found a few \nwhere that wasn't done yet, but it would make sense and would have found \nan omission in another patch.", "msg_date": "Wed, 9 Nov 2022 15:26:15 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Update some more ObjectType switch statements to not have default" } ]
[ { "msg_contents": "While looking through vacuum code, I noticed that\r\nunlike non-parallel vacuum, parallel vacuum only gets\r\na failsafe check after an entire index cycle completes.\r\n\r\nIn vacuumlazy.c, lazy_check_wraparound_failsafe is checked\r\nafter every index completes, while in parallel, it is checked\r\nafter an entire index cycle completed.\r\n\r\nif (!ParallelVacuumIsActive(vacrel))\r\n {\r\n for (int idx = 0; idx < vacrel->nindexes; idx++)\r\n {\r\n Relation indrel = vacrel->indrels[idx];\r\n IndexBulkDeleteResult *istat = vacrel->indstats[idx];\r\n\r\n vacrel->indstats[idx] =\r\n lazy_vacuum_one_index(indrel, istat, vacrel->old_live_tuples,\r\n vacrel);\r\n\r\n /*\r\n * Done vacuuming an index. Increment the indexes completed\r\n */\r\n pgstat_progress_update_param(PROGRESS_VACUUM_INDEX_COMPLETED,\r\n idx + 1);\r\n\r\n if (lazy_check_wraparound_failsafe(vacrel))\r\n {\r\n /* Wraparound emergency -- end current index scan */\r\n allindexes = false;\r\n break;\r\n }\r\n }\r\n }\r\n else\r\n {\r\n /* Outsource everything to parallel variant */\r\n parallel_vacuum_bulkdel_all_indexes(vacrel->pvs, vacrel->old_live_tuples,\r\n vacrel->num_index_scans);\r\n\r\n /*\r\n * Do a postcheck to consider applying wraparound failsafe now. 
Note\r\n * that parallel VACUUM only gets the precheck and this postcheck.\r\n */\r\n if (lazy_check_wraparound_failsafe(vacrel))\r\n allindexes = false;\r\n }\r\n\r\nWhen a user is running a parallel vacuum and the vacuum is long running\r\ndue to many large indexes, it would make sense to check for failsafe earlier.\r\n\r\nAlso, checking after every index for parallel vacuum will provide the same\r\nfailsafe behavior for both parallel and non-parallel vacuums.\r\n\r\nTo make this work, it is possible to call lazy_check_wraparound_failsafe\r\ninside parallel_vacuum_process_unsafe_indexes and\r\nparallel_vacuum_process_safe_indexes of vacuumparallel.c\r\n\r\n\r\nRegards,\r\n\r\nSami Imseih\r\nAmazon Web Services (AWS)\r\n\r\n", "msg_date": "Wed, 9 Nov 2022 14:29:18 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Call lazy_check_wraparound_failsafe earlier for parallel vacuum" }, { "msg_contents": "On Wed, Nov 9, 2022 at 6:29 AM Imseih (AWS), Sami <simseih@amazon.com> wrote:\n> When a user is running a parallel vacuum and the vacuum is long running\n>\n> due to many large indexes, it would make sense to check for failsafe earlier.\n\nIt makes sense to prefer consistency here, I suppose. The reason why\nwe're not consistent is because it was easier not to be, which isn't\nexactly the best reason (nor the worst).\n\nI don't think that it's obvious that we need to call the failsafe at\nany particular frequency. There is probably an argument to be made for\nthe idea that we're not checking frequently enough (even in the common\nserial VACUUM case), just as there is an argument to be made for the\nopposite idea.
It's not like there is some simple linear relationship\n(or any kind of relationship) between the amount of physical work\nperformed by one VACUUM operation, and the rate of XID consumption by\nthe system as a whole. And so the details of how we do it have plenty\nto do with what we can afford to do.\n\nMy gut instinct is that the most important thing is to at least call\nlazy_check_wraparound_failsafe() once per index scan. Multiple index\nscans are disproportionately involved in VACUUMs that take far longer\nthan expected, which are presumably the kind of VACUUMs that tend to\nbe running when the failsafe actually triggers. We don't really expect\nthe failsafe to trigger, so presumably when it actually does things haven't\nbeen going well for some time. (Index corruption that prevents forward\nprogress on one particular index is another example.)\n\nThat said, one thing that does bother me in this area occurs to me: we\ncall lazy_check_wraparound_failsafe() from lazy_scan_heap() (before we\nget to the index scans that you're talking about) at an interval that\nis based on how many heap pages we've either processed (and recorded\nas a scanned_pages page) *or* have skipped over using the visibility\nmap. In other words we use blkno here, when we should really be using\nscanned_pages instead:\n\n if (blkno - next_failsafe_block >= FAILSAFE_EVERY_PAGES)\n {\n lazy_check_wraparound_failsafe(vacrel);\n next_failsafe_block = blkno;\n }\n\nThis code effectively treats pages skipped using the visibility map as\nequivalent to pages physically scanned (scanned_pages), even though\nskipping requires essentially no work at all. That just isn't logical,\nand feels like something worth fixing. 
The fundamental unit of work in\nlazy_scan_heap() is a *scanned* heap page.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Wed, 9 Nov 2022 13:42:43 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Call lazy_check_wraparound_failsafe earlier for parallel vacuum" }, { "msg_contents": "> It makes sense to prefer consistency here, I suppose. The reason why\r\n> we're not consistent is because it was easier not to be, which isn't\r\n> exactly the best reason (nor the worst).\r\n\r\nConsistency is the key point here. It is odd that a serial\r\nvacuum may skip the remainder of the indexes if failsafe\r\nkicks-in, but in the parallel case it will go through the entire index\r\ncycle.\r\n\r\n> My gut instinct is that the most important thing is to at least call\r\n> lazy_check_wraparound_failsafe() once per index scan. \r\n\r\nI agree. And this should happen in the serial and parallel case.\r\n\r\n> That said, one thing that does bother me in this area occurs to me: we\r\n> call lazy_check_wraparound_failsafe() from lazy_scan_heap() (before we\r\n> get to the index scans that you're talking about) at an interval that\r\n> is based on how many heap pages we've either processed (and recorded\r\n> as a scanned_pages page) *or* have skipped over using the visibility\r\n> map. In other words we use blkno here, when we should really be using\r\n> scanned_pages instead:\r\n\r\n> if (blkno - next_failsafe_block >= FAILSAFE_EVERY_PAGES)\r\n> {\r\n> lazy_check_wraparound_failsafe(vacrel);\r\n> next_failsafe_block = blkno;\r\n> }\r\n\r\n> This code effectively treats pages skipped using the visibility map as\r\n> equivalent to pages physically scanned (scanned_pages), even though\r\n> skipping requires essentially no work at all. That just isn't logical,\r\n> and feels like something worth fixing. 
The fundamental unit of work in\r\n> lazy_scan_heap() is a *scanned* heap page.\r\n\r\nIt makes perfect sense to use the scanned_pages instead.\r\n\r\nRegards,\r\n\r\nSami imseih\r\nAmazon Web Services (AWS)\r\n\r\n", "msg_date": "Thu, 10 Nov 2022 18:20:34 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Call lazy_check_wraparound_failsafe earlier for parallel vacuum" }, { "msg_contents": "On Thu, Nov 10, 2022 at 10:20 AM Imseih (AWS), Sami <simseih@amazon.com> wrote:\n> Consistency is the key point here. It is odd that a serial\n> vacuum may skip the remainder of the indexes if failsafe\n> kicks-in, but in the parallel case it will go through the entire index\n> cycle.\n\nYeah, it's a little inconsistent.\n\n> > My gut instinct is that the most important thing is to at least call\n> > lazy_check_wraparound_failsafe() once per index scan.\n>\n> I agree. And this should happen in the serial and parallel case.\n\nI meant that there should definitely be a check between each round of\nindex scans (one index scan here affects each and every index). Doing\nmore than a single index scan is presumably rare, but are likely\ncommon among VACUUM operations that take an unusually long time --\nwhich is where the failsafe is relevant.\n\nI'm just highlighting that multiple index scans (rather than just 0 or\n1 index scans) is by far the primary risk factor that leads to a\nVACUUM that takes way longer than is typical. (The other notable risk\ncomes from aggressive VACUUMs that freeze a great deal of heap pages\nall at once, which I'm currently addressing by getting rid of the\nwhole concept of discrete aggressive mode VACUUM operations.)\n\n> > This code effectively treats pages skipped using the visibility map as\n> > equivalent to pages physically scanned (scanned_pages), even though\n> > skipping requires essentially no work at all. That just isn't logical,\n> > and feels like something worth fixing. 
The fundamental unit of work in\n> > lazy_scan_heap() is a *scanned* heap page.\n>\n> It makes perfect sense to use the scanned_pages instead.\n\nWant to have a go at writing a patch for that?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 10 Nov 2022 15:09:23 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Call lazy_check_wraparound_failsafe earlier for parallel vacuum" }, { "msg_contents": "> Yeah, it's a little inconsistent.\r\n\r\nYes, this should be corrected by calling the failsafe\r\ninside the parallel vacuum loops and handling the case by exiting\r\nthe loop and parallel vacuum if failsafe kicks in.\r\n\r\n> I meant that there should definitely be a check between each round of\r\n> index scans (one index scan here affects each and every index). Doing\r\n> more than a single index scan is presumably rare, but are likely\r\n> common among VACUUM operations that take an unusually long time --\r\n> which is where the failsafe is relevant.\r\n\r\nAh, OK. I was confused by the terminology. I read \"index scans\" as a single\r\nIndex scan rather than a index scan cycle.\r\n\r\nFWIW, even in the parallel case, the failsafe is checked after every index\r\nscan cycle.\r\n\r\n > Want to have a go at writing a patch for that?\r\n\r\nYes, I can. \r\n\r\n\r\nRegards,\r\n\r\nSami Imseih\r\nAmazon Web Services (AWS)\r\n\r\n", "msg_date": "Fri, 11 Nov 2022 15:28:11 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Call lazy_check_wraparound_failsafe earlier for parallel vacuum" }, { "msg_contents": "Attached is a patch to check scanned pages rather\r\nthan blockno. 
\r\n\r\nRegards,\r\n\r\nSami Imseih\r\nAmazon Web Services (AWS)", "msg_date": "Tue, 20 Dec 2022 17:44:36 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Call lazy_check_wraparound_failsafe earlier for parallel vacuum" }, { "msg_contents": "On Sat, Nov 12, 2022 at 12:28 AM Imseih (AWS), Sami <simseih@amazon.com> wrote:\n>\n> > Yeah, it's a little inconsistent.\n>\n> Yes, this should be corrected by calling the failsafe\n> inside the parallel vacuum loops and handling the case by exiting\n> the loop and parallel vacuum if failsafe kicks in.\n\nI agree it's better to be consistent but I think we cannot simply call\nlazy_check_wraparound_failsafe() inside the parallel vacuum loops.\nIIUC the failsafe is heap (or lazyvacuum ) specific, whereas parallel\nvacuum is a common infrastructure to do index vacuum in parallel. We\nshould not break this design. For example, we would need to have a\ncallback for index scan loop so that the caller (i.e. lazy vacuum) can\ndo its work.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 22 Dec 2022 17:35:31 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Call lazy_check_wraparound_failsafe earlier for parallel vacuum" }, { "msg_contents": "On Wed, Dec 21, 2022 at 2:44 AM Imseih (AWS), Sami <simseih@amazon.com> wrote:\n>\n> Attached is a patch to check scanned pages rather\n> than blockno.\n\nThank you for the patch. 
It looks good to me.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 22 Dec 2022 17:49:29 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Call lazy_check_wraparound_failsafe earlier for parallel vacuum" }, { "msg_contents": "On Tue, Dec 20, 2022 at 9:44 AM Imseih (AWS), Sami <simseih@amazon.com> wrote:\n> Attached is a patch to check scanned pages rather\n> than blockno.\n\nPushed, thanks!\n\nI adjusted the FAILSAFE_EVERY_PAGES comments, which now point out that\nFAILSAFE_EVERY_PAGES is a power-of-two. The implication is that the\ncompiler is all but guaranteed to be able to reduce the modulo\ndivision into a shift in the lazy_scan_heap loop, at the point of the\nfailsafe check. I doubt that it would really matter if the compiler\nhad to generate a DIV instruction, but it seems like a good idea to\navoid it on general principle, at least in performance sensitive code.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Thu, 22 Dec 2022 10:43:46 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Call lazy_check_wraparound_failsafe earlier for parallel vacuum" }, { "msg_contents": "> I adjusted the FAILSAFE_EVERY_PAGES comments, which now point out that\r\n> FAILSAFE_EVERY_PAGES is a power-of-two. The implication is that the\r\n> compiler is all but guaranteed to be able to reduce the modulo\r\n> division into a shift in the lazy_scan_heap loop, at the point of the\r\n> failsafe check. I doubt that it would really matter if the compiler\r\n> had to generate a DIV instruction, but it seems like a good idea to\r\n> avoid it on general principle, at least in performance sensitive code.\r\n\r\nThank you! 
\r\n\r\nRegards,\r\n\r\nSami Imseih\r\nAmazon Web Services (AWS)\r\n\r\n", "msg_date": "Fri, 23 Dec 2022 00:05:11 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Call lazy_check_wraparound_failsafe earlier for parallel vacuum" } ]
[ { "msg_contents": "Inspired by a recent posting on Slack...\n\ndiff --git a/doc/src/sgml/limits.sgml b/doc/src/sgml/limits.sgml\nindex d5b2b627dd..5d68eef093 100644\n--- a/doc/src/sgml/limits.sgml\n+++ b/doc/src/sgml/limits.sgml\n@@ -97,6 +97,13 @@\n <entry>32</entry>\n <entry>can be increased by recompiling\n<productname>PostgreSQL</productname></entry>\n </row>\n+\n+ <row>\n+ <entry>parameters per query</entry>\n+ <entry>65,535</entry>\n+ <entry>if you are reading this prepatorily, please redesign your query\nto use temporary tables or arrays</entry>\n+ </row>\n+\n </tbody>\n </tgroup>\n </table>\n\nDavid J.", "msg_date": "Wed, 9 Nov 2022 17:34:26 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": true, "msg_subject": "Document parameter count limit" }, { "msg_contents": ">\n>\n> + <entry>if you are reading this prepatorily, please redesign your\n> query to use temporary tables or arrays</entry>\n>\n\nI agree with the documentation of this parameter.\nI agree with dissuading anyone from attempting to change it\nThe wording is bordering on snark (however well deserved) and I think the\nvoice is slightly off.\n\nAlternate suggestion:\n\nQueries approaching this limit usually can be refactored to use arrays or\ntemporary tables, thus reducing parameter overhead.\n\n\nThe bit about parameter overhead appeals to the reader's desire for\nperformance, rather than just focusing on \"you shouldn't want this\".", "msg_date": "Thu, 10 Nov 2022 12:58:34 -0500", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Document parameter count limit" }, { "msg_contents": "On Thu, Nov 10, 2022 at 10:58 AM Corey Huinker <corey.huinker@gmail.com>\nwrote:\n\n>\n>> + <entry>if you are reading this prepatorily, please redesign your\n>> query to use temporary tables or arrays</entry>\n>>\n>\n> I agree with the documentation of this parameter.\n> I agree with dissuading anyone from attempting to change it\n> The wording is bordering on snark (however well deserved) and I think the\n> voice is slightly off.\n>\n> Alternate suggestion:\n>\n> Queries approaching this limit usually can be refactored to use arrays or\n> temporary tables, thus reducing parameter overhead.\n>\n>\n> The bit about parameter overhead appeals to the reader's desire for\n> performance, rather than just focusing on \"you shouldn't want this\".\n>\n\nYeah, the wording is a bit tongue-in-cheek. Figured assuming a committer\nwants this at all we'd come up with better wording. I like your suggestion.\n\nDavid J.", "msg_date": "Thu, 10 Nov 2022 11:01:18 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Document parameter count limit" }, { "msg_contents": "On Thu, Nov 10, 2022 at 11:01:18AM -0700, David G. 
Johnston wrote:\n> On Thu, Nov 10, 2022 at 10:58 AM Corey Huinker <corey.huinker@gmail.com> wrote:\n> \n> \n> +    <entry>if you are reading this prepatorily, please redesign your\n> query to use temporary tables or arrays</entry>\n> \n> \n> I agree with the documentation of this parameter.\n> I agree with dissuading anyone from attempting to change it\n> The wording is bordering on snark (however well deserved) and I think the\n> voice is slightly off.\n> \n> Alternate suggestion:\n> \n> \n> Queries approaching this limit usually can be refactored to use arrays\n> or temporary tables, thus reducing parameter overhead.\n> \n> \n> The bit about parameter overhead appeals to the reader's desire for\n> performance, rather than just focusing on \"you shouldn't want this\".\n> \n> \n> Yeah, the wording is a bit tongue-in-cheek.  Figured assuming a committer wants\n> this at all we'd come up with better wording.  I like your suggestion.\n\nDoes this come up enough to document it? I assume the error message the\nuser receives is clear.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n", "msg_date": "Wed, 23 Nov 2022 13:30:48 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Document parameter count limit" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> Does this come up enough to document it? I assume the error message the\n> user receives is clear.\n\nLooks like you get\n\n if (nParams < 0 || nParams > PQ_QUERY_PARAM_MAX_LIMIT)\n {\n libpq_append_conn_error(conn, \"number of parameters must be between 0 and %d\",\n PQ_QUERY_PARAM_MAX_LIMIT);\n return 0;\n }\n\nwhich seems clear enough.\n\nI think the concern here is that somebody who's not aware that a limit\nexists might write an application that thinks it can send lots of\nparameters, and then have it fall over in production. 
Now, I've got\ndoubts that an entry in the limits.sgml table will do much to prevent\nthat scenario. But perhaps offering the advice to use an array parameter\nwill be worthwhile even after-the-fact.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 23 Nov 2022 13:47:33 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Document parameter count limit" }, { "msg_contents": "On Wed, Nov 23, 2022 at 11:47 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Bruce Momjian <bruce@momjian.us> writes:\n> > Does this come up enough to document it? I assume the error message the\n> > user receives is clear.\n>\n> Looks like you get\n>\n> if (nParams < 0 || nParams > PQ_QUERY_PARAM_MAX_LIMIT)\n> {\n> libpq_append_conn_error(conn, \"number of parameters must be\n> between 0 and %d\",\n> PQ_QUERY_PARAM_MAX_LIMIT);\n> return 0;\n> }\n>\n> which seems clear enough.\n>\n> I think the concern here is that somebody who's not aware that a limit\n> exists might write an application that thinks it can send lots of\n> parameters, and then have it fall over in production. Now, I've got\n> doubts that an entry in the limits.sgml table will do much to prevent\n> that scenario. But perhaps offering the advice to use an array parameter\n> will be worthwhile even after-the-fact.\n>\n\nIt comes up enough in places I troll that having a link to drop into a\nreply would be nice.\nI do believe that people who want to use a large parameter list likely have\nthat question in the back of their mind, and looking at a page called\n\"System Limits\" is at least plausibly something they would do. Since they\nare really caring about parse-bind-execute, and they aren't likely to dig\ninto libpq, this seems like the best spot (as opposed to, say PREPARE)\n\nDavid J.", "msg_date": "Wed, 23 Nov 2022 12:35:59 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Document parameter count limit" }, { "msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> I do believe that people who want to use a large parameter list likely have\n> that question in the back of their mind, and looking at a page called\n> \"System Limits\" is at least plausibly something they would do. 
Since they\n> are really caring about parse-bind-execute, and they aren't likely to dig\n> into libpq, this seems like the best spot (as opposed to, say PREPARE)\n\nThis is a wire-protocol limitation; libpq is only the messenger.\nSo if we're going to document it, I agree that limits.sgml is the place.\n\n(BTW, I'm not certain that PREPARE has the same limit. It'd fall over\nat INT_MAX likely, or maybe sooner for lack of memory, but I don't\nrecall that there's any uint16 fields in that code path.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 23 Nov 2022 15:29:43 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Document parameter count limit" }, { "msg_contents": "On Wed, Nov 23, 2022 at 12:35:59PM -0700, David G. Johnston wrote:\n> On Wed, Nov 23, 2022 at 11:47 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> > Bruce Momjian <bruce@momjian.us> writes:\n> > > Does this come up enough to document it? I assume the error message the\n> > > user receives is clear.\n> >\n> > Looks like you get\n> >\n> > if (nParams < 0 || nParams > PQ_QUERY_PARAM_MAX_LIMIT)\n> > {\n> > libpq_append_conn_error(conn, \"number of parameters must be between 0 and %d\",\n> > PQ_QUERY_PARAM_MAX_LIMIT);\n> > return 0;\n> > }\n> >\n> > which seems clear enough.\n> >\n> > I think the concern here is that somebody who's not aware that a limit\n> > exists might write an application that thinks it can send lots of\n> > parameters, and then have it fall over in production. Now, I've got\n> > doubts that an entry in the limits.sgml table will do much to prevent\n> > that scenario. 
But perhaps offering the advice to use an array parameter\n> > will be worthwhile even after-the-fact.\n\nYes, that's what happens :)\n\nI hit that error after increasing the number of VALUES(),() a loader\nused in a prepared statement (and that was with our non-wide tables).\n\n+1 to document the limit along with the other limits.\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 23 Nov 2022 14:33:27 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Document parameter count limit" }, { "msg_contents": "On Wed, Nov 23, 2022 at 02:33:27PM -0600, Justin Pryzby wrote:\n> On Wed, Nov 23, 2022 at 12:35:59PM -0700, David G. Johnston wrote:\n> > On Wed, Nov 23, 2022 at 11:47 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > \n> > > Bruce Momjian <bruce@momjian.us> writes:\n> > > > Does this come up enough to document it? I assume the error message the\n> > > > user receives is clear.\n> > >\n> > > Looks like you get\n> > >\n> > > if (nParams < 0 || nParams > PQ_QUERY_PARAM_MAX_LIMIT)\n> > > {\n> > > libpq_append_conn_error(conn, \"number of parameters must be between 0 and %d\",\n> > > PQ_QUERY_PARAM_MAX_LIMIT);\n> > > return 0;\n> > > }\n> > >\n> > > which seems clear enough.\n> > >\n> > > I think the concern here is that somebody who's not aware that a limit\n> > > exists might write an application that thinks it can send lots of\n> > > parameters, and then have it fall over in production. Now, I've got\n> > > doubts that an entry in the limits.sgml table will do much to prevent\n> > > that scenario. 
But perhaps offering the advice to use an array parameter\n> > > will be worthwhile even after-the-fact.\n> \n> Yes, that's what happens :)\n> \n> I hit that error after increasing the number of VALUES(),() a loader\n> used in a prepared statement (and that was with our non-wide tables).\n> \n> +1 to document the limit along with the other limits.\n\nHere is a patch to add this.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.", "msg_date": "Thu, 26 Oct 2023 18:51:13 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Document parameter count limit" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> Here is a patch to add this.\n\n\"function arguments\" seems like a completely wrong description\n(and if we do want to document that limit, it's 100).\n\n\"query parameters\" would work, perhaps.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 26 Oct 2023 18:56:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Document parameter count limit" }, { "msg_contents": "On Thu, Oct 26, 2023 at 3:51 PM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Wed, Nov 23, 2022 at 02:33:27PM -0600, Justin Pryzby wrote:\n> > On Wed, Nov 23, 2022 at 12:35:59PM -0700, David G. Johnston wrote:\n> > > On Wed, Nov 23, 2022 at 11:47 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >\n> > > > Bruce Momjian <bruce@momjian.us> writes:\n> > > > > Does this come up enough to document it? 
I assume the error\n> message the\n> > > > > user receives is clear.\n> > > >\n> > > > Looks like you get\n> > > >\n> > > >     if (nParams < 0 || nParams > PQ_QUERY_PARAM_MAX_LIMIT)\n> > > >     {\n> > > >         libpq_append_conn_error(conn, \"number of parameters must be\n> between 0 and %d\",\n> > > >                            PQ_QUERY_PARAM_MAX_LIMIT);\n> > > >         return 0;\n> > > >     }\n> > > >\n> > > > which seems clear enough.\n> > > >\n> > > > I think the concern here is that somebody who's not aware that a\n> limit\n> > > > exists might write an application that thinks it can send lots of\n> > > > parameters, and then have it fall over in production. Now, I've got\n> > > > doubts that an entry in the limits.sgml table will do much to prevent\n> > > > that scenario. But perhaps offering the advice to use an array\n> parameter\n> > > > will be worthwhile even after-the-fact.\n> >\n> > Yes, that's what happens :)\n> >\n> > I hit that error after increasing the number of VALUES(),() a loader\n> > used in a prepared statement (and that was with our non-wide tables).\n> >\n> > +1 to document the limit along with the other limits.\n>\n> Here is a patch to add this.\n>\n>\nWe aren't talking about \"function arguments\" though...is there something\nwrong with the term \"parameters per query\"?\n\nI suggest we take this opportunity to decide how to handle values > 999 in\nterms of separators. The existing page is inconsistent. I would prefer\nadding the needed commas.\n\nDavid J.", "msg_date": "Thu, 26 Oct 2023 15:56:53 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Document parameter count limit" }, { "msg_contents": "On Thu, Oct 26, 2023 at 06:56:40PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > Here is a patch to add this.\n> \n> \"function arguments\" seems like a completely wrong description\n> (and if we do want to document that limit, it's 100).\n> \n> \"query parameters\" would work, perhaps.\n\nAh, I was confused.
I documented both in the attached patch.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.", "msg_date": "Thu, 26 Oct 2023 19:01:38 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Document parameter count limit" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> Ah, I was confused. I documented both in the attached patch.\n\nThe function one should have the same annotation as some others:\n\n <entry>can be increased by recompiling <productname>PostgreSQL</productname></entry>\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 26 Oct 2023 19:08:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Document parameter count limit" }, { "msg_contents": "On Thu, Oct 26, 2023 at 4:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Bruce Momjian <bruce@momjian.us> writes:\n> > Ah, I was confused. I documented both in the attached patch.\n>\n> The function one should have the same annotation as some others:\n>\n> <entry>can be increased by recompiling\n> <productname>PostgreSQL</productname></entry>\n>\n>\nI'd like to see a comment on the parameter count one too.\n\n\"Alternatives include using a temporary table or passing them in as a\nsingle array parameter.\"\n\nAbout the only time this is likely to come up is with many parameters of\nthe same type and meaning, pointing that out with the array option seems\nexcessively wordy for the comment area.\n\nNeeds a comma: 65,535\n\nKinda think both should be tacked on to the end of the table. I'd also put\nfunction arguments first so it appears under the compile time partition\nkeys limit.\n\nDavid J.\n\nOn Thu, Oct 26, 2023 at 4:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:Bruce Momjian <bruce@momjian.us> writes:\n> Ah, I was confused.  
I documented both in the attached patch.\n\nThe function one should have the same annotation as some others:\n\n     <entry>can be increased by recompiling <productname>PostgreSQL</productname></entry>I'd like to see a comment on the parameter count one too.\"Alternatives include using a temporary table or passing them in as a single array parameter.\"About the only time this is likely to come up is with many parameters of the same type and meaning, pointing that out with the array option seems excessively wordy for the comment area.Needs a comma: 65,535Kinda think both should be tacked on to the end of the table.  I'd also put function arguments first so it appears under the compile time partition keys limit.David J.", "msg_date": "Thu, 26 Oct 2023 16:13:07 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Document parameter count limit" }, { "msg_contents": "On Thu, Oct 26, 2023 at 4:13 PM David G. Johnston <\ndavid.g.johnston@gmail.com> wrote:\n\n> On Thu, Oct 26, 2023 at 4:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n>> Bruce Momjian <bruce@momjian.us> writes:\n>> > Ah, I was confused. I documented both in the attached patch.\n>>\n>> The function one should have the same annotation as some others:\n>>\n>> <entry>can be increased by recompiling\n>> <productname>PostgreSQL</productname></entry>\n>>\n>>\n> I'd like to see a comment on the parameter count one too.\n>\n> \"Alternatives include using a temporary table or passing them in as a\n> single array parameter.\"\n>\n> About the only time this is likely to come up is with many parameters of\n> the same type and meaning, pointing that out with the array option seems\n> excessively wordy for the comment area.\n>\n> Needs a comma: 65,535\n>\n> Kinda think both should be tacked on to the end of the table. 
I'd also\n> put function arguments first so it appears under the compile time partition\n> keys limit.\n>\n>\nCleanups for consistency:\n\nMove \"identifier length\" after \"partition keys\" (before the new \"function\narguments\")\n\nAdd commas to: 1,600 and 1,664 and 8,192\n\nDavid J.", "msg_date": "Thu, 26 Oct 2023 16:17:19 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Document parameter count limit" }, { "msg_contents": "\"David G.
Johnston\" <david.g.johnston@gmail.com> writes:\n> Cleanups for consistency:\n\n> Move \"identifier length\" after \"partition keys\" (before the new \"function\n> arguments\")\n\nYeah, the existing ordering of this table seems quite random.\nThat would help some, by separating items having to do with\ndatabase/table size from SQL-query-related limits.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 26 Oct 2023 19:21:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Document parameter count limit" }, { "msg_contents": "On Thu, Oct 26, 2023 at 04:17:19PM -0700, David G. Johnston wrote:\n> On Thu, Oct 26, 2023 at 4:13 PM David G. Johnston <david.g.johnston@gmail.com>\n> wrote:\n> \n> On Thu, Oct 26, 2023 at 4:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Bruce Momjian <bruce@momjian.us> writes:\n> > Ah, I was confused.  I documented both in the attached patch.\n> \n> The function one should have the same annotation as some others:\n> \n>      <entry>can be increased by recompiling <productname>PostgreSQL</\n> productname></entry>\n> \n> \n> \n> I'd like to see a comment on the parameter count one too.\n> \n> \"Alternatives include using a temporary table or passing them in as a\n> single array parameter.\"\n> \n> About the only time this is likely to come up is with many parameters of\n> the same type and meaning, pointing that out with the array option seems\n> excessively wordy for the comment area.\n> \n> Needs a comma: 65,535\n> \n> Kinda think both should be tacked on to the end of the table.  
I'd also put\n> function arguments first so it appears under the compile time partition\n> keys limit.\n> \n> \n> \n> Cleanups for consistency:\n> \n> Move \"identifier length\" after \"partition keys\" (before the new \"function\n> arguments\")\n> \n> Add commas to: 1,600 and 1,664 and 8,192\n\nOkay, I made all the suggested changes in ordering and adding commas,\nplus the text about the ability to change function arguments via\nrecompiling.\n\nI didn't put commas in 8192 since that is a power-of-two and kind of a\nmagic number used in many places.\n\nI am not sure where to put text about using arrays to handle many\nfunction arguments. I just don't see it fitting in the table, or the\nparagraph below the table.\n\nPatch attached.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.", "msg_date": "Thu, 26 Oct 2023 23:04:47 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Document parameter count limit" }, { "msg_contents": "On Thu, Oct 26, 2023 at 11:04:47PM -0400, Bruce Momjian wrote:\n> On Thu, Oct 26, 2023 at 04:17:19PM -0700, David G. Johnston wrote:\n> > On Thu, Oct 26, 2023 at 4:13 PM David G. Johnston <david.g.johnston@gmail.com>\n> > wrote:\n> > \n> > On Thu, Oct 26, 2023 at 4:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > \n> > Bruce Momjian <bruce@momjian.us> writes:\n> > > Ah, I was confused.  
I documented both in the attached patch.\n> > \n> > The function one should have the same annotation as some others:\n> > \n> >      <entry>can be increased by recompiling <productname>PostgreSQL</\n> > productname></entry>\n> > \n> > \n> > \n> > I'd like to see a comment on the parameter count one too.\n> > \n> > \"Alternatives include using a temporary table or passing them in as a\n> > single array parameter.\"\n> > \n> > About the only time this is likely to come up is with many parameters of\n> > the same type and meaning, pointing that out with the array option seems\n> > excessively wordy for the comment area.\n> > \n> > Needs a comma: 65,535\n> > \n> > Kinda think both should be tacked on to the end of the table.  I'd also put\n> > function arguments first so it appears under the compile time partition\n> > keys limit.\n> > \n> > \n> > \n> > Cleanups for consistency:\n> > \n> > Move \"identifier length\" after \"partition keys\" (before the new \"function\n> > arguments\")\n> > \n> > Add commas to: 1,600 and 1,664 and 8,192\n> \n> Okay, I made all the suggested changes in ordering and adding commas,\n> plus the text about the ability to change function arguments via\n> recompiling.\n> \n> I didn't put commas in 8192 since that is a power-of-two and kind of a\n> magic number used in many places.\n> \n> I am not sure where to put text about using arrays to handle many\n> function arguments. I just don't see it fitting in the table, or the\n> paragraph below the table.\n\nPatch applied back to Postgres 12.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Tue, 31 Oct 2023 09:26:01 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Document parameter count limit" } ]
[ { "msg_contents": "I think I had brought this up a while ago, but I forget what the opinion was\non the matter.\n\nPostGIS has a number of extensions that rely on it. For the extensions that\nare packaged with PostGIS, we force them all into the same schema except for\nthe postgis_topology and postgis_tiger_geocoder extensions which are already\ninstalled in dedicated schemas.\n\nThis makes it impossible for postgis_topology and postgis_tiger_geocoder to\nschema qualify their use of postgis. Other extensions like pgRouting,\nh3-pg, mobilitydb have similar issues.\n\nMy proposal is this. If you think it's a good enough idea I can work up a\npatch for this.\n\nExtensions currently are allowed to specify a requires in the control file.\n\nI propose to use this information, to allow replacement of phrases\n\n@extschema_nameofextension@ as a variable, where nameofextension has to be\none of the extensions listed in the requires.\n\nThe extension plumbing will then use this information to look up the schema\nthat the current required extensions are installed in, and replace the\nvariables with the schema of where the dependent extension is installed.\n\nDoes anyone see any issue with this idea. \n\nThanks,\nRegina\n\n\n\n", "msg_date": "Wed, 9 Nov 2022 21:43:43 -0500", "msg_from": "\"Regina Obe\" <lr@pcorp.us>", "msg_from_op": true, "msg_subject": "Ability to reference other extensions by schema in extension scripts" }, { "msg_contents": "\"Regina Obe\" <lr@pcorp.us> writes:\n> My proposal is this. If you think it's a good enough idea I can work up a\n> patch for this.\n> Extensions currently are allowed to specify a requires in the control file.\n> I propose to use this information, to allow replacement of phrases\n> @extschema_nameofextension@ as a variable, where nameofextension has to be\n> one of the extensions listed in the requires.\n\nI have a distinct sense of deja vu here. 
I think this idea, or something\nisomorphic to it, was previously discussed with some other syntax details.\nI'm too lazy to go searching the archives right now, but I suggest that\nyou try to find that discussion and see if the discussed syntax seems\nbetter or worse than what you mention.\n\nI think it might've been along the line of @extschema:nameofextension@,\nwhich seems like it might be superior because colon isn't a valid\nidentifier character so there's less risk of ambiguity.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 09 Nov 2022 22:49:30 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Ability to reference other extensions by schema in extension\n scripts" }, { "msg_contents": "> \"Regina Obe\" <lr@pcorp.us> writes:\n> > My proposal is this. If you think it's a good enough idea I can work\n> > up a patch for this.\n> > Extensions currently are allowed to specify a requires in the control\nfile.\n> > I propose to use this information, to allow replacement of phrases\n> > @extschema_nameofextension@ as a variable, where nameofextension has\n> > to be one of the extensions listed in the requires.\n> \n> I have a distinct sense of deja vu here. 
I think this idea, or something\n> isomorphic to it, was previously discussed with some other syntax details.\n> I'm too lazy to go searching the archives right now, but I suggest that\nyou try to\n> find that discussion and see if the discussed syntax seems better or worse\nthan\n> what you mention.\n> \n> I think it might've been along the line of @extschema:nameofextension@,\n> which seems like it might be superior because colon isn't a valid\nidentifier\n> character so there's less risk of ambiguity.\n> \n> \t\t\tregards, tom lane\nI found the old discussion I recalled having and Stephen had suggested using\n\n@extschema{'postgis'}@\n\nOn this thread --\nhttps://www.postgresql.org/message-id/20160425232251.GR10850@tamriel.snowman\n.net\n\nIs that the one you remember?\n\nThanks,\nRegina\n\n\n\n", "msg_date": "Wed, 9 Nov 2022 23:44:07 -0500", "msg_from": "\"Regina Obe\" <lr@pcorp.us>", "msg_from_op": true, "msg_subject": "RE: Ability to reference other extensions by schema in extension\n scripts" }, { "msg_contents": "\"Regina Obe\" <lr@pcorp.us> writes:\n>> I have a distinct sense of deja vu here. I think this idea, or something\n>> isomorphic to it, was previously discussed with some other syntax details.\n\n> I found the old discussion I recalled having and Stephen had suggested using\n> @extschema{'postgis'}@\n> On this thread --\n> https://www.postgresql.org/message-id/20160425232251.GR10850@tamriel.snowman.net\n> Is that the one you remember?\n\nHmmm ... no, ISTM it was considerably more recent than that.\n[ ...digs... 
] Here we go, it was in the discussion around\nconverting contrib SQL functions to new-style:\n\nhttps://www.postgresql.org/message-id/flat/3395418.1618352794%40sss.pgh.pa.us\n\nThere are a few different ideas bandied around in there.\nPersonally I still like the @extschema:extensionname@\noption the best, though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 10 Nov 2022 12:37:38 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Ability to reference other extensions by schema in extension\n scripts" }, { "msg_contents": "> \"Regina Obe\" <lr@pcorp.us> writes:\n> >> I have a distinct sense of deja vu here. I think this idea, or\n> >> something isomorphic to it, was previously discussed with some other\n> syntax details.\n> \n> > I found the old discussion I recalled having and Stephen had suggested\n> > using @extschema{'postgis'}@ On this thread --\n> > https://www.postgresql.org/message-\n> id/20160425232251.GR10850@tamriel.s\n> > nowman.net\n> > Is that the one you remember?\n> \n> Hmmm ... no, ISTM it was considerably more recent than that.\n> [ ...digs... ] Here we go, it was in the discussion around converting\ncontrib SQL\n> functions to new-style:\n> \n> https://www.postgresql.org/message-\n> id/flat/3395418.1618352794%40sss.pgh.pa.us\n> \n> There are a few different ideas bandied around in there.\n> Personally I still like the @extschema:extensionname@ option the best,\n> though.\n> \n> \t\t\tregards, tom lane\n\nI had initially thought of a syntax that could always be used even outside\nof extension install as some mentioned. Like the PG_EXTENSION_SCHEMA(cube)\nexample. Main benefit I see with that is that even if an extension is moved,\nall the dependent extensions that reference it would still work fine.\n\nI had dismissed that because it seemed too invasive. Seems like it would\nrequire changes to the parser and possibly add query performance overhead to\nresolve the schema. 
Not to mention the added testing required to do no harm.\n\nThe other reason I dismissed it is because at least for PostGIS it would be\nharder to conditionally replace. The issue with\nPG_EXTENSION_SCHEMA(cube) is we can't support that in lower PG versions so\nwe'd need to strip for lower versions, and that would introduce the\npossibility of missing\nPG_EXTENSION_SCHEMA(cube) vs. PG_EXTENSION_SCHEMA( cube ), not a huge deal\nthough, but not quite as easy and precise as just stripping\n@extschema:extensionname@. References.\n\nWith the @extschema:extensionname@, it doesn't solve all problems, but the\nkey ones we care about like breakage of functions used in indexes,\nmaterialized views, and added security and is a little easier to strip out.\n\nI'll work on producing a patch.\n\nThanks,\nRegina\n\n\n\n", "msg_date": "Thu, 10 Nov 2022 13:42:28 -0500", "msg_from": "\"Regina Obe\" <lr@pcorp.us>", "msg_from_op": true, "msg_subject": "RE: Ability to reference other extensions by schema in extension\n scripts" }, { "msg_contents": "> \n> \"Regina Obe\" <lr@pcorp.us> writes:\n> >> I have a distinct sense of deja vu here. I think this idea, or\n> >> something isomorphic to it, was previously discussed with some other\n> syntax details.\n> \n> > I found the old discussion I recalled having and Stephen had suggested\n> > using @extschema{'postgis'}@ On this thread --\n> > https://www.postgresql.org/message-\n> id/20160425232251.GR10850@tamriel.s\n> > nowman.net\n> > Is that the one you remember?\n> \n> Hmmm ... no, ISTM it was considerably more recent than that.\n> [ ...digs... 
] Here we go, it was in the discussion around converting\ncontrib SQL\n> functions to new-style:\n> \n> https://www.postgresql.org/message-\n> id/flat/3395418.1618352794%40sss.pgh.pa.us\n> \n> There are a few different ideas bandied around in there.\n> Personally I still like the @extschema:extensionname@ option the best,\n> though.\n> \n> \t\t\tregards, tom lane\n\nHere is first version of my patch using the @extschema:extensionname@ syntax\nyou proposed.\n\nThis patch includes:\n1) Changes to replace references of @extschema:extensionname@ with the\nschema of the required extension\n2) Documentation for the feature\n3) Tests for the feature.\n\nThere is one issue I thought about that is not addressed by this.\n\nIf an extension is required by another extension and that required extension\nschema is referenced in the extension scripts using the\n@extschema:extensionname@ syntax, then ideally we should prevent the\nrequired extension from being relocatable. This would prevent a user from\naccidentally moving the required extension, thus breaking the dependent\nextensions.\n\nI didn't add that feature cause I wasn't sure if it was overstepping the\nbounds of what should be done, or if we leave it up to the user to just know\nbetter.\n\nThanks,\nRegina", "msg_date": "Tue, 22 Nov 2022 23:24:19 -0500", "msg_from": "\"Regina Obe\" <lr@pcorp.us>", "msg_from_op": true, "msg_subject": "RE: Ability to reference other extensions by schema in extension\n scripts" }, { "msg_contents": "On Tue, Nov 22, 2022 at 11:24:19PM -0500, Regina Obe wrote:\n\n> Here is first version of my patch using the @extschema:extensionname@ syntax\n> you proposed.\n> \n> This patch includes:\n> 1) Changes to replace references of @extschema:extensionname@ with the\n> schema of the required extension\n> 2) Documentation for the feature\n> 3) Tests for the feature.\n> \n> There is one issue I thought about that is not addressed by this.\n> \n> If an extension is required by another extension and that 
required extension\n> schema is referenced in the extension scripts using the\n> @extschema:extensionname@ syntax, then ideally we should prevent the\n> required extension from being relocatable. This would prevent a user from\n> accidentally moving the required extension, thus breaking the dependent\n> extensions.\n> \n> I didn't add that feature cause I wasn't sure if it was overstepping the\n> bounds of what should be done, or if we leave it up to the user to just know\n> better.\n\nAn alternative would be to forbid using @extschema:extensionname@ to\nreference relocatable extensions. DBA can toggle relocatability of an\nextension to allow it to be referenced.\n\n--strk;\n\n\n", "msg_date": "Thu, 15 Dec 2022 09:52:34 +0100", "msg_from": "Sandro Santilli <strk@kbt.io>", "msg_from_op": false, "msg_subject": "Re: Ability to reference other extensions by schema in extension\n scripts" }, { "msg_contents": "> On Tue, Nov 22, 2022 at 11:24:19PM -0500, Regina Obe wrote:\n> \n> > Here is first version of my patch using the @extschema:extensionname@\n> > syntax you proposed.\n> >\n> > This patch includes:\n> > 1) Changes to replace references of @extschema:extensionname@ with the\n> > schema of the required extension\n> > 2) Documentation for the feature\n> > 3) Tests for the feature.\n> >\n> > There is one issue I thought about that is not addressed by this.\n> >\n> > If an extension is required by another extension and that required\n> > extension schema is referenced in the extension scripts using the\n> > @extschema:extensionname@ syntax, then ideally we should prevent the\n> > required extension from being relocatable. 
This would prevent a user\n> > from accidentally moving the required extension, thus breaking the\n> > dependent extensions.\n> >\n> > I didn't add that feature cause I wasn't sure if it was overstepping\n> > the bounds of what should be done, or if we leave it up to the user to\n> > just know better.\n> \n> An alternative would be to forbid using @extschema:extensionname@ to\n> reference relocatable extensions. DBA can toggle relocatability of an\nextension\n> to allow it to be referenced.\n> \n> --strk;\nThat would be hard to do in a DbaaS setup and not many users know they can\nfiddle with extension control files.\nPlus those would get overwritten with upgrades.\n\nIn my case for example I have postgis_tiger_geocoder that relies on both\npostgis and fuzzystrmatch.\nI'd rather not have to explain to users how to fiddle with the\nfuzzystrmatch.control file to make it not relocatable.\n\nBut I don't think anyone would mind if it's forced after install because\nit's a rare thing for people to be moving extensions to different schemas\nafter install. \n\nThanks,\nRegina\n\n\n\n", "msg_date": "Thu, 15 Dec 2022 08:04:22 -0500", "msg_from": "\"Regina Obe\" <lr@pcorp.us>", "msg_from_op": true, "msg_subject": "RE: Ability to reference other extensions by schema in extension\n scripts" }, { "msg_contents": "On Thu, Dec 15, 2022 at 08:04:22AM -0500, Regina Obe wrote:\n> > On Tue, Nov 22, 2022 at 11:24:19PM -0500, Regina Obe wrote:\n> > \n> > > If an extension is required by another extension and that required\n> > > extension schema is referenced in the extension scripts using the\n> > > @extschema:extensionname@ syntax, then ideally we should prevent the\n> > > required extension from being relocatable. 
This would prevent a user\n> > > from accidentally moving the required extension, thus breaking the\n> > > dependent extensions.\n> > >\n> > > I didn't add that feature cause I wasn't sure if it was overstepping\n> > > the bounds of what should be done, or if we leave it up to the user to\n> > > just know better.\n> > \n> > An alternative would be to forbid using @extschema:extensionname@ to\n> > reference relocatable extensions. DBA can toggle relocatability of an\n> > extension to allow it to be referenced.\n> \n> That would be hard to do in a DbaaS setup and not many users know they can\n> fiddle with extension control files.\n> Plus those would get overwritten with upgrades.\n\nWouldn't this also be the case if you override relocatability ?\nCase:\n\n - Install fuzzystrmatch, marked as relocatable\n - Install ext2 depending on the former, which is then marked\n non-relocatable\n - Upgrade database -> fuzzystrmatch becomes relocatable again\n - Change fuzzystrmatch schema BREAKING ext2\n\nAllowing to relocate a dependency of other extensions using the\n@extschema@ syntax is very dangerous.\n\nI've seen that PostgreSQL itself doesn't even bother to replace\n@extschema@ IF the extension using it doesn't mark itself as\nnon-relocatable. 
For consistency this patch should basically refuse\nto expand @extschema:fuzzystrmatch@ if \"fuzzystrmatch\" extension\nis relocatable.\n\nChanging the current behaviour of PostgreSQL could be proposed but\nI don't think it's to be done in this thread ?\n\nSo my suggestion is to start consistent (do not expand if referenced\nextension is relocatable).\n\n\n--strk;\n\n\n", "msg_date": "Mon, 16 Jan 2023 10:35:43 +0100", "msg_from": "Sandro Santilli <strk@kbt.io>", "msg_from_op": false, "msg_subject": "Re: Ability to reference other extensions by schema in extension\n scripts" }, { "msg_contents": "> \n> On Thu, Dec 15, 2022 at 08:04:22AM -0500, Regina Obe wrote:\n> > > On Tue, Nov 22, 2022 at 11:24:19PM -0500, Regina Obe wrote:\n> > >\n> > > > If an extension is required by another extension and that required\n> > > > extension schema is referenced in the extension scripts using the\n> > > > @extschema:extensionname@ syntax, then ideally we should prevent\n> > > > the required extension from being relocatable. This would prevent\n> > > > a user from accidentally moving the required extension, thus\n> > > > breaking the dependent extensions.\n> > > >\n> > > > I didn't add that feature cause I wasn't sure if it was\n> > > > overstepping the bounds of what should be done, or if we leave it\n> > > > up to the user to just know better.\n> > >\n> > > An alternative would be to forbid using @extschema:extensionname@ to\n> > > reference relocatable extensions. 
DBA can toggle relocatability of\n> > > an extension to allow it to be referenced.\n> >\n> > That would be hard to do in a DbaaS setup and not many users know they\n> > can fiddle with extension control files.\n> > Plus those would get overwritten with upgrades.\n> \n> Wouldn't this also be the case if you override relocatability ?\n> Case:\n> \n> - Install fuzzystrmatch, marked as relocatable\n> - Install ext2 depending on the former, which is then marked\n> non-relocatable\n> - Upgrade database -> fuzzystrmatch becomes relocatable again\n> - Change fuzzystrmatch schema BREAKING ext2\n> \n\nSomewhat. It would be an issue if someone does\n\nALTER EXTENSION fuzzystrmatch UPDATE;\n\nAnd \n\nALTER EXTENSION fuzzystrmatch SET SCHEMA a_different_schema;\n\nOtherwise the relocatability of an already installed extension wouldn't\nchange even during upgrade. I haven't checked pg_upgrade, but I suspect it\nwouldn't change there either.\n\nIt's my understanding that once an extension is installed, its relocatable\nstatus is recorded in the pg_extension table. So it doesn't matter at that\npoint what the control file says. However if someone does update the\nextension, then yes it would look at the control file and apply its\nrelocatable setting again.\n\nI just tested this fiddling with postgis extension and moving it and then\nupgrading.\n\nUPDATE pg_extension SET extrelocatable = true where extname = 'postgis';\nALTER EXTENSION postgis SET schema postgis;\n\nALTER EXTENSION postgis UPDATE;\ne.g. 
if the above is already at latest version, get notice\nNOTICE: version \"3.3.2\" of extension \"postgis\" is already installed\n(and the extension is still relocatable)\n\n-- if the extension can be upgraded\nALTER EXTENSION postgis UPDATE;\n\n-- no longer relocatable (because postgis control file has relocatable =\nfalse)\n\nBut honestly I don't expect this to be a huge issue, more of just an extra\nsafety block.\nNot a bullet-proof safety block though.\n\n> Allowing to relocate a dependency of other extensions using the\n> @extschema@ syntax is very dangerous.\n> \n> I've seen that PostgreSQL itself doesn't even bother to replace\n@extschema@\n> IF the extension using it doesn't mark itself as non-relocatable. For\nconsistency\n> this patch should basically refuse to expand @extschema:fuzzystrmatch@ if\n> \"fuzzystrmatch\" extension is relocatable.\n> \n> Changing the current behaviour of PostgreSQL could be proposed but I don't\n> think it's to be done in this thread ?\n> \n> So my suggestion is to start consistent (do not expand if referenced\nextension\n> is relocatable).\n> \n> \n> --strk;\n\nI don't agree. That would make this patch not of much use.\nFor example let's revisit my postgis_tiger_geocoder which is a good bulk of\nthe reason why I want this.\n\nI use indexes that use postgis_tiger_geocoder functions that call\nfuzzystrmatch which causes pg_restore to break on reload and other issues,\nbecause I'm not explicitly referencing the function schema. With your\nproposal now I'd have to demand the PostgreSQL project to make fuzzystrmatch\nnot relocatable so I can use this feature. It is so rare for people to go\naround moving the locations of their extensions once set, that I honestly\ndon't think \nthe ALTER EXTENSION .. UPDATE hole is a huge deal.\n\nI'd be more annoyed having to beg an extension provider to mark their\nextension as not relocatable so that I could explicitly reference the\nlocation of their extensions.\n\nAnd even then - think about it. 
I ask extension provider to make their\nextension schema relocatable. They do, but some people are using a version\nbefore they marked it as schema relocatable. So now if I change my code,\nusers can't install my extension, cause they are using a version before it\nwas schema relocatable and I'm using the new syntax.\n\nWhat would be more bullet-proof is having an extra column in pg_extension or\nadding an extra array element to pg_extension.extcondition[] that allows you\nto say \"Hey, don't allow this to be relocatable cause other extensions\ndepend on it that have explicitly referenced the schema.\"\n\nThanks,\nRegina\n\n\n\n\n\n", "msg_date": "Mon, 16 Jan 2023 23:57:30 -0500", "msg_from": "\"Regina Obe\" <lr@pcorp.us>", "msg_from_op": true, "msg_subject": "RE: Ability to reference other extensions by schema in extension\n scripts" }, { "msg_contents": "On Mon, Jan 16, 2023 at 11:57:30PM -0500, Regina Obe wrote:\n\n> What would be more bullet-proof is having an extra column in pg_extension or\n> adding an extra array element to pg_extension.extcondition[] that allows you\n> to say \"Hey, don't allow this to be relocatable cause other extensions\n> depend on it that have explicitly referenced the schema.\"\n\nI've given this some more thought and I think a good \ncompromise could be to add the safety net in ALTER EXTENSION SET SCHEMA\nso that it does not only check \"extrelocatable\" but also the presence\nof any extension effectively depending on it, in which case the\noperation could be prevented with a more useful message than\n\"extension does not support SET SCHEMA\" (what is currently output).\n\nExample query to determine those cases:\n\n SELECT e.extname, array_agg(v.name)\n FROM pg_extension e, pg_available_extension_versions v\n WHERE e.extname = ANY( v.requires )\n AND e.extrelocatable\n AND v.installed group by e.extname;\n\n extname | array_agg\n ---------------+--------------------------\n fuzzystrmatch | {postgis_tiger_geocoder}\n\n--strk;\n\n\n", 
"msg_date": "Wed, 18 Jan 2023 22:42:23 +0100", "msg_from": "Sandro Santilli <strk@kbt.io>", "msg_from_op": false, "msg_subject": "Re: Ability to reference other extensions by schema in extension\n scripts" }, { "msg_contents": "> On Mon, Jan 16, 2023 at 11:57:30PM -0500, Regina Obe wrote:\n> \n> > What would be more bullet-proof is having an extra column in\n> > pg_extension or adding an extra array element to\n> > pg_extension.extcondition[] that allows you to say \"Hey, don't allow\n> > this to be relocatable cause other extensions depend on it that have\nexplicitly\n> referenced the schema.\"\n> \n> I've given this some more thought and I think a good compromise could be\nto\n> add the safety net in ALTER EXTENSION SET SCHEMA so that it does not only\n> check \"extrelocatable\" but also the presence of any extension effectively\n> depending on it, in which case the operation could be prevented with a\nmore\n> useful message than \"extension does not support SET SCHEMA\" (what is\n> currently output).\n> \n> Example query to determine those cases:\n> \n> SELECT e.extname, array_agg(v.name)\n> FROM pg_extension e, pg_available_extension_versions v\n> WHERE e.extname = ANY( v.requires )\n> AND e.extrelocatable\n> AND v.installed group by e.extname;\n> \n> extname | array_agg\n> ---------------+--------------------------\n> fuzzystrmatch | {postgis_tiger_geocoder}\n> \n> --strk;\n\nThe only problem with the above is then it bars an extension from being\nrelocated even if no extensions reference its schema. Note you wouldn't\nbe able to tell if an extension references a schema without analyzing the\ninstall script. 
Which is why I was thinking another property would be\nbetter, cause that could be checked during the install/upgrade of the\ndependent extensions.\n\nI personally would be okay with this and it is easier to code I think and\ndoesn't require structural changes, but not sure others would be as it's\ntaking away permissions they had before when it wasn't necessary to do so.\n\nThanks,\nRegina\n\n\n\n", "msg_date": "Wed, 18 Jan 2023 17:04:19 -0500", "msg_from": "\"Regina Obe\" <lr@pcorp.us>", "msg_from_op": true, "msg_subject": "RE: Ability to reference other extensions by schema in extension\n scripts" }, { "msg_contents": "> > > Here is first version of my patch using the\n> > > @extschema:extensionname@ syntax you proposed.\n> > >\n> > > This patch includes:\n> > > 1) Changes to replace references of @extschema:extensionname@ with\n> > > the schema of the required extension\n> > > 2) Documentation for the feature\n> > > 3) Tests for the feature.\n> > >\n\nAttached is a revised version of the original patch. It is revised to\nprevent \n\nALTER EXTENSION .. SET SCHEMA if there is a dependent extension that \nreferences the extension in their scripts using @extschema:extensionname@\nIt also adds additional tests to verify the new feature.\n\nIn going thru the code base, I was tempted to add a new dependency type\ninstead of using the existing DEPENDENCY_AUTO. I think this would be\ncleaner, but I felt that was overstepping the area a bit, since it requires\nmaking changes to dependency.h and dependency.c\n\nMy main concern with using DEPENDENCY_AUTO is because it was designed for\ncases where an object can be dropped without need for CASCADE. In this\ncase, we don't want a dependent extension to be dropped if its required\nextension is dropped. 
However since there will already exist \na DEPENDENCY_NORMAL between the 2 extensions, I figure we are protected\nagainst that issue already.\n\nThe issue I ran into is there doesn't seem to be an easy way of checking if\na pg_depend record is already in place, so I ended up dropping it first with\ndeleteDependencyRecordsForSpecific so I wouldn't need to check and then\nre-adding it.\n\nThe reason for that is during CREATE EXTENSION it would need to create the\ndependency.\nIt would also need to do so with ALTER EXTENSION .. UPDATE, since extension\ncould later on add it in their upgrade scripts and so there end up being\ndupes after many ALTER EXTENSION UPDATE calls.\n\n\npg_depend's getAutoExtensionsOfObject seemed suited to that check, as is\ndone in \n\nalter.c ExecAlterObjectDependsStmt\n\t\t/* Avoid duplicates */\n\t\tcurrexts = getAutoExtensionsOfObject(address.classId,\n\t\naddress.objectId);\n\t\tif (!list_member_oid(currexts, refAddr.objectId))\n\t\t\trecordDependencyOn(&address, &refAddr,\nDEPENDENCY_AUTO_EXTENSION);\n\nbut it is hard-coded to only check DEPENDENCY_AUTO_EXTENSION\n\nWhy isn't there a variant of getAutoExtensionsOfObject that takes a DEPENDENCY type\nas an option so it would be more useful or is there functionality for that I\nmissed?\n\nThanks,\nRegina", "msg_date": "Mon, 6 Feb 2023 05:19:39 -0500", "msg_from": "\"Regina Obe\" <lr@pcorp.us>", "msg_from_op": true, "msg_subject": "RE: Ability to reference other extensions by schema in extension\n scripts" }, { "msg_contents": "On Mon, Feb 06, 2023 at 05:19:39AM -0500, Regina Obe wrote:\n> \n> Attached is a revised version of the original patch. It is revised to\n> prevent \n> \n> ALTER EXTENSION .. 
SET SCHEMA if there is a dependent extension that \n> references the extension in their scripts using @extschema:extensionname@\n> It also adds additional tests to verify the new feature.\n> \n> In going thru the code base, I was tempted to add a new dependency type\n> instead of using the existing DEPENDENCY_AUTO. I think this would be\n> cleaner, but I felt that was overstepping the area a bit, since it requires\n> making changes to dependency.h and dependency.c\n> \n> My main concern with using DEPENDENCY_AUTO is because it was designed for\n> cases where an object can be dropped without need for CASCADE. In this\n> case, we don't want a dependent extension to be dropped if its required\n> extension is dropped. However since there will already exist \n> a DEPENDENCY_NORMAL between the 2 extensions, I figure we are protected\n> against that issue already.\n\nI was thinking: how about using the \"refobjsubid\" to encode the\n\"level\" of dependency on an extension ? Right now \"refobjsubid\" is\nalways 0 when the referenced object is an extension.\nCould we consider subid=1 to mean the dependency is not only\non the extension but ALSO on its schema location ?\n\nAlso: should we really allow extensions to rely on other extension\nw/out fully-qualifying calls to their functions ? Or should it be\ndiscouraged and thus forbidden ? If we wanted to forbid it we then\nwould not need to encode any additional dependency but rather always\nforbid `ALTER EXTENSION .. 
SET SCHEMA` whenever the extension is\na dependency of any other extension.\n\nOn the code in the patch itself, I tried with this simple use case:\n\n - ext1, relocatable, exposes an ext1log(text) function\n\n - ext2, relocatable, exposes an ext2log(text) function\n calling @extschema:ext1@.ext1log()\n\nWhat is not good:\n\n\t- Drop of ext1 automatically cascades to drop of ext2 without even a notice:\n\n\t\ttest=# create extension ext2 cascade;\n\t\tNOTICE: installing required extension \"ext1\"\n\t\tCREATE EXTENSION\n\t\ttest=# drop extension ext1;\n\t\tDROP EXTENSION -- no WARNING, no NOTICE, ext2 is gone\n\nWhat is good:\n\n\t- ext1 cannot be relocated while ext2 is loaded:\n\n\t\ttest=# create extension ext2 cascade;\n\t\tNOTICE: installing required extension \"ext1\"\n\t\tCREATE EXTENSION\n\t\ttest=# alter extension ext1 set schema n1;\n\t\tERROR: Extension can not be relocated because dependent extension references it's location\n\t\ttest=# drop extension ext2;\n\t\tDROP EXTENSION\n\t\ttest=# alter extension ext1 set schema n1;\n\t\tALTER EXTENSION\n\n--strk;\n\n Libre GIS consultant/developer\n https://strk.kbt.io/services.html\n\n\n", "msg_date": "Thu, 23 Feb 2023 19:39:06 +0100", "msg_from": "Sandro Santilli <strk@kbt.io>", "msg_from_op": false, "msg_subject": "Re: Ability to reference other extensions by schema in extension\n scripts" }, { "msg_contents": "> On Mon, Feb 06, 2023 at 05:19:39AM -0500, Regina Obe wrote:\n> >\n> > Attached is a revised version of the original patch. It is revised to\n> > prevent\n> >\n> > ALTER EXTENSION .. SET SCHEMA if there is a dependent extension that\n> > references the extension in their scripts using\n> > @extschema:extensionname@ It also adds additional tests to verify that\n> new feature.\n> >\n> > In going thru the code base, I was tempted to add a new dependency\n> > type instead of using the existing DEPENDENCY_AUTO. 
I think this\n> > would be cleaner, but I felt that was overstepping the area a bit,\n> > since it requires making changes to dependency.h and dependency.c\n> >\n> > My main concern with using DEPENDENCY_AUTO is because it was designed\n> > for cases where an object can be dropped without need for CASCADE. In\n> > this case, we don't want a dependent extension to be dropped if its\n> > required extension is dropped. However since there will already exist a\n> > DEPENDENCY_NORMAL between the 2 extensions, I figure we are protected\n> > against that issue already.\n> \n> I was thinking: how about using the \"refobjsubid\" to encode the \"level\" of\n> dependency on an extension ? Right now \"refobjsubid\" is always 0 when the\n> referenced object is an extension.\n> Could we consider subid=1 to mean the dependency is not only on the\n> extension but ALSO on its schema location ?\n> \n\nI like that idea. It's only ever been used for tables I think, but I don't\nsee why it wouldn't apply in this case as the concept is kinda the same.\nMy only concern is if other parts rely on this being 0.\n\nThe other question, should this just update the existing DEPENDENCY_NORMAL\ndependency or add a new DEPENDENCY_NORMAL between the extensions with\nsubid=1?\n\n\n> Also: should we really allow extensions to rely on other extension w/out\nfully-\n> qualifying calls to their functions ? Or should it be discouraged and thus\n> forbidden ? If we wanted to forbid it we then would not need to encode any\n> additional dependency but rather always forbid `ALTER EXTENSION .. 
SET\n> SCHEMA` whenever the extension is a dependency of any other extension.\n> \n> On the code in the patch itself, I tried with this simple use case:\n> \n> - ext1, relocatable, exposes an ext1log(text) function\n> \n> - ext2, relocatable, exposes an ext2log(text) function\n> calling @extschema:ext1@.ext1log()\n> \n\nThis would be an okay solution to me too if everyone is okay with it.\n\n\n> What is not good:\n> \n> \t- Drop of ext1 automatically cascades to drop of ext2 without even a\n> notice:\n> \n> \t\ttest=# create extension ext2 cascade;\n> \t\tNOTICE: installing required extension \"ext1\"\n> \t\tCREATE EXTENSION\n> \t\ttest=# drop extension ext1;\n> \t\tDROP EXTENSION -- no WARNING, no NOTICE, ext2 is gone\n> \n\nOops. I don't know why I thought the normal dependency would protect\nagainst this. I should have tested that. So DEPENDENCY_AUTO is not an\noption to use and creating a new type of dependency seems like overstepping\nthe bounds of this patch.\n\n\n> What is good:\n> \n> \t- ext1 cannot be relocated while ext2 is loaded:\n> \n> \t\ttest=# create extension ext2 cascade;\n> \t\tNOTICE: installing required extension \"ext1\"\n> \t\tCREATE EXTENSION\n> \t\ttest=# alter extension ext1 set schema n1;\n> \t\tERROR: Extension can not be relocated because dependent\n> extension references it's location\n> \t\ttest=# drop extension ext2;\n> \t\tDROP EXTENSION\n> \t\ttest=# alter extension ext1 set schema n1;\n> \t\tALTER EXTENSION\n> \n> --strk;\n> \n> Libre GIS consultant/developer\n> https://strk.kbt.io/services.html\n\nSo in conclusion we have 4 possible paths to go with this\n\n1) Just don't allow any extensions referenced by other extensions to be\nrelocatable.\nIt will show a message something like \n\"SET SCHEMA not allowed because other extensions depend on it\"\nGiven that if you don't specify relocatable in your .control file, the assumption\nis relocatable = false, this isn't too far off from standard protocol.\n\n2) Use objsubid=1 to denote 
that another extension explicitly references the\nschema of another extension so setting the schema of the other extension is not\nokay. So instead of introducing another dependency, we'd update the\nDEPENDENCY_NORMAL one between the two extensions with objsubid=1 instead of 0.\n\nThis has 2 approaches:\n\na) Update the existing DEPENDENCY_NORMAL between the two extensions setting\nthe objsubid=1\n\nor \nb) Create a new DEPENDENCY_NORMAL between the two extensions with objsubid=1\n\nI'm not sure if either has implications in backup/restore. I suspect b\nwould be safer since I suspect objsubid might be checked and this\ndependency only needs checking during SET SCHEMA time.\n\n3) Create a whole new DEPENDENCY type, perhaps calling it something like\nDEPENDENCY_EXTENSION_SCHEMA\n\n4) Just don't allow @extschema:<reqextension>@ syntax to be used unless the\n<reqextension> is marked as relocatable=false. This one I don't like\nbecause it doesn't solve my fundamental issue of \n\npostgis_tiger_geocoder relying on fuzzystrmatch, which is marked as\nrelocatable.\n\nThe main issue I was trying to solve is my extension references\nfuzzystrmatch functions in a function used for functional indexes, and this\nfails restore of table indexes because I can't schema qualify the\nfuzzystrmatch extension in the backing function. 
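To make that concrete, here is a hypothetical sketch -- the wrapper function,
table and index names are invented for illustration, and only soundex() is a
real fuzzystrmatch function -- of the kind of definition that breaks on
restore today, and what the proposed syntax would let an install script write
instead:

```sql
-- Hypothetical sketch: tiger.name_key, tiger.addr and the index are made
-- up; soundex() is the fuzzystrmatch function being wrapped.

-- Today the install script can only call fuzzystrmatch unqualified, so the
-- function body is resolved via search_path at pg_restore time:
CREATE FUNCTION tiger.name_key(name text) RETURNS text AS $$
  SELECT soundex(name)
$$ LANGUAGE sql IMMUTABLE;

-- A functional index built on it then fails on reload whenever search_path
-- doesn't include the schema fuzzystrmatch was installed into:
CREATE INDEX addr_name_key_idx ON tiger.addr (tiger.name_key(street));

-- With the proposed substitution, the script could pin the reference to
-- wherever fuzzystrmatch actually lives at install time:
CREATE FUNCTION tiger.name_key(name text) RETURNS text AS $$
  SELECT @extschema:fuzzystrmatch@.soundex(name)
$$ LANGUAGE sql IMMUTABLE;
```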
\n\n \nIf no one has any opinion, I'll go with option 1 which is the one that strk\nhad actually proposed before and seems least programmatically invasive, but\nperhaps more annoying user facing.\n\nMy preferred would be #2\n\nThanks,\nRegina\n\n\n\n\n", "msg_date": "Sat, 25 Feb 2023 15:40:24 -0500", "msg_from": "\"Regina Obe\" <lr@pcorp.us>", "msg_from_op": true, "msg_subject": "RE: Ability to reference other extensions by schema in extension\n scripts" }, { "msg_contents": "> So in conclusion we have 3 possible paths to go with this\n> \n> 1) Just don't allow any extensions referenced by other extensions to be\n> relocatable.\n> It will show a message something like\n> \"SET SCHEMA not allowed because other extensions depend on it\"\n> Given that if you don't specify relocatable in you .control file, the\nassume is\n> relocatable = false , this isn't too far off from standard protocol.\n> \n> 2) Use objsubid=1 to denote that another extension explicitly references\nthe\n> schema of another extension so setting schema of other extension is not\nokay.\n> So instead of introducing another dependency, we'd update the\n> DEPENDENCY_NORMAL one between the two schemas with objsubid=1\n> instead of 0.\n> \n> This has 2 approaches:\n> \n> a) Update the existing DEPENDENCY_NORMAL between the two extensions\n> setting the objsubid=1\n> \n> or\n> b) Create a new DEPEDENCY_NORMAL between the two extensions with\n> objsubid=1\n> \n> I'm not sure if either has implications in backup / restore . I suspect b\nwould\n> be safer since I suspect objsubid might be checked and this dependency\nonly\n> needs checking during SET SCHEMA time.\n> \n> 3) Create a whole new DEPENDENCY type, perhaps calling it something like\n> DEPENDENCY_EXTENSION_SCHEMA\n> \n> 4) Just don't allow @extschema:<reqextension>@ syntax to be used unless\n> the <reqextension> is marked as relocatable=false. 
This one I don't like\n> because it doesn't solve my fundamental issue of\n> \n> postgis_tiger_geocoder relying on fuzzystrmatch, which is marked as\n> relocatable.\n> \n> The main issue I was trying to solve is my extension references\nfuzzystrmatch\n> functions in a function used for functional indexes, and this fails\nrestore of\n> table indexes because I can't schema qualify the fuzzystrmatch extension\nin\n> the backing function.\n> \n> \n> If no one has any opinion, I'll go with option 1 which is the one that\nstrk had\n> actually proposed before and seems least programmatically invasive, but\n> perhaps more annoying user facing.\n> \n> My preferred would be #2\n> \n> Thanks,\n> Regina\n\nAttached is my revision 3 patch, which follows the proposed #1.\nDon't allow schema relocation of an extension if another extension requires\nit.", "msg_date": "Sun, 26 Feb 2023 01:39:24 -0500", "msg_from": "\"Regina Obe\" <lr@pcorp.us>", "msg_from_op": true, "msg_subject": "RE: Ability to reference other extensions by schema in extension\n scripts" }, { "msg_contents": "On Sat, Feb 25, 2023 at 03:40:24PM -0500, Regina Obe wrote:\n> > On Mon, Feb 06, 2023 at 05:19:39AM -0500, Regina Obe wrote:\n> > \n> > I was thinking: how about using the \"refobjsubid\" to encode the \"level\" of\n> > dependency on an extension ? Right now \"refobjsubid\" is always 0 when the\n> > referenced object is an extension.\n> > Could we consider subid=1 to mean the dependency is not only on the\n> > extension but ALSO on it's schema location ?\n> \n> I like that idea. It's only been ever used for tables I think, but I don't\n> see why it wouldn't apply in this case as the concept is kinda the same.\n> Only concern if other parts rely on this being 0.\n\nThis has to be verified, yes. 
But it feels to me like \"must be 0\" was\nmostly to _allow_ for future extensions like the proposed one.\n\n> The other question, should this just update the existing DEPENDENCY_NORMAL\n> extension or add a new DEPENDENCY_NORMAL between the extensions with\n> subid=1?\n\nI'd use the existing record.\n\n--strk;\n\n Libre GIS consultant/developer\n https://strk.kbt.io/services.html\n\n\n", "msg_date": "Tue, 28 Feb 2023 23:13:55 +0100", "msg_from": "Sandro Santilli <strk@kbt.io>", "msg_from_op": false, "msg_subject": "Re: Ability to reference other extensions by schema in extension\n scripts" }, { "msg_contents": "On Sun, Feb 26, 2023 at 01:39:24AM -0500, Regina Obe wrote:\n\n> > 1) Just don't allow any extensions referenced by other\n> > extensions to be relocatable.\n> \n> Attached is my revision 3 patch, which follows the proposed #1.\n> Don't allow schema relocation of an extension if another extension\n> requires it.\n\nI've built a version of PostgreSQL with this patch applied and I\nconfirm it works as expected.\n\nThe \"ext1\" is relocatable and creates a function ext1log():\n\n =# create extension ext1 schema n1;\n CREATE EXTENSION\n\nThe \"ext2\" is relocatable and creates a function ext2log() relying\non the ext1log() function from \"ext1\" extension, referencing\nit via @extschema:ext1@:\n\n =# create extension ext2 schema n2;\n CREATE EXTENSION\n =# select n2.ext2log('hello'); -- things work here\n ext1: ext2: hello\n\nBy creating \"ext2\", \"ext1\" becomes effectively non-relocatable:\n\n =# alter extension ext1 set schema n2;\n ERROR: cannot SET SCHEMA of extension ext1 because other extensions\n require it\n DETAIL: extension ext2 requires extension ext1\n\nDrop \"ext2\" makes \"ext1\" relocatable again:\n\n =# drop extension ext2;\n DROP EXTENSION\n =# alter extension ext1 set schema n2;\n ALTER EXTENSION\n\nUpon re-creating \"ext2\" the referenced ext1 schema will be\nthe correct one:\n\n =# create extension ext2 schema n1;\n CREATE EXTENSION\n =# 
select n1.ext2log('hello');\n ext1: ext2: hello\n \nThe code itself builds w/out warnings with:\n\n mkdir build\n cd build\n ../configure\n make 2> ERR # ERR is empty\n\nThe testsuite reports all successes:\n\n make check\n [...]\n =======================\n All 213 tests passed.\n =======================\n\nSince I didn't see the tests for extension in there, I've also\nexplicitly run that portion:\n\n make -C src/test/modules/test_extensions/ check\n [...]\n test test_extensions ... ok 32 ms\n test test_extdepend ... ok 12 ms\n [...]\n =====================\n All 2 tests passed.\n =====================\n\n\nAs mentioned already the downside of this patch is that it would\nnot be possibile to change the schema of an otherwise relocatable\nextension once other extension depend on it, but I can't think of\nany good reason to allow that, as it would mean dependent code\nwould need to always dynamically determine the install location\nof the objects in that extension, which sounds dangerous, security\nwise.\n\n--strk; \n\n Libre GIS consultant/developer\n https://strk.kbt.io/services.html\n\n\n", "msg_date": "Tue, 28 Feb 2023 23:46:08 +0100", "msg_from": "Sandro Santilli <strk@kbt.io>", "msg_from_op": false, "msg_subject": "Re: Ability to reference other extensions by schema in extension\n scripts" }, { "msg_contents": "> On Sun, Feb 26, 2023 at 01:39:24AM -0500, Regina Obe wrote:\n> \n> > > 1) Just don't allow any extensions referenced by other\n> > > extensions to be relocatable.\n> >\n> > Attached is my revision 3 patch, which follows the proposed #1.\n> > Don't allow schema relocation of an extension if another extension\n> > requires it.\n> \n> I've built a version of PostgreSQL with this patch applied and I confirm\nit\n> works as expected.\n> \n> The \"ext1\" is relocatable and creates a function ext1log():\n> \n> =# create extension ext1 schema n1;\n> CREATE EXTENSION\n> \n> The \"ext2\" is relocatable and creates a function ext2log() relying on the\n> 
ext1log() function from \"ext1\" extension, referencing it via\n> @extschema:ext1@:\n> \n> =# create extension ext2 schema n2;\n> CREATE EXTENSION\n> =# select n2.ext2log('hello'); -- things work here\n> ext1: ext2: hello\n> \n> By creating \"ext2\", \"ext1\" becomes effectively non-relocatable:\n> \n> =# alter extension ext1 set schema n2;\n> ERROR: cannot SET SCHEMA of extension ext1 because other extensions\n> require it\n> DETAIL: extension ext2 requires extension ext1\n> \n> Drop \"ext2\" makes \"ext1\" relocatable again:\n> \n> =# drop extension ext2;\n> DROP EXTENSION\n> =# alter extension ext1 set schema n2;\n> ALTER EXTENSION\n> \n> Upon re-creating \"ext2\" the referenced ext1 schema will be the correct\none:\n> \n> =# create extension ext2 schema n1;\n> CREATE EXTENSION\n> =# select n1.ext2log('hello');\n> ext1: ext2: hello\n> \n> The code itself builds w/out warnings with:\n> \n> mkdir build\n> cd build\n> ../configure\n> make 2> ERR # ERR is empty\n> \n> The testsuite reports all successes:\n> \n> make check\n> [...]\n> =======================\n> All 213 tests passed.\n> =======================\n> \n> Since I didn't see the tests for extension in there, I've also explicitly\nrun that\n> portion:\n> \n> make -C src/test/modules/test_extensions/ check\n> [...]\n> test test_extensions ... ok 32 ms\n> test test_extdepend ... 
ok 12 ms\n> [...]\n> =====================\n> All 2 tests passed.\n> =====================\n> \n> \n> As mentioned already the downside of this patch is that it would not be\n> possibile to change the schema of an otherwise relocatable extension once\n> other extension depend on it, but I can't think of any good reason to\nallow\n> that, as it would mean dependent code would need to always dynamically\n> determine the install location of the objects in that extension, which\nsounds\n> dangerous, security wise.\n> \n> --strk;\n> \n> Libre GIS consultant/developer\n> https://strk.kbt.io/services.html\n\nOops I had forgotten to submit the updated patch strk was testing against in\nmy fork.\nHe had asked me to clean up the warnings off list and the description.\n\nAttached is the revised.\nThanks strk for the patient help and guidance.\n\nThanks,\nRegina", "msg_date": "Tue, 28 Feb 2023 17:59:16 -0500", "msg_from": "\"Regina Obe\" <lr@pcorp.us>", "msg_from_op": true, "msg_subject": "RE: Ability to reference other extensions by schema in extension\n scripts" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, failed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: tested, passed\n\nI've applied the patch attached to message https://www.postgresql.org/message-id/000401d94bc8%2448dff700%24da9fe500%24%40pcorp.us (md5sum a7d45a32c054919d94cd4a26d7d07c20) onto current tip of the master branch being 128dd9f9eca0b633b51ffcd5b0f798fbc48ec4c0\r\n\r\nThe review written in https://www.postgresql.org/message-id/20230228224608.ak7br5shev4wic5a%40c19 all still applies.\r\n\r\nThe `make installcheck-world` test fails for me but the failures seem unrelated to the patch (many occurrences of \"+ERROR: function pg_input_error_info(unknown, unknown) does not exist\" in the regression.diff).\r\n\r\nDocumentation exists for the new feature\n\nThe new status of this patch is: Ready 
for Committer\n", "msg_date": "Tue, 28 Feb 2023 23:43:13 +0000", "msg_from": "Sandro Santilli <strk@kbt.io>", "msg_from_op": false, "msg_subject": "Re: Ability to reference other extensions by schema in extension\n scripts" }, { "msg_contents": "It looks like this patch needs a quick rebase, there's a conflict in\nthe meson.build.\n\nI'll leave the state since presumably this would be easy to resolve\nbut it would be more likely to get attention if it's actually building\ncleanly.\n\nhttp://cfbot.cputube.org/patch_42_4023.log\n\nOn Tue, 28 Feb 2023 at 18:44, Sandro Santilli <strk@kbt.io> wrote:\n>\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, failed\n> Implements feature: tested, passed\n> Spec compliant: tested, passed\n> Documentation: tested, passed\n>\n> I've applied the patch attached to message https://www.postgresql.org/message-id/000401d94bc8%2448dff700%24da9fe500%24%40pcorp.us (md5sum a7d45a32c054919d94cd4a26d7d07c20) onto current tip of the master branch being 128dd9f9eca0b633b51ffcd5b0f798fbc48ec4c0\n>\n> The review written in https://www.postgresql.org/message-id/20230228224608.ak7br5shev4wic5a%40c19 all still applies.\n>\n> The `make installcheck-world` test fails for me but the failures seem unrelated to the patch (many occurrences of \"+ERROR: function pg_input_error_info(unknown, unknown) does not exist\" in the regression.diff).\n>\n> Documentation exists for the new feature\n>\n> The new status of this patch is: Ready for Committer\n\n\n\n-- \nGregory Stark\nAs Commitfest Manager\n\n\n", "msg_date": "Mon, 6 Mar 2023 13:49:51 -0500", "msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Ability to reference other extensions by schema in extension\n scripts" }, { "msg_contents": "> It looks like this patch needs a quick rebase, there's a conflict in the\n> meson.build.\n> \n> I'll leave the state since presumably this would be easy 
to resolve but it would\n> be more likely to get attention if it's actually building cleanly.\n> \n> http://cfbot.cputube.org/patch_42_4023.log\n> \n> On Tue, 28 Feb 2023 at 18:44, Sandro Santilli <strk@kbt.io> wrote:\n> >\n> > The following review has been posted through the commitfest application:\n> > make installcheck-world: tested, failed\n> > Implements feature: tested, passed\n> > Spec compliant: tested, passed\n> > Documentation: tested, passed\n> >\n> > I've applied the patch attached to message\n> > https://www.postgresql.org/message-\n> id/000401d94bc8%2448dff700%24da9fe5\n> > 00%24%40pcorp.us (md5sum a7d45a32c054919d94cd4a26d7d07c20) onto\n> > current tip of the master branch being\n> > 128dd9f9eca0b633b51ffcd5b0f798fbc48ec4c0\n> >\n> > The review written in https://www.postgresql.org/message-\n> id/20230228224608.ak7br5shev4wic5a%40c19 all still applies.\n> >\n> > The `make installcheck-world` test fails for me but the failures seem\n> unrelated to the patch (many occurrences of \"+ERROR: function\n> pg_input_error_info(unknown, unknown) does not exist\" in the\n> regression.diff).\n> >\n> > Documentation exists for the new feature\n> >\n> > The new status of this patch is: Ready for Committer\n> \n> \n> \n> --\n> Gregory Stark\n> As Commitfest Manager\n\nJust sent a note about the wildcard one. 
Was this conflicting with the wildcard one or some other?\nI can rebase if it was conflicting with another one, if it was the wildcard one, then maybe we should commit this one and we'll rebase the wildcard one.\n\nWe would like to submit the wildcard one too, but I think Tom had some reservations on that one.\n\nThanks,\nRegina\n\n\n\n\n", "msg_date": "Mon, 6 Mar 2023 16:40:29 -0500", "msg_from": "\"Regina Obe\" <lr@pcorp.us>", "msg_from_op": true, "msg_subject": "RE: Ability to reference other extensions by schema in extension\n scripts" }, { "msg_contents": "> It looks like this patch needs a quick rebase, there's a conflict in the\n> meson.build.\n> \n> I'll leave the state since presumably this would be easy to resolve but it would\n> be more likely to get attention if it's actually building cleanly.\n> \n> http://cfbot.cputube.org/patch_42_4023.log\n> --\n> Gregory Stark\n> As Commitfest Manager\n\nAttached is the patch rebased against master.", "msg_date": "Mon, 6 Mar 2023 18:26:59 -0500", "msg_from": "\"Regina Obe\" <lr@pcorp.us>", "msg_from_op": true, "msg_subject": "RE: Ability to reference other extensions by schema in extension\n scripts" }, { "msg_contents": "\"Regina Obe\" <lr@pcorp.us> writes:\n> [ 0005-Allow-use-of-extschema-reqextname-to-reference.patch ]\n\nI took a look at this. 
I'm on board with the feature design,\nbut not so much with this undocumented restriction you added\nto ALTER EXTENSION SET SCHEMA:\n\n+\t\t/* If an extension requires this extension\n+\t\t * do not allow relocation */\n+\t\tif (pg_depend->deptype == DEPENDENCY_NORMAL && pg_depend->classid == ExtensionRelationId){\n+\t\t\tdep.classId = pg_depend->classid;\n+\t\t\tdep.objectId = pg_depend->objid;\n+\t\t\tdep.objectSubId = pg_depend->objsubid;\n+\t\t\tereport(ERROR,\n+\t\t\t\t\t(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n+\t\t\t\t\t errmsg(\"cannot SET SCHEMA of extension %s because other extensions require it\",\n+\t\t\t\t\t\t\tNameStr(extForm->extname)),\n+\t\t\t\t\t errdetail(\"%s requires extension %s\",\n+\t\t\t\t\t\t\t getObjectDescription(&dep, false), NameStr(extForm->extname))));\n\nThat seems quite disastrous for usability, and it's making an assumption\nunsupported by any evidence: that it will be a majority use-case for\ndependent extensions to have used @extschema:myextension@ in a way that\nwould be broken by ALTER EXTENSION SET SCHEMA.\n\nI think we should just drop this. It might be worth putting in some\ndocumentation notes about the hazard, instead.\n\nIf you want to work harder, perhaps a reasonable way to deal with\nthe issue would be to allow dependent extensions to declare that\nthey don't want your extension relocated. But I do not think it's\nokay to make that the default behavior, much less the only behavior.\nAnd really, since we've gotten along without it so far, I'm not\nsure that it's necessary to have it.\n\nAnother thing that's bothering me a bit is the use of\nget_required_extension in execute_extension_script. That does way\nmore than you really need, and passing a bunch of bogus parameter\nvalues to it makes me uncomfortable. 
The callers already have\nthe required extensions' OIDs at hand; it'd be better to add that list\nto execute_extension_script's API instead of redoing the lookups.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 10 Mar 2023 15:07:05 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Ability to reference other extensions by schema in extension\n scripts" }, { "msg_contents": "\n\n> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> Sent: Friday, March 10, 2023 3:07 PM\n> To: Regina Obe <lr@pcorp.us>\n> Cc: 'Gregory Stark (as CFM)' <stark.cfm@gmail.com>; 'Sandro Santilli'\n> <strk@kbt.io>; pgsql-hackers@lists.postgresql.org; 'Regina Obe'\n> <r@pcorp.us>\n> Subject: Re: Ability to reference other extensions by schema in extension\n> scripts\n> \n> \"Regina Obe\" <lr@pcorp.us> writes:\n> > [ 0005-Allow-use-of-extschema-reqextname-to-reference.patch ]\n> \n> I took a look at this. I'm on board with the feature design, but not so\nmuch\n> with this undocumented restriction you added to ALTER EXTENSION SET\n> SCHEMA:\n> \n> +\t\t/* If an extension requires this extension\n> +\t\t * do not allow relocation */\n> +\t\tif (pg_depend->deptype == DEPENDENCY_NORMAL &&\n> pg_depend->classid == ExtensionRelationId){\n> +\t\t\tdep.classId = pg_depend->classid;\n> +\t\t\tdep.objectId = pg_depend->objid;\n> +\t\t\tdep.objectSubId = pg_depend->objsubid;\n> +\t\t\tereport(ERROR,\n> +\n> \t(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> +\t\t\t\t\t errmsg(\"cannot SET SCHEMA of\n> extension %s because other extensions require it\",\n> +\t\t\t\t\t\t\tNameStr(extForm-\n> >extname)),\n> +\t\t\t\t\t errdetail(\"%s requires extension\n%s\",\n> +\n> getObjectDescription(&dep, false),\n> +NameStr(extForm->extname))));\n> \n> That seems quite disastrous for usability, and it's making an assumption\n> unsupported by any evidence: that it will be a majority use-case for\n> dependent extensions to have used 
@extschema:myextension@ in a way that\n> would be broken by ALTER EXTENSION SET SCHEMA.\n> \n> I think we should just drop this. It might be worth putting in some\n> documentation notes about the hazard, instead.\n> \n> If you want to work harder, perhaps a reasonable way to deal with the\nissue\n> would be to allow dependent extensions to declare that they don't want\nyour\n> extension relocated. But I do not think it's okay to make that the\ndefault\n> behavior, much less the only behavior.\n> And really, since we've gotten along without it so far, I'm not sure that\nit's\n> necessary to have it.\n> \n> Another thing that's bothering me a bit is the use of\nget_required_extension\n> in execute_extension_script. That does way more than you really need, and\n> passing a bunch of bogus parameter values to it makes me uncomfortable.\n> The callers already have the required extensions' OIDs at hand; it'd be\nbetter\n> to add that list to execute_extension_script's API instead of redoing the\n> lookups.\n> \n> \t\t\tregards, tom lane\n\n\n\n", "msg_date": "Fri, 10 Mar 2023 15:38:25 -0500", "msg_from": "\"Regina Obe\" <lr@pcorp.us>", "msg_from_op": true, "msg_subject": "RE: Ability to reference other extensions by schema in extension\n scripts" }, { "msg_contents": "> \"Regina Obe\" <lr@pcorp.us> writes:\n> > [ 0005-Allow-use-of-extschema-reqextname-to-reference.patch ]\n> \n> I took a look at this. 
I'm on board with the feature design, but not so\nmuch\n> with this undocumented restriction you added to ALTER EXTENSION SET\n> SCHEMA:\n> \n> +\t\t/* If an extension requires this extension\n> +\t\t * do not allow relocation */\n> +\t\tif (pg_depend->deptype == DEPENDENCY_NORMAL &&\n> pg_depend->classid == ExtensionRelationId){\n> +\t\t\tdep.classId = pg_depend->classid;\n> +\t\t\tdep.objectId = pg_depend->objid;\n> +\t\t\tdep.objectSubId = pg_depend->objsubid;\n> +\t\t\tereport(ERROR,\n> +\n> \t(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> +\t\t\t\t\t errmsg(\"cannot SET SCHEMA of\n> extension %s because other extensions require it\",\n> +\t\t\t\t\t\t\tNameStr(extForm-\n> >extname)),\n> +\t\t\t\t\t errdetail(\"%s requires extension\n%s\",\n> +\n> getObjectDescription(&dep, false),\n> +NameStr(extForm->extname))));\n> \n> That seems quite disastrous for usability, and it's making an assumption\n> unsupported by any evidence: that it will be a majority use-case for\n> dependent extensions to have used @extschema:myextension@ in a way that\n> would be broken by ALTER EXTENSION SET SCHEMA.\n> \n> I think we should just drop this. It might be worth putting in some\n> documentation notes about the hazard, instead.\n> \nThat was my thought originally too and also given the rarity of people\nchanging schemas\nI wasn't that bothered with not forcing this. Sandro was a bit more\nbothered by not forcing it and given the default for extensions is not\nrelocatable, we didn't see that much of an issue with it.\n\n\n> If you want to work harder, perhaps a reasonable way to deal with the\nissue\n> would be to allow dependent extensions to declare that they don't want\nyour\n> extension relocated. 
But I do not think it's okay to make that the\ndefault\n> behavior, much less the only behavior.\n\nI had done that in one iteration of the patch.\nWe discussed this here\nhttps://www.postgresql.org/message-id/000001d949ad%241159adc0%24340d0940%24%\n40pcorp.us \n\nand here\nhttps://www.postgresql.org/message-id/20230223183906.6rhtybwdpe37sri7%40c19\n\n- the main issue I ran into is I have to introduce another dependency type\nor go with Sandro's idea of using refsubobjid for this purpose. I think\ndefining a new dependency type is less likely to cause unforeseen\ncomplications elsewhere, but did require me to expand the scope (to make\nchanges to pg_depend). Which I am fine with doing, but didn't want to over\nextend my reach too much.\n\nOne of my revisions tried to use DEPENDENCY_AUTO which did not work (as\nSandro discovered) and I had some other annoyances with lack of helper\nfunctions\nhttps://www.postgresql.org/message-id/000401d93a14%248647f540%2492d7dfc0%24%\n40pcorp.us\n\nkey point:\n\"Why isn't there a variant getAutoExtensionsOfObject take a DEPENDENCY type\nas an option so it would be more useful or is there functionality for that I\nmissed?\"\n\n\n> And really, since we've gotten along without it so far, I'm not sure that\nit's\n> necessary to have it.\n> \n> Another thing that's bothering me a bit is the use of\nget_required_extension\n> in execute_extension_script. 
That does way more than you really need, and\n> passing a bunch of bogus parameter values to it makes me uncomfortable.\n> The callers already have the required extensions' OIDs at hand; it'd be\nbetter\n> to add that list to execute_extension_script's API instead of redoing the\n> lookups.\n> \n> \t\t\tregards, tom lane\n\nSo you are proposing I change the execute_extension_scripts input args to\ntake more args?\n\n\n\n\n", "msg_date": "Fri, 10 Mar 2023 16:05:46 -0500", "msg_from": "\"Regina Obe\" <lr@pcorp.us>", "msg_from_op": true, "msg_subject": "RE: Ability to reference other extensions by schema in extension\n scripts" }, { "msg_contents": "\"Regina Obe\" <lr@pcorp.us> writes:\n>> If you want to work harder, perhaps a reasonable way to deal with the issue\n>> would be to allow dependent extensions to declare that they don't want your\n>> extension relocated. But I do not think it's okay to make that the default\n>> behavior, much less the only behavior.\n\n> - the main issue I ran into is I have to introduce another dependency type\n> or go with Sandro's idea of using refsubobjid for this purpose.\n\nNo, pg_depend is not the thing to use for this. I was thinking of a new\nfield in the extension's control file, right beside where it says it's\ndependent on such-and-such extensions in the first place. Say like\n\n\trequires = 'extfoo, extbar'\n\tno_relocate = 'extfoo'\n\n> So you are proposing I change the execute_extension_scripts input args to\n> take more args?\n\nWhy not? It's local to that file, so you won't break anything.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 10 Mar 2023 17:14:36 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Ability to reference other extensions by schema in extension\n scripts" }, { "msg_contents": "> No, pg_depend is not the thing to use for this. 
I was thinking of a new\nfield in\n> the extension's control file, right beside where it says it's dependent on\nsuch-\n> and-such extensions in the first place. Say like\n> \n> \trequires = 'extfoo, extbar'\n> \tno_relocate = 'extfoo'\n> \n\nSo when no_relocate is specified, where would that live?\n\nWould I mark the extfoo as not relocatable on CREATE / ALTER of said\nextension?\nOr add an extra field to pg_extension\n\nI had tried to do that originally, e.g. instead of even bothering with such\nan extra arg, just mark it as not relocatable if the extension's script\ncontains references to the required extension's schema.\n\nBut then what if extfoo is upgraded?\n\nALTER EXTENSION extfoo UPDATE;\n\nWipes out the not relocatable of extfoo set. \nSo in order to prevent that, I have to \n\na) check the control files of all extensions that depend on foo to see if\nthey made such a request.\nor \nb) \"Seeing if the extension is marked as not relocatable, prevent ALTER\nEXTENSION from marking it as relocatable\"\nproblem with b is what if the extension author changed their mind and wanted\nit to be relocatable? Given the default is (not relocatable), it's possible\nthe author didn't know this and later decided to put in an explicit\nrelocate=false.\nc) define a new column in pg_extension to hold this bit of info. I was\nhoping I could reuse pg_extension.extconfig, but it seems that's hardwired\nto be only used for backup.\n\nAm I missing something or is this really as complicated as I think it is?\n\nIf we go with b) I'm not sure why I need to bother defining a no_relocate,\nas it's obvious looking at the extension install/upgrade script that it\nshould not be relocatable.\n\n> > So you are proposing I change the execute_extension_scripts input args\n> > to take more args?\n> \n> Why not? It's local to that file, so you won't break anything.\n> \n\nOkay, I wasn't absolutely sure if it was. 
If it is then I'll change.\n\n> \t\t\tregards, tom lane\n\n\nThanks,\nRegina\n\n\n\n", "msg_date": "Fri, 10 Mar 2023 17:35:05 -0500", "msg_from": "\"Regina Obe\" <lr@pcorp.us>", "msg_from_op": true, "msg_subject": "RE: Ability to reference other extensions by schema in extension\n scripts" }, { "msg_contents": "\"Regina Obe\" <lr@pcorp.us> writes:\n>> requires = 'extfoo, extbar'\n>> no_relocate = 'extfoo'\n\n> So when no_relocate is specified, where would that live?\n\nIn the control file.\n\n> Would I mark the extfoo as not relocatable on CREATE / ALTER of said\n> extension?\n> Or add an extra field to pg_extension\n\nWe don't record dependent extensions in pg_extension now, so that\ndoesn't seem like it would fit well. I was envisioning that\nALTER EXTENSION SET SCHEMA would do something along the lines of\n\n(1) scrape the list of dependent extensions out of pg_depend\n(2) open and parse each of their control files\n(3) fail if any of their control files mentions the target one in\n no_relocate.\n\nAdmittedly, this'd be a bit slow, but I doubt that ALTER EXTENSION\nSET SCHEMA is a performance bottleneck for anybody.\n\n> I had tried to do that originally, e.g. instead of even bothering with such\n> an extra arg, just mark it as not relocatable if the extension's script\n> contains references to the required extension's schema.\n\nI don't think that's a great approach, because those references might\nappear in places that can track a rename (ie, in an object name that's\nresolved to a stored OID). Short of fully parsing the script file you\naren't going to get a reliable answer. 
I'm content to lay that problem\noff on the extension authors.\n\n> But then what if extfoo is upgraded?\n\nWe already have mechanisms for version-dependent control files, so\nI don't see where there's a problem.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 10 Mar 2023 17:47:17 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Ability to reference other extensions by schema in extension\n scripts" }, { "msg_contents": "> \"Regina Obe\" <lr@pcorp.us> writes:\n> >> requires = 'extfoo, extbar'\n> >> no_relocate = 'extfoo'\n> \n> > So when no_relocate is specified, where would that live?\n> \n> In the control file.\n> \n> > Would I mark the extfoo as not relocatable on CREATE / ALTER of said\n> > extension?\n> > Or add an extra field to pg_extension\n> \n> We don't record dependent extensions in pg_extension now, so that doesn't\n> seem like it would fit well. I was envisioning that ALTER EXTENSION SET\n> SCHEMA would do something along the lines of\n> \n> (1) scrape the list of dependent extensions out of pg_depend\n> (2) open and parse each of their control files\n> (3) fail if any of their control files mentions the target one in\n> no_relocate.\n> \n> Admittedly, this'd be a bit slow, but I doubt that ALTER EXTENSION SET\n> SCHEMA is a performance bottleneck for anybody.\n> \n\nOkay I'll move ahead with this approach.\n\nThanks,\nRegina\n\n\n\n", "msg_date": "Fri, 10 Mar 2023 17:52:40 -0500", "msg_from": "\"Regina Obe\" <lr@pcorp.us>", "msg_from_op": true, "msg_subject": "RE: Ability to reference other extensions by schema in extension\n scripts" }, { "msg_contents": "> Subject: Re: Ability to reference other extensions by schema in extension\n> scripts\n> \n> \"Regina Obe\" <lr@pcorp.us> writes:\n> >> requires = 'extfoo, extbar'\n> >> no_relocate = 'extfoo'\n> \n> > So when no_relocate is specified, where would that live?\n> \n> In the control file.\n> \n> > Would I mark the extfoo as not relocatable on CREATE / 
ALTER of said\n> > extension?\n> > Or add an extra field to pg_extension\n> \n> We don't record dependent extensions in pg_extension now, so that doesn't\n> seem like it would fit well. I was envisioning that ALTER EXTENSION SET\n> SCHEMA would do something along the lines of\n> \n> (1) scrape the list of dependent extensions out of pg_depend\n> (2) open and parse each of their control files\n> (3) fail if any of their control files mentions the target one in\n> no_relocate.\n> \n> Admittedly, this'd be a bit slow, but I doubt that ALTER EXTENSION SET\n> SCHEMA is a performance bottleneck for anybody.\n> \n> > I had tried to do that originally, e.g. instead of even bothering with\n> > such an extra arg, just mark it as not relocatable if the extension's\n> > script contains references to the required extension's schema.\n> \n> I don't think that's a great approach, because those references might\nappear\n> in places that can track a rename (ie, in an object name that's resolved\nto a\n> stored OID). Short of fully parsing the script file you aren't going to\nget a\n> reliable answer. I'm content to lay that problem off on the extension\nauthors.\n> \n> > But then what if extfoo is upgraded?\n> \n> We already have mechanisms for version-dependent control files, so I don't\n> see where there's a problem.\n> \n> \t\t\tregards, tom lane\n\nAttached is a revised patch with these changes in place.", "msg_date": "Sat, 11 Mar 2023 03:18:18 -0500", "msg_from": "\"Regina Obe\" <lr@pcorp.us>", "msg_from_op": true, "msg_subject": "RE: Ability to reference other extensions by schema in extension\n scripts" }, { "msg_contents": "On Sat, Mar 11, 2023 at 03:18:18AM -0500, Regina Obe wrote:\n> Attached is a revised patch with these changes in place.\n\nI've given a try to this patch. It builds and regresses fine.\n\nMy own tests also worked fine. 
As long as ext1 was found\nin the ext2's no_relocate list it could not be relocated,\nand proper error message is given to user trying it.\n\nNitpicking, there are a few things that are weird to me:\n\n1) I don't get any error/warning if I put an arbitrary\nstring into no_relocate (there's no check to verify the\nno_relocate is a subset of the requires).\n\n2) An extension can still reference extensions it depends on\nwithout putting them in no_relocate. This may be intentional,\nas some substitutions may not require blocking relocation, but\nfelt inconsistent with the normal @extschema@ which is never\nreplaced unless an extension is marked as non-relocatable.\n\n--strk;\n\n Libre GIS consultant/developer\n https://strk.kbt.io/services.html\n\n\n", "msg_date": "Mon, 13 Mar 2023 12:59:16 +0100", "msg_from": "'Sandro Santilli' <strk@kbt.io>", "msg_from_op": false, "msg_subject": "Re: Ability to reference other extensions by schema in extension\n scripts" }, { "msg_contents": "> I've given a try to this patch. It builds and regresses fine.\n> \n> My own tests also worked fine. As long as ext1 was found in the ext2's\n> no_relocate list it could not be relocated, and proper error message is\ngiven\n> to user trying it.\n> \n> Nitpicking, there are a few things that are weird to me:\n> \n> 1) I don't get any error/warning if I put an arbitrary string into\nno_relocate\n> (there's no check to verify the no_relocate is a subset of the requires).\n> \n\nI thought about that and decided it wasn't worth checking for. If an\nextension author puts in an extension not in requires it's on them as the\ndocs say it should be in requires.\n\nIt will just pretend that extension is not listed in no_relocate.\n\n> 2) An extension can still reference extensions it depends on without\nputting\n> them in no_relocate. 
This may be intentional, as some substitutions may\nnot\n> require blocking relocation, but felt inconsistent with the normal\n> @extschema@ which is never replaced unless an extension is marked as non-\n> relocatable.\n> \n> --strk;\n> \n> Libre GIS consultant/developer\n> https://strk.kbt.io/services.html\n\n\nYes this is intentional. As Tom mentioned, if for example an extension\nauthor decides to schema-qualify @extschema:foo@ in a table definition and\nthey marked the extension as requiring foo, then since such a reference is\ncaptured by a schema move, there is no need for them to prevent relocation\nof the foo extension (assuming foo was relocatable to begin with).\n\n\n\n", "msg_date": "Mon, 13 Mar 2023 10:28:04 -0400", "msg_from": "\"Regina Obe\" <lr@pcorp.us>", "msg_from_op": true, "msg_subject": "RE: Ability to reference other extensions by schema in extension\n scripts" }, { "msg_contents": "> On Sat, Mar 11, 2023 at 03:18:18AM -0500, Regina Obe wrote:\n> > Attached is a revised patch with these changes in place.\n> \n> I've given a try to this patch. It builds and regresses fine.\n> \n> My own tests also worked fine. As long as ext1 was found in the ext2's\n> no_relocate list it could not be relocated, and proper error message is\ngiven\n> to user trying it.\n> \n> Nitpicking, there are a few things that are weird to me:\n> \n> 1) I don't get any error/warning if I put an arbitrary string into\nno_relocate\n> (there's no check to verify the no_relocate is a subset of the requires).\n> \n> 2) An extension can still reference extensions it depends on without\nputting\n> them in no_relocate. 
This may be intentional, as some substitutions may\nnot\n> require blocking relocation, but felt inconsistent with the normal\n> @extschema@ which is never replaced unless an extension is marked as non-\n> relocatable.\n> \n> --strk;\n> \n> Libre GIS consultant/developer\n> https://strk.kbt.io/services.html\n\nAttached is a slightly revised patch to fix the extra whitespace in the\nextend.gml \ndocument that Sandro noted to me.\n\nThanks,\nRegina", "msg_date": "Mon, 13 Mar 2023 17:57:57 -0400", "msg_from": "\"Regina Obe\" <lr@pcorp.us>", "msg_from_op": true, "msg_subject": "RE: Ability to reference other extensions by schema in extension\n scripts" }, { "msg_contents": "On Mon, Mar 13, 2023 at 05:57:57PM -0400, Regina Obe wrote:\n> \n> Attached is a slightly revised patch to fix the extra whitespace in the\n> extend.gml document that Sandro noted to me.\n\nThanks Regina.\nI've tested attached patch (md5 0b652a8271fc7e71ed5f712ac162a0ef)\nagainst current master (hash 4ef1be5a0b676a9f030cc2e4837f4b5650ecb069).\nThe patch applies cleanly, builds cleanly, regresses cleanly.\n\nI've also run my quick test and I'm satisfied with it:\n\n test=# create extension ext2 cascade;\n NOTICE: installing required extension \"ext1\"\n CREATE EXTENSION\n\n test=# select ext2log('h');\n ext1: ext2: h\n\n test=# alter extension ext1 set schema n1;\n ERROR: cannot SET SCHEMA of extension ext1 because other extensions prevent it\n DETAIL: extension ext2 prevents relocation of extension ext1\n\n test=# drop extension ext2;\n DROP EXTENSION\n\n test=# alter extension ext1 set schema n1;\n ALTER EXTENSION\n\n test=# create extension ext2;\n CREATE EXTENSION\n\n test=# select ext2log('h');\n ext1: ext2: h\n\n\n--strk;\n\n\n\n", "msg_date": "Thu, 16 Mar 2023 11:14:18 +0100", "msg_from": "Sandro Santilli <strk@kbt.io>", "msg_from_op": false, "msg_subject": "Re: Ability to reference other extensions by schema in extension\n scripts" }, { "msg_contents": "Sandro Santilli <strk@kbt.io> 
writes:\n> On Mon, Mar 13, 2023 at 05:57:57PM -0400, Regina Obe wrote:\n>> Attached is a slightly revised patch to fix the extra whitespace in the\n>> extend.gml document that Sandro noted to me.\n\n> Thanks Regina.\n> I've tested attached patch (md5 0b652a8271fc7e71ed5f712ac162a0ef)\n> against current master (hash 4ef1be5a0b676a9f030cc2e4837f4b5650ecb069).\n> The patch applies cleanly, builds cleanly, regresses cleanly.\n\nPushed with some mostly-cosmetic adjustments (in particular I tried\nto make the docs and tests neater).\n\nI did not commit the changes in get_available_versions_for_extension\nto add no_relocate as an output column. Those were dead code because\nyou hadn't done anything to connect them up to an actual output parameter\nof pg_available_extension_versions(). While I'm not necessarily averse\nto making the no_relocate values visible somehow, I'm not convinced that\npg_available_extension_versions should be the place to do it. ISTM what's\nrelevant is the no_relocate values of *installed* extensions, not those of\npotentially-installable extensions. If we had a view over pg_extension\nthen that might be a place to add this, but we don't. On the whole it\ndidn't seem important enough to pursue, so I just left it out.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 20 Mar 2023 18:47:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Ability to reference other extensions by schema in extension\n scripts" }, { "msg_contents": "> Pushed with some mostly-cosmetic adjustments (in particular I tried to\nmake\n> the docs and tests neater).\n> \n> I did not commit the changes in get_available_versions_for_extension\n> to add no_relocate as an output column. Those were dead code because you\n> hadn't done anything to connect them up to an actual output parameter of\n> pg_available_extension_versions(). 
While I'm not necessarily averse to\n> making the no_relocate values visible somehow, I'm not convinced that\n> pg_available_extension_versions should be the place to do it. ISTM what's\n> relevant is the no_relocate values of *installed* extensions, not those of\n> potentially-installable extensions. If we had a view over pg_extension\nthen\n> that might be a place to add this, but we don't. On the whole it didn't\nseem\n> important enough to pursue, so I just left it out.\n> \n> \t\t\tregards, tom lane\n\n\nThanks. Agree with get_available_versions_for_extension, not necessary.\n\nThanks,\nRegina\n\n\n\n", "msg_date": "Tue, 21 Mar 2023 10:29:44 -0400", "msg_from": "\"Regina Obe\" <lr@pcorp.us>", "msg_from_op": true, "msg_subject": "RE: Ability to reference other extensions by schema in extension\n scripts" }, { "msg_contents": "\"Regina Obe\" <lr@pcorp.us> writes:\n>> making the no_relocate values visible somehow, I'm not convinced that\n>> pg_available_extension_versions should be the place to do it. ISTM what's\n>> relevant is the no_relocate values of *installed* extensions, not those of\n>> potentially-installable extensions. If we had a view over pg_extension then\n>> that might be a place to add this, but we don't. On the whole it didn't seem\n>> important enough to pursue, so I just left it out.\n\n> Thanks. Agree with get_available_versions_for_extension, not necessary.\n\nIf we did feel like doing something about this, on reflection I think\nthe thing to do would be to add no_relocate as an actual column in\npg_extension, probably of type \"oid[]\". Then we could modify the\nSET SCHEMA code to check that instead of parsing the extension control\nfiles. 
That'd be a little cleaner, but I can't say that I'm hugely\nexcited about it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 21 Mar 2023 10:52:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Ability to reference other extensions by schema in extension\n scripts" } ]
[ { "msg_contents": "Hello all,\n\nMeson doesn't see the redefinition of locale_t done\nin src/include/port/win32_port.h, so is not defining\nHAVE_LOCALE_T, HAVE_WCSTOMBS_L nor HAVE_MBSTOWCS_L as the\ncurrent src/tools/msvc/build.pl script does.\n\nPlease find attached a patch for so.\n\nRegards,\n\nJuan José Santamaría Flecha", "msg_date": "Thu, 10 Nov 2022 10:59:41 +0100", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": true, "msg_subject": "Meson doesn't define HAVE_LOCALE_T for mscv" }, { "msg_contents": "On 10.11.22 10:59, Juan José Santamaría Flecha wrote:\n> Meson doesn't see the redefinition of locale_t done \n> in src/include/port/win32_port.h, so is not defining \n> HAVE_LOCALE_T, HAVE_WCSTOMBS_L nor HAVE_MBSTOWCS_L as the \n> current src/tools/msvc/build.pl <http://build.pl> script does.\n> \n> Please find attached a patch for so.\n\ncommitted\n\n\n\n", "msg_date": "Fri, 11 Nov 2022 16:02:49 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Meson doesn't define HAVE_LOCALE_T for mscv" }, { "msg_contents": "Hi,\n\nOn 2022-11-10 10:59:41 +0100, Juan José Santamaría Flecha wrote:\n> Meson doesn't see the redefinition of locale_t done\n> in src/include/port/win32_port.h, so is not defining\n> HAVE_LOCALE_T, HAVE_WCSTOMBS_L nor HAVE_MBSTOWCS_L as the\n> current src/tools/msvc/build.pl script does.\n> \n> Please find attached a patch for so.\n\nHm. Is it right that the changes are only done for msvc? win32_port.h defines\nthe types for mingw as well afaict.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 14 Nov 2022 16:49:26 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Meson doesn't define HAVE_LOCALE_T for mscv" }, { "msg_contents": "On Tue, Nov 15, 2022 at 1:49 AM Andres Freund <andres@anarazel.de> wrote:\n\n>\n> Hm. 
Is it right that the changes are only done for msvc? win32_port.h\n> defines\n> the types for mingw as well afaict.\n>\n> Yes, it does, but configure does nothing with them, so adding those\ndefines is a new feature for MinGW but a correction for MSVC.\n\nPFA a patch for MinGW.\n\nI've seen that when building with meson on MinGW the output for version()\nis 'PostgreSQL 16devel on x86_64, compiled by gcc-12.2.0', which is not\nwrong but I cannot tell that it was done on MinGW. Should we include the\n'host_system' in PG_VERSION_STR?\n\nRegards,\n\nJuan José Santamaría Flecha", "msg_date": "Tue, 15 Nov 2022 15:35:31 +0100", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Meson doesn't define HAVE_LOCALE_T for mscv" }, { "msg_contents": "Hi,\n\nOn 2022-11-15 15:35:31 +0100, Juan José Santamaría Flecha wrote:\n> I've seen that when building with meson on MinGW the output for version()\n> is 'PostgreSQL 16devel on x86_64, compiled by gcc-12.2.0', which is not\n> wrong but I cannot tell that it was done on MinGW. Should we include the\n> 'host_system' in PG_VERSION_STR?\n\nI don't think we should print mingw - that's really just redundant with\ngcc. But including host_system seems like a good idea. Not sure why I didn't\ndo that.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 15 Nov 2022 11:53:18 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Meson doesn't define HAVE_LOCALE_T for mscv" }, { "msg_contents": "Hi,\n\nHm, the quoting was odd, making me think you had written a separate email\nabout the define issue. Hence the separate email...\n\n\nOn 2022-11-15 15:35:31 +0100, Juan José Santamaría Flecha wrote:\n> On Tue, Nov 15, 2022 at 1:49 AM Andres Freund <andres@anarazel.de> wrote:\n> > Hm. Is it right that the changes are only done for msvc? 
win32_port.h\n> > defines the types for mingw as well afaict.\n\n> Yes, it does, but configure does nothing with them, so adding those\n> defines is a new feature for MinGW but a correction for MSVC.\n\nAny chance you checked if autoconf already detects locale_t with mingw?\nPossible that mingw supplies one of the relevant headers...\n\nOtherwise it looks like a sensible improvement to me.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 15 Nov 2022 12:02:53 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Meson doesn't define HAVE_LOCALE_T for mscv" }, { "msg_contents": "On Tue, Nov 15, 2022 at 8:53 PM Andres Freund <andres@anarazel.de> wrote:\n\n>\n> I don't think we should print mingw - that's really just redundant with\n> gcc. But including host_system seems like a good idea. Not sure why I\n> didn't\n> do that.\n>\n> I'll open a new thread for this. Also, I think this is skipping\ncollate.linux.utf.sql and infinite_recurse.sql tests in their intended\nplatforms.\n\nRegards,\n\nJuan José Santamaría Flecha", "msg_date": "Wed, 16 Nov 2022 01:06:04 +0100", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Meson doesn't define HAVE_LOCALE_T for mscv" }, { "msg_contents": "On Tue, Nov 15, 2022 at 9:02 PM Andres Freund <andres@anarazel.de> wrote:\n\n>\n> On 2022-11-15 15:35:31 +0100, Juan José Santamaría Flecha wrote:\n> > On Tue, Nov 15, 2022 at 1:49 AM Andres Freund <andres@anarazel.de>\n> wrote:\n> > > Hm. 
Is it right that the changes are only done for msvc? win32_port.h\n> > defines the types for mingw as well afaict.\n>\n> > Yes, it does, but configure does nothing with them, so adding those\n> > defines is a new feature for MinGW but a correction for MSVC.\n>\n> Any chance you checked if autoconf already detects locale_t with mingw?\n> Possible that mingw supplies one of the relevant headers...\n>\n> Otherwise it looks like a sensible improvement to me.\n>\n> I've checked the autoconf version of pg_config.h and it's not detected.\nAlso, manually inspecting <locale.h> I see no definition of locale_t in\nMinGW.\n\nRegards,\n\nJuan José Santamaría Flecha", "msg_date": "Wed, 16 Nov 2022 01:10:52 +0100", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Meson doesn't define HAVE_LOCALE_T for mscv" } ]
[ { "msg_contents": "Hi,\n\nThomas has reported this failure in an email [1] and shared the\nfollowing links offlist with me:\nhttps://cirrus-ci.com/task/5311549010083840\nhttps://api.cirrus-ci.com/v1/artifact/task/5311549010083840/testrun/build/testrun/subscription/100_bugs/log/100_bugs_twoways.log\nhttps://api.cirrus-ci.com/v1/artifact/task/5311549010083840/crashlog/crashlog-postgres.exe_1c40_2022-11-08_00-20-28-110.txt\n\nThe call stack is as follows:\n00000063`4edff670 00007ff7`1922fcdf postgres!ExceptionalCondition(\nchar * conditionName = 0x00007ff7`198f8050\n\"TransactionIdPrecedesOrEquals(safeXid, snap->xmin)\",\nchar * fileName = 0x00007ff7`198f8020\n\"../src/backend/replication/logical/snapbuild.c\",\nint lineNumber = 0n600)+0x78 [c:\\cirrus\\src\\backend\\utils\\error\\assert.c @ 67]\n00000063`4edff6b0 00007ff7`192106df postgres!SnapBuildInitialSnapshot(\nstruct SnapBuild * builder = 0x00000251`5b95bce8)+0x20f\n[c:\\cirrus\\src\\backend\\replication\\logical\\snapbuild.c @ 600]\n00000063`4edff730 00007ff7`1920d9f6 postgres!CreateReplicationSlot(\nstruct CreateReplicationSlotCmd * cmd = 0x00000251`5b94d828)+0x40f\n[c:\\cirrus\\src\\backend\\replication\\walsender.c @ 1152]\n00000063`4edff870 00007ff7`192bc9c4 postgres!exec_replication_command(\nchar * cmd_string = 0x00000251`5b94ac68 \"CREATE_REPLICATION_SLOT\n\"pg_16400_sync_16392_7163433409941550636\" LOGICAL pgoutput (SNAPSHOT\n'use')\")+0x4a6 [c:\\cirrus\\src\\backend\\replication\\walsender.c @ 1804]\n\n\nAs per my investigation based on the above logs, the failed test is\ndue to the following command in 100_bugs.pl:\n$node_twoways->safe_psql('d2',\n \"CREATE SUBSCRIPTION testsub CONNECTION \\$\\$\"\n . $node_twoways->connstr('d1')\n . \"\\$\\$ PUBLICATION testpub WITH (create_slot=false, \"\n . 
\"slot_name='testslot')\");\n\nIt failed while creating the table sync slot.\n\nThe failure happens because the xmin computed by the snap builder is\nless than what is computed by GetOldestSafeDecodingTransactionId()\nduring initial snapshot creation for the tablesync slot by\nSnapBuildInitialSnapshot.\n\nTo investigate, I tried to study how the values of \"safeXid\" and\n\"snap->xmin\" are computed in SnapBuildInitialSnapshot(). There appear\nto be four places in the code where we assign a value to xmin\n(builder->xmin) during the snapshot building process and then we assign\nthe same to snap->xmin. Those places are: (a) Two places in\nSnapBuildFindSnapshot(), (b) One place in SnapBuildRestore(), and (c)\nOne place in SnapBuildProcessRunningXacts()\n\nSeeing the LOGS, it appears to me that we find a consistent point from\nthe below code in SnapBuildFindSnapshot() and the following line\nassigns builder->xmin.\n\n...\nif (running->oldestRunningXid == running->nextXid)\n{\n...\nbuilder->xmin = running->nextXid;\n\nThe reason is we only see \"logical decoding found consistent point at\n...\" in LOGs. If SnapBuildFindSnapshot() has to go through the entire\nmachinery of snapshot building, then we should have seen \"logical\ndecoding found initial starting point at ...\" and similar other LOGs.\nThe builder->xmin can't be assigned from any other place in (b) or\n(c). From (b) (SnapBuildRestore()), it can't be assigned because we are\nbuilding a full snapshot here which won't restore any serialized\nsnapshot. It can't be assigned from (c) (SnapBuildProcessRunningXacts())\nbecause while creating a slot we stop as soon as we find the consistent\npoint, see\nDecodingContextFindStartpoint()->DecodingContextReady()\n\nThe running->nextXid in the above code snippet has been assigned from\nShmemVariableCache->nextXid in\nGetRunningTransactionData().\n\nThe safeXid computed from GetOldestSafeDecodingTransactionId() uses\nShmemVariableCache->nextXid as the starting point and keeps the slot's\nxmin as the safe Xid limit.\n\nIt seems to me due to SnapBuilder's initial_xmin_horizon, we won't set\n(SnapBuilder's) xmin less than the slot's effective_xmin computed in\nCreateInitDecodingContext(). See SnapBuildFindSnapshot(). So, ideally,\nSnapBuildInitialSnapshot() should never compute a safeXid (which is\nbased on the minimum of all slots' effective_xmin) that is greater than\nSnapBuilder's xmin (or snapshot's xmin). In short, the code as written\nseems correct to me.\n\nI have also tried to analyze if any recent commit (7f13ac8123) could\ncause this problem but can't think of any reason because the changes\nare related to the restart of decoding and the failed test is related\nto creating a slot the very first time.\n\nI don't have any good ideas on how to proceed with this. Any thoughts\non this would be helpful?\n\nNote: I have briefly discussed this issue with Sawada-San and\nKuroda-San, so kept them in Cc.\n\n[1] - https://www.postgresql.org/message-id/CA%2BhUKG%2BA_LyW%3DFAOi2ebA9Vr-1%3Desu%2BeBSm0dsVhU%3DEgfpipkg%40mail.gmail.com\n\n--\nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 10 Nov 2022 16:04:40 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Assertion failure in SnapBuildInitialSnapshot()" }, { "msg_contents": "Hi,\n\nOn 2022-11-10 16:04:40 +0530, Amit Kapila wrote:\n> I don't have any good ideas on how to proceed with this. 
Any thoughts\non this would be helpful?\n\nOne thing worth doing might be to convert the assertion path into an elog(),\nmentioning both xids (or add a framework for things like AssertLT(), but that\nseems hard). With the concrete values we could make a better guess at what's\ngoing wrong.\n\nIt'd probably not hurt to just perform this check independent of\nUSE_ASSERT_CHECKING - compared to the cost of creating a slot it's negligible.\n\nThat'll obviously only help us whenever we re-encounter the issue, which will\nlikely be a while...\n\n\nHave you tried reproducing the issue by running the test in a loop?\n\n\nOne thing I noticed just now is that we don't assert\nbuilder->building_full_snapshot==true. I think we should? That didn't use to\nbe an option, but now it is... It doesn't look to me like that's the issue,\nbut ...\n\nHm, also, shouldn't the patch adding CRS_USE_SNAPSHOT have copied more of\nSnapBuildExportSnapshot()? Why aren't the error checks for\nSnapBuildExportSnapshot() needed? Why don't we need to set XactReadOnly? Which\ntransactions are we even in when we import the snapshot (cf.\nSnapBuildExportSnapshot() doing a StartTransactionCommand()).\n\nI'm also somewhat suspicious of calling RestoreTransactionSnapshot() with\nsource=MyProc. Looks like it works, but it'd be pretty easy to screw up, and\nthere's no comments in SetTransactionSnapshot() or\nProcArrayInstallRestoredXmin() warning that that might be the case.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 14 Nov 2022 17:25:31 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Assertion failure in SnapBuildInitialSnapshot()" }, { "msg_contents": "Hi,\n\nOn 2022-11-14 17:25:31 -0800, Andres Freund wrote:\n> Hm, also, shouldn't the patch adding CRS_USE_SNAPSHOT have copied more of\n> SnapBuildExportSnapshot()? Why aren't the error checks for\n> SnapBuildExportSnapshot() needed? Why don't we need to set XactReadOnly? 
Which\n> transactions are we even in when we import the snapshot (cf.\n> SnapBuildExportSnapshot() doing a StartTransactionCommand()).\n\nMost of the checks for that are in CreateReplicationSlot() - but not all,\ne.g. XactReadOnly isn't set, nor do we enforce in an obvious place that we\ndon't already hold a snapshot.\n\nI first thought this might directly explain the problem, due to the\nMyProc->xmin assignment in SnapBuildInitialSnapshot() overwriting a value that\ncould influence the return value for GetOldestSafeDecodingTransactionId(). But\nthat happens later, and we check that MyProc->xmin is invalid at the start.\n\nBut it still seems suspicious. This will e.g. influence whether\nStartupDecodingContext() sets PROC_IN_LOGICAL_DECODING. Which probably is\nfine, but...\n\n\nAnother theory: I dimly remember Thomas mentioning that there's some different\nbehaviour of xlogreader during shutdown as part of the v15 changes. I don't\nquite remember what the scenario leading up to that was. Thomas?\n\n\nIt's certainly interesting that we see stuff like:\n\n2022-11-08 00:20:23.255 GMT [2012][walsender] [pg_16400_sync_16395_7163433409941550636][8/0:0] ERROR: could not find record while sending logically-decoded data: missing contrecord at 0/1D3B710\n2022-11-08 00:20:23.255 GMT [2012][walsender] [pg_16400_sync_16395_7163433409941550636][8/0:0] STATEMENT: START_REPLICATION SLOT \"pg_16400_sync_16395_7163433409941550636\" LOGICAL 0/1D2B650 (proto_version '3', origin 'any', publication_names '\"testpub\"')\nERROR: could not find record while sending logically-decoded data: missing contrecord at 0/1D3B710\n2022-11-08 00:20:23.255 GMT [248][logical replication worker] ERROR: error while shutting down streaming COPY: ERROR: could not find record while sending logically-decoded data: missing contrecord at 0/1D3B710\n\nIt could entirely be caused by postmaster slowly killing processes after the\nassertion failure and that that is corrupting shared memory state though. 
But\nit might also be related.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 14 Nov 2022 18:38:37 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Assertion failure in SnapBuildInitialSnapshot()" }, { "msg_contents": "On Tue, Nov 15, 2022 at 3:38 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-11-14 17:25:31 -0800, Andres Freund wrote:\n> Another theory: I dimly remember Thomas mentioning that there's some different\n> behaviour of xlogreader during shutdown as part of the v15 changes. I don't\n> quite remember what the scenario leading up to that was. Thomas?\n\nYeah. So as mentioned in:\n\nhttps://www.postgresql.org/message-id/CA%2BhUKG%2BWKsZpdoryeqM8_rk5uQPCqS2HGY92WiMGFsK2wVkcig%40mail.gmail.com\n\nI still have on my list to remove a new \"missing contrecord\" error\nthat can show up in a couple of different scenarios that aren't\nexactly error conditions depending on how you think about it, but I\nhaven't done that yet. 
I am not currently aware of anything bad\nhappening because of those messages, but I could be wrong.\n\n> It's certainly interesting that we see stuff like:\n>\n> 2022-11-08 00:20:23.255 GMT [2012][walsender] [pg_16400_sync_16395_7163433409941550636][8/0:0] ERROR: could not find record while sending logically-decoded data: missing contrecord at 0/1D3B710\n> 2022-11-08 00:20:23.255 GMT [2012][walsender] [pg_16400_sync_16395_7163433409941550636][8/0:0] STATEMENT: START_REPLICATION SLOT \"pg_16400_sync_16395_7163433409941550636\" LOGICAL 0/1D2B650 (proto_version '3', origin 'any', publication_names '\"testpub\"')\n> ERROR: could not find record while sending logically-decoded data: missing contrecord at 0/1D3B710\n> 2022-11-08 00:20:23.255 GMT [248][logical replication worker] ERROR: error while shutting down streaming COPY: ERROR: could not find record while sending logically-decoded data: missing contrecord at 0/1D3B710\n\nRight, so that might fit the case described in my email above:\nlogical_read_xlog_page() notices that it has been asked to shut down\nwhen it is between reads of pages with a spanning contrecord. Before,\nit would fail silently, so XLogReadRecord() returns NULL without\nsetting *errmsg, but now it complains about a missing contrecord. In\nthe case where it was showing up on that other thread, just a few\nmachines often seemed to log that error when shutting down --\nperipatus for example -- I don't know why, but I assume something to\ndo with shutdown timing and page alignment.\n\n> It could entirely be caused by postmaster slowly killing processes after the\n> assertion failure and that that is corrupting shared memory state though. 
But\n> it might also be related.\n\nHmm.\n\n\n", "msg_date": "Tue, 15 Nov 2022 16:26:50 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Assertion failure in SnapBuildInitialSnapshot()" }, { "msg_contents": "Hi,\n\n\nOn Tuesday, November 15, 2022 10:26 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-11-10 16:04:40 +0530, Amit Kapila wrote:\n> > I don't have any good ideas on how to proceed with this. Any thoughts\n> > on this would be helpful?\n> \n> One thing worth doing might be to convert the assertion path into an elog(),\n> mentioning both xids (or add a framework for things like AssertLT(), but that\n> seems hard). With the concrete values we could make a better guess at\n> what's going wrong.\n> \n> It'd probably not hurt to just perform this check independent of\n> USE_ASSERT_CHECKING - compared to the cost of creating a slot it's\n> negligible.\n> \n> That'll obviously only help us whenever we re-encounter the issue, which will\n> likely be a while...\n> \n> \n> Have you tried reproducing the issue by running the test in a loop?\nJust FYI, I've tried to reproduce this failure in a loop,\nbut I couldn't hit the same assertion failure.\n\n\nI extracted only the #16643 test of 100_bugs.pl and\nexecuted the tests more than 500 times.\n\n\nMy env and test were done in rhel7.9 and gcc 4.8 with configure options of\n--enable-cassert --enable-debug --enable-tap-tests --with-icu CFLAGS=-O0 and prefix.\n\n\n\nBest Regards,\n\tTakamichi Osumi\n\n\n\n", "msg_date": "Tue, 15 Nov 2022 05:15:48 +0000", "msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Assertion failure in SnapBuildInitialSnapshot()" }, { "msg_contents": "On Tue, Nov 15, 2022 at 8:08 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2022-11-14 17:25:31 -0800, Andres Freund wrote:\n> > Hm, also, shouldn't the patch adding CRS_USE_SNAPSHOT have copied more of\n> > 
SnapBuildExportSnapshot()? Why aren't the error checks for\n> > SnapBuildExportSnapshot() needed? Why don't we need to set XactReadOnly? Which\n> > transactions are we even in when we import the snapshot (cf.\n> > SnapBuildExportSnapshot() doing a StartTransactionCommand()).\n>\n> Most of the checks for that are in CreateReplicationSlot() - but not all,\n> e.g. XactReadOnly isn't set,\n>\n\nYeah, I think we can add the check for XactReadOnly along with other\nchecks in CreateReplicationSlot().\n\n> nor do we enforce in an obvious place that we\n> don't already hold a snapshot.\n>\n\nWe have a check for (FirstXactSnapshot == NULL) in\nRestoreTransactionSnapshot->SetTransactionSnapshot. Won't that be\nsufficient?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 15 Nov 2022 16:20:00 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Assertion failure in SnapBuildInitialSnapshot()" }, { "msg_contents": "On Tue, Nov 15, 2022 at 6:55 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2022-11-10 16:04:40 +0530, Amit Kapila wrote:\n> > I don't have any good ideas on how to proceed with this. Any thoughts\n> > on this would be helpful?\n>\n> One thing worth doing might be to convert the assertion path into an elog(),\n> mentioning both xids (or add a framework for things like AssertLT(), but that\n> seems hard). With the concrete values we could make a better guess at what's\n> going wrong.\n>\n> It'd probably not hurt to just perform this check independent of\n> USE_ASSERT_CHECKING - compared to the cost of creating a slot it's negligible.\n>\n> That'll obviously only help us whenever we re-encounter the issue, which will\n> likely be a while...\n>\n\nAgreed.\n\n>\n>\n> One thing I noticed just now is that we don't assert\n> builder->building_full_snapshot==true. I think we should? That didn't use to\n> be an option, but now it is... 
It doesn't look to me like that's the issue,\n> but ...\n>\n\nAgreed.\n\nThe attached patch contains both changes. It seems to me this issue\ncan happen, if somehow, either slot's effective_xmin increased after\nwe assign its initial value in CreateInitDecodingContext() or somehow\nits value is InvalidTransactionId when we have invoked\nSnapBuildInitialSnapshot(). The other possibility is that the\ninitial_xmin_horizon check in SnapBuildFindSnapshot() doesn't insulate\nus from assigning builder->xmin value older than initial_xmin_horizon.\nI am not able to see if any of this can be true.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Tue, 15 Nov 2022 17:21:44 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Assertion failure in SnapBuildInitialSnapshot()" }, { "msg_contents": "Hi,\n\nOn 2022-11-15 16:20:00 +0530, Amit Kapila wrote:\n> On Tue, Nov 15, 2022 at 8:08 AM Andres Freund <andres@anarazel.de> wrote:\n> > nor do we enforce in an obvious place that we\n> > don't already hold a snapshot.\n> >\n> \n> We have a check for (FirstXactSnapshot == NULL) in\n> RestoreTransactionSnapshot->SetTransactionSnapshot. Won't that be\n> sufficient?\n\nI don't think that'd e.g. catch a catalog snapshot being held, yet that'd\nstill be bad. And I think checking in SetTransactionSnapshot() is too late,\nwe've already overwritten MyProc->xmin by that point.\n\n\nOn 2022-11-15 17:21:44 +0530, Amit Kapila wrote:\n> > One thing I noticed just now is that we don't assert\n> > builder->building_full_snapshot==true. I think we should? That didn't use to\n> > be an option, but now it is... It doesn't look to me like that's the issue,\n> > but ...\n> >\n> \n> Agreed.\n> \n> The attached patch contains both changes. 
It seems to me this issue\n> can happen, if somehow, either slot's effective_xmin increased after\n> we assign its initial value in CreateInitDecodingContext() or somehow\n> its value is InvalidTransactionId when we have invoked\n> SnapBuildInitialSnapshot(). The other possibility is that the\n> initial_xmin_horizon check in SnapBuildFindSnapshot() doesn't insulate\n> us from assigning builder->xmin value older than initial_xmin_horizon.\n> I am not able to see if any of this can be true.\n\nYea, I don't immediately see anything either. Given the discussion in\nhttps://www.postgresql.org/message-id/Yz2hivgyjS1RfMKs%40depesz.com I am\nstarting to wonder if we've introduced a race in the slot machinery.\n\n\n> diff --git a/src/backend/replication/logical/snapbuild.c b/src/backend/replication/logical/snapbuild.c\n> index 5006a5c464..e85c75e0e6 100644\n> --- a/src/backend/replication/logical/snapbuild.c\n> +++ b/src/backend/replication/logical/snapbuild.c\n> @@ -566,11 +566,13 @@ SnapBuildInitialSnapshot(SnapBuild *builder)\n> {\n> \tSnapshot\tsnap;\n> \tTransactionId xid;\n> +\tTransactionId safeXid;\n> \tTransactionId *newxip;\n> \tint\t\t\tnewxcnt = 0;\n> \n> \tAssert(!FirstSnapshotSet);\n> \tAssert(XactIsoLevel == XACT_REPEATABLE_READ);\n> +\tAssert(builder->building_full_snapshot);\n> \n> \tif (builder->state != SNAPBUILD_CONSISTENT)\n> \t\telog(ERROR, \"cannot build an initial slot snapshot before reaching a consistent state\");\n> @@ -589,17 +591,13 @@ SnapBuildInitialSnapshot(SnapBuild *builder)\n> \t * mechanism. 
Due to that we can do this without locks, we're only\n> \t * changing our own value.\n> \t */\n\nPerhaps add something like \"Creating a snapshot is expensive and an unenforced\nxmin horizon would have bad consequences, therefore always double-check that\nthe horizon is enforced\"?\n\n\n> -#ifdef USE_ASSERT_CHECKING\n> -\t{\n> -\t\tTransactionId safeXid;\n> -\n> -\t\tLWLockAcquire(ProcArrayLock, LW_SHARED);\n> -\t\tsafeXid = GetOldestSafeDecodingTransactionId(false);\n> -\t\tLWLockRelease(ProcArrayLock);\n> +\tLWLockAcquire(ProcArrayLock, LW_SHARED);\n> +\tsafeXid = GetOldestSafeDecodingTransactionId(false);\n> +\tLWLockRelease(ProcArrayLock);\n> \n> -\t\tAssert(TransactionIdPrecedesOrEquals(safeXid, snap->xmin));\n> -\t}\n> -#endif\n> +\tif (TransactionIdFollows(safeXid, snap->xmin))\n> +\t\telog(ERROR, \"cannot build an initial slot snapshot when oldest safe xid %u follows snapshot's xmin %u\",\n> +\t\t\t safeXid, snap->xmin);\n> \n> \tMyProc->xmin = snap->xmin;\n> \n\ns/when/as/\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 15 Nov 2022 18:00:48 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Assertion failure in SnapBuildInitialSnapshot()" }, { "msg_contents": "On Wed, Nov 16, 2022 at 7:30 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2022-11-15 16:20:00 +0530, Amit Kapila wrote:\n> > On Tue, Nov 15, 2022 at 8:08 AM Andres Freund <andres@anarazel.de> wrote:\n> > > nor do we enforce in an obvious place that we\n> > > don't already hold a snapshot.\n> > >\n> >\n> > We have a check for (FirstXactSnapshot == NULL) in\n> > RestoreTransactionSnapshot->SetTransactionSnapshot. Won't that be\n> > sufficient?\n>\n> I don't think that'd e.g. catch a catalog snapshot being held, yet that'd\n> still be bad. 
And I think checking in SetTransactionSnapshot() is too late,\n> we've already overwritten MyProc->xmin by that point.\n>\n\nSo, shall we add the below Asserts in SnapBuildInitialSnapshot() after\nwe have the Assert for Assert(!FirstSnapshotSet)?\nAssert(FirstXactSnapshot == NULL);\nAssert(!HistoricSnapshotActive());\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 16 Nov 2022 14:22:01 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Assertion failure in SnapBuildInitialSnapshot()" }, { "msg_contents": "Hi,\n\nOn 2022-11-16 14:22:01 +0530, Amit Kapila wrote:\n> On Wed, Nov 16, 2022 at 7:30 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > On 2022-11-15 16:20:00 +0530, Amit Kapila wrote:\n> > > On Tue, Nov 15, 2022 at 8:08 AM Andres Freund <andres@anarazel.de> wrote:\n> > > > nor do we enforce in an obvious place that we\n> > > > don't already hold a snapshot.\n> > > >\n> > >\n> > > We have a check for (FirstXactSnapshot == NULL) in\n> > > RestoreTransactionSnapshot->SetTransactionSnapshot. Won't that be\n> > > sufficient?\n> >\n> > I don't think that'd e.g. catch a catalog snapshot being held, yet that'd\n> > still be bad. And I think checking in SetTransactionSnapshot() is too late,\n> > we've already overwritten MyProc->xmin by that point.\n> >\n> \n> So, shall we add the below Asserts in SnapBuildInitialSnapshot() after\n> we have the Assert for Assert(!FirstSnapshotSet)?\n> Assert(FirstXactSnapshot == NULL);\n> Assert(!HistoricSnapshotActive());\n\nI don't think that'd catch a catalog snapshot. But perhaps the better answer\nfor the catalog snapshot is to just invalidate it explicitly. 
The user doesn't\nhave control over the catalog snapshot being taken, and it's not too hard to\nimagine the walsender code triggering one somewhere.\n\nSo maybe we should add something like:\n\nInvalidateCatalogSnapshot(); /* about to overwrite MyProc->xmin */\nif (HaveRegisteredOrActiveSnapshot())\n elog(ERROR, \"cannot build an initial slot snapshot when snapshots exist\")\nAssert(!HistoricSnapshotActive());\n\nI think we'd not need to assert FirstXactSnapshot == NULL or !FirstSnapshotSet\nwith that, because those would show up in HaveRegisteredOrActiveSnapshot().\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 16 Nov 2022 10:26:49 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Assertion failure in SnapBuildInitialSnapshot()" }, { "msg_contents": "On Wed, Nov 16, 2022 at 11:56 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2022-11-16 14:22:01 +0530, Amit Kapila wrote:\n> > On Wed, Nov 16, 2022 at 7:30 AM Andres Freund <andres@anarazel.de> wrote:\n> > >\n> > > On 2022-11-15 16:20:00 +0530, Amit Kapila wrote:\n> > > > On Tue, Nov 15, 2022 at 8:08 AM Andres Freund <andres@anarazel.de> wrote:\n> > > > > nor do we enforce in an obvious place that we\n> > > > > don't already hold a snapshot.\n> > > > >\n> > > >\n> > > > We have a check for (FirstXactSnapshot == NULL) in\n> > > > RestoreTransactionSnapshot->SetTransactionSnapshot. Won't that be\n> > > > sufficient?\n> > >\n> > > I don't think that'd e.g. catch a catalog snapshot being held, yet that'd\n> > > still be bad. And I think checking in SetTransactionSnapshot() is too late,\n> > > we've already overwritten MyProc->xmin by that point.\n> > >\n> >\n> > So, shall we add the below Asserts in SnapBuildInitialSnapshot() after\n> > we have the Assert for Assert(!FirstSnapshotSet)?\n> > Assert(FirstXactSnapshot == NULL);\n> > Assert(!HistoricSnapshotActive());\n>\n> I don't think that'd catch a catalog snapshot. 
But perhaps the better answer\n> for the catalog snapshot is to just invalidate it explicitly. The user doesn't\n> have control over the catalog snapshot being taken, and it's not too hard to\n> imagine the walsender code triggering one somewhere.\n>\n> So maybe we should add something like:\n>\n> InvalidateCatalogSnapshot(); /* about to overwrite MyProc->xmin */\n>\n\nThe comment \"/* about to overwrite MyProc->xmin */\" is unclear to me.\nWe already have a check (/* so we don't overwrite the existing value\n*/\nif (TransactionIdIsValid(MyProc->xmin))) in SnapBuildInitialSnapshot()\nwhich ensures that we don't overwrite MyProc->xmin, so the above\ncomment seems contradictory to me.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 17 Nov 2022 10:44:18 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Assertion failure in SnapBuildInitialSnapshot()" }, { "msg_contents": "On Wed, Nov 16, 2022 at 11:56 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2022-11-16 14:22:01 +0530, Amit Kapila wrote:\n> > On Wed, Nov 16, 2022 at 7:30 AM Andres Freund <andres@anarazel.de> wrote:\n> > >\n> > > On 2022-11-15 16:20:00 +0530, Amit Kapila wrote:\n> > > > On Tue, Nov 15, 2022 at 8:08 AM Andres Freund <andres@anarazel.de> wrote:\n> > > > > nor do we enforce in an obvious place that we\n> > > > > don't already hold a snapshot.\n> > > > >\n> > > >\n> > > > We have a check for (FirstXactSnapshot == NULL) in\n> > > > RestoreTransactionSnapshot->SetTransactionSnapshot. Won't that be\n> > > > sufficient?\n> > >\n> > > I don't think that'd e.g. catch a catalog snapshot being held, yet that'd\n> > > still be bad. 
And I think checking in SetTransactionSnapshot() is too late,\n> > > we've already overwritten MyProc->xmin by that point.\n> > >\n> >\n> > So, shall we add the below Asserts in SnapBuildInitialSnapshot() after\n> > we have the Assert for Assert(!FirstSnapshotSet)?\n> > Assert(FirstXactSnapshot == NULL);\n> > Assert(!HistoricSnapshotActive());\n>\n> I don't think that'd catch a catalog snapshot. But perhaps the better answer\n> for the catalog snapshot is to just invalidate it explicitly. The user doesn't\n> have control over the catalog snapshot being taken, and it's not too hard to\n> imagine the walsender code triggering one somewhere.\n>\n> So maybe we should add something like:\n>\n> InvalidateCatalogSnapshot(); /* about to overwrite MyProc->xmin */\n> if (HaveRegisteredOrActiveSnapshot())\n> elog(ERROR, \"cannot build an initial slot snapshot when snapshots exist\")\n> Assert(!HistoricSnapshotActive());\n>\n> I think we'd not need to assert FirstXactSnapshot == NULL or !FirstSnapshotSet\n> with that, because those would show up in HaveRegisteredOrActiveSnapshot().\n>\n\nIn the attached patch, I have incorporated this change and other\nfeedback. I think this should probably help us find the reason for\nthis problem when we see it in the future.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Thu, 17 Nov 2022 12:03:40 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Assertion failure in SnapBuildInitialSnapshot()" }, { "msg_contents": "Hi,\n\nOn 2022-11-17 10:44:18 +0530, Amit Kapila wrote:\n> On Wed, Nov 16, 2022 at 11:56 PM Andres Freund <andres@anarazel.de> wrote:\n> > On 2022-11-16 14:22:01 +0530, Amit Kapila wrote:\n> > > On Wed, Nov 16, 2022 at 7:30 AM Andres Freund <andres@anarazel.de> wrote:\n> > > > On 2022-11-15 16:20:00 +0530, Amit Kapila wrote:\n> > > > > On Tue, Nov 15, 2022 at 8:08 AM Andres Freund <andres@anarazel.de> wrote:\n> > I don't think that'd catch a catalog snapshot. 
But perhaps the better answer\n> > for the catalog snapshot is to just invalidate it explicitly. The user doesn't\n> > have control over the catalog snapshot being taken, and it's not too hard to\n> > imagine the walsender code triggering one somewhere.\n> >\n> > So maybe we should add something like:\n> >\n> > InvalidateCatalogSnapshot(); /* about to overwrite MyProc->xmin */\n> >\n> \n> The comment \"/* about to overwrite MyProc->xmin */\" is unclear to me.\n> We already have a check (/* so we don't overwrite the existing value\n> */\n> if (TransactionIdIsValid(MyProc->xmin))) in SnapBuildInitialSnapshot()\n> which ensures that we don't overwrite MyProc->xmin, so the above\n> comment seems contradictory to me.\n\nThe point is that catalog snapshots could easily end up setting MyProc->xmin,\neven though the caller hasn't done anything wrong. So the\nInvalidateCatalogSnapshot() would avoid erroring out in a number of scenarios.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 17 Nov 2022 09:45:07 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Assertion failure in SnapBuildInitialSnapshot()" }, { "msg_contents": "On Thu, Nov 17, 2022 at 11:15 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2022-11-17 10:44:18 +0530, Amit Kapila wrote:\n> > On Wed, Nov 16, 2022 at 11:56 PM Andres Freund <andres@anarazel.de> wrote:\n> > > On 2022-11-16 14:22:01 +0530, Amit Kapila wrote:\n> > > > On Wed, Nov 16, 2022 at 7:30 AM Andres Freund <andres@anarazel.de> wrote:\n> > > > > On 2022-11-15 16:20:00 +0530, Amit Kapila wrote:\n> > > > > > On Tue, Nov 15, 2022 at 8:08 AM Andres Freund <andres@anarazel.de> wrote:\n> > > I don't think that'd catch a catalog snapshot. But perhaps the better answer\n> > > for the catalog snapshot is to just invalidate it explicitly. 
The user doesn't\n> > > have control over the catalog snapshot being taken, and it's not too hard to\n> > > imagine the walsender code triggering one somewhere.\n> > >\n> > > So maybe we should add something like:\n> > >\n> > > InvalidateCatalogSnapshot(); /* about to overwrite MyProc->xmin */\n> > >\n> >\n> > The comment \"/* about to overwrite MyProc->xmin */\" is unclear to me.\n> > We already have a check (/* so we don't overwrite the existing value\n> > */\n> > if (TransactionIdIsValid(MyProc->xmin))) in SnapBuildInitialSnapshot()\n> > which ensures that we don't overwrite MyProc->xmin, so the above\n> > comment seems contradictory to me.\n>\n> The point is that catalog snapshots could easily end up setting MyProc->xmin,\n> even though the caller hasn't done anything wrong. So the\n> InvalidateCatalogSnapshot() would avoid erroring out in a number of scenarios.\n>\n\nOkay, updated the patch accordingly.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Fri, 18 Nov 2022 11:20:36 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Assertion failure in SnapBuildInitialSnapshot()" }, { "msg_contents": "Hi,\n\nOn 2022-11-18 11:20:36 +0530, Amit Kapila wrote:\n> Okay, updated the patch accordingly.\n\nAssuming it passes tests etc, this'd work for me.\n\n- Andres\n\n\n", "msg_date": "Fri, 18 Nov 2022 17:05:53 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Assertion failure in SnapBuildInitialSnapshot()" }, { "msg_contents": "On Sat, Nov 19, 2022 at 6:35 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2022-11-18 11:20:36 +0530, Amit Kapila wrote:\n> > Okay, updated the patch accordingly.\n>\n> Assuming it passes tests etc, this'd work for me.\n>\n\nThanks, Pushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 21 Nov 2022 13:01:27 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Assertion 
failure in SnapBuildInitialSnapshot()" }, { "msg_contents": "On Mon, Nov 21, 2022 at 4:31 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sat, Nov 19, 2022 at 6:35 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > On 2022-11-18 11:20:36 +0530, Amit Kapila wrote:\n> > > Okay, updated the patch accordingly.\n> >\n> > Assuming it passes tests etc, this'd work for me.\n> >\n>\n> Thanks, Pushed.\n\nThe same assertion failure has been reported on another thread[1].\nSince I could reproduce this issue several times in my environment\nI've investigated the root cause.\n\nI think there is a race condition of updating\nprocArray->replication_slot_xmin by CreateInitDecodingContext() and\nLogicalConfirmReceivedLocation().\n\nWhat I observed in the test was that a walsender process called:\nSnapBuildProcessRunningXacts()\n LogicalIncreaseXminForSlot()\n LogicalConfirmReceivedLocation()\n ReplicationSlotsComputeRequiredXmin(false).\n\nIn ReplicationSlotsComputeRequiredXmin() it acquired the\nReplicationSlotControlLock and got 0 as the minimum xmin since there\nwas no wal sender having effective_xmin. Before calling\nProcArraySetReplicationSlotXmin() (i.e. before acquiring\nProcArrayLock), another walsender process called\nCreateInitDecodingContext(), acquired ProcArrayLock, computed\nslot->effective_catalog_xmin, called\nReplicationSlotsComputeRequiredXmin(true). Since its\neffective_catalog_xmin had been set, it got 39968 as the minimum xmin,\nand updated replication_slot_xmin. However, as soon as the second\nwalsender released ProcArrayLock, the first walsender updated the\nreplication_slot_xmin to 0. 
After that, the second walsender called\nSnapBuildInitialSnapshot(), and GetOldestSafeDecodingTransactionId()\nreturned an XID newer than snap->xmin.\n\nOne idea to fix this issue is that in\nReplicationSlotsComputeRequiredXmin(), we compute the minimum xmin\nwhile holding both ProcArrayLock and ReplicationSlotControlLock, and\nrelease only ReplicationSlotsControlLock before updating the\nreplication_slot_xmin. I'm concerned it will increase the contention\non ProcArrayLock but I've attached the patch for discussion.\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/tencent_7EB71DA5D7BA00EB0B429DCE45D0452B6406%40qq.com\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 8 Dec 2022 11:46:44 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Assertion failure in SnapBuildInitialSnapshot()" }, { "msg_contents": "On Thu, Dec 8, 2022 at 8:17 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> The same assertion failure has been reported on another thread[1].\n> Since I could reproduce this issue several times in my environment\n> I've investigated the root cause.\n>\n> I think there is a race condition of updating\n> procArray->replication_slot_xmin by CreateInitDecodingContext() and\n> LogicalConfirmReceivedLocation().\n>\n> What I observed in the test was that a walsender process called:\n> SnapBuildProcessRunningXacts()\n> LogicalIncreaseXminForSlot()\n> LogicalConfirmReceivedLocation()\n> ReplicationSlotsComputeRequiredXmin(false).\n>\n> In ReplicationSlotsComputeRequiredXmin() it acquired the\n> ReplicationSlotControlLock and got 0 as the minimum xmin since there\n> was no wal sender having effective_xmin.\n>\n\nWhat about the current walsender process which is processing\nrunning_xacts via SnapBuildProcessRunningXacts()? Isn't that walsender\nslot's effective_xmin have a non-zero value? 
If not, then why?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 27 Jan 2023 10:54:55 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Assertion failure in SnapBuildInitialSnapshot()" }, { "msg_contents": "Dear Sawada-san,\r\n\r\nThank you for making the patch! I'm still considering whether this approach is\r\ncorrect, but I can put a comment on your patch anyway.\r\n\r\n```\r\n-\tAssert(!already_locked || LWLockHeldByMe(ProcArrayLock));\r\n-\r\n-\tif (!already_locked)\r\n-\t\tLWLockAcquire(ProcArrayLock, LW_EXCLUSIVE);\r\n+\tAssert(LWLockHeldByMe(ProcArrayLock));\r\n```\r\n\r\nIn this function, we assume that the ProcArrayLock has already been acquired in\r\nexclusive mode and modify data. I think LWLockHeldByMeInMode() should be used\r\ninstead of LWLockHeldByMe().\r\nI confirmed that there is only one caller that uses ReplicationSlotsComputeRequiredXmin(true)\r\nand it acquires exclusive lock correctly, but it can avoid future bugs.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n", "msg_date": "Fri, 27 Jan 2023 11:01:19 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Assertion failure in SnapBuildInitialSnapshot()" }, { "msg_contents": "Dear Amit, Sawada-san,\r\n\r\nI have also reproduced the failure on PG15 with some debug log, and I agreed that\r\nsomebody changed procArray->replication_slot_xmin to InvalidTransactionId.\r\n\r\n> > The same assertion failure has been reported on another thread[1].\r\n> > Since I could reproduce this issue several times in my environment\r\n> > I've investigated the root cause.\r\n> >\r\n> > I think there is a race condition of updating\r\n> > procArray->replication_slot_xmin by CreateInitDecodingContext() and\r\n> > LogicalConfirmReceivedLocation().\r\n> >\r\n> > What I observed in the test was that a walsender process called:\r\n> > SnapBuildProcessRunningXacts()\r\n> > 
LogicalIncreaseXminForSlot()\r\n> > LogicalConfirmReceivedLocation()\r\n> > ReplicationSlotsComputeRequiredXmin(false).\r\n> >\r\n> > In ReplicationSlotsComputeRequiredXmin() it acquired the\r\n> > ReplicationSlotControlLock and got 0 as the minimum xmin since there\r\n> > was no wal sender having effective_xmin.\r\n> >\r\n> \r\n> What about the current walsender process which is processing\r\n> running_xacts via SnapBuildProcessRunningXacts()? Isn't that walsender\r\n> slot's effective_xmin have a non-zero value? If not, then why?\r\n\r\nNormal walsenders which are not for tablesync create a replication slot with\r\nNOEXPORT_SNAPSHOT option. I think in this case, CreateInitDecodingContext() is\r\ncalled with need_full_snapshot = false, and slot->effective_xmin is not updated.\r\nIt is set as InvalidTransactionId at ReplicationSlotCreate() and no functions update\r\nthat. Hence the slot acquired by the walsender may have Invalid effective_xmin.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n", "msg_date": "Sat, 28 Jan 2023 14:54:22 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Assertion failure in SnapBuildInitialSnapshot()" }, { "msg_contents": "On Sat, Jan 28, 2023 at 11:54 PM Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> Dear Amit, Sawada-san,\n>\n> I have also reproduced the failure on PG15 with some debug log, and I agreed that\n> somebody changed procArray->replication_slot_xmin to InvalidTransactionId.\n>\n> > > The same assertion failure has been reported on another thread[1].\n> > > Since I could reproduce this issue several times in my environment\n> > > I've investigated the root cause.\n> > >\n> > > I think there is a race condition of updating\n> > > procArray->replication_slot_xmin by CreateInitDecodingContext() and\n> > > LogicalConfirmReceivedLocation().\n> > >\n> > > What I observed in the test was that a walsender process called:\n> > > SnapBuildProcessRunningXacts()\n> > > LogicalIncreaseXminForSlot()\n> > > LogicalConfirmReceivedLocation()\n> > > ReplicationSlotsComputeRequiredXmin(false).\n> > >\n> > > In ReplicationSlotsComputeRequiredXmin() it acquired the\n> > > ReplicationSlotControlLock and got 0 as the minimum xmin since there\n> > > was no wal sender having effective_xmin.\n> > >\n> >\n> > What about the current walsender process which is processing\n> > running_xacts via SnapBuildProcessRunningXacts()? Isn't that walsender\n> > slot's effective_xmin have a non-zero value? 
SnapBuildProcessRunningXacts()\n> > > LogicalIncreaseXminForSlot()\n> > > LogicalConfirmReceivedLocation()\n> > > ReplicationSlotsComputeRequiredXmin(false).\n> > >\n> > > In ReplicationSlotsComputeRequiredXmin() it acquired the\n> > > ReplicationSlotControlLock and got 0 as the minimum xmin since there\n> > > was no wal sender having effective_xmin.\n> > >\n> >\n> > What about the current walsender process which is processing\n> > running_xacts via SnapBuildProcessRunningXacts()? Isn't that walsender\n> > slot's effective_xmin have a non-zero value? If not, then why?\n>\n> Normal walsenders which are not for tablesync create a replication slot with\n> NOEXPORT_SNAPSHOT option. I think in this case, CreateInitDecodingContext() is\n> called with need_full_snapshot = false, and slot->effective_xmin is not updated.\n\nRight. This is how we create a slot used by an apply worker.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 30 Jan 2023 00:45:22 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Assertion failure in SnapBuildInitialSnapshot()" }, { "msg_contents": "On Sun, Jan 29, 2023 at 9:15 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Sat, Jan 28, 2023 at 11:54 PM Hayato Kuroda (Fujitsu)\n> <kuroda.hayato@fujitsu.com> wrote:\n> >\n> > Dear Amit, Sawada-san,\n> >\n> > I have also reproduced the failure on PG15 with some debug log, and I agreed that\n> > somebody changed procArray->replication_slot_xmin to InvalidTransactionId.\n> >\n> > > > The same assertion failure has been reported on another thread[1].\n> > > > Since I could reproduce this issue several times in my environment\n> > > > I've investigated the root cause.\n> > > >\n> > > > I think there is a race condition of updating\n> > > > procArray->replication_slot_xmin by CreateInitDecodingContext() and\n> > > > LogicalConfirmReceivedLocation().\n> > > >\n> > > > What I observed in 
the test was that a walsender process called:\n> > > > SnapBuildProcessRunningXacts()\n> > > > LogicalIncreaseXminForSlot()\n> > > > LogicalConfirmReceivedLocation()\n> > > > ReplicationSlotsComputeRequiredXmin(false).\n> > > >\n> > > > In ReplicationSlotsComputeRequiredXmin() it acquired the\n> > > > ReplicationSlotControlLock and got 0 as the minimum xmin since there\n> > > > was no wal sender having effective_xmin.\n> > > >\n> > >\n> > > What about the current walsender process which is processing\n> > > running_xacts via SnapBuildProcessRunningXacts()? Isn't that walsender\n> > > slot's effective_xmin have a non-zero value? If not, then why?\n> >\n> > Normal walsenders which are not for tablesync create a replication slot with\n> > NOEXPORT_SNAPSHOT option. I think in this case, CreateInitDecodingContext() is\n> > called with need_full_snapshot = false, and slot->effective_xmin is not updated.\n>\n> Right. This is how we create a slot used by an apply worker.\n>\n\nI was thinking about how that led to this problem because\nGetOldestSafeDecodingTransactionId() ignores InvalidTransactionId. 
It\nseems that is possible when both builder->xmin and\nreplication_slot_catalog_xmin precede replication_slot_catalog_xmin.\nDo you see any different reason for it?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 30 Jan 2023 10:27:14 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Assertion failure in SnapBuildInitialSnapshot()" }, { "msg_contents": "On Mon, Jan 30, 2023 at 10:27 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sun, Jan 29, 2023 at 9:15 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Sat, Jan 28, 2023 at 11:54 PM Hayato Kuroda (Fujitsu)\n> > <kuroda.hayato@fujitsu.com> wrote:\n> > >\n> > > Dear Amit, Sawada-san,\n> > >\n> > > I have also reproduced the failure on PG15 with some debug log, and I agreed that\n> > > somebody changed procArray->replication_slot_xmin to InvalidTransactionId.\n> > >\n> > > > > The same assertion failure has been reported on another thread[1].\n> > > > > Since I could reproduce this issue several times in my environment\n> > > > > I've investigated the root cause.\n> > > > >\n> > > > > I think there is a race condition of updating\n> > > > > procArray->replication_slot_xmin by CreateInitDecodingContext() and\n> > > > > LogicalConfirmReceivedLocation().\n> > > > >\n> > > > > What I observed in the test was that a walsender process called:\n> > > > > SnapBuildProcessRunningXacts()\n> > > > > LogicalIncreaseXminForSlot()\n> > > > > LogicalConfirmReceivedLocation()\n> > > > > ReplicationSlotsComputeRequiredXmin(false).\n> > > > >\n> > > > > In ReplicationSlotsComputeRequiredXmin() it acquired the\n> > > > > ReplicationSlotControlLock and got 0 as the minimum xmin since there\n> > > > > was no wal sender having effective_xmin.\n> > > > >\n> > > >\n> > > > What about the current walsender process which is processing\n> > > > running_xacts via SnapBuildProcessRunningXacts()? 
Isn't that walsender\n> > > > slot's effective_xmin have a non-zero value? If not, then why?\n> > >\n> > > Normal walsenders which are not for tablesync create a replication slot with\n> > > NOEXPORT_SNAPSHOT option. I think in this case, CreateInitDecodingContext() is\n> > > called with need_full_snapshot = false, and slot->effective_xmin is not updated.\n> >\n> > Right. This is how we create a slot used by an apply worker.\n> >\n>\n> I was thinking about how that led to this problem because\n> GetOldestSafeDecodingTransactionId() ignores InvalidTransactionId.\n>\n\nI have reproduced it manually. For this, I had to manually make the\ndebugger call ReplicationSlotsComputeRequiredXmin(false) via path\nSnapBuildProcessRunningXacts()->LogicalIncreaseXminForSlot()->LogicalConfirmReceivedLocation()\n->ReplicationSlotsComputeRequiredXmin(false) for the apply worker. The\nsequence of events is something like (a) the replication_slot_xmin for\ntablesync worker is overridden by apply worker as zero as explained in\nSawada-San's email, (b) another transaction happened on the publisher\nthat will increase the value of ShmemVariableCache->nextXid (c)\ntablesync worker invokes\nSnapBuildInitialSnapshot()->GetOldestSafeDecodingTransactionId() which\nwill return an oldestSafeXid which is higher than snapshot's xmin.\nThis happens because replication_slot_xmin has an InvalidTransactionId\nvalue and we won't consider replication_slot_catalog_xmin because\ncatalogOnly flag is false and there is no other open running\ntransaction. 
I think we should try to get a simplified test to\nreproduce this problem if possible.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 30 Jan 2023 11:34:11 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Assertion failure in SnapBuildInitialSnapshot()" }, { "msg_contents": "On Thu, Dec 8, 2022 at 8:17 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Nov 21, 2022 at 4:31 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> One idea to fix this issue is that in\n> ReplicationSlotsComputeRequiredXmin(), we compute the minimum xmin\n> while holding both ProcArrayLock and ReplicationSlotControlLock, and\n> release only ReplicationSlotsControlLock before updating the\n> replication_slot_xmin. I'm concerned it will increase the contention\n> on ProcArrayLock but I've attached the patch for discussion.\n>\n\nBut what kind of workload are you worried about? This will be called\nwhile processing XLOG_RUNNING_XACTS to update\nprocArray->replication_slot_xmin/procArray->replication_slot_catalog_xmin\nonly when required. So, if we want we can test some concurrent\nworkloads along with walsenders doing the decoding to check if it\nimpacts performance.\n\nWhat other way we can fix this? Do you think we can try to avoid\nretreating xmin values in ProcArraySetReplicationSlotXmin() to avoid\nthis problem? Personally, I think taking the lock as proposed by your\npatch is a better idea. BTW, this problem seems to be only logical\nreplication specific, so if we are too worried then we can change this\nlocking only for logical replication.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 30 Jan 2023 16:54:46 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Assertion failure in SnapBuildInitialSnapshot()" }, { "msg_contents": "On Mon, Jan 30, 2023 at 11:34 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> I have reproduced it manually. 
For this, I had to manually make the\n> debugger call ReplicationSlotsComputeRequiredXmin(false) via path\n> SnapBuildProcessRunningXacts()->LogicalIncreaseXminForSlot()->LogicalConfirmReceivedLocation()\n> ->ReplicationSlotsComputeRequiredXmin(false) for the apply worker. The\n> sequence of events is something like (a) the replication_slot_xmin for\n> tablesync worker is overridden by apply worker as zero as explained in\n> Sawada-San's email, (b) another transaction happened on the publisher\n> that will increase the value of ShmemVariableCache->nextXid (c)\n> tablesync worker invokes\n> SnapBuildInitialSnapshot()->GetOldestSafeDecodingTransactionId() which\n> will return an oldestSafeXid which is higher than snapshot's xmin.\n> This happens because replication_slot_xmin has an InvalidTransactionId\n> value and we won't consider replication_slot_catalog_xmin because\n> catalogOnly flag is false and there is no other open running\n> transaction. I think we should try to get a simplified test to\n> reproduce this problem if possible.\n>\n\nHere are steps to reproduce it manually with the help of a debugger:\n\nSession-1\n==========\nselect pg_create_logical_replication_slot('s', 'test_decoding');\ncreate table t2(c1 int);\nselect pg_replication_slot_advance('s', pg_current_wal_lsn()); --\nDebug this statement. Stop before taking procarraylock in\nProcArraySetReplicationSlotXmin.\n\nSession-2\n============\npsql -d postgres\nBegin;\n\nSession-3\n===========\npsql -d \"dbname=postgres replication=database\"\n\nbegin transaction isolation level repeatable read read only;\nCREATE_REPLICATION_SLOT slot1 LOGICAL test_decoding USE_SNAPSHOT;\n--Debug this statement. Stop in SnapBuildInitialSnapshot() before\ntaking procarraylock\n\nSession-1\n==========\nContinue debugging and finish execution of\nProcArraySetReplicationSlotXmin. 
Verify\nprocArray->replication_slot_xmin is zero.\n\nSession-2\n=========\nSelect txid_current();\nCommit;\n\nSession-3\n==========\nContinue debugging.\nVerify that safeXid follows snap->xmin. This leads to assertion (in\nback branches) or error (in HEAD).\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 30 Jan 2023 16:57:19 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Assertion failure in SnapBuildInitialSnapshot()" }, { "msg_contents": "On Fri, Jan 27, 2023 at 4:31 PM Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> Thank you for making the patch! I'm still considering whether this approach is\n> correct, but I can put a comment to your patch anyway.\n>\n> ```\n> - Assert(!already_locked || LWLockHeldByMe(ProcArrayLock));\n> -\n> - if (!already_locked)\n> - LWLockAcquire(ProcArrayLock, LW_EXCLUSIVE);\n> + Assert(LWLockHeldByMe(ProcArrayLock));\n> ```\n>\n> In this function, we regard that the ProcArrayLock has been already acquired as\n> exclusive mode and modify data. I think LWLockHeldByMeInMode() should be used\n> instead of LWLockHeldByMe().\n>\n\nRight, this is even evident from the comments atop\nReplicationSlotsComputeRequiredXmin(\"If already_locked is true,\nProcArrayLock has already been acquired exclusively.\". But, I am not\nsure if it is a good idea to remove 'already_locked' parameter,\nespecially in back branches as this is an exposed API.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 30 Jan 2023 16:59:48 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Assertion failure in SnapBuildInitialSnapshot()" }, { "msg_contents": "On Mon, Jan 30, 2023 at 8:30 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jan 27, 2023 at 4:31 PM Hayato Kuroda (Fujitsu)\n> <kuroda.hayato@fujitsu.com> wrote:\n> >\n> > Thank you for making the patch! 
I'm still considering whether this approach is\n> > correct, but I can put a comment to your patch anyway.\n> >\n> > ```\n> > - Assert(!already_locked || LWLockHeldByMe(ProcArrayLock));\n> > -\n> > - if (!already_locked)\n> > - LWLockAcquire(ProcArrayLock, LW_EXCLUSIVE);\n> > + Assert(LWLockHeldByMe(ProcArrayLock));\n> > ```\n> >\n> > In this function, we regard that the ProcArrayLock has been already acquired as\n> > exclusive mode and modify data. I think LWLockHeldByMeInMode() should be used\n> > instead of LWLockHeldByMe().\n> >\n>\n> Right, this is even evident from the comments atop\n> ReplicationSlotsComputeRequiredXmin(\"If already_locked is true,\n> ProcArrayLock has already been acquired exclusively.\".\n\nAgreed, will fix in the next version patch.\n\n> But, I am not\n> sure if it is a good idea to remove 'already_locked' parameter,\n> especially in back branches as this is an exposed API.\n\nYes, we should not remove the already_locked parameter in\nbackbranches. So I was thinking of keeping it on back branches.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 30 Jan 2023 21:41:20 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Assertion failure in SnapBuildInitialSnapshot()" }, { "msg_contents": "On Mon, Jan 30, 2023 at 8:24 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Dec 8, 2022 at 8:17 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Mon, Nov 21, 2022 at 4:31 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > One idea to fix this issue is that in\n> > ReplicationSlotsComputeRequiredXmin(), we compute the minimum xmin\n> > while holding both ProcArrayLock and ReplicationSlotControlLock, and\n> > release only ReplicationSlotsControlLock before updating the\n> > replication_slot_xmin. 
I'm concerned it will increase the contention\n> > on ProcArrayLock but I've attached the patch for discussion.\n> >\n>\n> But what kind of workload are you worried about? This will be called\n> while processing XLOG_RUNNING_XACTS to update\n> procArray->replication_slot_xmin/procArray->replication_slot_catalog_xmin\n> only when required. So, if we want we can test some concurrent\n> workloads along with walsenders doing the decoding to check if it\n> impacts performance.\n>\n\nI was slightly concerned about holding ProcArrayLock while iterating\nover replication slots especially when there are many replication\nslots in the system. But you're right; we need it only when processing\nXLOG_RUNINNG_XACTS and it's not frequent. So it doesn't introduce\nvisible overhead or negligible overhead.\n\n> What other way we can fix this? Do you think we can try to avoid\n> retreating xmin values in ProcArraySetReplicationSlotXmin() to avoid\n> this problem? Personally, I think taking the lock as proposed by your\n> patch is a better idea.\n\nAgreed.\n\n> BTW, this problem seems to be only logical\n> replication specific, so if we are too worried then we can change this\n> locking only for logical replication.\n\nYes, but I agree that there won't be a big overhead by this fix.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 31 Jan 2023 10:19:13 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Assertion failure in SnapBuildInitialSnapshot()" }, { "msg_contents": "On Mon, Jan 30, 2023 at 9:41 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Jan 30, 2023 at 8:30 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Jan 27, 2023 at 4:31 PM Hayato Kuroda (Fujitsu)\n> > <kuroda.hayato@fujitsu.com> wrote:\n> > >\n> > > Thank you for making the patch! 
I'm still considering whether this approach is\n> > > correct, but I can put a comment to your patch anyway.\n> > >\n> > > ```\n> > > - Assert(!already_locked || LWLockHeldByMe(ProcArrayLock));\n> > > -\n> > > - if (!already_locked)\n> > > - LWLockAcquire(ProcArrayLock, LW_EXCLUSIVE);\n> > > + Assert(LWLockHeldByMe(ProcArrayLock));\n> > > ```\n> > >\n> > > In this function, we regard that the ProcArrayLock has been already acquired as\n> > > exclusive mode and modify data. I think LWLockHeldByMeInMode() should be used\n> > > instead of LWLockHeldByMe().\n> > >\n> >\n> > Right, this is even evident from the comments atop\n> > ReplicationSlotsComputeRequiredXmin(\"If already_locked is true,\n> > ProcArrayLock has already been acquired exclusively.\".\n>\n> Agreed, will fix in the next version patch.\n>\n> > But, I am not\n> > sure if it is a good idea to remove 'already_locked' parameter,\n> > especially in back branches as this is an exposed API.\n>\n> Yes, we should not remove the already_locked parameter in\n> backbranches. So I was thinking of keeping it on back branches.\n>\n\nI've attached patches for HEAD and backbranches. Please review them.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 31 Jan 2023 14:41:35 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Assertion failure in SnapBuildInitialSnapshot()" }, { "msg_contents": "On Tue, Jan 31, 2023 at 11:12 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Jan 30, 2023 at 9:41 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> I've attached patches for HEAD and backbranches. 
Please review them.\n>\n\nShall we add a comment like the one below in\nReplicationSlotsComputeRequiredXmin()?\ndiff --git a/src/backend/replication/slot.c b/src/backend/replication/slot.c\nindex f286918f69..e28d48bca7 100644\n--- a/src/backend/replication/slot.c\n+++ b/src/backend/replication/slot.c\n@@ -840,6 +840,13 @@ ReplicationSlotsComputeRequiredXmin(bool already_locked)\n\n Assert(ReplicationSlotCtl != NULL);\n\n+ /*\n+ * It is possible that by the time we compute the agg_xmin\nhere and before\n+ * updating replication_slot_xmin, the CreateInitDecodingContext() will\n+ * compute and update replication_slot_xmin. So, we need to acquire\n+ * ProcArrayLock here to avoid retreating the value of\nreplication_slot_xmin.\n+ */\n+\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 31 Jan 2023 12:25:53 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Assertion failure in SnapBuildInitialSnapshot()" }, { "msg_contents": "On Tue, Jan 31, 2023 at 3:56 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jan 31, 2023 at 11:12 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Mon, Jan 30, 2023 at 9:41 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I've attached patches for HEAD and backbranches. Please review them.\n> >\n>\n> Shall we add a comment like the one below in\n> ReplicationSlotsComputeRequiredXmin()?\n> diff --git a/src/backend/replication/slot.c b/src/backend/replication/slot.c\n> index f286918f69..e28d48bca7 100644\n> --- a/src/backend/replication/slot.c\n> +++ b/src/backend/replication/slot.c\n> @@ -840,6 +840,13 @@ ReplicationSlotsComputeRequiredXmin(bool already_locked)\n>\n> Assert(ReplicationSlotCtl != NULL);\n>\n> + /*\n> + * It is possible that by the time we compute the agg_xmin\n> here and before\n> + * updating replication_slot_xmin, the CreateInitDecodingContext() will\n> + * compute and update replication_slot_xmin. 
So, we need to acquire\n> + * ProcArrayLock here to avoid retreating the value of\n> replication_slot_xmin.\n> + */\n> +\n\nAgreed. It looks good to me.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 31 Jan 2023 15:59:38 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Assertion failure in SnapBuildInitialSnapshot()" }, { "msg_contents": "On Tue, Jan 31, 2023 at 3:59 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Jan 31, 2023 at 3:56 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Jan 31, 2023 at 11:12 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Mon, Jan 30, 2023 at 9:41 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > I've attached patches for HEAD and backbranches. Please review them.\n> > >\n> >\n> > Shall we add a comment like the one below in\n> > ReplicationSlotsComputeRequiredXmin()?\n> > diff --git a/src/backend/replication/slot.c b/src/backend/replication/slot.c\n> > index f286918f69..e28d48bca7 100644\n> > --- a/src/backend/replication/slot.c\n> > +++ b/src/backend/replication/slot.c\n> > @@ -840,6 +840,13 @@ ReplicationSlotsComputeRequiredXmin(bool already_locked)\n> >\n> > Assert(ReplicationSlotCtl != NULL);\n> >\n> > + /*\n> > + * It is possible that by the time we compute the agg_xmin\n> > here and before\n> > + * updating replication_slot_xmin, the CreateInitDecodingContext() will\n> > + * compute and update replication_slot_xmin. So, we need to acquire\n> > + * ProcArrayLock here to avoid retreating the value of\n> > replication_slot_xmin.\n> > + */\n> > +\n>\n> Agreed. 
It looks good to me.\n\nAttached updated patches.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 31 Jan 2023 21:37:58 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Assertion failure in SnapBuildInitialSnapshot()" }, { "msg_contents": "On Tue, Jan 31, 2023 at 6:08 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> Attached updated patches.\n>\n\nThanks, Andres, others, do you see a better way to fix this problem? I\nhave reproduced it manually and the steps are shared at [1] and\nSawada-San also reproduced it, see [2].\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1KDFeh%3DZbvSWPx%3Dir2QOXBxJbH0K8YqifDtG3xJENLR%2Bw%40mail.gmail.com\n[2] - https://www.postgresql.org/message-id/CAD21AoDKJBB6p4X-%2B057Vz44Xyc-zDFbWJ%2Bg9FL6qAF5PC2iFg%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 1 Feb 2023 11:23:57 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Assertion failure in SnapBuildInitialSnapshot()" }, { "msg_contents": "On Tue, Jan 31, 2023 at 6:08 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> Attached updated patches.\n>\n\nIn back-branch patches, the change is as below:\n+ *\n+ * NB: the caller must hold ProcArrayLock in an exclusive mode regardless of\n+ * already_locked which is unused now but kept for ABI compatibility.\n */\n void\n ProcArraySetReplicationSlotXmin(TransactionId xmin, TransactionId catalog_xmin,\n bool already_locked)\n {\n- Assert(!already_locked || LWLockHeldByMe(ProcArrayLock));\n-\n- if (!already_locked)\n- LWLockAcquire(ProcArrayLock, LW_EXCLUSIVE);\n+ Assert(LWLockHeldByMeInMode(ProcArrayLock, LW_EXCLUSIVE));\n\nThis change looks odd to me. 
I think it would be better to pass\n'already_locked' as true from the caller.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 7 Feb 2023 16:49:50 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Assertion failure in SnapBuildInitialSnapshot()" }, { "msg_contents": "Hi,\n\nOn 2023-02-01 11:23:57 +0530, Amit Kapila wrote:\n> On Tue, Jan 31, 2023 at 6:08 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > Attached updated patches.\n> >\n>\n> Thanks, Andres, others, do you see a better way to fix this problem? I\n> have reproduced it manually and the steps are shared at [1] and\n> Sawada-San also reproduced it, see [2].\n>\n> [1] - https://www.postgresql.org/message-id/CAA4eK1KDFeh%3DZbvSWPx%3Dir2QOXBxJbH0K8YqifDtG3xJENLR%2Bw%40mail.gmail.com\n> [2] - https://www.postgresql.org/message-id/CAD21AoDKJBB6p4X-%2B057Vz44Xyc-zDFbWJ%2Bg9FL6qAF5PC2iFg%40mail.gmail.com\n\nHm. It's worrysome to now hold ProcArrayLock exclusively while iterating over\nthe slots. ReplicationSlotsComputeRequiredXmin() can be called at a\nnon-neglegible frequency. Callers like CreateInitDecodingContext(), that pass\nalready_locked=true worry me a lot less, because obviously that's not a very\nfrequent operation.\n\nThis is particularly not great because we need to acquire\nReplicationSlotControlLock while already holding ProcArrayLock.\n\n\nBut clearly there's a pretty large hole in the lock protection right now. I'm\na bit confused about why we (Robert and I, or just I) thought it's ok to do it\nthis way.\n\n\nI wonder if we could instead invert the locks, and hold\nReplicationSlotControlLock until after ProcArraySetReplicationSlotXmin(), and\nacquire ProcArrayLock just for ProcArraySetReplicationSlotXmin(). 
That'd mean\nthat already_locked = true callers have to do a bit more work (we have to be\nsure the locks are always acquired in the same order, or we end up in\nunresolved deadlock land), but I think we can live with that.\n\n\nThis would still allow concurrent invocations of\nReplicationSlotsComputeRequiredXmin() come up with slightly different values,\nbut that's possible with the proposed patch as well, as effective_xmin is\nupdated without any of the other locks. But I don't see a problem with that.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 7 Feb 2023 11:49:03 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Assertion failure in SnapBuildInitialSnapshot()" }, { "msg_contents": "Hi,\n\nOn 2023-02-07 11:49:03 -0800, Andres Freund wrote:\n> On 2023-02-01 11:23:57 +0530, Amit Kapila wrote:\n> > On Tue, Jan 31, 2023 at 6:08 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > Attached updated patches.\n> > >\n> >\n> > Thanks, Andres, others, do you see a better way to fix this problem? I\n> > have reproduced it manually and the steps are shared at [1] and\n> > Sawada-San also reproduced it, see [2].\n> >\n> > [1] - https://www.postgresql.org/message-id/CAA4eK1KDFeh%3DZbvSWPx%3Dir2QOXBxJbH0K8YqifDtG3xJENLR%2Bw%40mail.gmail.com\n> > [2] - https://www.postgresql.org/message-id/CAD21AoDKJBB6p4X-%2B057Vz44Xyc-zDFbWJ%2Bg9FL6qAF5PC2iFg%40mail.gmail.com\n> \n> Hm. It's worrysome to now hold ProcArrayLock exclusively while iterating over\n> the slots. ReplicationSlotsComputeRequiredXmin() can be called at a\n> non-neglegible frequency. Callers like CreateInitDecodingContext(), that pass\n> already_locked=true worry me a lot less, because obviously that's not a very\n> frequent operation.\n\nSeparately from this change:\n\nI wonder if we ought to change the setup in CreateInitDecodingContext() to be a\nbit less intricate. 
One idea:\n\nInstead of having GetOldestSafeDecodingTransactionId() compute a value, that\nwe then enter into a slot, that then computes the global horizon via\nReplicationSlotsComputeRequiredXmin(), we could have a successor to\nGetOldestSafeDecodingTransactionId() change procArray->replication_slot_xmin\n(if needed).\n\nAs long as CreateInitDecodingContext() prevents a concurent\nReplicationSlotsComputeRequiredXmin(), by holding ReplicationSlotControlLock\nexclusively, that should suffice to ensure that no \"wrong\" horizon was\ndetermined / no needed rows have been removed. And we'd not need a lock nested\ninside ProcArrayLock anymore.\n\n\nNot sure if it's sufficiently better to be worth bothering with though :(\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 7 Feb 2023 12:05:20 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Assertion failure in SnapBuildInitialSnapshot()" }, { "msg_contents": "On Wed, Feb 8, 2023 at 1:19 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2023-02-01 11:23:57 +0530, Amit Kapila wrote:\n> > On Tue, Jan 31, 2023 at 6:08 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > Attached updated patches.\n> > >\n> >\n> > Thanks, Andres, others, do you see a better way to fix this problem? I\n> > have reproduced it manually and the steps are shared at [1] and\n> > Sawada-San also reproduced it, see [2].\n> >\n> > [1] - https://www.postgresql.org/message-id/CAA4eK1KDFeh%3DZbvSWPx%3Dir2QOXBxJbH0K8YqifDtG3xJENLR%2Bw%40mail.gmail.com\n> > [2] - https://www.postgresql.org/message-id/CAD21AoDKJBB6p4X-%2B057Vz44Xyc-zDFbWJ%2Bg9FL6qAF5PC2iFg%40mail.gmail.com\n>\n> Hm. It's worrysome to now hold ProcArrayLock exclusively while iterating over\n> the slots. ReplicationSlotsComputeRequiredXmin() can be called at a\n> non-neglegible frequency. 
Callers like CreateInitDecodingContext(), that pass\n> already_locked=true worry me a lot less, because obviously that's not a very\n> frequent operation.\n>\n> This is particularly not great because we need to acquire\n> ReplicationSlotControlLock while already holding ProcArrayLock.\n>\n>\n> But clearly there's a pretty large hole in the lock protection right now. I'm\n> a bit confused about why we (Robert and I, or just I) thought it's ok to do it\n> this way.\n>\n>\n> I wonder if we could instead invert the locks, and hold\n> ReplicationSlotControlLock until after ProcArraySetReplicationSlotXmin(), and\n> acquire ProcArrayLock just for ProcArraySetReplicationSlotXmin().\n>\n\nAlong with inverting, doesn't this mean that we need to acquire\nReplicationSlotControlLock in Exclusive mode instead of acquiring it\nin shared mode? My understanding of the above locking scheme is that\nin CreateInitDecodingContext(), we acquire ReplicationSlotControlLock\nin Exclusive mode before acquiring ProcArrayLock in Exclusive mode and\nrelease it after releasing ProcArrayLock. Then,\nReplicationSlotsComputeRequiredXmin() acquires\nReplicationSlotControlLock in Exclusive mode only when already_locked\nis false and releases it after a call to\nProcArraySetReplicationSlotXmin(). 
ProcArraySetReplicationSlotXmin()\nwon't change.\n\nI don't think just inverting the order without changing the lock mode\nwill solve the problem because still apply worker will be able to\noverride the replication_slot_xmin value.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 8 Feb 2023 09:43:16 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Assertion failure in SnapBuildInitialSnapshot()" }, { "msg_contents": "On Wed, Feb 8, 2023 at 1:35 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2023-02-07 11:49:03 -0800, Andres Freund wrote:\n> > On 2023-02-01 11:23:57 +0530, Amit Kapila wrote:\n> > > On Tue, Jan 31, 2023 at 6:08 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > Attached updated patches.\n> > > >\n> > >\n> > > Thanks, Andres, others, do you see a better way to fix this problem? I\n> > > have reproduced it manually and the steps are shared at [1] and\n> > > Sawada-San also reproduced it, see [2].\n> > >\n> > > [1] - https://www.postgresql.org/message-id/CAA4eK1KDFeh%3DZbvSWPx%3Dir2QOXBxJbH0K8YqifDtG3xJENLR%2Bw%40mail.gmail.com\n> > > [2] - https://www.postgresql.org/message-id/CAD21AoDKJBB6p4X-%2B057Vz44Xyc-zDFbWJ%2Bg9FL6qAF5PC2iFg%40mail.gmail.com\n> >\n> > Hm. It's worrysome to now hold ProcArrayLock exclusively while iterating over\n> > the slots. ReplicationSlotsComputeRequiredXmin() can be called at a\n> > non-neglegible frequency. Callers like CreateInitDecodingContext(), that pass\n> > already_locked=true worry me a lot less, because obviously that's not a very\n> > frequent operation.\n>\n> Separately from this change:\n>\n> I wonder if we ought to change the setup in CreateInitDecodingContext() to be a\n> bit less intricate. 
One idea:\n>\n> Instead of having GetOldestSafeDecodingTransactionId() compute a value, that\n> we then enter into a slot, that then computes the global horizon via\n> ReplicationSlotsComputeRequiredXmin(), we could have a successor to\n> GetOldestSafeDecodingTransactionId() change procArray->replication_slot_xmin\n> (if needed).\n>\n> As long as CreateInitDecodingContext() prevents a concurent\n> ReplicationSlotsComputeRequiredXmin(), by holding ReplicationSlotControlLock\n> exclusively, that should suffice to ensure that no \"wrong\" horizon was\n> determined / no needed rows have been removed. And we'd not need a lock nested\n> inside ProcArrayLock anymore.\n>\n>\n> Not sure if it's sufficiently better to be worth bothering with though :(\n>\n\nI am also not sure because it would improve concurrency for\nCreateInitDecodingContext() which shouldn't be called at a higher\nfrequency. Also, to some extent, the current coding or the approach we\nare discussing is easier to follow as we would always update\nprocArray->replication_slot_xmin after checking all the slots.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 8 Feb 2023 10:17:58 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Assertion failure in SnapBuildInitialSnapshot()" }, { "msg_contents": "On Wed, Feb 8, 2023 at 1:13 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Feb 8, 2023 at 1:19 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > On 2023-02-01 11:23:57 +0530, Amit Kapila wrote:\n> > > On Tue, Jan 31, 2023 at 6:08 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > Attached updated patches.\n> > > >\n> > >\n> > > Thanks, Andres, others, do you see a better way to fix this problem? 
I\n> > > have reproduced it manually and the steps are shared at [1] and\n> > > Sawada-San also reproduced it, see [2].\n> > >\n> > > [1] - https://www.postgresql.org/message-id/CAA4eK1KDFeh%3DZbvSWPx%3Dir2QOXBxJbH0K8YqifDtG3xJENLR%2Bw%40mail.gmail.com\n> > > [2] - https://www.postgresql.org/message-id/CAD21AoDKJBB6p4X-%2B057Vz44Xyc-zDFbWJ%2Bg9FL6qAF5PC2iFg%40mail.gmail.com\n> >\n> > Hm. It's worrysome to now hold ProcArrayLock exclusively while iterating over\n> > the slots. ReplicationSlotsComputeRequiredXmin() can be called at a\n> > non-neglegible frequency. Callers like CreateInitDecodingContext(), that pass\n> > already_locked=true worry me a lot less, because obviously that's not a very\n> > frequent operation.\n> >\n> > This is particularly not great because we need to acquire\n> > ReplicationSlotControlLock while already holding ProcArrayLock.\n> >\n> >\n> > But clearly there's a pretty large hole in the lock protection right now. I'm\n> > a bit confused about why we (Robert and I, or just I) thought it's ok to do it\n> > this way.\n> >\n> >\n> > I wonder if we could instead invert the locks, and hold\n> > ReplicationSlotControlLock until after ProcArraySetReplicationSlotXmin(), and\n> > acquire ProcArrayLock just for ProcArraySetReplicationSlotXmin().\n> >\n>\n> Along with inverting, doesn't this mean that we need to acquire\n> ReplicationSlotControlLock in Exclusive mode instead of acquiring it\n> in shared mode? My understanding of the above locking scheme is that\n> in CreateInitDecodingContext(), we acquire ReplicationSlotControlLock\n> in Exclusive mode before acquiring ProcArrayLock in Exclusive mode and\n> release it after releasing ProcArrayLock. Then,\n> ReplicationSlotsComputeRequiredXmin() acquires\n> ReplicationSlotControlLock in Exclusive mode only when already_locked\n> is false and releases it after a call to\n> ProcArraySetReplicationSlotXmin(). 
ProcArraySetReplicationSlotXmin()\n> won't change.\n\nI've attached the patch of this idea for discussion. In\nGetOldestSafeDecodingTransactionId() called by\nCreateInitDecodingContext(), we hold ReplicationSlotControlLock,\nProcArrayLock, and XidGenLock at a time. So we would need to be\ncareful about the ordering.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 9 Feb 2023 15:32:01 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Assertion failure in SnapBuildInitialSnapshot()" }, { "msg_contents": "> On 9 Feb 2023, at 07:32, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n\n> I've attached the patch of this idea for discussion.\n\nAmit, Andres: have you had a chance to look at the updated version of this\npatch?\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 20 Jul 2023 09:34:29 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Assertion failure in SnapBuildInitialSnapshot()" }, { "msg_contents": "This thread has gone for about a year here without making any\nprogress, which isn't great.\n\nOn Tue, Feb 7, 2023 at 2:49 PM Andres Freund <andres@anarazel.de> wrote:\n> Hm. It's worrysome to now hold ProcArrayLock exclusively while iterating over\n> the slots. ReplicationSlotsComputeRequiredXmin() can be called at a\n> non-neglegible frequency. Callers like CreateInitDecodingContext(), that pass\n> already_locked=true worry me a lot less, because obviously that's not a very\n> frequent operation.\n\nMaybe, but it would be good to have some data indicating whether this\nis really an issue.\n\n> I wonder if we could instead invert the locks, and hold\n> ReplicationSlotControlLock until after ProcArraySetReplicationSlotXmin(), and\n> acquire ProcArrayLock just for ProcArraySetReplicationSlotXmin(). 
That'd mean\n> that already_locked = true callers have to do a bit more work (we have to be\n> sure the locks are always acquired in the same order, or we end up in\n> unresolved deadlock land), but I think we can live with that.\n\nThis seems like it could be made to work, but there's apparently a\nshortage of people willing to write the patch.\n\nAs another thought, Masahiko-san writes in his proposed commit message:\n\n\"As a result, the replication_slot_xmin could be overwritten with an\nold value and retreated.\"\n\nBut what about just surgically preventing that?\nProcArraySetReplicationSlotXmin() could refuse to retreat the values,\nperhaps? If it computes an older value than what's there, it just does\nnothing?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 5 Jan 2024 10:57:25 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Assertion failure in SnapBuildInitialSnapshot()" }, { "msg_contents": "On Thu, 9 Feb 2023 at 12:02, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Feb 8, 2023 at 1:13 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Feb 8, 2023 at 1:19 AM Andres Freund <andres@anarazel.de> wrote:\n> > >\n> > > On 2023-02-01 11:23:57 +0530, Amit Kapila wrote:\n> > > > On Tue, Jan 31, 2023 at 6:08 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > >\n> > > > > Attached updated patches.\n> > > > >\n> > > >\n> > > > Thanks, Andres, others, do you see a better way to fix this problem? I\n> > > > have reproduced it manually and the steps are shared at [1] and\n> > > > Sawada-San also reproduced it, see [2].\n> > > >\n> > > > [1] - https://www.postgresql.org/message-id/CAA4eK1KDFeh%3DZbvSWPx%3Dir2QOXBxJbH0K8YqifDtG3xJENLR%2Bw%40mail.gmail.com\n> > > > [2] - https://www.postgresql.org/message-id/CAD21AoDKJBB6p4X-%2B057Vz44Xyc-zDFbWJ%2Bg9FL6qAF5PC2iFg%40mail.gmail.com\n> > >\n> > > Hm. 
It's worrysome to now hold ProcArrayLock exclusively while iterating over\n> > > the slots. ReplicationSlotsComputeRequiredXmin() can be called at a\n> > > non-neglegible frequency. Callers like CreateInitDecodingContext(), that pass\n> > > already_locked=true worry me a lot less, because obviously that's not a very\n> > > frequent operation.\n> > >\n> > > This is particularly not great because we need to acquire\n> > > ReplicationSlotControlLock while already holding ProcArrayLock.\n> > >\n> > >\n> > > But clearly there's a pretty large hole in the lock protection right now. I'm\n> > > a bit confused about why we (Robert and I, or just I) thought it's ok to do it\n> > > this way.\n> > >\n> > >\n> > > I wonder if we could instead invert the locks, and hold\n> > > ReplicationSlotControlLock until after ProcArraySetReplicationSlotXmin(), and\n> > > acquire ProcArrayLock just for ProcArraySetReplicationSlotXmin().\n> > >\n> >\n> > Along with inverting, doesn't this mean that we need to acquire\n> > ReplicationSlotControlLock in Exclusive mode instead of acquiring it\n> > in shared mode? My understanding of the above locking scheme is that\n> > in CreateInitDecodingContext(), we acquire ReplicationSlotControlLock\n> > in Exclusive mode before acquiring ProcArrayLock in Exclusive mode and\n> > release it after releasing ProcArrayLock. Then,\n> > ReplicationSlotsComputeRequiredXmin() acquires\n> > ReplicationSlotControlLock in Exclusive mode only when already_locked\n> > is false and releases it after a call to\n> > ProcArraySetReplicationSlotXmin(). ProcArraySetReplicationSlotXmin()\n> > won't change.\n>\n> I've attached the patch of this idea for discussion. In\n> GetOldestSafeDecodingTransactionId() called by\n> CreateInitDecodingContext(), we hold ReplicationSlotControlLock,\n> ProcArrayLock, and XidGenLock at a time. 
So we would need to be\n> careful about the ordering.\n\nI have changed the status of the patch to \"Waiting on Author\" as\nRobert's issues were not addressed yet. Feel free to change the status\naccordingly after addressing them.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Thu, 11 Jan 2024 19:55:52 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Assertion failure in SnapBuildInitialSnapshot()" }, { "msg_contents": "On Thu, 11 Jan 2024 at 19:55, vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Thu, 9 Feb 2023 at 12:02, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Feb 8, 2023 at 1:13 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Wed, Feb 8, 2023 at 1:19 AM Andres Freund <andres@anarazel.de> wrote:\n> > > >\n> > > > On 2023-02-01 11:23:57 +0530, Amit Kapila wrote:\n> > > > > On Tue, Jan 31, 2023 at 6:08 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > > >\n> > > > > > Attached updated patches.\n> > > > > >\n> > > > >\n> > > > > Thanks, Andres, others, do you see a better way to fix this problem? I\n> > > > > have reproduced it manually and the steps are shared at [1] and\n> > > > > Sawada-San also reproduced it, see [2].\n> > > > >\n> > > > > [1] - https://www.postgresql.org/message-id/CAA4eK1KDFeh%3DZbvSWPx%3Dir2QOXBxJbH0K8YqifDtG3xJENLR%2Bw%40mail.gmail.com\n> > > > > [2] - https://www.postgresql.org/message-id/CAD21AoDKJBB6p4X-%2B057Vz44Xyc-zDFbWJ%2Bg9FL6qAF5PC2iFg%40mail.gmail.com\n> > > >\n> > > > Hm. It's worrysome to now hold ProcArrayLock exclusively while iterating over\n> > > > the slots. ReplicationSlotsComputeRequiredXmin() can be called at a\n> > > > non-neglegible frequency. 
Callers like CreateInitDecodingContext(), that pass\n> > > > already_locked=true worry me a lot less, because obviously that's not a very\n> > > > frequent operation.\n> > > >\n> > > > This is particularly not great because we need to acquire\n> > > > ReplicationSlotControlLock while already holding ProcArrayLock.\n> > > >\n> > > >\n> > > > But clearly there's a pretty large hole in the lock protection right now. I'm\n> > > > a bit confused about why we (Robert and I, or just I) thought it's ok to do it\n> > > > this way.\n> > > >\n> > > >\n> > > > I wonder if we could instead invert the locks, and hold\n> > > > ReplicationSlotControlLock until after ProcArraySetReplicationSlotXmin(), and\n> > > > acquire ProcArrayLock just for ProcArraySetReplicationSlotXmin().\n> > > >\n> > >\n> > > Along with inverting, doesn't this mean that we need to acquire\n> > > ReplicationSlotControlLock in Exclusive mode instead of acquiring it\n> > > in shared mode? My understanding of the above locking scheme is that\n> > > in CreateInitDecodingContext(), we acquire ReplicationSlotControlLock\n> > > in Exclusive mode before acquiring ProcArrayLock in Exclusive mode and\n> > > release it after releasing ProcArrayLock. Then,\n> > > ReplicationSlotsComputeRequiredXmin() acquires\n> > > ReplicationSlotControlLock in Exclusive mode only when already_locked\n> > > is false and releases it after a call to\n> > > ProcArraySetReplicationSlotXmin(). ProcArraySetReplicationSlotXmin()\n> > > won't change.\n> >\n> > I've attached the patch of this idea for discussion. In\n> > GetOldestSafeDecodingTransactionId() called by\n> > CreateInitDecodingContext(), we hold ReplicationSlotControlLock,\n> > ProcArrayLock, and XidGenLock at a time. So we would need to be\n> > careful about the ordering.\n>\n> I have changed the status of the patch to \"Waiting on Author\" as\n> Robert's issues were not addressed yet. 
Feel free to change the status\n> accordingly after addressing them.\n\nThe patch which you submitted has been awaiting your attention for\nquite some time now. As such, we have moved it to \"Returned with\nFeedback\" and removed it from the reviewing queue. Depending on\ntiming, this may be reversible. Kindly address the feedback you have\nreceived, and resubmit the patch to the next CommitFest.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Thu, 1 Feb 2024 23:50:55 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Assertion failure in SnapBuildInitialSnapshot()" }, { "msg_contents": "Hello,\n\n01.02.2024 21:20, vignesh C wrote:\n> The patch which you submitted has been awaiting your attention for\n> quite some time now. As such, we have moved it to \"Returned with\n> Feedback\" and removed it from the reviewing queue. Depending on\n> timing, this may be reversible. Kindly address the feedback you have\n> received, and resubmit the patch to the next CommitFest.\n\nWhile analyzing buildfarm failures, I found [1], which demonstrates the\nassertion failure discussed here:\n---\n031_column_list_publisher.log\nTRAP: FailedAssertion(\"TransactionIdPrecedesOrEquals(safeXid, snap->xmin)\", File: \n\"/home/bf/bf-build/skink/REL_15_STABLE/pgsql.build/../pgsql/src/backend/replication/logical/snapbuild.c\", Line: 614, \nPID: 1882382)\n---\n\nI've managed to reproduce the assertion failure on REL_15_STABLE with the\nfollowing modification:\n@@ -3928,6 +3928,7 @@ ProcArraySetReplicationSlotXmin(TransactionId xmin, TransactionId catalog_xmin,\n  {\n      Assert(!already_locked || LWLockHeldByMe(ProcArrayLock));\n\n+pg_usleep(1000);\n      if (!already_locked)\n          LWLockAcquire(ProcArrayLock, LW_EXCLUSIVE);\n\nusing the script:\nnumjobs=100\ncreatedb db\nexport PGDATABASE=db\n\nfor ((i=1;i<=100;i++)); do\necho \"iteration $i\"\n\nfor ((j=1;j<=numjobs;j++)); do\necho \"\nSELECT pg_create_logical_replication_slot('s$j', 
'test_decoding');\nSELECT txid_current();\n\" | psql >>/dev/null 2>&1 &\n\necho \"\nBEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;\nCREATE_REPLICATION_SLOT slot$j LOGICAL test_decoding USE_SNAPSHOT;\n\" | psql -d \"dbname=db replication=database\" >>/dev/null 2>&1 &\ndone\nwait\n\nfor ((j=1;j<=numjobs;j++)); do\necho \"\nDROP_REPLICATION_SLOT slot$j;\n\" | psql -d \"dbname=db replication=database\" >/dev/null\n\necho \"SELECT pg_drop_replication_slot('s$j');\" | psql >/dev/null\ndone\n\ngrep 'TRAP' server.log && break;\ndone\n\n(with\nwal_level = logical\nmax_replication_slots = 200\nmax_wal_senders = 200\nin postgresql.conf)\n\niteration 18\nERROR:  replication slot \"slot13\" is active for PID 538431\nTRAP: FailedAssertion(\"TransactionIdPrecedesOrEquals(safeXid, snap->xmin)\", File: \"snapbuild.c\", Line: 614, PID: 538431)\n\n\nI've also confirmed that fix_concurrent_slot_xmin_update.patch fixes the\nissue.\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-05-15%2020%3A55%3A17\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Tue, 11 Jun 2024 22:00:01 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Assertion failure in SnapBuildInitialSnapshot()" } ]
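The locking hazard debated in this thread reduces to a read-compute-publish sequence on a shared horizon value. A minimal sketch with a plain pthread mutex (all names here are illustrative stand-ins, not the actual procarray.c or slot.c code) shows both the "retreat" that trips the assertion and the guard discussed above: hold the lock across the whole update and refuse to move the value backwards.

```c
#include <assert.h>
#include <pthread.h>
#include <stdint.h>

/* Toy stand-in for procArray->replication_slot_xmin. */
static uint64_t slot_xmin = 0;
static pthread_mutex_t proc_lock = PTHREAD_MUTEX_INITIALIZER;

/* Racy shape: the candidate is computed *outside* the lock, then
 * published inside it.  A concurrent CreateInitDecodingContext()-style
 * writer can advance slot_xmin in between, and the stale candidate
 * then overwrites the newer value -- the retreat seen in the report. */
static void
set_xmin_racy(uint64_t stale_candidate)
{
    pthread_mutex_lock(&proc_lock);
    slot_xmin = stale_candidate;    /* blind overwrite, may go backwards */
    pthread_mutex_unlock(&proc_lock);
}

/* Safer shape: do the whole read-compare-write under the lock and
 * never let the horizon retreat. */
static void
set_xmin_locked(uint64_t candidate)
{
    pthread_mutex_lock(&proc_lock);
    if (candidate > slot_xmin)      /* never retreat */
        slot_xmin = candidate;
    pthread_mutex_unlock(&proc_lock);
}
```

This only illustrates the invariant; the thread's real concern is which of PostgreSQL's two locks (ProcArrayLock vs. ReplicationSlotControlLock) should be held, and in what order, while the candidate is computed.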
[ { "msg_contents": "Hi, hackers\n\nI found that we set the SO_TYPE_ANALYZE option in table_beginscan_analyze()\n\nstatic inline TableScanDesc\ntable_beginscan_analyze(Relation rel)\n{\n\tuint32 flags = SO_TYPE_ANALYZE;\n\n\treturn rel->rd_tableam->scan_begin(rel, NULL, 0, NULL, NULL, flags);\n}\n\nBut I didn’t find a place to handle that option.\nDo we miss something?  ex: forget handle it in table_endscan.\nElse, it’s  not used at all.\n\nThe first commit introduced this option is c3b23ae457.\nOther commits modify this option just changed the enum order.\n\nRegards,\nZhang Mingli\n\n\n\n\n\n\n\nHi, hackers\n\nI found that we set the SO_TYPE_ANALYZE option in table_beginscan_analyze()\n\nstatic inline TableScanDesc\ntable_beginscan_analyze(Relation rel)\n{\n\tuint32 flags = SO_TYPE_ANALYZE;\n\n\treturn rel->rd_tableam->scan_begin(rel, NULL, 0, NULL, NULL, flags);\n}\n\nBut I didn’t find a place to handle that option.\nDo we miss something?  ex: forget handle it in table_endscan.\nElse, it’s  not used at all. \n\nThe first commit introduced this option is c3b23ae457.\nOther commits modify this option just changed the enum order.\n\n\nRegards,\nZhang Mingli", "msg_date": "Thu, 10 Nov 2022 22:06:09 +0800", "msg_from": "Zhang Mingli <zmlpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "=?utf-8?Q?What=E2=80=99s_?=the usage of SO_TYPE_ANALYZE" } ]
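For readers unfamiliar with the flag in question: the ScanOptions values combined in table_beginscan_analyze() are ordinary bitmask bits, and a bit only has an effect if some code path tests it, which is exactly what the message above could not find for SO_TYPE_ANALYZE. A hypothetical miniature of the pattern (values and names invented for illustration, not the real enum):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative scan-option bits in the ScanOptions style. */
typedef enum DemoScanOptions
{
    DEMO_SO_TYPE_SEQSCAN = 1 << 0,
    DEMO_SO_TYPE_ANALYZE = 1 << 1,
    DEMO_SO_ALLOW_STRAT  = 1 << 2,
    DEMO_SO_ALLOW_SYNC   = 1 << 3,
} DemoScanOptions;

static bool
demo_scan_is_analyze(uint32_t flags)
{
    /* A flag is only meaningful if it is consulted somewhere like
     * this; a bit that is set at scan_begin time but never checked
     * is dead weight, which is the situation being asked about. */
    return (flags & DEMO_SO_TYPE_ANALYZE) != 0;
}
```

Since the bits are independent, a caller can OR several together and each consumer masks out only the bit it cares about.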
[ { "msg_contents": "Hello everyone.\n\nRecently when I was running regression tests, I got 'Database \n\"contrib_regression\" does not exist' error. After I reproduce the \nproblem, I found it is an auto-vacuum worker process who complains about \nthis error.\n\nThen I tried to analyze the code. When this auto-vacuum worker process \nis forked from PostMaster and get into `InitPostgres` in postinit.c, it \nwill do following steps:\n\n1. Use the oid of current database to search for the tuple in catalog, \nand get the database name. During this time, it will add AccessShareLock \non catalog and release it after scan;\n2. Call LockSharedObject to add RowExclusiveLock on catalog\n3. Use database name to search catalog again, make sure the tuple of \ncurrent database still exists.\n\nDuring the interval between step 1 and 2, the catalog is not protected \nby any lock, so that another backend process can drop the database \nsuccessfully, causing current process complains about database does not \nexist in step 3.\n\nThis issue could not only happen between auto vacuum worker process and \nbackend process, but also can happen between two backend processes, \ngiven the special interleaving order of processes. We can use psql to \nconnect to the database, and make the backend process stops at the \ninterval between step 1 and 2, and let another backend process drop this \ndatabase, then the first backend process will complain about this error.\n\nI am confused about whether this error should happen in regression \ntesting? Is it possible to lock the catalog at step 1 and hold it, so \nthat another process will not have the chance to drop the database, \nsince dropdb needs to lock the catalog with AccessExclusiveLock? 
And \nwhat is the consideration of the design at these 3 steps?\n\nHopefully to get some voice from kernel hackers, thanks~\n\n\n-- \nBest Regards,\n\nJingtang\n\n——————————————————————\n\nJingtang Zhang\n\nE-Mail: mrdrivingduck@gmail.com\nGitHub: @mrdrivingduck\n\nSent from Microsoft Surface Book 2.\n\n\n\n", "msg_date": "Thu, 10 Nov 2022 23:02:29 +0800", "msg_from": "Jingtang Zhang <mrdrivingduck@gmail.com>", "msg_from_op": true, "msg_subject": "Database \"contrib_regression\" does not exist during testing" }, { "msg_contents": "Jingtang Zhang <mrdrivingduck@gmail.com> writes:\n> Recently when I was running regression tests, I got 'Database \n> \"contrib_regression\" does not exist' error. After I reproduce the \n> problem, I found it is an auto-vacuum worker process who complains about \n> this error.\n\nThat's perfectly normal. I don't see anything to change here.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 10 Nov 2022 10:28:24 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Database \"contrib_regression\" does not exist during testing" } ]
[ { "msg_contents": "Greetings! Long time no see, I know. How are you, Hackers?\n\nI notice from the docs in the Postgres JSONPath type, brackets are described as:\n\n> • Square brackets ([]) are used for array access.\n\n\n https://www.postgresql.org/docs/current/datatype-json.html#DATATYPE-JSONPATH\n\nNotably they are not used for object field path specifications, in contrast to the original JSON Path design, which says:\n\n> JSONPath expressions can use the dot–notation\n> \n> $.store.book[0].title\n> \n> or the bracket–notation\n> \n> $['store']['book'][0]['title']\n\n https://goessner.net/articles/JsonPath/index.html#e2\n\nSimilarly, the current IETF RFC Draft says:\n\n> JSONPath expressions use the _bracket notation_, for example:\n> \n> $['store']['book'][0]['title']\n> \n> or the more compact _dot notation_, for example:\n> \n> $.store.book[0].title\n\n https://datatracker.ietf.org/doc/draft-ietf-jsonpath-base/\n\n\nMy question: Are there plans to support square bracket syntax for JSON object field name strings like this? Or to update to follow the standard as it’s finalized?\n\nThanks,\n\nDavid\n\n\n\n", "msg_date": "Thu, 10 Nov 2022 15:55:53 -0500", "msg_from": "\"David E. Wheeler\" <david@justatheory.com>", "msg_from_op": true, "msg_subject": "JSONPath Child Operator?" }, { "msg_contents": "Hi David,\n\nOn 2022-11-10 21:55, David E. Wheeler wrote:\n> My question: Are there plans to support square bracket syntax for JSON \n> object field name strings like this? Or to update to follow the \n> standard as it’s finalized?\n\nThis syntax is a part of \"jsonpath syntax extensions\" patchset: \nhttps://www.postgresql.org/message-id/e0fe4f7b-da0b-471c-b3da-d8adaf314357%40postgrespro.ru\n\n-- Ph.\n\n\n", "msg_date": "Mon, 30 Jan 2023 14:17:06 +0100", "msg_from": "Filipp Krylov <phil@krylov.eu>", "msg_from_op": false, "msg_subject": "Re: JSONPath Child Operator?" 
}, { "msg_contents": "On Jan 30, 2023, at 08:17, Filipp Krylov <phil@krylov.eu> wrote:\n\n>> My question: Are there plans to support square bracket syntax for JSON object field name strings like this? Or to update to follow the standard as it’s finalized?\n> \n> This syntax is a part of \"jsonpath syntax extensions\" patchset: https://www.postgresql.org/message-id/e0fe4f7b-da0b-471c-b3da-d8adaf314357%40postgrespro.ru\n\nNice, thanks. I learned since sending this email that SQL/JSON Path is not at all the same as plain JSON Path, so now I’m less concerned bout it. I like the new object subscript syntax, though, I’ve been thinking about this myself.\n\nD\n\n\n\n", "msg_date": "Mon, 30 Jan 2023 10:57:13 -0500", "msg_from": "David E. Wheeler <david@justatheory.com>", "msg_from_op": false, "msg_subject": "Re: JSONPath Child Operator?" } ]
[ { "msg_contents": "\nHi, hackers\n\nRecently, when I read the XidInMVCCSnapshot(), and find there are some\ntypos in the comments.\n\ndiff --git a/src/backend/storage/ipc/procarray.c b/src/backend/storage/ipc/procarray.c\nindex 207c4b27fd..9e8b6756fe 100644\n--- a/src/backend/storage/ipc/procarray.c\n+++ b/src/backend/storage/ipc/procarray.c\n@@ -2409,7 +2409,7 @@ GetSnapshotData(Snapshot snapshot)\n \t\t * We could try to store xids into xip[] first and then into subxip[]\n \t\t * if there are too many xids. That only works if the snapshot doesn't\n \t\t * overflow because we do not search subxip[] in that case. A simpler\n-\t\t * way is to just store all xids in the subxact array because this is\n+\t\t * way is to just store all xids in the subxip array because this is\n \t\t * by far the bigger array. We just leave the xip array empty.\n \t\t *\n \t\t * Either way we need to change the way XidInMVCCSnapshot() works\ndiff --git a/src/backend/utils/time/snapmgr.c b/src/backend/utils/time/snapmgr.c\nindex f1f2ddac17..2524b1c585 100644\n--- a/src/backend/utils/time/snapmgr.c\n+++ b/src/backend/utils/time/snapmgr.c\n@@ -2345,7 +2345,7 @@ XidInMVCCSnapshot(TransactionId xid, Snapshot snapshot)\n \telse\n \t{\n \t\t/*\n-\t\t * In recovery we store all xids in the subxact array because it is by\n+\t\t * In recovery we store all xids in the subxip array because it is by\n \t\t * far the bigger array, and we mostly don't know which xids are\n \t\t * top-level and which are subxacts. 
The xip array is empty.\n \t\t *\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n\n", "msg_date": "Fri, 11 Nov 2022 11:26:13 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": true, "msg_subject": "Typo about subxip in comments" }, { "msg_contents": "On Fri, Nov 11, 2022 at 8:56 AM Japin Li <japinli@hotmail.com> wrote:\n>\n> Recently, when I read the XidInMVCCSnapshot(), and find there are some\n> typos in the comments.\n>\n> diff --git a/src/backend/storage/ipc/procarray.c b/src/backend/storage/ipc/procarray.c\n> index 207c4b27fd..9e8b6756fe 100644\n> --- a/src/backend/storage/ipc/procarray.c\n> +++ b/src/backend/storage/ipc/procarray.c\n> @@ -2409,7 +2409,7 @@ GetSnapshotData(Snapshot snapshot)\n>          * We could try to store xids into xip[] first and then into subxip[]\n>          * if there are too many xids. That only works if the snapshot doesn't\n>          * overflow because we do not search subxip[] in that case. A simpler\n> -        * way is to just store all xids in the subxact array because this is\n> +        * way is to just store all xids in the subxip array because this is\n>          * by far the bigger array. We just leave the xip array empty.\n>          *\n>          * Either way we need to change the way XidInMVCCSnapshot() works\n> diff --git a/src/backend/utils/time/snapmgr.c b/src/backend/utils/time/snapmgr.c\n> index f1f2ddac17..2524b1c585 100644\n> --- a/src/backend/utils/time/snapmgr.c\n> +++ b/src/backend/utils/time/snapmgr.c\n> @@ -2345,7 +2345,7 @@ XidInMVCCSnapshot(TransactionId xid, Snapshot snapshot)\n>         else\n>         {\n>                 /*\n> -                * In recovery we store all xids in the subxact array because it is by\n> +                * In recovery we store all xids in the subxip array because it is by\n>                 * far the bigger array, and we mostly don't know which xids are\n>                 * top-level and which are subxacts. 
The xip array is empty.\n> *\n>\n\nLGTM.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 11 Nov 2022 10:39:06 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Typo about subxip in comments" }, { "msg_contents": "On Fri, Nov 11, 2022 at 10:39:06AM +0530, Amit Kapila wrote:\n> On Fri, Nov 11, 2022 at 8:56 AM Japin Li <japinli@hotmail.com> wrote:\n> >\n> > Recently, when I read the XidInMVCCSnapshot(), and find there are some\n> > typos in the comments.\n> >\n> > diff --git a/src/backend/storage/ipc/procarray.c b/src/backend/storage/ipc/procarray.c\n> > index 207c4b27fd..9e8b6756fe 100644\n> > --- a/src/backend/storage/ipc/procarray.c\n> > +++ b/src/backend/storage/ipc/procarray.c\n> > @@ -2409,7 +2409,7 @@ GetSnapshotData(Snapshot snapshot)\n> > * We could try to store xids into xip[] first and then into subxip[]\n> > * if there are too many xids. That only works if the snapshot doesn't\n> > * overflow because we do not search subxip[] in that case. A simpler\n> > - * way is to just store all xids in the subxact array because this is\n> > + * way is to just store all xids in the subxip array because this is\n> > * by far the bigger array. We just leave the xip array empty.\n> > *\n> > * Either way we need to change the way XidInMVCCSnapshot() works\n> > diff --git a/src/backend/utils/time/snapmgr.c b/src/backend/utils/time/snapmgr.c\n> > index f1f2ddac17..2524b1c585 100644\n> > --- a/src/backend/utils/time/snapmgr.c\n> > +++ b/src/backend/utils/time/snapmgr.c\n> > @@ -2345,7 +2345,7 @@ XidInMVCCSnapshot(TransactionId xid, Snapshot snapshot)\n> > else\n> > {\n> > /*\n> > - * In recovery we store all xids in the subxact array because it is by\n> > + * In recovery we store all xids in the subxip array because it is by\n> > * far the bigger array, and we mostly don't know which xids are\n> > * top-level and which are subxacts. 
The xip array is empty.\n> > *\n> >\n> \n> LGTM.\n\n+1\n\n\n", "msg_date": "Fri, 11 Nov 2022 13:33:13 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Typo about subxip in comments" }, { "msg_contents": "On Fri, Nov 11, 2022 at 11:26 AM Japin Li <japinli@hotmail.com> wrote:\n\n> Recently, when I read the XidInMVCCSnapshot(), and find there are some\n> typos in the comments.\n\n\nHmm, it seems to me 'the subxact array' is just another saying to refer\nto snapshot->subxip. I'm not sure about this being typo. But I have no\nobjection to this change, as it is more consistent with the 'xip array'\nsaying followed.\n\nThanks\nRichard\n", "msg_date": "Fri, 11 Nov 2022 14:46:13 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Typo about subxip in comments" }, { "msg_contents": "On Fri, Nov 11, 2022 at 12:16 PM Richard Guo <guofenglinux@gmail.com> wrote:\n>\n> On Fri, Nov 11, 2022 at 11:26 AM Japin Li <japinli@hotmail.com> wrote:\n>>\n>> Recently, when I read the XidInMVCCSnapshot(), and find there are some\n>> typos in the comments.\n>\n>\n> Hmm, it seems to me 'the subxact array' is just another saying to refer\n> to snapshot->subxip. I'm not sure about this being typo. 
But I have no\n> objection to this change, as it is more consistent with the 'xip array'\n> saying followed.\n>\n\nAgreed, it is more about being consistent with xip array.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 11 Nov 2022 12:53:15 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Typo about subxip in comments" }, { "msg_contents": "\nOn Fri, 11 Nov 2022 at 15:23, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Fri, Nov 11, 2022 at 12:16 PM Richard Guo <guofenglinux@gmail.com> wrote:\n>>\n>> On Fri, Nov 11, 2022 at 11:26 AM Japin Li <japinli@hotmail.com> wrote:\n>>>\n>>> Recently, when I read the XidInMVCCSnapshot(), and find there are some\n>>> typos in the comments.\n>>\n>>\n>> Hmm, it seems to me 'the subxact array' is just another saying to refer\n>> to snapshot->subxip. I'm not sure about this being typo. But I have no\n>> objection to this change, as it is more consistent with the 'xip array'\n>> saying followed.\n>>\n>\n> Agreed, it is more about being consistent with xip array.\n\nThanks for reviewing.\n\nMaybe a wrong plural in XidInMvccSnapshot().\n\n * Make a quick range check to eliminate most XIDs without looking at the\n * xip arrays.\n\nI think we should use \"xip array\" instead of \"xip arrays\".\n\nFurthermore, if the snapshot is taken during recovery, the xip array is\nempty, and we should check subxip array. 
How about changing \"xip arrays\"\nto \"xip or subxip array\"?\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Fri, 11 Nov 2022 17:14:52 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": true, "msg_subject": "Re: Typo about subxip in comments" }, { "msg_contents": "On Fri, Nov 11, 2022 at 2:45 PM Japin Li <japinli@hotmail.com> wrote:\n>\n> On Fri, 11 Nov 2022 at 15:23, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Fri, Nov 11, 2022 at 12:16 PM Richard Guo <guofenglinux@gmail.com> wrote:\n> >>\n> >> On Fri, Nov 11, 2022 at 11:26 AM Japin Li <japinli@hotmail.com> wrote:\n> >>>\n> >>> Recently, when I read the XidInMVCCSnapshot(), and find there are some\n> >>> typos in the comments.\n> >>\n> >>\n> >> Hmm, it seems to me 'the subxact array' is just another saying to refer\n> >> to snapshot->subxip. I'm not sure about this being typo. But I have no\n> >> objection to this change, as it is more consistent with the 'xip array'\n> >> saying followed.\n> >>\n> >\n> > Agreed, it is more about being consistent with xip array.\n>\n> Thanks for reviewings.\n>\n> Maybe a wrong plural in XidInMvccSnapshot().\n>\n> * Make a quick range check to eliminate most XIDs without looking at the\n> * xip arrays.\n>\n> I think we should use \"xip array\" instead of \"xip arrays\".\n>\n\nI think here the comment is referring to both xip and subxip array, so\nit looks okay to me.\n\n> Furthermore, if the snapshot is taken during recovery, the xip array is\n> empty, and we should check subxip array. How about changing \"xip arrays\"\n> to \"xip or subxip array\"?\n>\n\nI don't know if that is an improvement. 
I think we should stick to\nyour initial proposed change.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 12 Nov 2022 09:42:06 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Typo about subxip in comments" }, { "msg_contents": "\nOn Sat, 12 Nov 2022 at 12:12, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Fri, Nov 11, 2022 at 2:45 PM Japin Li <japinli@hotmail.com> wrote:\n>> Maybe a wrong plural in XidInMvccSnapshot().\n>>\n>> * Make a quick range check to eliminate most XIDs without looking at the\n>> * xip arrays.\n>>\n>> I think we should use \"xip array\" instead of \"xip arrays\".\n>>\n>\n> I think here the comment is referring to both xip and subxip array, so\n> it looks okay to me.\n>\n\nYeah, it means xip in normal case, and subxip in recovery case.\n\n>> Furthermore, if the snapshot is taken during recovery, the xip array is\n>> empty, and we should check subxip array. How about changing \"xip arrays\"\n>> to \"xip or subxip array\"?\n>>\n>\n> I don't know if that is an improvement. I think we should stick to\n> your initial proposed change.\n\nAgreed. Let's focus on the initial proposed change.\n\n--\nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Sun, 13 Nov 2022 23:02:06 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": true, "msg_subject": "Re: Typo about subxip in comments" }, { "msg_contents": "On Sun, Nov 13, 2022 at 8:32 PM Japin Li <japinli@hotmail.com> wrote:\n>\n> On Sat, 12 Nov 2022 at 12:12, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > I don't know if that is an improvement. I think we should stick to\n> > your initial proposed change.\n>\n> Agreed. 
Let's focus on the initial proposed change.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 15 Nov 2022 12:35:34 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Typo about subxip in comments" } ]
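The behavior the corrected comments describe, namely that during recovery every xid (top-level and subtransaction alike) goes into the subxip array while xip stays empty, can be sketched as a small Python model. This is illustrative only; the real XidInMVCCSnapshot() in C also handles overflowed snapshots, the quick xmin/xmax range check the thread mentions, and more.

```python
def xid_in_mvcc_snapshot(xid, xip, subxip, taken_during_recovery):
    """Rough model of the membership test discussed in the thread."""
    if taken_during_recovery:
        # Recovery-time snapshots put every xid in subxip; xip is empty,
        # because we mostly can't tell top-level xids from subxacts.
        assert xip == []
        return xid in subxip
    # Normal running: top-level xids live in xip, subxacts in subxip.
    return xid in xip or xid in subxip

# Normal snapshot: xid 700 is a running top-level xact, 701 its subxact.
normal_top = xid_in_mvcc_snapshot(700, [700], [701], False)
normal_sub = xid_in_mvcc_snapshot(701, [700], [701], False)

# Recovery snapshot: both xids end up in subxip, xip is empty.
recovery_hit = xid_in_mvcc_snapshot(700, [], [700, 701], True)
recovery_miss = xid_in_mvcc_snapshot(800, [], [700, 701], True)
```

The sketch makes the terminology point of the patch concrete: "subxip array" names the structure actually searched in both branches, which is why it reads better than "subxact array".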
[ { "msg_contents": "While reviewing the outer-join Vars patch, I encountered something\nconfusing me which can also be seen on HEAD. According to outer join\nidentity 3\n\n (A leftjoin (B leftjoin C on (Pbc)) on (Pab)) left join D on (Pcd)\n\nshould be equal to\n\n ((A leftjoin B on (Pab)) leftjoin C on (Pbc)) left join D on (Pcd)\n\nAssume Pbc is strict for B.\n\nIn the first form, the C/D join will be illegal because we find that Pcd\nuses A/B join's RHS (we are checking syn_righthand here, so it's {B, C})\nand is not strict for A/B join's min_righthand, which is {B}, so that we\ndecide we need to preserve the ordering of the two OJs, by adding A/B\njoin's full syntactic relset to min_lefthand.\n\nIn the second form, the C/D join will be legal, as 1) Pcd does not use\nA/B join's RHS, and 2) Pcd uses B/C join's RHS and meanwhile is strict\nfor B/C join's min_righthand.\n\nAs a result, with the second form, we may be able to generate more\noptimal plans as we have more join ordering choices.\n\nI'm wondering whether we need to insist on being strict for the lower\nOJ's min_righthand. What if we instead check strictness for its whole\nsyn_righthand?\n\nThanks\nRichard\n\nWhile reviewing the outer-join Vars patch, I encountered somethingconfusing me which can also be seen on HEAD.  
According to outer joinidentity 3 (A leftjoin (B leftjoin C on (Pbc)) on (Pab)) left join D on (Pcd)should be equal to ((A leftjoin B on (Pab)) leftjoin C on (Pbc)) left join D on (Pcd)Assume Pbc is strict for B.In the first form, the C/D join will be illegal because we find that Pcduses A/B join's RHS (we are checking syn_righthand here, so it's {B, C})and is not strict for A/B join's min_righthand, which is {B}, so that wedecide we need to preserve the ordering of the two OJs, by adding A/Bjoin's full syntactic relset to min_lefthand.In the second form, the C/D join will be legal, as 1) Pcd does not useA/B join's RHS, and 2) Pcd uses B/C join's RHS and meanwhile is strictfor B/C join's min_righthand.As a result, with the second form, we may be able to generate moreoptimal plans as we have more join ordering choices.I'm wondering whether we need to insist on being strict for the lowerOJ's min_righthand.  What if we instead check strictness for its wholesyn_righthand?ThanksRichard", "msg_date": "Fri, 11 Nov 2022 19:28:02 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "A problem about join ordering" }, { "msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> I'm wondering whether we need to insist on being strict for the lower\n> OJ's min_righthand. What if we instead check strictness for its whole\n> syn_righthand?\n\nSurely not. What if the only point of strictness is for a rel that\nisn't part of the min_righthand? 
Then we could end up re-ordering\nbased on a condition that isn't actually strict for what we've\nchosen as the join's RHS.\n\nIt might be possible to change the other part of the equation and\nconsider the A/B join's min_righthand instead of syn_righthand\nwhile checking if Pcd uses A/B's RHS; but I'm not real sure about\nthat either.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 11 Nov 2022 10:24:34 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: A problem about join ordering" }, { "msg_contents": "On Fri, Nov 11, 2022 at 11:24 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Richard Guo <guofenglinux@gmail.com> writes:\n> > I'm wondering whether we need to insist on being strict for the lower\n> > OJ's min_righthand. What if we instead check strictness for its whole\n> > syn_righthand?\n>\n> Surely not. What if the only point of strictness is for a rel that\n> isn't part of the min_righthand? Then we could end up re-ordering\n> based on a condition that isn't actually strict for what we've\n> chosen as the join's RHS.\n>\n\nI think I've got your point. You're right. And doing so would cause\nanother problem about ordering restriction as I observed. For the\nfollowing two forms\n\n (A leftjoin (B leftjoin C on (Pbc)) on (Pab)) left join D on (Pbcd)\n\nAND\n\n ((A leftjoin B on (Pab)) leftjoin C on (Pbc)) left join D on (Pbcd)\n\nAssume Pbc is strict for B, Pbcd is strict for C but not strict for B.\n\nAfter applying this change, in the first form the (BC)/D join will be\nlegal, while in the second form it is not.\n\n\n>\n> It might be possible to change the other part of the equation and\n> consider the A/B join's min_righthand instead of syn_righthand\n> while checking if Pcd uses A/B's RHS; but I'm not real sure about\n> that either.\n>\n\nThis seems a more plausible change. I tried this way and didn't find\nany abnormal behaviour. But I'm not sure either. 
Maybe I need to try\nharder.\n\nThanks\nRichard\n\nOn Fri, Nov 11, 2022 at 11:24 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:Richard Guo <guofenglinux@gmail.com> writes:\n> I'm wondering whether we need to insist on being strict for the lower\n> OJ's min_righthand.  What if we instead check strictness for its whole\n> syn_righthand?\n\nSurely not.  What if the only point of strictness is for a rel that\nisn't part of the min_righthand?  Then we could end up re-ordering\nbased on a condition that isn't actually strict for what we've\nchosen as the join's RHS.I think I've got your point.  You're right.  And doing so would causeanother problem about ordering restriction as I observed.  For thefollowing two forms (A leftjoin (B leftjoin C on (Pbc)) on (Pab)) left join D on (Pbcd)AND ((A leftjoin B on (Pab)) leftjoin C on (Pbc)) left join D on (Pbcd)Assume Pbc is strict for B, Pbcd is strict for C but not strict for B.After applying this change, in the first form the (BC)/D join will belegal, while in the second form it is not. \n\nIt might be possible to change the other part of the equation and\nconsider the A/B join's min_righthand instead of syn_righthand\nwhile checking if Pcd uses A/B's RHS; but I'm not real sure about\nthat either.This seems a more plausible change.  I tried this way and didn't findany abnormal behaviour.  But I'm not sure either.  Maybe I need to tryharder.ThanksRichard", "msg_date": "Mon, 14 Nov 2022 18:10:31 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A problem about join ordering" }, { "msg_contents": "On Fri, Nov 11, 2022 at 11:24 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Richard Guo <guofenglinux@gmail.com> writes:\n> > I'm wondering whether we need to insist on being strict for the lower\n> > OJ's min_righthand. What if we instead check strictness for its whole\n> > syn_righthand?\n>\n> Surely not. What if the only point of strictness is for a rel that\n> isn't part of the min_righthand? 
Then we could end up re-ordering\n> based on a condition that isn't actually strict for what we've\n> chosen as the join's RHS.\n>\n> It might be possible to change the other part of the equation and\n> consider the A/B join's min_righthand instead of syn_righthand\n> while checking if Pcd uses A/B's RHS; but I'm not real sure about\n> that either.\n\n\nThe problem described upthread occurs in the case where the lower OJ is\nin our LHS. For the other case where the lower OJ is in our RHS, it\nseems we also have join ordering problem. As an example, consider\n\n A leftjoin (B leftjoin (C leftjoin D on (Pcd)) on (Pbc)) on (Pac)\n\n A leftjoin ((B leftjoin C on (Pbc)) leftjoin D on (Pcd)) on (Pac)\n\nThe two forms are equal if we assume Pcd is strict for C, according to\nouter join identity 3.\n\nIn the two forms we both decide that we cannot interchange the ordering\nof A/C join and B/C join, because Pac uses B/C join's RHS. So we add\nB/C join's full syntactic relset to A/C join's min_righthand to preserve\nthe ordering. However, in the first form B/C's full syntactic relset\nincludes {B, C, D}, while in the second form it only includes {B, C}.\nAs a result, the A/(BC) join is illegal in the first form and legal in\nthe second form, and this will determine whether we can get the third\nform as below\n\n (A leftjoin (B leftjoin C on (Pbc)) on (Pac)) leftjoin D on (Pcd)\n\nThis makes me rethink whether we should use lower OJ's full syntactic\nrelset or just its min_lefthand + min_righthand to be added to\nmin_righthand to preserve ordering for this case. But I'm not sure\nabout this.\n\nAny thoughts?\n\nThanks\nRichard\n\nOn Fri, Nov 11, 2022 at 11:24 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:Richard Guo <guofenglinux@gmail.com> writes:\n> I'm wondering whether we need to insist on being strict for the lower\n> OJ's min_righthand.  What if we instead check strictness for its whole\n> syn_righthand?\n\nSurely not.  
What if the only point of strictness is for a rel that\nisn't part of the min_righthand?  Then we could end up re-ordering\nbased on a condition that isn't actually strict for what we've\nchosen as the join's RHS.\n\nIt might be possible to change the other part of the equation and\nconsider the A/B join's min_righthand instead of syn_righthand\nwhile checking if Pcd uses A/B's RHS; but I'm not real sure about\nthat either. The problem described upthread occurs in the case where the lower OJ isin our LHS.  For the other case where the lower OJ is in our RHS, itseems we also have join ordering problem.  As an example, consider A leftjoin (B leftjoin (C leftjoin D on (Pcd)) on (Pbc)) on (Pac) A leftjoin ((B leftjoin C on (Pbc)) leftjoin D on (Pcd)) on (Pac)The two forms are equal if we assume Pcd is strict for C, according toouter join identity 3.In the two forms we both decide that we cannot interchange the orderingof A/C join and B/C join, because Pac uses B/C join's RHS.  So we addB/C join's full syntactic relset to A/C join's min_righthand to preservethe ordering.  However, in the first form B/C's full syntactic relsetincludes {B, C, D}, while in the second form it only includes {B, C}.As a result, the A/(BC) join is illegal in the first form and legal inthe second form, and this will determine whether we can get the thirdform as below (A leftjoin (B leftjoin C on (Pbc)) on (Pac)) leftjoin D on (Pcd)This makes me rethink whether we should use lower OJ's full syntacticrelset or just its min_lefthand + min_righthand to be added tomin_righthand to preserve ordering for this case.  But I'm not sureabout this.Any thoughts?ThanksRichard", "msg_date": "Fri, 25 Nov 2022 15:27:55 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A problem about join ordering" } ]
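Outer join identity 3, as used throughout the thread above, can be checked mechanically on toy data. The sketch below is illustrative Python, not planner code: it computes both association orders of the first example with a Pbc that is strict for B, and verifies they produce the same bag of rows. All relation contents and predicate spellings are invented for the demonstration.

```python
from collections import Counter

def left_join(left, right, right_cols, pred):
    """Nested-loop LEFT JOIN over lists of dict-rows; unmatched left
    rows are null-extended with None for the right-hand columns."""
    out = []
    for l in left:
        matched = False
        for r in right:
            row = {**l, **r}
            if pred(row):
                out.append(row)
                matched = True
        if not matched:
            out.append({**l, **{c: None for c in right_cols}})
    return out

A = [{"a": 1}, {"a": 2}, {"a": 3}]
B = [{"b": 1}, {"b": 2}]
C = [{"c": 1}]
D = [{"d": 1}]

pab = lambda r: r["a"] == r["b"]
# Pbc is strict for B: it rejects null-extended b values.
pbc = lambda r: r["b"] is not None and r["b"] == r["c"]
pcd = lambda r: r["c"] is not None and r["c"] == r["d"]

# (A leftjoin (B leftjoin C on Pbc) on Pab) leftjoin D on Pcd
form1 = left_join(
    left_join(A, left_join(B, C, ["c"], pbc), ["b", "c"], pab),
    D, ["d"], pcd)

# ((A leftjoin B on Pab) leftjoin C on Pbc) leftjoin D on Pcd
form2 = left_join(
    left_join(left_join(A, B, ["b"], pab), C, ["c"], pbc),
    D, ["d"], pcd)

bag = lambda rows: Counter(tuple(sorted(r.items())) for r in rows)
identity_holds = bag(form1) == bag(form2)
```

Of course, a toy evaluator only shows that the two forms agree on one data set; the thread's actual question is which bookkeeping (min_righthand vs. syn_righthand) lets the planner prove the equivalence in general.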
[ { "msg_contents": "Hi,\n\nInspired by recent commits 9fcdf2c, e813e0e and many small test\nmodules/extensions under src/test/modules, I would like to propose one\nsuch test module for Custom WAL Resource Manager feature introduced by\ncommit 5c279a6. It not only covers the code a bit, but it also\ndemonstrates usage of the feature.\n\nI'm attaching a patch herewith. Thoughts?\n\nThanks Michael Paquier for an off list chat.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 11 Nov 2022 17:01:18 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Add test module for Custom WAL Resource Manager feature" }, { "msg_contents": "On Fri, 2022-11-11 at 17:01 +0530, Bharath Rupireddy wrote:\n> Hi,\n> \n> Inspired by recent commits 9fcdf2c, e813e0e and many small test\n> modules/extensions under src/test/modules, I would like to propose\n> one\n> such test module for Custom WAL Resource Manager feature introduced\n> by\n> commit 5c279a6. It not only covers the code a bit, but it also\n> demonstrates usage of the feature.\n> \n> I'm attaching a patch herewith. Thoughts?\n\nGood idea. Can we take it a little further to exercise the decoding\npath, as well?\n\nFor instance, we can do something like a previous proposal[1], except\nit can now be done as an extension. 
If it's useful, we could even put\nit in contrib with a real RMGR ID.\n\nThough I'm also fine just adding a test module to start with.\n\nRegards,\n\tJeff Davis\n\n[1]\nhttps://www.postgresql.org/message-id/20ee0b0ae6958804a88fe9580157587720faf664.camel@j-davis.com\n\n\n\n", "msg_date": "Fri, 11 Nov 2022 15:10:20 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Add test module for Custom WAL Resource Manager feature" }, { "msg_contents": "On Sat, Nov 12, 2022 at 4:40 AM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Fri, 2022-11-11 at 17:01 +0530, Bharath Rupireddy wrote:\n> > Hi,\n> >\n> > Inspired by recent commits 9fcdf2c, e813e0e and many small test\n> > modules/extensions under src/test/modules, I would like to propose\n> > one\n> > such test module for Custom WAL Resource Manager feature introduced\n> > by\n> > commit 5c279a6. It not only covers the code a bit, but it also\n> > demonstrates usage of the feature.\n> >\n> > I'm attaching a patch herewith. Thoughts?\n>\n> Good idea.\n\nThanks.\n\n> Can we take it a little further to exercise the decoding\n> path, as well? For instance, we can do something like a previous proposal[1], except\n> it can now be done as an extension. If it's useful, we could even put\n> it in contrib with a real RMGR ID.\n>\n> [1]\n> https://www.postgresql.org/message-id/20ee0b0ae6958804a88fe9580157587720faf664.camel@j-davis.com\n\nWe have tests/modules defined for testing logical decoding, no? If the\nintention is to define rm_redo in this test module, I think it's not\nrequired.\n\n> Though I'm also fine just adding a test module to start with.\n\nThanks. I would like to keep it simple.\n\nI've added some more comments and attached v2 patch herewith. 
Please review.\n\nI've also added a CF entry - https://commitfest.postgresql.org/41/4009/.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 14 Nov 2022 09:34:42 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add test module for Custom WAL Resource Manager feature" }, { "msg_contents": "On Mon, 2022-11-14 at 09:34 +0530, Bharath Rupireddy wrote:\n> Thanks. I would like to keep it simple.\n> \n> I've added some more comments and attached v2 patch herewith. Please\n> review.\n\nCommitted with some significant revisions (ae168c794f):\n\n * changed to insert a deterministic message, rather than a random\none, which allows more complete testing\n * fixed a couple bugs\n * used a static initializer for the RmgrData rather than memset,\nwhich shows a better example\n\nI also separately committed a patch to mark the argument of\nRegisterCustomRmgr as \"const\".\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n", "msg_date": "Tue, 15 Nov 2022 16:29:08 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Add test module for Custom WAL Resource Manager feature" }, { "msg_contents": "On Tue, Nov 15, 2022 at 04:29:08PM -0800, Jeff Davis wrote:\n> Committed with some significant revisions (ae168c794f):\n> \n> * changed to insert a deterministic message, rather than a random\n> one, which allows more complete testing\n> * fixed a couple bugs\n> * used a static initializer for the RmgrData rather than memset,\n> which shows a better example\n> \n> I also separately committed a patch to mark the argument of\n> RegisterCustomRmgr as \"const\".\n\nThis is causing the CI job to fail for 32-bit builds. 
Here is one\nexample in my own repository for what looks like an alignment issue: \nhttps://github.com/michaelpq/postgres/runs/9514121172\n\n[01:17:23.152] ok 1 - custom WAL resource manager has successfully registered with the server\n[01:17:23.152] not ok 2 - custom WAL resource manager has successfully written a WAL record\n[01:17:23.152] 1..2\n[01:17:23.152] # test failed\n[01:17:23.152] --- stderr ---\n[01:17:23.152] # Failed test 'custom WAL resource manager has successfully written a WAL record'\n[01:17:23.152] # at /tmp/cirrus-ci-build/src/test/modules/test_custom_rmgrs/t/001_basic.pl line 56.\n[01:17:23.152] # got: '0/151E088|test_custom_rmgrs|TEST_CUSTOM_RMGRS_MESSAGE|40|14|0|payload (10 bytes): payload123'\n[01:17:23.152] # expected: '0/151E088|test_custom_rmgrs|TEST_CUSTOM_RMGRS_MESSAGE|44|18|0|payload (10 bytes): payload123'\n[01:17:23.152] # Looks like you failed 1 test of 2.\n[01:17:23.152] \n\nNot many buildfarm members test 32b builds, but lapwing does.\n--\nMichael", "msg_date": "Wed, 16 Nov 2022 10:26:32 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add test module for Custom WAL Resource Manager feature" }, { "msg_contents": "On Wed, Nov 16, 2022 at 10:26:32AM +0900, Michael Paquier wrote:\n> Not many buildfarm members test 32b builds, but lapwing does.\n\nWell, it didn't take long:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lapwing&dt=2022-11-16%2000%3A40%3A11\n--\nMichael", "msg_date": "Wed, 16 Nov 2022 10:27:45 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add test module for Custom WAL Resource Manager feature" }, { "msg_contents": "On Wed, 2022-11-16 at 10:27 +0900, Michael Paquier wrote:\n> On Wed, Nov 16, 2022 at 10:26:32AM +0900, Michael Paquier wrote:\n> > Not many buildfarm members test 32b builds, but lapwing does.\n> \n> Well, it didn't take long:\n> 
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lapwing&dt=2022-11-16%2000%3A40%3A11\n\nFixed, thank you. I'll be more diligent about pushing to github CI\nfirst.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Tue, 15 Nov 2022 20:09:34 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Add test module for Custom WAL Resource Manager feature" } ]
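A side note on the 32-bit failure above: the test expected a total record length of 44 with 18 bytes of main data, while the i386 build produced 40 and 14. A 4-byte delta like that is what struct padding gives you when a record body contains an 8-byte field, since 8-byte integers are 4-byte aligned on i386 but 8-byte aligned on x86-64. The sketch below models only that arithmetic; the field names are hypothetical, not the actual test_custom_rmgrs record layout.

```c
#include <assert.h>
#include <stddef.h>

/* Round off up to the next multiple of align (align must be a power of two). */
size_t
align_up(size_t off, size_t align)
{
    return (off + align - 1) & ~(align - 1);
}

/*
 * Laid-out size of a record body shaped like { uint32 flags; uint64 len; },
 * parameterized by the platform's alignment of 8-byte integers:
 * 8 on typical 64-bit ABIs, 4 on i386 System V.
 */
size_t
record_body_size(size_t align8)
{
    size_t off = 0;

    off += 4;                      /* uint32 flags */
    off = align_up(off, align8);   /* padding before the 8-byte field */
    off += 8;                      /* uint64 len */
    return align_up(off, align8);  /* trailing padding */
}
```

The two ABIs differ by exactly 4 bytes for such a body, which is consistent with the 14-vs-18 and 40-vs-44 numbers; hard-coding one expected length in a test bakes in one ABI's padding.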
[ { "msg_contents": "Hi hackers,\n\neqjoinsel() can be optimized by not reading MCV stats if at least one of \nthe two join attributes is unique. As primary keys are implicitly unique \nthis situation can occur frequently. For unique columns no MCV stats are \nstored and eqjoinsel_inner() and eqjoinsel_semi(), called from \neqjoinsel(), only consider MCV stats in the join selectivity computation \nif they're present on both columns. Attached is a small patch that \nimplements the skipping.\n\nWith this change we saw some queries improve planning time by more than \n2x, especially with larger values for default_statistics_target. That's \nbecause get_attstatsslot() deconstructs the array holding the MCV. The \nsize of that array depends on default_statistics_target.\n\nThanks for your consideration!\n\n--\nDavid Geier\n(ServiceNow)", "msg_date": "Fri, 11 Nov 2022 13:01:15 +0100", "msg_from": "David Geier <geidav.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Optimize join selectivity estimation by not reading MCV stats for\n unique join attributes" }, { "msg_contents": "David Geier <geidav.pg@gmail.com> writes:\n> eqjoinsel() can be optimized by not reading MCV stats if at least one of \n> the two join attributes is unique.\n\nThere won't *be* any MCV stats for a column that ANALYZE perceives to\nbe unique, so I'm not quite sure where the claimed savings comes from.\n\n> With this change we saw some queries improve planning time by more than \n> 2x, especially with larger values for default_statistics_target.\n\nPlease provide a concrete example.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 11 Nov 2022 10:16:35 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Optimize join selectivity estimation by not reading MCV stats for\n unique join attributes" }, { "msg_contents": "Hi Tom,\n> There won't *be* any MCV stats for a column that ANALYZE perceives to\n> be unique, so I'm not quite sure where the claimed 
savings comes from.\nWe save if one join attribute is unique while the other isn't. In that \ncase stored MCV stats are read for the non-unique attribute but then \nnever used. This is because MCV stats in join selectivity estimation are \nonly used if they're present on both columns\n> Please provide a concrete example.\n\nA super simple case already showing a significant speedup is the \nfollowing. The more ways to join two tables and the more joins overall, \nthe higher the expected gain.\n\nCREATE TABLE bar(col INT UNIQUE);\nCREATE TABLE foo (col INT);\nINSERT INTO foo SELECT generate_series(1, 1000000, 0.5);\nSET default_statistics_target = 10000;\nANALYZE foo, bar;\n\\timing on\nEXPLAIN SELECT * FROM foo, bar WHERE foo.col = bar.col;\n\nRunning the above query five times gave me average runtimes of:\n\n- 0.62 ms without the patch and\n- 0.48 ms with the patch.\n\n--\nDavid Geier\n(ServiceNow)\n\n\n\n", "msg_date": "Mon, 14 Nov 2022 10:19:18 +0100", "msg_from": "David Geier <geidav.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Optimize join selectivity estimation by not reading MCV stats for\n unique join attributes" }, { "msg_contents": "\n\nOn 11/14/22 10:19, David Geier wrote:\n> Hi Tom,\n>> There won't *be* any MCV stats for a column that ANALYZE perceives to\n>> be unique, so I'm not quite sure where the claimed savings comes from.\n>\n> We save if one join attribute is unique while the other isn't. In that\n> case stored MCV stats are read for the non-unique attribute but then\n> never used. This is because MCV stats in join selectivity estimation are\n> only used if they're present on both columns\n>\n\nRight - if we only have MCV on one side of the join, we currently end up\nloading the MCV we have only to not use it anyway. The uniqueness is a\nsimple way to detect some of those cases. 
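To make the wasted work concrete: the payoff of having MCVs on both sides is the pairwise match that eqjoinsel_inner() performs over the two lists, accumulating freq1 * freq2 for equal values; with only one list there is nothing to match against. A rough self-contained sketch of that loop (simplified to int values and plain ==, where the real code applies the join operator's equality function):

```c
/*
 * Toy version of the MCV-vs-MCV matching in eqjoinsel_inner(): the joint
 * selectivity contribution is the sum of freq1[i] * freq2[j] over pairs of
 * equal MCV values.  With only one list available there is nothing to
 * match against, so loading (and deconstructing) it buys nothing.
 */
double
mcv_match_freq(const int *val1, const double *freq1, int n1,
               const int *val2, const double *freq2, int n2)
{
    double matchprodfreq = 0.0;

    for (int i = 0; i < n1; i++)
        for (int j = 0; j < n2; j++)
            if (val1[i] == val2[j])
                matchprodfreq += freq1[i] * freq2[j];
    return matchprodfreq;
}
```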
I'd bet the savings can be\nquite significant for small joins and/or cases with large MCV.\n\nI wonder if we might be yet a bit smarter, though.\n\nFor example, assume the first attribute is not defined as \"unique\" but\nwe still don't have a MCV (it may be unique - or close to unique - in\npractice, or maybe it's just uniform distribution). We end up with\n\n have_mcvs1 = false\n\nCan't we just skip trying to load the second MCV? So we could do\n\n if (have_mcvs1 && HeapTupleIsValid(vardata2.statsTuple))\n { ... try loading mcv2 ... }\n\nOr perhaps what if we have a function that quickly determines if the\nattribute has MCV, without loading it? I'd bet the expensive part of\nget_attstatslot() is the deconstruct_array().\n\nWe could have a function that only does the first small loop over slots,\nand returns true/false if we have a slot of the requested stakind. It\nmight even check the isunique flag first, to make it more convenient.\n\nAnd only if both sides return \"true\" we'd load the MCV, deconstruct the\narray and all that.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 18 Nov 2022 01:27:45 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Optimize join selectivity estimation by not reading MCV stats for\n unique join attributes" }, { "msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> Or perhaps what if we have a function that quickly determines if the\n> attribute has MCV, without loading it? 
I'd bet the expensive part of\n> get_attstatslot() is the deconstruct_array().\n> We could have a function that only does the first small loop over slots,\n> and returns true/false if we have a slot of the requested stakind.\n\nYeah, I like this idea.\n\n> It might even check the isunique flag first, to make it more convenient.\n\nThat would tie it to this one use-case, which doesn't seem advisable.\nI think we should forget the known-unique angle and just do quick\nchecks to see if both sides have MCVs.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 17 Nov 2022 19:39:30 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Optimize join selectivity estimation by not reading MCV stats for\n unique join attributes" }, { "msg_contents": "I wrote:\n> Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n>> Or perhaps what if we have a function that quickly determines if the\n>> attribute has MCV, without loading it? I'd bet the expensive part of\n>> get_attstatslot() is the deconstruct_array().\n>> We could have a function that only does the first small loop over slots,\n>> and returns true/false if we have a slot of the requested stakind.\n\n> Yeah, I like this idea.\n\nActually, looking at get_attstatslot, I realize it was already designed\nto do that -- just pass zero for flags. 
So we could do it as attached.\n\nWe could make some consequent simplifications by only retaining one\n\"have_mcvs\" flag, but I'm inclined to leave the rest of the code as-is.\nWe would not get much gain from that, and it would make this harder\nto undo if there ever is a reason to consider just one set of MCVs.\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 17 Nov 2022 20:36:50 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Optimize join selectivity estimation by not reading MCV stats for\n unique join attributes" }, { "msg_contents": "On Fri, Nov 18, 2022 at 9:36 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Actually, looking at get_attstatslot, I realize it was already designed\n> to do that -- just pass zero for flags. So we could do it as attached.\n\n\nYes, it is. Using zero flag would short-cut get_attstatsslot() to just\nreturn whether the slot type exists without loading it. Do you think we\nneed to emphasize this use case in the comments for 'flags'? It seems\ncurrently there is no such use case in the codes on HEAD.\n\nI wonder whether we need to also check statistic_proc_security_check()\nwhen determining if MCVs exists in both sides.\n\nThanks\nRichard", "msg_date": "Fri, 18 Nov 2022 11:55:14 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optimize join selectivity estimation by not reading MCV stats for\n unique join attributes" }, { "msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> Yes, it is. Using zero flag would short-cut get_attstatsslot() to just\n> return whether the slot type exists without loading it. Do you think we\n> need to emphasize this use case in the comments for 'flags'?\n\nPerhaps, it's not really obvious now.\n\n> I wonder whether we need to also check statistic_proc_security_check()\n> when determining if MCVs exists in both sides.\n\nYeah, I thought about hoisting the statistic_proc_security_check\ntests up into get_mcv_stats. I don't think it's a great idea\nthough. Again, it'd complicate untangling this if we ever\ngeneralize the use of MCVs in this function. Also, I don't\nthink we should be micro-optimizing the case where the security\ncheck doesn't pass --- if it doesn't, you're going to be hurting\nfrom bad plans a lot more than you are from some wasted cycles\nhere.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 18 Nov 2022 00:21:36 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Optimize join selectivity estimation by not reading MCV stats for\n unique join attributes" }, { "msg_contents": "Thanks everyone for the great feedback and suggestions.\n\n>\n>> Yes, it is. Using zero flag would short-cut get_attstatsslot() to just\n>> return whether the slot type exists without loading it. 
Do you think we\n>> need to emphasize this use case in the comments for 'flags'?\n> Perhaps, it's not really obvious now.\n\nComment added.\n\n\n> I wonder whether we need to also check statistic_proc_security_check()\n>> when determining if MCVs exists in both sides.\n> Yeah, I thought about hoisting the statistic_proc_security_check\n> tests up into get_mcv_stats. I don't think it's a great idea\n> though. Again, it'd complicate untangling this if we ever\n> generalize the use of MCVs in this function. Also, I don't\n> think we should be micro-optimizing the case where the security\n> check doesn't pass --- if it doesn't, you're going to be hurting\n> from bad plans a lot more than you are from some wasted cycles\n> here.\n\nSounds reasonable.\n\nAttached is v2 of the patch.\nThis is basically Tom's version plus a comment for the flags of \nget_attstatslot() as suggested by Richard.\n\nI couldn't come up with any reasonable way of writing an automated test \nfor that.\nAny ideas?\n\n--\nDavid Geier\n(ServiceNow)", "msg_date": "Fri, 18 Nov 2022 09:54:46 +0100", "msg_from": "David Geier <geidav.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Optimize join selectivity estimation by not reading MCV stats for\n unique join attributes" }, { "msg_contents": "On 11/18/22 09:54, David Geier wrote:\n> Thanks everyone for the great feedback and suggestions.\n> \n>>\n>>> Yes, it is.  Using zero flag would short-cut get_attstatsslot() to just\n>>> return whether the slot type exists without loading it.  Do you think we\n>>> need to emphasize this use case in the comments for 'flags'?\n>> Perhaps, it's not really obvious now.\n> \n> Comment added.\n> \n> \n>> I wonder whether we need to also check statistic_proc_security_check()\n>>> when determining if MCVs exists in both sides.\n>> Yeah, I thought about hoisting the statistic_proc_security_check\n>> tests up into get_mcv_stats.  I don't think it's a great idea\n>> though.  
Again, it'd complicate untangling this if we ever\n>> generalize the use of MCVs in this function.  Also, I don't\n>> think we should be micro-optimizing the case where the security\n>> check doesn't pass --- if it doesn't, you're going to be hurting\n>> from bad plans a lot more than you are from some wasted cycles\n>> here.\n> \n> Sounds reasonable.\n> \n> Attached is v2 of the patch.\n> This is basically Tom's version plus a comment for the flags of\n> get_attstatslot() as suggested by Richard.\n> \n\nSeems fine. I wonder if we could/should introduce a new constant for 0,\nsimilar to ATTSTATSSLOT_NUMBERS/ATTSTATSSLOT_VALUES, instead of using a\nmagic constant. Say, ATTSTATSSLOT_NONE or ATTSTATSSLOT_CHECK.\n\n> I couldn't come up with any reasonable way of writing an automated test\n> for that.\n> Any ideas?\n> \n\nI don't think you can write a test for this, because there is no change\nto behavior that can be observed by the user. If one side has no MCV,\nthe only difference is whether we try to load the other MCV or not.\nThere's no impact on estimates, because we won't use it.\n\nIMO the best thing we can do is check coverage, that the new code is\nexercised in regression tests. 
And I think that's fine.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 18 Nov 2022 14:00:06 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Optimize join selectivity estimation by not reading MCV stats for\n unique join attributes" }, { "msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> On 11/18/22 09:54, David Geier wrote:\n>> I couldn't come up with any reasonable way of writing an automated test\n>> for that.\n\n> I don't think you can write a test for this, because there is no change\n> to behavior that can be observed by the user.\n\nYeah, and the delta in performance is surely too small to be\nmeasured reliably in the buildfarm. I think coverage will have\nto be sufficient.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 18 Nov 2022 09:45:44 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Optimize join selectivity estimation by not reading MCV stats for\n unique join attributes" }, { "msg_contents": "On 11/18/22 14:00, Tomas Vondra wrote:\n> Seems fine. I wonder if we could/should introduce a new constant for 0,\n> similar to ATTSTATSSLOT_NUMBERS/ATTSTATSSLOT_VALUES, instead of using a\n> magic constant. Say, ATTSTATSSLOT_NONE or ATTSTATSSLOT_CHECK.\nGood idea. I called it ATTSTATSSLOT_EXISTS. New patch attached.\n> I don't think you can write a test for this, because there is no change\n> to behavior that can be observed by the user. If one side has no MCV,\n> the only difference is whether we try to load the other MCV or not.\n\nYeah. I thought along the lines of checking the number of pages read \nwhen the pg_stats entry is not in syscache yet. But that seems awfully \nimplementation specific. 
So no test provided.\n\n-- \nDavid Geier\n(ServiceNow)", "msg_date": "Fri, 18 Nov 2022 16:59:02 +0100", "msg_from": "David Geier <geidav.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Optimize join selectivity estimation by not reading MCV stats for\n unique join attributes" }, { "msg_contents": "David Geier <geidav.pg@gmail.com> writes:\n> On 11/18/22 14:00, Tomas Vondra wrote:\n>> Seems fine. I wonder if we could/could introduce a new constant for 0,\n>> similar to ATTSTATSSLOT_NUMBERS/ATTSTATSSLOT_VALUES, instead of using a\n>> magic constant. Say, ATTSTATSSLOT_NONE or ATTSTATSSLOT_CHECK.\n\n> Good idea. I called it ATTSTATSSLOT_EXISTS. New patch attached.\n\nNo, I don't think it's a good idea. The flags argument is documented as,\nand used as, a bitmask of multiple options. Passing zero fits fine with\nthat and is consistent with what we do elsewhere. Turning it into\nsort-of-an-enum-but-not-really isn't an improvement.\n\nI didn't like your draft comment too much, because it didn't cover\nwhat I think is the most important point: after a call with flags=0\nwe do not need a matching free_attstatsslot call to avoid leaking\nanything. (If we did, this patch would be a lot hairier.)\n\nI rewrote the comment the way I wanted it and pushed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 18 Nov 2022 11:07:18 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Optimize join selectivity estimation by not reading MCV stats for\n unique join attributes" } ]
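The flags == 0 short-circuit that was committed here can be modeled in miniature. The sketch below is not the real catalog code (pg_statistic slots hold arrays that get_attstatsslot() would otherwise have to deconstruct); it only shows the shape of an existence probe that skips the expensive part and, per the committed comment, needs no matching free call afterwards.

```c
#include <stdbool.h>
#include <stddef.h>

#define NUM_SLOTS 5
#define SLOT_VALUES 0x01        /* caller wants the values deconstructed */

typedef struct
{
    int     stakind;            /* 0 means the slot is unused */
    int     nvalues;
} ToySlot;

/*
 * With flags == 0, only report whether a slot of the requested kind
 * exists; nothing is allocated, so nothing has to be freed afterwards.
 */
bool
toy_get_attstatsslot(const ToySlot *slots, int reqkind, int flags,
                     int *nvalues_out)
{
    for (int i = 0; i < NUM_SLOTS; i++)
    {
        if (slots[i].stakind == reqkind)
        {
            if (flags & SLOT_VALUES)    /* the expensive path */
                *nvalues_out = slots[i].nvalues;
            return true;
        }
    }
    return false;
}

/* Probe both join sides first; deconstruct only if both have an MCV slot. */
bool
toy_both_sides_have_mcv(const ToySlot *side1, const ToySlot *side2,
                        int mcv_kind)
{
    return toy_get_attstatsslot(side1, mcv_kind, 0, NULL) &&
           toy_get_attstatsslot(side2, mcv_kind, 0, NULL);
}
```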
[ { "msg_contents": "Per the discussion at [1], it seems like it'd be a good idea to make\nBitmapsets into full-fledged, tagged Nodes, so that we could do things\nlike print or copy lists of them without special-case logic. The\nextra space for the NodeTag is basically free due to alignment\nconsiderations, at least on 64-bit hardware.\n\nAttached is a cleaned-up version of Amit's patch v24-0003 at [2].\nI fixed the problems with not always tagging Bitmapsets, and changed\nthe outfuncs/readfuncs logic so that Bitmapsets still print exactly\nas they did before (thus, this doesn't require a catversion bump).\n\nAs proof of concept, I removed the read_write_ignore labels from\nRelOptInfo's unique_for_rels and non_unique_for_rels fields, and\ngot nice-looking debug printout:\n\n :unique_for_rels ((b 1))\n :non_unique_for_rels <>\n\nI also removed some special-case code from indxpath.c because\nlist_member() can do the same thing now. (There might be other\nplaces that can be simplified; I didn't look very hard.)\n\nIt'd be possible to make Bitmapset fields be (mostly) not special cases\nin the copy/equal/out/read support. 
But I chose to leave that alone,\nbecause it'd add a little runtime overhead for indirecting through the\ngeneric support functions while not really saving any code space.\n\nBarring objections, I'd like to go ahead and push this.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/94353655-c177-1f55-7afb-b2090de33341%40enterprisedb.com\n[2] https://www.postgresql.org/message-id/CA%2BHiwqEYCLRZ2Boq_uK0pjLn_9b8XL-LmwKj7HN5kJOivUkYLg%40mail.gmail.com", "msg_date": "Fri, 11 Nov 2022 15:05:58 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Making Bitmapsets be valid Nodes" }, { "msg_contents": "On 11.11.22 21:05, Tom Lane wrote:\n> Per the discussion at [1], it seems like it'd be a good idea to make\n> Bitmapsets into full-fledged, tagged Nodes, so that we could do things\n> like print or copy lists of them without special-case logic. The\n> extra space for the NodeTag is basically free due to alignment\n> considerations, at least on 64-bit hardware.\n> \n> Attached is a cleaned-up version of Amit's patch v24-0003 at [2].\n> I fixed the problems with not always tagging Bitmapsets, and changed\n> the outfuncs/readfuncs logic so that Bitmapsets still print exactly\n> as they did before (thus, this doesn't require a catversion bump).\n\nThis looks good to me.\n\n\n\n", "msg_date": "Sun, 13 Nov 2022 13:22:33 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Making Bitmapsets be valid Nodes" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 11.11.22 21:05, Tom Lane wrote:\n>> Attached is a cleaned-up version of Amit's patch v24-0003 at [2].\n>> I fixed the problems with not always tagging Bitmapsets, and changed\n>> the outfuncs/readfuncs logic so that Bitmapsets still print exactly\n>> as they did before (thus, this doesn't require a catversion bump).\n\n> This looks good to me.\n\nPushed, thanks for 
looking.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 13 Nov 2022 10:23:28 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Making Bitmapsets be valid Nodes" }, { "msg_contents": "On Mon, Nov 14, 2022 at 12:23 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> > On 11.11.22 21:05, Tom Lane wrote:\n> >> Attached is a cleaned-up version of Amit's patch v24-0003 at [2].\n> >> I fixed the problems with not always tagging Bitmapsets, and changed\n> >> the outfuncs/readfuncs logic so that Bitmapsets still print exactly\n> >> as they did before (thus, this doesn't require a catversion bump).\n>\n> > This looks good to me.\n>\n> Pushed, thanks for looking.\n\nThanks a lot for this.\n\nI agree that it may not be worthwhile to add an extra function call by\nchanging COPY_BITMAPSET_FIELD, etc. that is currently emitted by\ngen_node_support.pl for any Bitmapset * / Relid struct members to\nCOPY_NODE_FIELD, etc.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 14 Nov 2022 16:26:07 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Making Bitmapsets be valid Nodes" } ]
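As a footnote to "basically free due to alignment": Bitmapset's layout is a 4-byte count followed by an array of bitmapwords, and on 64-bit builds a bitmapword is 8 bytes, so the words must start at an 8-byte boundary with or without a 4-byte NodeTag in front. A back-of-the-envelope model of that layout arithmetic (not the actual struct definitions):

```c
#include <stdbool.h>
#include <stddef.h>

/* Round off up to the next multiple of align (a power of two). */
size_t
align_up(size_t off, size_t align)
{
    return (off + align - 1) & ~(align - 1);
}

/*
 * Offset at which the words[] flexible array begins, for a layout of
 * { [NodeTag type;] int nwords; bitmapword words[]; } given the
 * platform alignment of bitmapword (8 on 64-bit builds, often 4 on
 * 32-bit ones).
 */
size_t
words_offset(bool with_tag, size_t word_align)
{
    size_t off = 0;

    if (with_tag)
        off += 4;               /* NodeTag: an enum, 4 bytes */
    off += 4;                   /* int nwords */
    return align_up(off, word_align);
}
```

With 8-byte words the array starts at offset 8 either way, so the tag costs nothing; only when bitmapword is 4-byte aligned does the tag add 4 bytes.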
[ { "msg_contents": "Hi, hackers\n\n\nParam RangeTblEntry *rte in function set_plain_rel_pathlist is not used at all.\n\nI looked at the commit e2fa76d80b (10 years ago); it’s been unused since then.\n\nAdd a patch to remove it.\n\nRegards,\nZhang Mingli", "msg_date": "Sat, 12 Nov 2022 19:13:49 +0800", "msg_from": "Zhang Mingli <zmlpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Remove unused param rte in set_plain_rel_pathlist" }, { "msg_contents": "Zhang Mingli <zmlpostgres@gmail.com> writes:\n> Param RangeTblEntry *rte in function set_plain_rel_pathlist is not used at all.\n> Add a patch to remove it.\n\nI'm disinclined to change that, as it'd make set_plain_rel_pathlist\ndifferent from its sibling functions, which do need the RTE.\n\nIn practice this has cost zero anyway, since set_plain_rel_pathlist\nwill surely get inlined into its sole caller.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 12 Nov 2022 10:00:39 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Remove unused param rte in set_plain_rel_pathlist" } ]
I think that every other on-disk structure is either a pure\nhint without any accompanying WAL record, or an atomic action with a\nWAL record whose REDO routine needs to reliably reproduce the same\non-disk state as original execution (barring preexisting differences\nin how hint bits are set between original execution and a hot\nstandby).\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 11 Nov 2022 14:40:45 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Document WAL rules related to PD_ALL_VISIBLE in README" } ]
[ { "msg_contents": "Hi, hackers\n\n\nParam RangeTblEntry *rte in function set_plain_rel_pathlist is not used at all.\n\nI look at the commit e2fa76d80b(10 years ago), it’s useless since then.\n\nAdd a path to remove it.\n\nRegards,\nZhang Mingli", "msg_date": "Sat, 12 Nov 2022 19:13:49 +0800", "msg_from": "Zhang Mingli <zmlpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Remove unused param rte in set_plain_rel_pathlist" }, { "msg_contents": "Zhang Mingli <zmlpostgres@gmail.com> writes:\n> Param RangeTblEntry *rte in function set_plain_rel_pathlist is not used at all.\n> Add a path to remove it.\n\nI'm disinclined to change that, as it'd make set_plain_rel_pathlist\ndifferent from its sibling functions, which do need the RTE.\n\nIn practice this has cost zero anyway, since set_plain_rel_pathlist\nwill surely get inlined into its sole caller.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 12 Nov 2022 10:00:39 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Remove unused param rte in set_plain_rel_pathlist" } ]
[ { "msg_contents": "When setting up a postgres tree with Meson on an almost empty Debian 11 VM I\nhit an error on \"meson setup -Ddebug=true build .\" like this:\n\n Program python3 found: YES (/usr/bin/python3)\n meson.build:987:2: ERROR: Unknown method \"dependency\" in object.\n\nThe error in itself isn't terribly self-explanatory. According to the log the\nerror was a missing Python package:\n\n Traceback (most recent call last):\n File \"<string>\", line 20, in <module>\n File \"<string>\", line 8, in links_against_libpython\n ModuleNotFoundError: No module named 'distutils.core'\n\nInstalling the distutils package fixes it, but it seems harsh to fail setup on\na missing package. Would something like the attached make sense?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Mon, 14 Nov 2022 14:23:02 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Error on missing Python module in Meson setup" }, { "msg_contents": "Hi,\n\nOn 2022-11-14 14:23:02 +0100, Daniel Gustafsson wrote:\n> When setting up a postgres tree with Meson on an almost empty Debian 11 VM I\n> hit an error on \"meson setup -Ddebug=true build .\" like this:\n> \n> Program python3 found: YES (/usr/bin/python3)\n> meson.build:987:2: ERROR: Unknown method \"dependency\" in object.\n> \n> The error in itself isn't terribly self-explanatory. According to the log the\n> error was a missing Python package:\n\n> Traceback (most recent call last):\n> File \"<string>\", line 20, in <module>\n> File \"<string>\", line 8, in links_against_libpython\n> ModuleNotFoundError: No module named 'distutils.core'\n> \n> Installing the distutils package fixes it, but it seems harsh to fail setup on\n> a missing package. 
Would something like the attached make sense?\n\nThe error is a bit better in newer versions of meson:\nmeson.build:986: WARNING: <PythonExternalProgram 'python3' -> ['/usr/bin/python3']> is not a valid python or it is missing distutils\nbut we do still error out weirdly afterwards.\n\nWe probably should report this to the meson folks regardless of us working\naround it or not.\n\n\n> diff --git a/meson.build b/meson.build\n> index 058382046e..1a7e301fc9 100644\n> --- a/meson.build\n> +++ b/meson.build\n> @@ -984,8 +984,12 @@ pyopt = get_option('plpython')\n> if not pyopt.disabled()\n> pm = import('python')\n> python3_inst = pm.find_installation(required: pyopt.enabled())\n> - python3_dep = python3_inst.dependency(embed: true, required: pyopt.enabled())\n> - if not cc.check_header('Python.h', dependencies: python3_dep, required: pyopt.enabled())\n> + if python3_inst.found()\n> + python3_dep = python3_inst.dependency(embed: true, required: pyopt.enabled())\n> + if not cc.check_header('Python.h', dependencies: python3_dep, required: pyopt.enabled())\n> + python3_dep = not_found_dep\n> + endif\n> + else\n> python3_dep = not_found_dep\n> endif\n> else\n\nPerhaps worth simplifying a bit. 
What do you think about:\n\n\npyopt = get_option('plpython')\npython3_dep = not_found_dep\nif not pyopt.disabled()\n pm = import('python')\n python3_inst = pm.find_installation(required: pyopt.enabled())\n if python3_inst.found()\n python3_dep_int = python3_inst.dependency(embed: true, required: pyopt.enabled())\n if cc.check_header('Python.h', dependencies: python3_dep_int, required: pyopt.enabled())\n python3_dep = python3_dep_int\n endif\n endif\nendif\n\n\nGreetings,\n\nAndres Freund", "msg_date": "Mon, 14 Nov 2022 16:25:04 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Error on missing Python module in Meson setup" }, { "msg_contents": "> On 15 Nov 2022, at 01:25, Andres Freund <andres@anarazel.de> wrote:\n> On 2022-11-14 14:23:02 +0100, Daniel Gustafsson wrote:\n\n>> Installing the distutils package fixes it, but it seems harsh to fail setup on\n>> a missing package. Would something like the attached make sense?\n> \n> The error is a bit better in newer versions of meson:\n> meson.build:986: WARNING: <PythonExternalProgram 'python3' -> ['/usr/bin/python3']> is not a valid python or it is missing distutils\n> but we do still error out weirdly afterwards.\n> \n> We probably should report this to the meson folks regardless of us working\n> around it or not.\n\nThat's better, but I wish meson could be more specific since it at that point\nshould know if Python works at all or is missing distutils.\n\n> Perhaps worth simplifying a bit. What do you think about:\n> \n> ...\n\nAgreed, that's a better version. The attached version with a disabler\nobject is even more to the point, and seems to work on my box both with and\nwithout distutils and libpython. 
Is this the correct way to use disablers?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Tue, 15 Nov 2022 13:41:15 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Error on missing Python module in Meson setup" } ]
[ { "msg_contents": "[ I'm intentionally forking this off as a new thread, so as to\nnot confuse the cfbot about what's the live patchset on the\nExecRTCheckPerms thread. ]\n\nAmit Langote <amitlangote09@gmail.com> writes:\n> On Sat, Nov 12, 2022 at 1:46 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> The main thing I was wondering about in connection with that\n>> was whether to assume that there could be other future applications\n>> of the logic to perform multi-bitmapset union, intersection,\n>> etc. If so, then I'd be inclined to choose different naming and\n>> put those functions in or near to bitmapset.c. It doesn't look\n>> like Amit's code needs anything like that, but maybe somebody\n>> has an idea about other applications?\n\n> Yes, simple storage of multiple Bitmapsets in a List somewhere in a\n> parse/plan tree sounded like that would have wider enough use to add\n> proper node support for. Assuming you mean trying to generalize\n> VarAttnoSet in your patch 0004 posted at [2], I wonder if you want to\n> somehow make its indexability by varno / RT index a part of the\n> interface of the generic code you're thinking for it?\n\nFor discussion's sake, here's my current version of that 0004 patch,\nrewritten to use list-of-bitmapset as the data structure. (This\ncould actually be pulled out of the outer-join-vars patchset and\ncommitted independently, just as a minor performance improvement.\nIt doesn't quite apply cleanly to HEAD, but pretty close.)\n\nAs it stands, the new functions are still in util/clauses.c, but\nif we think they could be of general use it'd make sense to move them\neither to nodes/bitmapset.c or to some new file under backend/nodes.\n\nSome other thoughts:\n\n* The multi_bms prefix is a bit wordy, so I was thinking of shortening\nthe function names to mbms_xxx. Maybe that's too brief.\n\n* This is a pretty short list of functions so far. I'm not eager\nto build out a bunch of dead code though. 
Is it OK to leave it\nwith just this much functionality until someone needs more?\n\n* I'm a little hesitant about whether the API actually should be\nList-of-Bitmapset, or some dedicated struct as I had in the previous\nversion of 0004. This version is way less invasive in prepjointree.c\nthan that was, but the reason is there's ambiguity about what the\nforced_null_vars Lists actually contain, which feels error-prone.\n\nComments?\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 14 Nov 2022 09:57:18 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "List of Bitmapset (was Re: ExecRTCheckPerms() and many prunable\n partitions)" }, { "msg_contents": " On Mon, Nov 14, 2022 at 11:57 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > On Sat, Nov 12, 2022 at 1:46 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> The main thing I was wondering about in connection with that\n> >> was whether to assume that there could be other future applications\n> >> of the logic to perform multi-bitmapset union, intersection,\n> >> etc. If so, then I'd be inclined to choose different naming and\n> >> put those functions in or near to bitmapset.c. It doesn't look\n> >> like Amit's code needs anything like that, but maybe somebody\n> >> has an idea about other applications?\n>\n> > Yes, simple storage of multiple Bitmapsets in a List somewhere in a\n> > parse/plan tree sounded like that would have wider enough use to add\n> > proper node support for. Assuming you mean trying to generalize\n> > VarAttnoSet in your patch 0004 posted at [2], I wonder if you want to\n> > somehow make its indexability by varno / RT index a part of the\n> > interface of the generic code you're thinking for it?\n>\n> For discussion's sake, here's my current version of that 0004 patch,\n> rewritten to use list-of-bitmapset as the data structure. 
(This\n> could actually be pulled out of the outer-join-vars patchset and\n> committed independently, just as a minor performance improvement.\n> It doesn't quite apply cleanly to HEAD, but pretty close.)\n>\n> As it stands, the new functions are still in util/clauses.c, but\n> if we think they could be of general use it'd make sense to move them\n> either to nodes/bitmapset.c or to some new file under backend/nodes.\n\nThese multi_bms_* functions sound generic enough to me, so +1 to put\nthem in nodes/bitmapset.c. Or even a new file if the API should\ninvolve a dedicated struct enveloping the List as you write below.\n\n> Some other thoughts:\n>\n> * The multi_bms prefix is a bit wordy, so I was thinking of shortening\n> the function names to mbms_xxx. Maybe that's too brief.\n\nFWIW, multi_bms_* naming sounds fine to me.\n\n> * This is a pretty short list of functions so far. I'm not eager\n> to build out a bunch of dead code though. Is it OK to leave it\n> with just this much functionality until someone needs more?\n\n+1\n\n> * I'm a little hesitant about whether the API actually should be\n> List-of-Bitmapset, or some dedicated struct as I had in the previous\n> version of 0004. This version is way less invasive in prepjointree.c\n> than that was, but the reason is there's ambiguity about what the\n> forced_null_vars Lists actually contain, which feels error-prone.\n\nAre you thinking of something like a MultiBitmapset that wraps the\nmulti_bms List? That sounds fine to me. Another option is to make\nthe generic API be List-of-Bitmapset but keep VarAttnoSet in\nprepjointree.c and put the List in it. 
IMHO, VarAttnoSet is\ndefinitely more self-documenting for that patch's purposes.\n\n+ * The new member is identified by the zero-based index of the List\n+ * element it should go into, and the bit number to be set therein.\n+ */\n+List *\n+multi_bms_add_member(List *mbms, int index1, int index2)\n\nThe comment sounds a bit ambiguous, especially the \", and the bit\nnumber to be set therein.\" part. If you meant to describe the\narguments, how about mentioning their names too, as in:\n\nThe new member is identified by 'index1', the zero-based index of the\nList element it should go into, and 'index2' specifies the bit number\nto be set therein.\n\n+ /* Add empty elements to a, as needed */\n+ while (list_length(a) < list_length(b))\n+ a = lappend(a, NULL);\n+ /* forboth will stop at the end of the shorter list, which is fine */\n\nIsn't this comment unnecessary given that the while loop makes both\nlists be the same length?\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 15 Nov 2022 11:04:28 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: List of Bitmapset (was Re: ExecRTCheckPerms() and many prunable\n partitions)" }, { "msg_contents": "On 2022-Nov-14, Tom Lane wrote:\n\n> For discussion's sake, here's my current version of that 0004 patch,\n> rewritten to use list-of-bitmapset as the data structure.\n\nI feel that there should be more commentary that explains what a\nmulti-bms is. Not sure where, maybe just put it near the function that\nfirst appears in the file.\n\n> * The multi_bms prefix is a bit wordy, so I was thinking of shortening\n> the function names to mbms_xxx. Maybe that's too brief.\n\nI don't think the \"ulti_\" bytes add a lot, and short names are better.\nEither you know what a mbms is, or you don't. 
If the latter, then you\njump to one of these functions in order to find out what the data\nstructure is; after that, you can read the code and it should be clear\nenough.\n\n> * This is a pretty short list of functions so far. I'm not eager\n> to build out a bunch of dead code though. Is it OK to leave it\n> with just this much functionality until someone needs more?\n\nI agree with not adding dead code.\n\n> * I'm a little hesitant about whether the API actually should be\n> List-of-Bitmapset, or some dedicated struct as I had in the previous\n> version of 0004. This version is way less invasive in prepjointree.c\n> than that was, but the reason is there's ambiguity about what the\n> forced_null_vars Lists actually contain, which feels error-prone.\n\nHmm ... if somebody makes a mistake, does the functionality break in\nobvious ways, or is it very hard to pinpoint what happened?\n\n> +/*\n> + * multi_bms_add_member\n> + *\t\tAdd a new member to a list of bitmapsets.\n> + *\n> + * This is like bms_add_member, but for lists of bitmapsets.\n> + * The new member is identified by the zero-based index of the List\n> + * element it should go into, and the bit number to be set therein.\n> + */\n> +List *\n> +multi_bms_add_member(List *mbms, int index1, int index2)\n\nMaybe s/index1/listidx/ or bitmapidx and s/index2/bitnr/ ?\n\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"La rebeldía es la virtud original del hombre\" (Arthur Schopenhauer)\n\n\n", "msg_date": "Tue, 15 Nov 2022 10:07:33 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: List of Bitmapset (was Re: ExecRTCheckPerms() and many prunable\n partitions)" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2022-Nov-14, Tom Lane wrote:\n>> For discussion's sake, here's my current version of that 0004 patch,\n>> rewritten to use list-of-bitmapset as the data structure.\n\n> I feel that there should be 
more commentary that explains what a\n> multi-bms is. Not sure where, maybe just put it near the function that\n> first appears in the file.\n\nRight. I split the new functions out to new files multibitmapset.h/.c,\nso that the file headers can carry the overall explanation. (I duplicated\nthe text between .h and .c, which is also true of bitmapset.h/.c, but\nmaybe that's overkill.)\n\n>> * The multi_bms prefix is a bit wordy, so I was thinking of shortening\n>> the function names to mbms_xxx. Maybe that's too brief.\n\n> I don't think the \"ulti_\" bytes add a lot, and short names are better.\n\nYeah, after sleeping on it I like mbms.\n\n>> * This is a pretty short list of functions so far. I'm not eager\n>> to build out a bunch of dead code though. Is it OK to leave it\n>> with just this much functionality until someone needs more?\n\n> I agree with not adding dead code.\n\nI concluded that the only thing that makes this an odd set of functions\nto start out with is the lack of mbms_is_member; it seems asymmetric\nto have mbms_add_member but not mbms_is_member. So I added that.\nI'm content to let the rest grow out as needed.\n\n>> * I'm a little hesitant about whether the API actually should be\n>> List-of-Bitmapset, or some dedicated struct as I had in the previous\n>> version of 0004. This version is way less invasive in prepjointree.c\n>> than that was, but the reason is there's ambiguity about what the\n>> forced_null_vars Lists actually contain, which feels error-prone.\n\n> Hmm ... if somebody makes a mistake, does the functionality break in\n> obvious ways, or is it very hard to pinpoint what happened?\n\nNow that Bitmapset is a full-fledged Node type, we can make use of\ncastNode checks to verify that the input Lists contain what we expect.\nThat seems probably sufficient to catch coding errors.\n\n>> +multi_bms_add_member(List *mbms, int index1, int index2)\n\n> Maybe s/index1/listidx/ or bitmapidx and s/index2/bitnr/ ?\n\nRight. 
I used listidx and bitidx.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 15 Nov 2022 13:23:27 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: List of Bitmapset (was Re: ExecRTCheckPerms() and many prunable\n partitions)" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> On Mon, Nov 14, 2022 at 11:57 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> + * The new member is identified by the zero-based index of the List\n>> + * element it should go into, and the bit number to be set therein.\n\n> The comment sounds a bit ambiguous, especially the \", and the bit\n> number to be set therein.\" part. If you meant to describe the\n> arguments, how about mentioning their names too, as in:\n\nDone that way in the patch I just posted.\n\n>> + /* forboth will stop at the end of the shorter list, which is fine */\n\n> Isn't this comment unnecessary given that the while loop makes both\n> lists be the same length?\n\nNo, the while loop ensures that a is at least as long as b.\nIt could have started out longer, though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 15 Nov 2022 13:25:16 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: List of Bitmapset (was Re: ExecRTCheckPerms() and many prunable\n partitions)" }, { "msg_contents": "On Wed, Nov 16, 2022 at 3:25 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > On Mon, Nov 14, 2022 at 11:57 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> + * The new member is identified by the zero-based index of the List\n> >> + * element it should go into, and the bit number to be set therein.\n>\n> > The comment sounds a bit ambiguous, especially the \", and the bit\n> > number to be set therein.\" part. 
If you meant to describe the\n> > arguments, how about mentioning their names too, as in:\n>\n> Done that way in the patch I just posted.\n\nThanks.\n\n> >> + /* forboth will stop at the end of the shorter list, which is fine */\n>\n> > Isn't this comment unnecessary given that the while loop makes both\n> > lists be the same length?\n>\n> No, the while loop ensures that a is at least as long as b.\n> It could have started out longer, though.\n\nOops, I missed that case.\n\nThe latest version looks pretty good to me.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 16 Nov 2022 11:35:08 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: List of Bitmapset (was Re: ExecRTCheckPerms() and many prunable\n partitions)" } ]
[ { "msg_contents": "Hi,\n\nI was looking at Todo item:/Consider changing error to warning for strings larger than one megabyte/ \nand after going through existing mails and suggestions. I would like to propose a patch for tsearch to change error into warning for string larger than one mb and also increase word and position limits.\n\nI've checked operations select/insertion/index, which worked fine without any error (except for the warning as intended).\n\nThoughts: I am not really sure why was it proposed in the mail to decrease len/MAXSTRLEN.\n> You could decrease len in WordEntry to 9 (512 characters) and increase \n> pos to 22 (4 Mb). Don't forget to update MAXSTRLEN and MAXSTRPOS \n> accordingly.\n\n\nI'm attaching a patch herewith. I will be glad to get some feedback on this.\n\n\nThanks,\nAnkit", "msg_date": "Tue, 15 Nov 2022 01:46:00 +0530", "msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>", "msg_from_op": true, "msg_subject": "Change error to warning and increase thresholds of tsearch" } ]
[ { "msg_contents": "\nHere's a couple of things I've noticed.\n\n\nandrew@ub22:HEAD $ inst.meson/bin/pg_config --libdir --ldflags\n/home/andrew/pgl/pg_head/root/HEAD/inst.meson/lib/x86_64-linux-gnu\n-fuse-ld=lld -DCOPY_PARSE_PLAN_TREES -DRAW_EXPRESSION_COVERAGE_TEST\n-DWRITE_READ_PARSE_PLAN_TREES\n\n\nAre we really intending to add a new subdirectory to the default layout?\nWhy is that x84_64-linux-gnu there?\n\nAlso, why have the CPPFLAGS made their way into the LDFLAGS? That seems\nwrong.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 14 Nov 2022 17:41:54 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "meson oddities" }, { "msg_contents": "On Mon, Nov 14, 2022 at 05:41:54PM -0500, Andrew Dunstan wrote:\n> Also, why have the CPPFLAGS made their way into the LDFLAGS? That seems\n> wrong.\n\nNot only CPPFLAGS. I pass down some custom CFLAGS to the meson\ncommand as well, and these find their way to LDFLAGS on top of\nCFLAGS for the user-defined entries. I would not have expected that,\neither.\n--\nMichael", "msg_date": "Tue, 15 Nov 2022 08:22:59 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: meson oddities" }, { "msg_contents": "Hi,\n\nOn 2022-11-14 17:41:54 -0500, Andrew Dunstan wrote:\n> Here's a couple of things I've noticed.\n> \n> \n> andrew@ub22:HEAD $ inst.meson/bin/pg_config --libdir --ldflags\n> /home/andrew/pgl/pg_head/root/HEAD/inst.meson/lib/x86_64-linux-gnu\n> -fuse-ld=lld -DCOPY_PARSE_PLAN_TREES -DRAW_EXPRESSION_COVERAGE_TEST\n> -DWRITE_READ_PARSE_PLAN_TREES\n> \n> \n> Are we really intending to add a new subdirectory to the default layout?\n> Why is that x84_64-linux-gnu there?\n\nIt's the platform default on, at least, debian derived distros - that's how\nyou can install 32bit/64bit libraries and libraries with different ABIs\n(e.g. 
linking against glibc vs linking with musl) in parallel.\n\nWe could override meson inferring that from the system if we want to, but it\ndoesn't seem like a good idea?\n\n\n> Also, why have the CPPFLAGS made their way into the LDFLAGS? That seems\n> wrong.\n\nBecause these days meson treats CPPFLAGS as part of CFLAGS as it apparently\nrepeatedly confused build system writers and users when e.g. header-presence\nchecks would only use CPPFLAGS. Some compiler options aren't entirely clearly\ndelineated, consider e.g. -isystem (influencing warning behaviour as well as\npreprocessor paths). Not sure if that's the best choice, but it's imo\ndefensible.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 14 Nov 2022 15:24:07 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: meson oddities" }, { "msg_contents": "Hi,\n\nOn 2022-11-15 08:22:59 +0900, Michael Paquier wrote:\n> I pass down some custom CFLAGS to the meson command as well, and these find\n> their way to LDFLAGS on top of CFLAGS for the user-defined entries. I would\n> not have expected that, either.\n\nWe effectively do that with autoconf as well, except that we don't mention\nthat in pg_config --ldflags. 
Our linking rules include CFLAGS, see e.g.:\n\n%: %.o\n\t$(CC) $(CFLAGS) $^ $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)\n\npostgres: $(OBJS)\n\t$(CC) $(CFLAGS) $(call expand_subsys,$^) $(LDFLAGS) $(LDFLAGS_EX) $(export_dynamic) $(LIBS) -o $@\n\nifdef PROGRAM\n$(PROGRAM): $(OBJS)\n\t$(CC) $(CFLAGS) $(OBJS) $(PG_LIBS_INTERNAL) $(LDFLAGS) $(LDFLAGS_EX) $(PG_LIBS) $(LIBS) -o $@$(X)\nendif\n\n# Rule for building a shared library from a single .o file\n%.so: %.o\n\t$(CC) $(CFLAGS) $< $(LDFLAGS) $(LDFLAGS_SL) -shared -o $@\n\n\nShould we try that fact in pg_configin the meson build as well?\n\n\nMeson automatically includes compiler flags during linking because a)\napparently many dependencies (.pc files etc) specify linker flags in CFLAGS b)\nat least some kinds of LTO requires compiler flags being present during\n\"linking\".\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 14 Nov 2022 15:48:12 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: meson oddities" }, { "msg_contents": "\nOn 2022-11-14 Mo 18:24, Andres Freund wrote:\n> Hi,\n>\n> On 2022-11-14 17:41:54 -0500, Andrew Dunstan wrote:\n>> Here's a couple of things I've noticed.\n>>\n>>\n>> andrew@ub22:HEAD $ inst.meson/bin/pg_config --libdir --ldflags\n>> /home/andrew/pgl/pg_head/root/HEAD/inst.meson/lib/x86_64-linux-gnu\n>> -fuse-ld=lld -DCOPY_PARSE_PLAN_TREES -DRAW_EXPRESSION_COVERAGE_TEST\n>> -DWRITE_READ_PARSE_PLAN_TREES\n>>\n>>\n>> Are we really intending to add a new subdirectory to the default layout?\n>> Why is that x84_64-linux-gnu there?\n> It's the platform default on, at least, debian derived distros - that's how\n> you can install 32bit/64bit libraries and libraries with different ABIs\n> (e.g. linking against glibc vs linking with musl) in parallel.\n>\n> We could override meson inferring that from the system if we want to, but it\n> doesn't seem like a good idea?\n>\n\nThat's a decision that packagers make. e.g. 
on my Ubuntu system\nconfigure has been run with:\n\n--libdir=${prefix}/lib/x86_64-linux-gnu\n\n\nIncidentally, Redhat flavored systems don't use this layout. they have\n/lib and /lib64, so it's far from universal.\n\n\nBut ISTM we shouldn't be presuming what packagers will do, and that\nthere is some virtue in having a default layout under ${prefix} that is\nconsistent across platforms, as is now the case with autoconf/configure.\n\n\n>> Also, why have the CPPFLAGS made their way into the LDFLAGS? That seems\n>> wrong.\n> Because these days meson treats CPPFLAGS as part of CFLAGS as it apparently\n> repeatedly confused build system writers and users when e.g. header-presence\n> checks would only use CPPFLAGS. Some compiler options aren't entirely clearly\n> delineated, consider e.g. -isystem (influencing warning behaviour as well as\n> preprocessor paths). Not sure if that's the best choice, but it's imo\n> defensible.\n>\n\nYes, I get that there is confusion around CPPFLAGS. One of my otherwise\nextremely knowledgeable colleagues told me a year or two back that he\nhad thought the CPP in CPPFLAGS referred to C++ rather that C\npreprocessor. And the authors of meson seem to have labored under a\nsimilar misapprehension, so they use 'cpp' instead of 'cxx' like just\nabout everyone else.\n\nBut it's less clear to me that a bunch of defines belong in LDFLAGS.\nShouldn't that be only things that ld itself will recognize?\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 15 Nov 2022 08:04:29 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: meson oddities" }, { "msg_contents": "On 15.11.22 00:48, Andres Freund wrote:\n> We effectively do that with autoconf as well, except that we don't mention\n> that in pg_config --ldflags. 
Our linking rules include CFLAGS, see e.g.:\n> \n> %: %.o\n> \t$(CC) $(CFLAGS) $^ $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)\n> \n> postgres: $(OBJS)\n> \t$(CC) $(CFLAGS) $(call expand_subsys,$^) $(LDFLAGS) $(LDFLAGS_EX) $(export_dynamic) $(LIBS) -o $@\n> \n> ifdef PROGRAM\n> $(PROGRAM): $(OBJS)\n> \t$(CC) $(CFLAGS) $(OBJS) $(PG_LIBS_INTERNAL) $(LDFLAGS) $(LDFLAGS_EX) $(PG_LIBS) $(LIBS) -o $@$(X)\n> endif\n> \n> # Rule for building a shared library from a single .o file\n> %.so: %.o\n> \t$(CC) $(CFLAGS) $< $(LDFLAGS) $(LDFLAGS_SL) -shared -o $@\n> \n> \n> Should we try that fact in pg_configin the meson build as well?\n\nIt's up to the consumer of pg_config to apply CFLAGS and LDFLAGS as they \nneed. But pg_config and pkg-config etc. should report them separately.\n\n\n\n", "msg_date": "Tue, 15 Nov 2022 17:07:05 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: meson oddities" }, { "msg_contents": "Hi,\n\nOn 2022-11-15 08:04:29 -0500, Andrew Dunstan wrote:\n> On 2022-11-14 Mo 18:24, Andres Freund wrote:\n> > Hi,\n> >\n> > On 2022-11-14 17:41:54 -0500, Andrew Dunstan wrote:\n> >> Here's a couple of things I've noticed.\n> >>\n> >>\n> >> andrew@ub22:HEAD $ inst.meson/bin/pg_config --libdir --ldflags\n> >> /home/andrew/pgl/pg_head/root/HEAD/inst.meson/lib/x86_64-linux-gnu\n> >> -fuse-ld=lld -DCOPY_PARSE_PLAN_TREES -DRAW_EXPRESSION_COVERAGE_TEST\n> >> -DWRITE_READ_PARSE_PLAN_TREES\n> >>\n> >>\n> >> Are we really intending to add a new subdirectory to the default layout?\n> >> Why is that x84_64-linux-gnu there?\n> > It's the platform default on, at least, debian derived distros - that's how\n> > you can install 32bit/64bit libraries and libraries with different ABIs\n> > (e.g. 
linking against glibc vs linking with musl) in parallel.\n> >\n> > We could override meson inferring that from the system if we want to, but it\n> > doesn't seem like a good idea?\n> >\n> \n> That's a decision that packagers make. e.g. on my Ubuntu system\n> configure has been run with:\n> \n> --libdir=${prefix}/lib/x86_64-linux-gnu\n\nSure - but that doesn't mean that it's a good idea to break the distribution's\nlayout when you install from source.\n\n\n> Incidentally, Redhat flavored systems don't use this layout. they have\n> /lib and /lib64, so it's far from universal.\n\nMeson infers that and uses lib64 as the default libdir.\n\n\n> But ISTM we shouldn't be presuming what packagers will do, and that\n> there is some virtue in having a default layout under ${prefix} that is\n> consistent across platforms, as is now the case with autoconf/configure.\n\nI don't think it's a virtue to break the layout of the platform by\ne.g. installing 64bit libs into the directory containing 32bit libs.\n\n\n> And the authors of meson seem to have labored under a similar\n> misapprehension, so they use 'cpp' instead of 'cxx' like just about everyone\n> else.\n\nYea, not a fan of that either. I don't think it was a misapprehension, but a\ndecision I disagree with...\n\n\n> But it's less clear to me that a bunch of defines belong in LDFLAGS.\n> Shouldn't that be only things that ld itself will recognize?\n\nI don't think there's a clear cut line what is for ld and what\nisn't. Including stuff that influences both preprocessor and\nlinker. -ffreestanding will e.g. 
change preprocessor, compiler (I think), and\nlinker behaviour.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 15 Nov 2022 11:04:25 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: meson oddities" }, { "msg_contents": "\nOn 2022-11-15 Tu 14:04, Andres Freund wrote:\n>> But ISTM we shouldn't be presuming what packagers will do, and that\n>> there is some virtue in having a default layout under ${prefix} that is\n>> consistent across platforms, as is now the case with autoconf/configure.\n> I don't think it's a virtue to break the layout of the platform by\n> e.g. installing 64bit libs into the directory containing 32bit libs.\n\n\nYou might end up surprising people who have installed from source for\nyears and will have the layout suddenly changed, especially on RedHat\nflavored systems.\n\nI can work around it in the buildfarm, which does make some assumptions\nabout the layout (e.g. in the cross version pg_upgrade stuff), by\nexplicitly using --libdir.\n\n\n>> But it's less clear to me that a bunch of defines belong in LDFLAGS.\n>> Shouldn't that be only things that ld itself will recognize?\n> I don't think there's a clear cut line what is for ld and what\n> isn't. Including stuff that influences both preprocessor and\n> linker. -ffreestanding will e.g. change preprocessor, compiler (I think), and\n> linker behaviour.\n>\n\nWell it sure looks odd.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 15 Nov 2022 15:47:39 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: meson oddities" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2022-11-15 Tu 14:04, Andres Freund wrote:\n>> I don't think it's a virtue to break the layout of the platform by\n>> e.g. 
installing 64bit libs into the directory containing 32bit libs.\n\n> You might end up surprising people who have installed from source for\n> years and will have the layout suddenly changed, especially on RedHat\n> flavored systems.\n\nYeah, I'm not too pleased with this idea either. The people who want\nto install according to some platform-specific plan have already figured\nout how to do that. People who are accustomed to the way PG has done\nit in the past are not likely to think this is an improvement.\t \n\nAlso, unless you intend to drop the special cases involving whether\nthe install path string contains \"postgres\" or \"pgsql\", it's already\nnot platform-standard.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 15 Nov 2022 16:08:35 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: meson oddities" }, { "msg_contents": "Hi,\n\nOn 2022-11-15 16:08:35 -0500, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n> > On 2022-11-15 Tu 14:04, Andres Freund wrote:\n> >> I don't think it's a virtue to break the layout of the platform by\n> >> e.g. installing 64bit libs into the directory containing 32bit libs.\n>\n> > You might end up surprising people who have installed from source for\n> > years and will have the layout suddenly changed, especially on RedHat\n> > flavored systems.\n\nJust to make sure that's clear: meson defaults to lib/ or lib64/ (depending on\nbitness obviously) on RedHat systems, not lib/i386-linux-gnu/ or\nlib/x86_64-linux-gnu.\n\n\n> Yeah, I'm not too pleased with this idea either. The people who want\n> to install according to some platform-specific plan have already figured\n> out how to do that. 
People who are accustomed to the way PG has done\n> it in the past are not likely to think this is an improvement.\n\nI think that's a good argument to not change the default for configure, but\nimo not a good argument for forcing 'lib' rather than the appropriate platform\ndefault in the meson build, given that that already requires changing existing\nrecipes.\n\nSmall note: I didn't intentionally make that change during the meson porting\nwork, it's just meson's default.\n\nI can live with forcing lib/, but I don't think it's the better solution long\nterm. And this seems like the best point for switching we're going to get.\n\n\nWe'd just have to add 'libdir=lib' to the default_options array in the\ntoplevel meson.build.\n\n\n> Also, unless you intend to drop the special cases involving whether\n> the install path string contains \"postgres\" or \"pgsql\", it's already\n> not platform-standard.\n\nFor me that's the best argument for forcing 'lib'. Still not quite enough to\nswing me around, because it's imo a pretty reasonable thing to want to install\na 32bit and 64bit libpq, and I don't think we should make that harder.\n\nSomewhat relatedly, I wonder if we should have a better way to enable/disable\nthe 'pgsql' path logic. It's pretty annoying that prefix basically doesn't\nwork if it doesn't contain 'pgsql' or 'postgres'.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 15 Nov 2022 15:40:42 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: meson oddities" }, { "msg_contents": "On 16.11.22 00:40, Andres Freund wrote:\n> Somewhat relatedly, I wonder if we should have a better way to enable/disable\n> the 'pgsql' path logic. 
It's pretty annoying that prefix basically doesn't\n> work if it doesn't contain 'pgsql' or 'postgres'.\n\nCould you explain this in more detail?\n\n\n\n", "msg_date": "Wed, 16 Nov 2022 10:53:59 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: meson oddities" }, { "msg_contents": "Hi,\n\nOn 2022-11-16 10:53:59 +0100, Peter Eisentraut wrote:\n> On 16.11.22 00:40, Andres Freund wrote:\n> > Somewhat relatedly, I wonder if we should have a better way to enable/disable\n> > the 'pgsql' path logic. It's pretty annoying that prefix basically doesn't\n> > work if it doesn't contain 'pgsql' or 'postgres'.\n> \n> Could you explain this in more detail?\n\nIf I just want to install postgres into a prefix without 'postgresql' added in\na bunch of directories, e.g. because I already have pg-$version to be in the\nprefix, there's really no good way to do so - you can't even specify\n--sysconfdir or such, because we just override that path.\n\nAnd because many of our binaries are major version specific you pretty much\nneed to include the major version in the prefix, making the 'postgresql' we\nadd redundant.\n\nI think the easiest way today is to use a temporary prefix and then just\nrename the installation path. But that obviously doesn't deal well with\nrpaths, at least as long as we don't use relative rpaths.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 16 Nov 2022 08:40:10 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: meson oddities" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-11-16 10:53:59 +0100, Peter Eisentraut wrote:\n>> Could you explain this in more detail?\n\n> If I just want to install postgres into a prefix without 'postgresql' added in\n> a bunch of directories, e.g. 
because I already have pg-$version to be in the\n> prefix, there's really no good way to do so - you can't even specify\n> --sysconfdir or such, because we just override that path.\n\nAt least for the libraries, the point of the 'postgresql' subdir IMO\nis to keep backend-loadable extensions separate from random libraries.\nIt's not great that we may fail to do that depending on what the\ninitial part of the library path is.\n\nI could get behind allowing the user to specify that path explicitly\nand then not modifying it; but when we're left to our own devices\nI think we should preserve that separation.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 16 Nov 2022 11:54:10 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: meson oddities" }, { "msg_contents": "Hi,\n\nOn 2022-11-16 11:54:10 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-11-16 10:53:59 +0100, Peter Eisentraut wrote:\n> >> Could you explain this in more detail?\n> \n> > If I just want to install postgres into a prefix without 'postgresql' added in\n> > a bunch of directories, e.g. because I already have pg-$version to be in the\n> > prefix, there's really no good way to do so - you can't even specify\n> > --sysconfdir or such, because we just override that path.\n> \n> At least for the libraries, the point of the 'postgresql' subdir IMO\n> is to keep backend-loadable extensions separate from random libraries.\n> It's not great that we may fail to do that depending on what the\n> initial part of the library path is.\n\nAgreed, extensions really should never be in a path searched by the dynamic\nlinker, even if the prefix contains 'postgres'.\n\nTo me that's a separate thing from adding postgresql to datadir, sysconfdir,\nincludedir, docdir... 
On a green field I'd say the 'extension library'\ndirectory should just always be extensions/ or such.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 16 Nov 2022 09:07:47 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: meson oddities" }, { "msg_contents": "On 16.11.22 18:07, Andres Freund wrote:\n>>> If I just want to install postgres into a prefix without 'postgresql' added in\n>>> a bunch of directories, e.g. because I already have pg-$version to be in the\n>>> prefix, there's really no good way to do so - you can't even specify\n>>> --sysconfdir or such, because we just override that path.\n>>\n>> At least for the libraries, the point of the 'postgresql' subdir IMO\n>> is to keep backend-loadable extensions separate from random libraries.\n>> It's not great that we may fail to do that depending on what the\n>> initial part of the library path is.\n> \n> Agreed, extensions really should never be in a path searched by the dynamic\n> linker, even if the prefix contains 'postgres'.\n> \n> To me that's a separate thing from adding postgresql to datadir, sysconfdir,\n> includedir, docdir... 
On a green field I'd say the 'extension library'\n> directory should just always be extensions/ or such.\n\nI think we should get the two build systems to produce the same \ninstallation layout when given equivalent options.\n\nUnless someone comes up with a proposal to address the above broader \nissues, also taking into account current packaging practices etc., then \nI think we should do a short-term solution to either port the \nsubdir-appending to the meson scripts or remove it from the makefiles \n(or maybe a bit of both).\n\n\n\n", "msg_date": "Wed, 4 Jan 2023 12:35:35 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: meson oddities" }, { "msg_contents": "Hi,\n\nOn 2023-01-04 12:35:35 +0100, Peter Eisentraut wrote:\n> On 16.11.22 18:07, Andres Freund wrote:\n> > > > If I just want to install postgres into a prefix without 'postgresql' added in\n> > > > a bunch of directories, e.g. because I already have pg-$version to be in the\n> > > > prefix, there's really no good way to do so - you can't even specify\n> > > > --sysconfdir or such, because we just override that path.\n> > >\n> > > At least for the libraries, the point of the 'postgresql' subdir IMO\n> > > is to keep backend-loadable extensions separate from random libraries.\n> > > It's not great that we may fail to do that depending on what the\n> > > initial part of the library path is.\n> >\n> > Agreed, extensions really should never be in a path searched by the dynamic\n> > linker, even if the prefix contains 'postgres'.\n> >\n> > To me that's a separate thing from adding postgresql to datadir, sysconfdir,\n> > includedir, docdir... On a green field I'd say the 'extension library'\n> > directory should just always be extensions/ or such.\n>\n> I think we should get the two build systems to produce the same installation\n> layout when given equivalent options.\n\nI'm not convinced that that's the right thing to do. 
Distributions have\nhelper infrastructure for buildsystems - why should we make it harder for them\nby deviating further from the buildsystem defaults?\n\nI have yet to hear an argument why installing libraries below\n/usr/[local]/lib/{x86_64,i386,...}-linux-{gnu,musl,...}/ is the wrong thing to\ndo on Debian based systems (or similar, choosing lib64 over lib on RH based\nsystems). But at the same time I haven't heard of an argument why we should\nbreak existing scripts building with autoconf for this. To me a different\nbuildsystem is a convenient point to adapt to build path from the last decade.\n\n\n> Unless someone comes up with a proposal to address the above broader issues,\n> also taking into account current packaging practices etc., then I think we\n> should do a short-term solution to either port the subdir-appending to the\n> meson scripts or remove it from the makefiles (or maybe a bit of both).\n\nJust to be clear, with 'subdir-appending' you mean libdir defaulting to\n'lib/x86_64-linux-gnu' (or similar)? Or do you mean adding 'postgresql' into\nvarious dirs when the path doesn't already contain postgres?\n\nI did try to mirror the 'postgresql' adding bit in the meson build.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 4 Jan 2023 11:35:02 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: meson oddities" }, { "msg_contents": "On Wed, Jan 4, 2023 at 2:35 PM Andres Freund <andres@anarazel.de> wrote:\n> > I think we should get the two build systems to produce the same installation\n> > layout when given equivalent options.\n>\n> I'm not convinced that that's the right thing to do. Distributions have\n> helper infrastructure for buildsystems - why should we make it harder for them\n> by deviating further from the buildsystem defaults?\n\nIf we don't do as Peter suggests, then any difference between the\nresults of one build system and the other could either be a bug or an\nintentional deviation. 
There will be no easy way to know which it is.\nAnd if or when people switch build systems, stuff will be randomly\ndifferent, and they won't understand why.\n\nI hear your point too. It's unpleasant for you to spend a lot of\neffort overriding meson's behavior if the result is arguably worse\nthan the default, and it has the effect of carrying forward in\nperpetuity hacks that may not have been a good idea in the first\nplace, or may not be a good idea any more. Those seem like valid\nconcerns, too.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 4 Jan 2023 16:06:15 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: meson oddities" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> If we don't do as Peter suggests, then any difference between the\n> results of one build system and the other could either be a bug or an\n> intentional deviation. There will be no easy way to know which it is.\n> And if or when people switch build systems, stuff will be randomly\n> different, and they won't understand why.\n\n> I hear your point too. It's unpleasant for you to spend a lot of\n> effort overriding meson's behavior if the result is arguably worse\n> than the default, and it has the effect of carrying forward in\n> perpetuity hacks that may not have been a good idea in the first\n> place, or may not be a good idea any more. Those seem like valid\n> concerns, too.\n\nYeah. I think the way forward probably needs to be to decide that\nwe are (or are not) going to make changes to the installation tree\nlayout, and then make both build systems conform to that. I don't\nreally buy the argument that it's okay to let them install different\nlayouts. 
I *am* prepared to listen to arguments that \"this is dumb\nand we shouldn't do it anymore\".\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 04 Jan 2023 16:18:38 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: meson oddities" }, { "msg_contents": "Hi,\n\nOn 2023-01-04 16:18:38 -0500, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > If we don't do as Peter suggests, then any difference between the\n> > results of one build system and the other could either be a bug or an\n> > intentional deviation. There will be no easy way to know which it is.\n> > And if or when people switch build systems, stuff will be randomly\n> > different, and they won't understand why.\n\nGiven the difference is \"localized\", I think calling this out in the docs\nwould contain confusion.\n\n\n> > I hear your point too. It's unpleasant for you to spend a lot of\n> > effort overriding meson's behavior if the result is arguably worse\n> > than the default, and it has the effect of carrying forward in\n> > perpetuity hacks that may not have been a good idea in the first\n> > place, or may not be a good idea any more. Those seem like valid\n> > concerns, too.\n\nThis specific instance luckily is trivial to change from code POV.\n\n\n> Yeah. I think the way forward probably needs to be to decide that\n> we are (or are not) going to make changes to the installation tree\n> layout, and then make both build systems conform to that. I don't\n> really buy the argument that it's okay to let them install different\n> layouts. I *am* prepared to listen to arguments that \"this is dumb\n> and we shouldn't do it anymore\".\n\nWhat exactly shouldn't we do anymore?\n\nI just want to re-iterate that, in my understanding, what we're talking about\nhere is just whether libdir defaults to just \"lib\" or whether it adapts to the\nplatform default (so we end up with libdir as 'lib64' or\n'lib/x86_64-linux-gnu' etc). 
And *not* whether we should continue to force\n\"postgresql\" into the paths.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 4 Jan 2023 13:29:10 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: meson oddities" }, { "msg_contents": "On 04.01.23 20:35, Andres Freund wrote:\n>> Unless someone comes up with a proposal to address the above broader issues,\n>> also taking into account current packaging practices etc., then I think we\n>> should do a short-term solution to either port the subdir-appending to the\n>> meson scripts or remove it from the makefiles (or maybe a bit of both).\n> Just to be clear, with 'subdir-appending' you mean libdir defaulting to\n> 'lib/x86_64-linux-gnu' (or similar)? Or do you mean adding 'postgresql' into\n> various dirs when the path doesn't already contain postgres?\n> \n> I did try to mirror the 'postgresql' adding bit in the meson build.\n\nI meant the latter, which I see is already in there, but it doesn't \nactually fully work. It only looks at the subdirectory (like \"lib\"), \nnot the whole path (like \"/usr/local/pgsql/lib\"). With the attached \npatch I have it working and I get the same installation layout from both \nbuild systems.", "msg_date": "Wed, 4 Jan 2023 23:17:30 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: meson oddities" }, { "msg_contents": "Hi,\n\nOn 2023-01-04 23:17:30 +0100, Peter Eisentraut wrote:\n> I meant the latter, which I see is already in there, but it doesn't actually\n> fully work. It only looks at the subdirectory (like \"lib\"), not the whole\n> path (like \"/usr/local/pgsql/lib\"). With the attached patch I have it\n> working and I get the same installation layout from both build systems.\n\nOh, oops. I tested this at some point, but I guess I over-simplified it at\nsome point.\n\nThen I have zero objections to this. 
One question below though.\n\n\n\n> dir_data = get_option('datadir')\n> -if not (dir_data.contains('pgsql') or dir_data.contains('postgres'))\n> +if not ((dir_prefix/dir_data).contains('pgsql') or (dir_prefix/dir_data).contains('postgres'))\n> dir_data = dir_data / pkg\n> endif\n\nHm. Perhaps we should just test once whether prefix contains pgsql/postgres,\nand then just otherwise leave the test as is? There afaict can't be a\ndir_prefix/dir_* that matches postgres/pgsql that won't also match either of\nthe components.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 4 Jan 2023 14:53:54 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: meson oddities" }, { "msg_contents": "On 04.01.23 23:53, Andres Freund wrote:\n>> dir_data = get_option('datadir')\n>> -if not (dir_data.contains('pgsql') or dir_data.contains('postgres'))\n>> +if not ((dir_prefix/dir_data).contains('pgsql') or (dir_prefix/dir_data).contains('postgres'))\n>> dir_data = dir_data / pkg\n>> endif\n> Hm. Perhaps we should just test once whether prefix contains pgsql/postgres,\n> and then just otherwise leave the test as is? There afaict can't be a\n> dir_prefix/dir_* that matches postgres/pgsql that won't also match either of\n> the components.\n\nYou mean something like\n\n dir_prefix_contains_pg =\n (dir_prefix.contains('pgsql') or dir_prefix.contains('postgres'))\n\nand\n\n if not (dir_prefix_contains_pg or\n (dir_data.contains('pgsql') or dir_data.contains('postgres'))\n\nSeems more complicated to me.\n\nI think there is also an adjacent issue: The subdir options may be \nabsolute or relative. 
So if you specify --prefix=/usr/local and \n--sysconfdir=/etc/postgresql, then\n\n config_paths_data.set_quoted('SYSCONFDIR', dir_prefix / dir_sysconf)\n\nwould produce something like /usr/local/etc/postgresql.\n\nI think maybe we should make all the dir_* variables absolute right at \nthe beginning, like\n\n dir_lib = get_option('libdir')\n if not fs.is_absolute(dir_lib)\n dir_lib = dir_prefix / dir_lib\n endif\n\nAnd then the appending stuff could be done after that, keeping the \ncurrent code.\n\n\n\n", "msg_date": "Wed, 11 Jan 2023 12:05:34 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: meson oddities" }, { "msg_contents": "On 11.01.23 12:05, Peter Eisentraut wrote:\n> I think there is also an adjacent issue:  The subdir options may be \n> absolute or relative.  So if you specify --prefix=/usr/local and \n> --sysconfdir=/etc/postgresql, then\n> \n>     config_paths_data.set_quoted('SYSCONFDIR', dir_prefix / dir_sysconf)\n> \n> would produce something like /usr/local/etc/postgresql.\n> \n> I think maybe we should make all the dir_* variables absolute right at \n> the beginning, like\n> \n>     dir_lib = get_option('libdir')\n>     if not fs.is_absolute(dir_lib)\n>         dir_lib = dir_prefix / dir_lib\n>     endif\n> \n> And then the appending stuff could be done after that, keeping the \n> current code.\n\nHere is a proposed patch. 
This should fix all these issues.", "msg_date": "Thu, 19 Jan 2023 21:37:15 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: meson oddities" }, { "msg_contents": "Hi,\n\nOn 2023-01-19 21:37:15 +0100, Peter Eisentraut wrote:\n> On 11.01.23 12:05, Peter Eisentraut wrote:\n> > I think there is also an adjacent issue:  The subdir options may be\n> > absolute or relative.  So if you specify --prefix=/usr/local and\n> > --sysconfdir=/etc/postgresql, then\n> > \n> >     config_paths_data.set_quoted('SYSCONFDIR', dir_prefix / dir_sysconf)\n> > \n> > would produce something like /usr/local/etc/postgresql.\n\nI don't think it would. The / operator understands absolute paths and doesn't\nadd the \"first component\" if the second component is absolute.\n\n\n> \n> dir_bin = get_option('bindir')\n> +if not fs.is_absolute(dir_bin)\n> + dir_bin = dir_prefix / dir_bin\n> +endif\n\nHm, I'm not sure this works entirely right on windows. A path like /blub isn't\nabsolute on windows, but it's not really relative either. It's a \"drive local\"\npath. I.e. relative to the current drive (c:/), but not the subdirectory\ntherein.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 19 Jan 2023 12:45:49 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: meson oddities" }, { "msg_contents": "On 19.01.23 21:45, Andres Freund wrote:\n> Hi,\n> \n> On 2023-01-19 21:37:15 +0100, Peter Eisentraut wrote:\n> > On 11.01.23 12:05, Peter Eisentraut wrote:\n> > > I think there is also an adjacent issue:  The subdir options may be\n> > > absolute or relative.  So if you specify --prefix=/usr/local and\n> > > --sysconfdir=/etc/postgresql, then\n> > > \n> > >     config_paths_data.set_quoted('SYSCONFDIR', dir_prefix / dir_sysconf)\n> \n> I don't think it would. 
The / operator understands absolute paths and doesn't\n> add the \"first component\" if the second component is absolute.\n\nOh, that is interesting. In that case, this is not the right patch. We \nshould proceed with my previous patch in [0] then.\n\n[0]: \nhttps://www.postgresql.org/message-id/a6a6de12-f705-2b33-2fd9-9743277deb08@enterprisedb.com\n\n\n\n", "msg_date": "Thu, 26 Jan 2023 10:20:58 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: meson oddities" }, { "msg_contents": "On 2023-01-26 10:20:58 +0100, Peter Eisentraut wrote:\n> On 19.01.23 21:45, Andres Freund wrote:\n> > Hi,\n> > \n> > On 2023-01-19 21:37:15 +0100, Peter Eisentraut wrote:\n> > > On 11.01.23 12:05, Peter Eisentraut wrote:\n> > > > I think there is also an adjacent issue:  The subdir options may be\n> > > > absolute or relative.  So if you specify --prefix=/usr/local and\n> > > > --sysconfdir=/etc/postgresql, then\n> > > > \n> > > >     config_paths_data.set_quoted('SYSCONFDIR', dir_prefix / dir_sysconf)\n> > \n> > I don't think it would. The / operator understands absolute paths and doesn't\n> > add the \"first component\" if the second component is absolute.\n> \n> Oh, that is interesting. In that case, this is not the right patch. We\n> should proceed with my previous patch in [0] then.\n\nWFM.\n\nI still think it'd be slightly more legible if we tested the prefix for\npostgres|pgsql once, rather than do the per-variable .contains() checks on the\n\"combined\" path. 
But it's a pretty minor difference, and I'd have no problem\nwith you committing your version.\n\nBasically:\nis_pg_prefix = dir_prefix.contains('pgsql') or dir_prefix.contains('postgres')\n...\nif not (is_pg_prefix or dir_data.contains('pgsql') or dir_data.contains('postgres'))\n\ninstead of \"your\":\n\nif not ((dir_prefix/dir_data).contains('pgsql') or (dir_prefix/dir_data).contains('postgres'))\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 26 Jan 2023 10:05:04 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: meson oddities" }, { "msg_contents": "On 26.01.23 19:05, Andres Freund wrote:\n>> Oh, that is interesting. In that case, this is not the right patch. We\n>> should proceed with my previous patch in [0] then.\n> WFM.\n> \n> I still think it'd be slightly more legible if we tested the prefix for\n> postgres|pgsql once, rather than do the per-variable .contains() checks on the\n> \"combined\" path.\n\nOk, committed with that change.\n\n\n\n", "msg_date": "Fri, 27 Jan 2023 11:58:51 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: meson oddities" } ]
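The suffix-appending rule this thread converged on — append the package name to a directory unless the prefix or the subdirectory already identifies a PostgreSQL-specific location — can be sketched outside the meson DSL. The following is a hypothetical Python rendering for illustration only; the function name and signature are invented, and the real logic lives in the project's meson.build:

```python
# Hypothetical Python sketch of the meson path logic discussed above.
# Invented helper; the real build script is written in the meson DSL.

def add_pkg_suffix(dir_prefix: str, subdir: str, pkg: str = "postgresql") -> str:
    """Append pkg to subdir unless the prefix or the subdir already
    mentions a PostgreSQL-specific path component."""
    is_pg_prefix = "pgsql" in dir_prefix or "postgres" in dir_prefix
    is_pg_subdir = "pgsql" in subdir or "postgres" in subdir
    if is_pg_prefix or is_pg_subdir:
        return subdir
    return subdir + "/" + pkg

# A prefix such as /usr/local/pgsql already marks the whole installation
# as PostgreSQL-specific, so nothing extra is appended:
print(add_pkg_suffix("/usr/local/pgsql", "share"))  # share
# A generic prefix gets the package name appended to the subdir:
print(add_pkg_suffix("/usr/local", "share"))        # share/postgresql
```

As Andres suggests in the thread, computing the prefix check once and reusing that flag for every directory keeps the per-directory conditions short.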
[ { "msg_contents": "Hello\nAre there any plans to incorporate a formal syntax multitable/conditional\ninsert, similar to the syntax below? snowflake does have the same feature\n\nhttps://oracle-base.com/articles/9i/multitable-inserts\n\nToday, I'm resorting to a function that receives the necessary parameters\nfrom the attributes definition/selection area in a sql select query, called\nby each tuple retrieved. A proper syntax should be real cool\n\nThanks!", "msg_date": "Mon, 14 Nov 2022 21:06:09 -0300", "msg_from": "Alexandre hadjinlian guerra <alexhguerra@gmail.com>", "msg_from_op": true, "msg_subject": "Multitable insert syntax support on Postgres?" }, { "msg_contents": "Hi,\n\nOn 2022-11-14 21:06:09 -0300, Alexandre hadjinlian guerra wrote:\n> Hello\n> Are there any plans to incorporate a formal syntax multitable/conditional\n> insert, similar to the syntax below? snowflake does have the same feature\n> \n> https://oracle-base.com/articles/9i/multitable-inserts\n> \n> Today, I'm resorting to a function that receives the necessary parameters\n> from the attributes definition/selection area in a sql select query, called\n> by each tuple retrieved. A proper syntax should be real cool\n\nI only skimmed that link, but afaict most of this can be done today in\npostgres, with a bit different syntax, using CTEs.
Postgres implements this as\nan extension to the standard CTE syntax.\n\nWITH data_src AS (SELECT * FROM source_tbl),\n  insert_a AS (INSERT INTO a SELECT * FROM data_src WHERE d < 5),\n  insert_b AS (INSERT INTO b SELECT * FROM data_src WHERE d >= 5)\nINSERT INTO c SELECT * FROM data_src WHERE d < 5\n\nIt's a bit annoying that the last \"arm\" of the insert looks a bit\ndifferent. OTOH, it's a much more general syntax.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 14 Nov 2022 16:34:10 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Multitable insert syntax support on Postgres?" }, { "msg_contents": "On Mon, Nov 14, 2022 at 7:06 PM Alexandre hadjinlian guerra <\nalexhguerra@gmail.com> wrote:\n\n> Hello\n> Are there any plans to incorporate a formal syntax multitable/conditional\n> insert, similar to the syntax below? snowflake does have the same feature\n>\n> https://oracle-base.com/articles/9i/multitable-inserts\n>\n> Today, I'm resorting to a function that receives the necessary parameters\n> from the attributes definition/selection area in a sql select query, called\n> by each tuple retrieved. A proper syntax should be real cool\n>\n> Thanks!\n>\n\nI'm not aware of any efforts to implement this at this time, mostly because\nI don't think it's supported in the SQL Standard. Being in the standard\nwould change the question from \"why\" to \"why not\".\n\nI've used that feature when I worked with Oracle in a data warehouse\nsituation. I found it most useful when migrating data dumps from mainframes\nwhere the data file contained subrecords and in cases where one field in a\nrow changes the meaning of subsequent fields in the same row.
That may\nsound like a First Normal Form violation, and it is, but such data formats\nare common in the IBM VSAM world, or at least they were in the data dumps\nthat I had to import.", "msg_date": "Mon, 21 Nov 2022 00:02:43 -0500", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Multitable insert syntax support on Postgres?"
}, { "msg_contents": ">\n> WITH data_src AS (SELECT * FROM source_tbl),\n>   insert_a AS (INSERT INTO a SELECT * FROM data_src WHERE d < 5),\n>   insert_b AS (INSERT INTO b SELECT * FROM data_src WHERE d >= 5)\n> INSERT INTO c SELECT * FROM data_src WHERE d < 5\n>\n\nI suppose you could just do a dummy SELECT at the bottom to make it look\nmore symmetrical\n\nWITH data_src AS (SELECT * FROM source_tbl),\n  insert_a AS (INSERT INTO a SELECT * FROM data_src WHERE d < 5),\n  insert_b AS (INSERT INTO b SELECT * FROM data_src WHERE d >= 5),\n  insert_c AS (INSERT INTO c SELECT * FROM data_src WHERE d < 5)\nSELECT true AS inserts_complete;\n\nOr maybe get some diagnostics out of it:\n\nWITH data_src AS (SELECT * FROM source_tbl),\n  insert_a AS (INSERT INTO a SELECT * FROM data_src WHERE d < 5 RETURNING\nNULL),\n  insert_b AS (INSERT INTO b SELECT * FROM data_src WHERE d >= 5 RETURNING\nNULL),\n  insert_c AS (INSERT INTO c SELECT * FROM data_src WHERE d < 5 RETURNING\nNULL)\nSELECT\n  (SELECT COUNT(*) FROM insert_a) AS new_a_rows,\n  (SELECT COUNT(*) FROM insert_b) AS new_b_rows,\n  (SELECT COUNT(*) FROM insert_c) AS new_c_rows;", "msg_date": "Mon, 21 Nov 2022 00:09:08 -0500", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Multitable insert syntax support on Postgres?" } ]
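The data-modifying CTE shown in the messages above routes a single scan of the source table into several target tables. As a self-contained illustration of the same routing effect, the sketch below uses SQLite via Python — SQLite has no data-modifying CTEs, so the per-table INSERT ... SELECT statements run as separate statements in one transaction, which yields the same result for a static source table. The table names and data are invented for the example:

```python
# Illustration of the row-routing effect of the CTE-based multi-table
# insert, emulated in SQLite with separate INSERT ... SELECT statements.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE source_tbl (d INTEGER);
    CREATE TABLE a (d INTEGER);
    CREATE TABLE b (d INTEGER);
    CREATE TABLE c (d INTEGER);
    INSERT INTO source_tbl (d) VALUES (1), (4), (5), (9);
""")

with conn:  # one transaction, so every arm sees the same source rows
    conn.execute("INSERT INTO a SELECT d FROM source_tbl WHERE d < 5")
    conn.execute("INSERT INTO b SELECT d FROM source_tbl WHERE d >= 5")
    conn.execute("INSERT INTO c SELECT d FROM source_tbl WHERE d < 5")

counts = {t: conn.execute(f"SELECT COUNT(*) FROM {t}").fetchone()[0]
          for t in ("a", "b", "c")}
print(counts)  # {'a': 2, 'b': 2, 'c': 2}
```

In PostgreSQL itself the single WITH ... INSERT statement from the thread is preferable, since all arms share one snapshot and one scan of the source.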
[ { "msg_contents": "Hi,\n\nIn the postgres document we notice that the --force-index-cleanup option is available in PostgreSQL server 12 and Later. We have postgres db running on 12.9 but we don’t see this option.\n\nhttps://www.postgresql.org/docs/current/app-vacuumdb.html\n\nIs this option enabled by default? Any pointers here?\n\nAlso we notice that vacuum is happening at regular intervals but the space occupied by indexes is always increasing.\n\nSome outputs below. Auto vacuum is enabled but we notice index size is growing.\n\n$ psql -U postgres -d cgms -c \"SELECT pg_size_pretty(SUM(pg_relation_size(table_schema||'.'||table_name))) as size from information_schema.tables\"\n\nsize\n-------\n25 GB\n(1 row)\n\n$ psql -U postgres -d cgms -c \"SELECT pg_size_pretty(SUM(pg_indexes_size(table_schema||'.'||table_name) + pg_relation_size(table_schema||'.'||table_name))) as size from information_schema.tables\"\n size\n--------\n151 GB\n(1 row)\n\n$ sudo du -hsc /var/lib/pgsql/12/data\n154G /var/lib/pgsql/12/data\n154G total\n\nAppreciate if someone can give some pointers.\n\nRegards,\nKarthik", "msg_date": "Tue, 15 Nov 2022 14:45:37 +0000", "msg_from": "\"Karthik Jagadish (kjagadis)\" <kjagadis@cisco.com>", "msg_from_op": true, "msg_subject": "Vacuumdb --force-index-cleanup option not available in postgres 12.9" }, { "msg_contents": "On Tue, Nov 15, 2022 at 02:45:37PM +0000, Karthik Jagadish (kjagadis) wrote:\n> Hi,\n> \n> In the postgres document we notice that the --force-index-cleanup option is available in PostgreSQL server 12 and Later.
We have postgres db running on 12.9 but we don’t see this option.\n> \n> https://www.postgresql.org/docs/current/app-vacuumdb.html\n\nThose are the docs for the current version (v15).\n\nvacuumdb is a client, which can run against older (or newer) servers.\n\nThe --force-index-cleanup option was added in v14, and can be used\nagainst servers back to v12. But the command-line option to the\nvacuumdb client doesn't exist before v14 (even though the server-side\nsupports it).\n\n> Also we notice that vacuum is happening at regular intervals but the\n> space occupied by indexes is always increasing. \n\nI don't know. But this busy mailing list is for development; these\nquestions would be better addressed to pgsql-user.\n\n-- \nJustin", "msg_date": "Tue, 15 Nov 2022 09:32:32 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Vacuumdb --force-index-cleanup option not available in postgres\n 12.9" }, { "msg_contents": "Hi,\n\nWe notice that vacuum is happening at regular intervals but the space occupied by indexes is always increasing. Any pointers as to why would this happen?\n\nSome outputs below. Auto vacuum is enabled but we notice index size is growing.\n\n$ psql -U postgres -d cgms -c \"SELECT pg_size_pretty(SUM(pg_relation_size(table_schema||'.'||table_name))) as size from information_schema.tables\"\n\nsize\n-------\n25 GB\n(1 row)\n\n$ psql -U postgres -d cgms -c \"SELECT pg_size_pretty(SUM(pg_indexes_size(table_schema||'.'||table_name) + pg_relation_size(table_schema||'.'||table_name))) as size from information_schema.tables\"\n size\n--------\n151 GB\n(1 row)\n\n$ sudo du -hsc /var/lib/pgsql/12/data\n154G /var/lib/pgsql/12/data\n154G total\n\nAppreciate if someone can give some pointers.\n\nRegards,\nKarthik", "msg_date": "Tue, 15 Nov 2022 15:38:47 +0000", "msg_from": "\"Karthik Jagadish (kjagadis)\" <kjagadis@cisco.com>", "msg_from_op": true, "msg_subject": "Index not getting cleaned even though vacuum is running" }, { "msg_contents": "Thanks Justin for prompt response. Could you please provide the full email for pgsql-user? pgsql-user@postgresql.org is not working\n\nFrom: Justin Pryzby <pryzby@telsasoft.com>\nDate: Tuesday, 15 November 2022 at 9:02 PM\nTo: Karthik Jagadish (kjagadis) <kjagadis@cisco.com>\nCc: pgsql-hackers@postgresql.org <pgsql-hackers@postgresql.org>, Prasanna Satyanarayanan (prassaty) <prassaty@cisco.com>, Chandruganth Ayyavoo Selvam (chaayyav) <chaayyav@cisco.com>\nSubject: Re: Vacuumdb --force-index-cleanup option not available in postgres 12.9\nOn Tue, Nov 15, 2022 at 02:45:37PM +0000, Karthik Jagadish (kjagadis) wrote:\n> Hi,\n>\n> In the postgres document we notice that the --force-index-cleanup option is available in PostgreSQL server 12 and Later.
We have postgres db running on 12.9 but we don’t see this option.\n> \n> https://www.postgresql.org/docs/current/app-vacuumdb.html\n\nThose are the docs for the current version (v15).\n\nvacuumdb is a client, which can run against older (or newer) servers.\n\nThe --force-index-cleanup option was added in v14, and can be used\nagainst servers back to v12. But the command-line option to the\nvacuumdb client doesn't exist before v14 (even though the server-side\nsupports it).\n\n> Also we notice that vacuum is happening at regular intervals but the\n> space occupied by indexes is always increasing.\n\nI don't know. But this busy mailing list is for development; these\nquestions would be better addressed to pgsql-user.\n\n-- \nJustin", "msg_date": "Tue, 15 Nov 2022 15:40:58 +0000", "msg_from": "\"Karthik Jagadish (kjagadis)\" <kjagadis@cisco.com>", "msg_from_op": true, "msg_subject": "Re: Vacuumdb --force-index-cleanup option not available in postgres\n 12.9" }, { "msg_contents": "On Tue, Nov 15, 2022 at 03:40:58PM +0000, Karthik Jagadish (kjagadis) wrote:\n> Thanks Justin for prompt response. Could you please provide the full email for pgsql-user? pgsql-user@postgresql.org is not working\n\nOf course, I intended to say pgsql-general@lists.postgresql.org\n\nhttps://www.postgresql.org/list/\n\n\n", "msg_date": "Tue, 15 Nov 2022 09:43:53 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Vacuumdb --force-index-cleanup option not available in postgres\n 12.9" }, { "msg_contents": "Hi\n\n------- Original Message -------\nOn Tuesday, November 15th, 2022 at 12:38, Karthik Jagadish (kjagadis) <kjagadis@cisco.com> wrote:\n\n\n> Hi,\n> \n> We notice that vacuum is happening at regular intervals but the space occupied by indexes is always increasing. Any pointers as to why would this happen?\n> \n> Some outputs below.
Auto vacuum is enabled but we notice index size is growing.\n> \n> $ psql -U postgres -d cgms -c \"SELECT pg_size_pretty(SUM(pg_relation_size(table_schema||'.'||table_name))) as size from information_schema.tables\"\n> \n> size\n> \n> -------\n> \n> 25 GB\n> \n> (1 row)\n> \n> $ psql -U postgres -d cgms -c \"SELECT pg_size_pretty(SUM(pg_indexes_size(table_schema||'.'||table_name) + pg_relation_size(table_schema||'.'||table_name))) as size from information_schema.tables\"\n> \n>   size\n> \n> --------\n> \n> 151 GB\n> \n> (1 row)\n> \n> $ sudo du -hsc /var/lib/pgsql/12/data\n> \n> 154G    /var/lib/pgsql/12/data\n> \n> 154G    total\n> \n> Appreciate if someone can give some pointers.\n> \n> Regards,\n> \n> Karthik\n\nAs far as I know vacuum just mark the space of dead rows available for future\nreuse, so I think it's expected that the size doesn't decrease.\n\n\n\"The standard form of VACUUM removes dead row versions in tables and indexes\nand marks the space available for future reuse. However, it will not return the\nspace to the operating system, except in the special case where one or more\npages at the end of a table become entirely free and an exclusive table lock\ncan be easily obtained. In contrast, VACUUM FULL actively compacts tables by\nwriting a complete new version of the table file with no dead space. This\nminimizes the size of the table, but can take a long time. It also requires\nextra disk space for the new copy of the table, until the operation completes.\"\n\nhttps://www.postgresql.org/docs/current/routine-vacuuming.html\n\n\n\n\n--\nMatheus Alcantara\n\n\n\n\n", "msg_date": "Wed, 16 Nov 2022 22:29:26 +0000", "msg_from": "Matheus Alcantara <mths.dev@pm.me>", "msg_from_op": false, "msg_subject": "Re: Index not getting cleaned even though vacuum is running" } ]
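The reply above boils down to: plain (auto)vacuum only marks dead index entries as reusable and almost never shrinks the index files on disk. As a hedged, illustrative sketch of how one might act on that advice (the index name below is hypothetical, not taken from the reporter's system, and the statements require a live PostgreSQL 12+ server, so they are not runnable standalone):

```sql
-- Find the largest indexes, i.e. the best rebuild candidates:
SELECT indexrelname,
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
ORDER BY pg_relation_size(indexrelid) DESC
LIMIT 10;

-- Rebuild a bloated index without blocking concurrent reads and writes
-- (REINDEX ... CONCURRENTLY is available since PostgreSQL 12):
REINDEX INDEX CONCURRENTLY my_bloated_idx;  -- hypothetical index name
```

Unlike VACUUM FULL, REINDEX ... CONCURRENTLY avoids holding an exclusive lock on the table, at the cost of roughly doubling the rebuild work and needing transient extra disk space.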
[ { "msg_contents": "Most recovery conflicts are generated in REDO routines using a\nstandard approach these days: they all call\nResolveRecoveryConflictWithSnapshot() with a latestRemovedXid argument\ntaken directly from the WAL record. Right now we don't quite present\nthis information in a uniform way, even though REDO routines apply the\ncutoffs in a uniform way.\n\nISTM that there is value consistently using the same symbol names for\nthese cutoffs in every WAL record that has such a cutoff. The REDO\nroutine doesn't care about how the cutoff was generated during\noriginal execution anyway -- it is always the responsibility of code\nthat runs during original execution (details of which will vary by\nrecord type). Consistency makes all of this fairly explicit, and makes\nit easier to use tools like pg_waldump to debug recovery conflicts --\nthe user can grep for the same generic symbol name and see everything.\n\nAttached WIP patch brings heapam's VISIBLE record type and SP-GiST's\nVACUUM_REDIRECT record type in line with this convention. It also\nchanges the symbol name from latestRemovedXid to something more\ngeneral: latestCommittedXid (since many of these WAL records don't\nactually remove anything).\n\nThe patch also documents how these cutoffs are supposed to work at a\nhigh level. We got the details slightly wrong (resulting in false\nconflicts) for several years with FREEZE_PAGE records (see bugfix\ncommit 66fbcb0d2e for details), which seems like a relatively easy\nmistake to make -- so we should try to avoid similar mistakes in the\nfuture.\n\nI'm not necessarily that attached to the name latestCommittedXid. It\nis more accurate, but it's also a little bit too similar to another\ncommon XID symbol name, latestCompletedXid. 
Can anyone suggest an\nalternative?\n\n-- \nPeter Geoghegan", "msg_date": "Tue, 15 Nov 2022 10:24:05 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Standardizing how pg_waldump presents recovery conflict XID cutoffs" }, { "msg_contents": "Hi,\n\nI like the idea of this, but:\n\nOn 2022-11-15 10:24:05 -0800, Peter Geoghegan wrote:\n> I'm not necessarily that attached to the name latestCommittedXid. It\n> is more accurate, but it's also a little bit too similar to another\n> common XID symbol name, latestCompletedXid. Can anyone suggest an\n> alternative?\n\n... I strongly dislike latestCommittedXid. That seems at least as misleading\nas latestRemovedXid and has the danger of confusion with latestCompletedXid\nas you mention.\n\nHow about latestAffectedXid? Based on a quick scroll through the changed\nstructures it seems like it'd be reasonably discriptive for most?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 15 Nov 2022 12:29:37 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Standardizing how pg_waldump presents recovery conflict XID\n cutoffs" }, { "msg_contents": "On Tue, Nov 15, 2022 at 12:29 PM Andres Freund <andres@anarazel.de> wrote:\n> ... I strongly dislike latestCommittedXid. That seems at least as misleading\n> as latestRemovedXid and has the danger of confusion with latestCompletedXid\n> as you mention.\n\n> How about latestAffectedXid?\n\nI get why you don't care for latestCommittedXid, of course, but the\nname does have some advantages. Namely:\n\n1. Most conflicts come from PRUNE records (less often index deletion\nrecords) where the XID is some heap tuple's xmax, a\ncommitted-to-everybody XID on the primary (at the point of the\noriginal execution of the prune). 
It makes sense to emphasize the idea\nthat snapshots running on a replica need to agree that this XID is\ndefinitely committed -- we need to kill any snapshots that don't\ndefinitely agree that this one particular XID is committed by now.\n\n2. It hints at the idea that we don't need to set any XID to do\ncleanup for aborted transactions, per the optimization in\nHeapTupleHeaderAdvanceLatestRemovedXid().\n\nPerhaps something like \"mustBeCommittedCutoff\" would work better? What\ndo you think of that? The emphasis on how things need to work on the\nREDO side seems useful.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 15 Nov 2022 13:54:24 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Standardizing how pg_waldump presents recovery conflict XID\n cutoffs" }, { "msg_contents": "Hi,\n\nOn 2022-11-15 13:54:24 -0800, Peter Geoghegan wrote:\n> On Tue, Nov 15, 2022 at 12:29 PM Andres Freund <andres@anarazel.de> wrote:\n> > ... I strongly dislike latestCommittedXid. That seems at least as misleading\n> > as latestRemovedXid and has the danger of confusion with latestCompletedXid\n> > as you mention.\n> \n> > How about latestAffectedXid?\n> \n> I get why you don't care for latestCommittedXid, of course, but the\n> name does have some advantages. Namely:\n> \n> 1. Most conflicts come from PRUNE records (less often index deletion\n> records) where the XID is some heap tuple's xmax, a\n> committed-to-everybody XID on the primary (at the point of the\n> original execution of the prune). 
It makes sense to emphasize the idea\n> that snapshots running on a replica need to agree that this XID is\n> definitely committed -- we need to kill any snapshots that don't\n> definitely agree that this one particular XID is committed by now.\n\nI don't agree that it makes sense there - to me it sounds like the record is\njust carrying the globally latest committed xid rather than something just\ndescribing the record.\n\nI also just don't think \"agreeing that a particular XID is committed\" is a\ngood description of latestRemovedXID, there's just too many ways that\n\"agreeing xid has committed\" can be understood. To me it's not obvious that\nit's about mvcc snapshots.\n\nIf we want to focus on the mvcc affects we could just go for something like\nsnapshotConflictHorizon or such.\n\n\n\n> Perhaps something like \"mustBeCommittedCutoff\" would work better? What\n> do you think of that? The emphasis on how things need to work on the\n> REDO side seems useful.\n\nI don't think \"committed\" should be part of the name.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 15 Nov 2022 17:29:28 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Standardizing how pg_waldump presents recovery conflict XID\n cutoffs" }, { "msg_contents": "On Tue, Nov 15, 2022 at 5:29 PM Andres Freund <andres@anarazel.de> wrote:\n> If we want to focus on the mvcc affects we could just go for something like\n> snapshotConflictHorizon or such.\n\nOkay, let's go with snapshotConflictHorizon. 
I'll use that name in the\nnext revision, which I should be able to post tomorrow.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 15 Nov 2022 20:48:56 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Standardizing how pg_waldump presents recovery conflict XID\n cutoffs" }, { "msg_contents": "On 2022-11-15 20:48:56 -0800, Peter Geoghegan wrote:\n> On Tue, Nov 15, 2022 at 5:29 PM Andres Freund <andres@anarazel.de> wrote:\n> > If we want to focus on the mvcc affects we could just go for something like\n> > snapshotConflictHorizon or such.\n> \n> Okay, let's go with snapshotConflictHorizon. I'll use that name in the\n> next revision, which I should be able to post tomorrow.\n\nCool!\n\n\n", "msg_date": "Tue, 15 Nov 2022 22:55:48 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Standardizing how pg_waldump presents recovery conflict XID\n cutoffs" }, { "msg_contents": "On Tue, Nov 15, 2022 at 8:48 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Okay, let's go with snapshotConflictHorizon. 
I'll use that name in the\n> next revision, which I should be able to post tomorrow.\n\nAttached is a somewhat cleaned up version that uses that symbol name\nfor everything.\n\n--\nPeter Geoghegan", "msg_date": "Wed, 16 Nov 2022 14:14:30 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Standardizing how pg_waldump presents recovery conflict XID\n cutoffs" }, { "msg_contents": "Hi,\n\nOn 2022-11-16 14:14:30 -0800, Peter Geoghegan wrote:\n> /*\n> - * If 'tuple' contains any visible XID greater than latestRemovedXid,\n> - * ratchet forwards latestRemovedXid to the greatest one found.\n> - * This is used as the basis for generating Hot Standby conflicts, so\n> - * if a tuple was never visible then removing it should not conflict\n> - * with queries.\n> + * Maintain snapshotConflictHorizon for caller by ratcheting forward its value\n> + * using any committed XIDs contained in 'tuple', an obsolescent heap tuple\n> + * that caller is in the process of physically removing via pruning.\n> + * (Also supports generating index deletion snapshotConflictHorizon values.)\n\nThe \"(also...) formulation seems a bit odd. How about \"an obsolescent heap\ntuple that the caller is physically removing, e.g. via HOT pruning or index\ndeletion.\" or such?\n\n\n> + * snapshotConflictHorizon format values are how all hot Standby conflicts are\n> + * generated by REDO routines (at least wherever a granular cutoff is used).\n\nNot quite parsing for me.\n\n> + * Caller must initialize its value to InvalidTransactionId, which is generally\n> + * interpreted as \"definitely no need for a recovery conflict\".\n> + *\n> + * Final value must reflect all heap tuples that caller will physically remove\n> + * via the ongoing pruning operation. 
ResolveRecoveryConflictWithSnapshot() is\n> + * passed the final value (taken from caller's WAL record) by a REDO routine.\n\n> +\t/*\n> +\t * It's quite possible that final snapshotConflictHorizon value will be\n> +\t * invalid in final WAL record, indicating that we definitely don't need to\n> +\t * generate a conflict\n> +\t */\n\n*the final\n\nIsn't this already described in the header?\n\n\n> @@ -3337,12 +3337,17 @@ GetCurrentVirtualXIDs(TransactionId limitXmin, bool excludeXmin0,\n> * GetConflictingVirtualXIDs -- returns an array of currently active VXIDs.\n> *\n> * Usage is limited to conflict resolution during recovery on standby servers.\n> - * limitXmin is supplied as either latestRemovedXid, or InvalidTransactionId\n> - * in cases where we cannot accurately determine a value for latestRemovedXid.\n> + * limitXmin is supplied as either a snapshotConflictHorizon format XID, or as\n> + * InvalidTransactionId in cases where caller cannot accurately determine a\n> + * safe snapshotConflictHorizon value.\n> *\n> * If limitXmin is InvalidTransactionId then we want to kill everybody,\n> * so we're not worried if they have a snapshot or not, nor does it really\n> - * matter what type of lock we hold.\n> + * matter what type of lock we hold. Caller must avoid calling here with\n> + * snapshotConflictHorizon format XIDs that were set to InvalidTransactionId\n\nWhat are \"snapshotConflictHorizon format XIDs\"? I guess you mean format in the\nsense of having the semantics of snapshotConflictHorizon?\n\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 16 Nov 2022 15:27:53 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Standardizing how pg_waldump presents recovery conflict XID\n cutoffs" }, { "msg_contents": "On Wed, Nov 16, 2022 at 3:27 PM Andres Freund <andres@anarazel.de> wrote:\n> The \"(also...) formulation seems a bit odd. 
How about \"an obsolescent heap\n> tuple that the caller is physically removing, e.g. via HOT pruning or index\n> deletion.\" or such?\n\nOkay, WFM.\n\n> > + * snapshotConflictHorizon format values are how all hot Standby conflicts are\n> > + * generated by REDO routines (at least wherever a granular cutoff is used).\n>\n> Not quite parsing for me.\n\nI meant something like: this is a cutoff that works in the same way as\nany other cutoff involved with recovery conflicts, in general, with\nthe exception of those cases that have very coarse grained conflicts,\nsuch as DROP TABLESPACE.\n\nI suppose it would be better to just say the first part. Will fix.\n\n> > + /*\n> > + * It's quite possible that final snapshotConflictHorizon value will be\n> > + * invalid in final WAL record, indicating that we definitely don't need to\n> > + * generate a conflict\n> > + */\n>\n> *the final\n>\n> Isn't this already described in the header?\n\nSort of, but arguably it makes sense to call it out specifically.\nThough on second thought, yeah, lets just get rid of it.\n\n> What are \"snapshotConflictHorizon format XIDs\"? I guess you mean format in the\n> sense of having the semantics of snapshotConflictHorizon?\n\nYes. That is the only possible way that any recovery conflict ever\nworks on the REDO side, with the exception of a few\nnot-very-interesting cases such as DROP TABLESPACE.\n\nGetConflictingVirtualXIDs() assigns a special meaning to\nInvalidTransactionId which is the *opposite* of the special meaning\nthat snapshotConflictHorizon-based values assign to\nInvalidTransactionId. At one point they actually did the same\ndefinition for InvalidTransactionId, but that was changed soon after\nhot standby first went in (when we taught btree delete records to not\nuse ludicrously conservative cutoffs that caused needless conflicts).\n\nAnyway, worth calling this out directly in these comments IMV. 
We're\naddressing two closely related things that assign opposite meanings to\nInvalidTransactionId, which is rather confusing.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 16 Nov 2022 15:37:40 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Standardizing how pg_waldump presents recovery conflict XID\n cutoffs" }, { "msg_contents": "Hi,\n\nOn 2022-11-16 15:37:40 -0800, Peter Geoghegan wrote:\n> On Wed, Nov 16, 2022 at 3:27 PM Andres Freund <andres@anarazel.de> wrote:\n> > What are \"snapshotConflictHorizon format XIDs\"? I guess you mean format in the\n> > sense of having the semantics of snapshotConflictHorizon?\n> \n> Yes. That is the only possible way that any recovery conflict ever\n> works on the REDO side, with the exception of a few\n> not-very-interesting cases such as DROP TABLESPACE.\n> \n> GetConflictingVirtualXIDs() assigns a special meaning to\n> InvalidTransactionId which is the *opposite* of the special meaning\n> that snapshotConflictHorizon-based values assign to\n> InvalidTransactionId. At one point they actually did the same\n> definition for InvalidTransactionId, but that was changed soon after\n> hot standby first went in (when we taught btree delete records to not\n> use ludicrously conservative cutoffs that caused needless conflicts).\n> \n> Anyway, worth calling this out directly in these comments IMV. 
We're\n> addressing two closely related things that assign opposite meanings to\n> InvalidTransactionId, which is rather confusing.\n\nIt makes sense to call this out, but I'd\ns/snapshotConflictHorizon format XIDs/cutoff with snapshotConflictHorizon semantics/\n\nor such?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 16 Nov 2022 16:25:21 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Standardizing how pg_waldump presents recovery conflict XID\n cutoffs" }, { "msg_contents": "On Wed, Nov 16, 2022 at 4:25 PM Andres Freund <andres@anarazel.de> wrote:\n> > Anyway, worth calling this out directly in these comments IMV. We're\n> > addressing two closely related things that assign opposite meanings to\n> > InvalidTransactionId, which is rather confusing.\n>\n> It makes sense to call this out, but I'd\n> s/snapshotConflictHorizon format XIDs/cutoff with snapshotConflictHorizon semantics/\n>\n> or such?\n\nWFM.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 16 Nov 2022 16:34:36 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Standardizing how pg_waldump presents recovery conflict XID\n cutoffs" }, { "msg_contents": "On Wed, Nov 16, 2022 at 4:34 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> WFM.\n\nAttached is v3.\n\nPlan is to commit this later on today, barring objections.\n\nThanks\n-- \nPeter Geoghegan", "msg_date": "Thu, 17 Nov 2022 09:02:48 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Standardizing how pg_waldump presents recovery conflict XID\n cutoffs" }, { "msg_contents": "On Thu, Nov 17, 2022 at 9:02 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> Plan is to commit this later on today, barring objections.\n\nPushed, thanks.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 17 Nov 2022 14:57:12 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Standardizing how pg_waldump 
presents recovery conflict XID\n cutoffs" } ]
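A side benefit discussed in the thread above is debuggability: once every conflict-bearing WAL record prints the same symbol, a single grep over pg_waldump output finds all of them regardless of which resource manager emitted the record. The sketch below fakes a few pg_waldump-style lines (invented for illustration — not real pg_waldump output) so the point can be demonstrated without a cluster:

```shell
# Invented sample lines in the style of pg_waldump output; with the
# standardized name, one grep counts every record carrying a conflict cutoff.
printf '%s\n' \
  'rmgr: Heap2  desc: PRUNE snapshotConflictHorizon: 1234' \
  'rmgr: Btree  desc: DELETE snapshotConflictHorizon: 1301' \
  'rmgr: Heap2  desc: FREEZE_PAGE cutoff xid: 99' \
  | grep -c 'snapshotConflictHorizon'
# prints 2
```

On a real standby one would point pg_waldump at an actual WAL segment under pg_wal and grep its output the same way.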
[ { "msg_contents": "Hi,\nI was looking at the commit:\n\ncommit 2fe3bdbd691a5d11626308e7d660440be6c210c8\nAuthor: Peter Eisentraut <peter@eisentraut.org>\nDate: Tue Nov 15 15:35:37 2022 +0100\n\n Check return value of pclose() correctly\n\nIn src/bin/pg_ctl/pg_ctl.c :\n\n if (fd == NULL || fgets(filename, sizeof(filename), fd) == NULL ||\npclose(fd) != 0)\n\nIf the fgets() call doesn't return NULL, the pclose() would be skipped.\nSince the original pclose() call was removed, wouldn't this lead to fd\nleaking ?\n\nPlease see attached patch for my proposal.\n\nCheers", "msg_date": "Tue, 15 Nov 2022 10:43:51 -0800", "msg_from": "Ted Yu <yuzhihong@gmail.com>", "msg_from_op": true, "msg_subject": "closing file in adjust_data_dir" }, { "msg_contents": "On Tue, Nov 15, 2022 at 10:43 AM Ted Yu <yuzhihong@gmail.com> wrote:\n\n> Hi,\n> I was looking at the commit:\n>\n> commit 2fe3bdbd691a5d11626308e7d660440be6c210c8\n> Author: Peter Eisentraut <peter@eisentraut.org>\n> Date: Tue Nov 15 15:35:37 2022 +0100\n>\n> Check return value of pclose() correctly\n>\n> In src/bin/pg_ctl/pg_ctl.c :\n>\n> if (fd == NULL || fgets(filename, sizeof(filename), fd) == NULL ||\n> pclose(fd) != 0)\n>\n> If the fgets() call doesn't return NULL, the pclose() would be skipped.\n> Since the original pclose() call was removed, wouldn't this lead to fd\n> leaking ?\n>\n> Please see attached patch for my proposal.\n>\n> Cheers\n>\n\nThere was potential leak of fd in patch v1.\n\nPlease take a look at patch v2.\n\nThanks", "msg_date": "Tue, 15 Nov 2022 14:34:45 -0800", "msg_from": "Ted Yu <yuzhihong@gmail.com>", "msg_from_op": true, "msg_subject": "Re: closing file in adjust_data_dir" }, { "msg_contents": "\nOn Wed, 16 Nov 2022 at 02:43, Ted Yu <yuzhihong@gmail.com> wrote:\n> Hi,\n> I was looking at the commit:\n>\n> commit 2fe3bdbd691a5d11626308e7d660440be6c210c8\n> Author: Peter Eisentraut <peter@eisentraut.org>\n> Date: Tue Nov 15 15:35:37 2022 +0100\n>\n> Check return value of pclose() 
correctly\n>\n> In src/bin/pg_ctl/pg_ctl.c :\n>\n> if (fd == NULL || fgets(filename, sizeof(filename), fd) == NULL ||\n> pclose(fd) != 0)\n>\n> If the fgets() call doesn't return NULL, the pclose() would be skipped.\n> Since the original pclose() call was removed, wouldn't this lead to fd\n> leaking ?\n>\n> Please see attached patch for my proposal.\n>\n> Cheers\n\nI think we should check whether fd is NULL or not, otherwise, segmentation\nfault maybe occur.\n\n+\tif (pclose(fd) != 0)\n+\t{\n+\t\twrite_stderr(_(\"%s: could not close the file following command \\\"%s\\\"\\n\"), progname, cmd);\n+\t\texit(1);\n+\t}\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Wed, 16 Nov 2022 10:02:40 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: closing file in adjust_data_dir" }, { "msg_contents": "On Tue, Nov 15, 2022 at 6:02 PM Japin Li <japinli@hotmail.com> wrote:\n\n>\n> On Wed, 16 Nov 2022 at 02:43, Ted Yu <yuzhihong@gmail.com> wrote:\n> > Hi,\n> > I was looking at the commit:\n> >\n> > commit 2fe3bdbd691a5d11626308e7d660440be6c210c8\n> > Author: Peter Eisentraut <peter@eisentraut.org>\n> > Date: Tue Nov 15 15:35:37 2022 +0100\n> >\n> > Check return value of pclose() correctly\n> >\n> > In src/bin/pg_ctl/pg_ctl.c :\n> >\n> > if (fd == NULL || fgets(filename, sizeof(filename), fd) == NULL ||\n> > pclose(fd) != 0)\n> >\n> > If the fgets() call doesn't return NULL, the pclose() would be skipped.\n> > Since the original pclose() call was removed, wouldn't this lead to fd\n> > leaking ?\n> >\n> > Please see attached patch for my proposal.\n> >\n> > Cheers\n>\n> I think we should check whether fd is NULL or not, otherwise, segmentation\n> fault maybe occur.\n>\n> + if (pclose(fd) != 0)\n> + {\n> + write_stderr(_(\"%s: could not close the file following\n> command \\\"%s\\\"\\n\"), progname, cmd);\n> + exit(1);\n> + }\n>\n> Hi,\nThat check is a few line above:\n\n+ if (fd == NULL || 
fgets(filename, sizeof(filename), fd) == NULL)\n        {\n\nCheers", "msg_date": "Tue, 15 Nov 2022 18:06:34 -0800", "msg_from": "Ted Yu <yuzhihong@gmail.com>", "msg_from_op": true, "msg_subject": 
"Re: closing file in adjust_data_dir" }, { "msg_contents": "\nOn Wed, 16 Nov 2022 at 10:06, Ted Yu <yuzhihong@gmail.com> wrote:\n>> Hi,\n> That check is a few line above:\n>\n> + if (fd == NULL || fgets(filename, sizeof(filename), fd) == NULL)\n> {\n>\n> Cheers\n\nThanks for the explanation. Comment on v2 patch.\n\n \tfd = popen(cmd, \"r\");\n-\tif (fd == NULL || fgets(filename, sizeof(filename), fd) == NULL || pclose(fd) != 0)\n+\tif (fd == NULL || fgets(filename, sizeof(filename), fd) == NULL)\n \t{\n+\t\tpclose(fd);\n \t\twrite_stderr(_(\"%s: could not determine the data directory using command \\\"%s\\\"\\n\"), progname, cmd);\n \t\texit(1);\n \t}\n\nHere, segfault maybe occurs if fd is NULL. I think we can remove pclose()\nsafely since the process will exit.\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Wed, 16 Nov 2022 10:35:37 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: closing file in adjust_data_dir" }, { "msg_contents": "On Tue, Nov 15, 2022 at 6:35 PM Japin Li <japinli@hotmail.com> wrote:\n\n>\n> On Wed, 16 Nov 2022 at 10:06, Ted Yu <yuzhihong@gmail.com> wrote:\n> >> Hi,\n> > That check is a few line above:\n> >\n> > + if (fd == NULL || fgets(filename, sizeof(filename), fd) == NULL)\n> > {\n> >\n> > Cheers\n>\n> Thanks for the explanation. Comment on v2 patch.\n>\n> fd = popen(cmd, \"r\");\n> - if (fd == NULL || fgets(filename, sizeof(filename), fd) == NULL ||\n> pclose(fd) != 0)\n> + if (fd == NULL || fgets(filename, sizeof(filename), fd) == NULL)\n> {\n> + pclose(fd);\n> write_stderr(_(\"%s: could not determine the data directory\n> using command \\\"%s\\\"\\n\"), progname, cmd);\n> exit(1);\n> }\n>\n> Here, segfault maybe occurs if fd is NULL. 
I think we can remove pclose()\n> safely since the process will exit.\n>\n> --\n> Regrads,\n> Japin Li.\n> ChengDu WenWu Information Technology Co.,Ltd.\n>\n\nThat means we're going back to v1 of the patch.\n\nCheers\n\nOn Tue, Nov 15, 2022 at 6:35 PM Japin Li <japinli@hotmail.com> wrote:\nOn Wed, 16 Nov 2022 at 10:06, Ted Yu <yuzhihong@gmail.com> wrote:\n>> Hi,\n> That check is a few line above:\n>\n> +       if (fd == NULL || fgets(filename, sizeof(filename), fd) == NULL)\n>         {\n>\n> Cheers\n\nThanks for the explanation.  Comment on v2 patch.\n\n        fd = popen(cmd, \"r\");\n-       if (fd == NULL || fgets(filename, sizeof(filename), fd) == NULL || pclose(fd) != 0)\n+       if (fd == NULL || fgets(filename, sizeof(filename), fd) == NULL)\n        {\n+               pclose(fd);\n                write_stderr(_(\"%s: could not determine the data directory using command \\\"%s\\\"\\n\"), progname, cmd);\n                exit(1);\n        }\n\nHere, segfault maybe occurs if fd is NULL.  I think we can remove pclose()\nsafely since the process will exit.\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.That means we're going back to v1 of the patch.Cheers", "msg_date": "Tue, 15 Nov 2022 18:52:38 -0800", "msg_from": "Ted Yu <yuzhihong@gmail.com>", "msg_from_op": true, "msg_subject": "Re: closing file in adjust_data_dir" }, { "msg_contents": "\nOn Wed, 16 Nov 2022 at 10:52, Ted Yu <yuzhihong@gmail.com> wrote:\n> On Tue, Nov 15, 2022 at 6:35 PM Japin Li <japinli@hotmail.com> wrote:\n>>\n>> fd = popen(cmd, \"r\");\n>> - if (fd == NULL || fgets(filename, sizeof(filename), fd) == NULL ||\n>> pclose(fd) != 0)\n>> + if (fd == NULL || fgets(filename, sizeof(filename), fd) == NULL)\n>> {\n>> + pclose(fd);\n>> write_stderr(_(\"%s: could not determine the data directory\n>> using command \\\"%s\\\"\\n\"), progname, cmd);\n>> exit(1);\n>> }\n>>\n>> Here, segfault maybe occurs if fd is NULL. 
I think we can remove pclose()\n>> safely since the process will exit.\n>>\n>\n> That means we're going back to v1 of the patch.\n>\n\nAfter some rethinking, I find the origin code do not have problems.\n\nIf fd is NULL or fgets() returns NULL, the process exits. Otherwise, we call\npclose() to close fd. The code isn't straightforward, however, it is correct.\n\n\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Wed, 16 Nov 2022 11:11:52 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: closing file in adjust_data_dir" }, { "msg_contents": "On Tue, Nov 15, 2022 at 7:12 PM Japin Li <japinli@hotmail.com> wrote:\n\n>\n> On Wed, 16 Nov 2022 at 10:52, Ted Yu <yuzhihong@gmail.com> wrote:\n> > On Tue, Nov 15, 2022 at 6:35 PM Japin Li <japinli@hotmail.com> wrote:\n> >>\n> >> fd = popen(cmd, \"r\");\n> >> - if (fd == NULL || fgets(filename, sizeof(filename), fd) == NULL\n> ||\n> >> pclose(fd) != 0)\n> >> + if (fd == NULL || fgets(filename, sizeof(filename), fd) == NULL)\n> >> {\n> >> + pclose(fd);\n> >> write_stderr(_(\"%s: could not determine the data\n> directory\n> >> using command \\\"%s\\\"\\n\"), progname, cmd);\n> >> exit(1);\n> >> }\n> >>\n> >> Here, segfault maybe occurs if fd is NULL. I think we can remove\n> pclose()\n> >> safely since the process will exit.\n> >>\n> >\n> > That means we're going back to v1 of the patch.\n> >\n>\n> After some rethinking, I find the origin code do not have problems.\n>\n> If fd is NULL or fgets() returns NULL, the process exits. Otherwise, we\n> call\n> pclose() to close fd. 
The code isn't straightforward, however, it is\n> correct.\n>\n>\n>\n> Please read this sentence from my first post:\n\nIf the fgets() call doesn't return NULL, the pclose() would be skipped.\n\nOn Tue, Nov 15, 2022 at 7:12 PM Japin Li <japinli@hotmail.com> wrote:\nOn Wed, 16 Nov 2022 at 10:52, Ted Yu <yuzhihong@gmail.com> wrote:\n> On Tue, Nov 15, 2022 at 6:35 PM Japin Li <japinli@hotmail.com> wrote:\n>>\n>>         fd = popen(cmd, \"r\");\n>> -       if (fd == NULL || fgets(filename, sizeof(filename), fd) == NULL ||\n>> pclose(fd) != 0)\n>> +       if (fd == NULL || fgets(filename, sizeof(filename), fd) == NULL)\n>>         {\n>> +               pclose(fd);\n>>                 write_stderr(_(\"%s: could not determine the data directory\n>> using command \\\"%s\\\"\\n\"), progname, cmd);\n>>                 exit(1);\n>>         }\n>>\n>> Here, segfault maybe occurs if fd is NULL.  I think we can remove pclose()\n>> safely since the process will exit.\n>>\n>\n> That means we're going back to v1 of the patch.\n>\n\nAfter some rethinking, I find the origin code do not have problems.\n\nIf fd is NULL or fgets() returns NULL, the process exits.  Otherwise, we call\npclose() to close fd.  The code isn't straightforward, however, it is correct.\n\nPlease read this sentence from my first post:If the fgets() call doesn't return NULL, the pclose() would be skipped.", "msg_date": "Tue, 15 Nov 2022 19:15:57 -0800", "msg_from": "Ted Yu <yuzhihong@gmail.com>", "msg_from_op": true, "msg_subject": "Re: closing file in adjust_data_dir" }, { "msg_contents": "\nOn Wed, 16 Nov 2022 at 11:15, Ted Yu <yuzhihong@gmail.com> wrote:\n> On Tue, Nov 15, 2022 at 7:12 PM Japin Li <japinli@hotmail.com> wrote:\n>> After some rethinking, I find the origin code do not have problems.\n>>\n>> If fd is NULL or fgets() returns NULL, the process exits. Otherwise, we\n>> call\n>> pclose() to close fd. 
The code isn't straightforward, however, it is\n>> correct.\n>>\n>>\n>>\n>> Please read this sentence from my first post:\n>\n> If the fgets() call doesn't return NULL, the pclose() would be skipped.\n\nfgets() returns non-NULL, it means the second condition is false, and\nit will check the third condition, which calls pclose(), so it cannot\nbe skipped, right?\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Wed, 16 Nov 2022 11:26:25 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: closing file in adjust_data_dir" }, { "msg_contents": "On Tue, Nov 15, 2022 at 7:26 PM Japin Li <japinli@hotmail.com> wrote:\n\n>\n> On Wed, 16 Nov 2022 at 11:15, Ted Yu <yuzhihong@gmail.com> wrote:\n> > On Tue, Nov 15, 2022 at 7:12 PM Japin Li <japinli@hotmail.com> wrote:\n> >> After some rethinking, I find the origin code do not have problems.\n> >>\n> >> If fd is NULL or fgets() returns NULL, the process exits. Otherwise, we\n> >> call\n> >> pclose() to close fd. The code isn't straightforward, however, it is\n> >> correct.\n>\n> Hi,\nPlease take a look at the following:\n\nhttps://en.cppreference.com/w/c/io/fgets\n\nQuote: If the failure has been caused by some other error, sets the\n*error* indicator\n(see ferror() <https://en.cppreference.com/w/c/io/ferror>) on stream. The\ncontents of the array pointed to by str are indeterminate (it may not even\nbe null-terminated).\n\nI think we shouldn't assume that the fd doesn't need to be closed when NULL\nis returned from fgets().\n\nCheers\n\nOn Tue, Nov 15, 2022 at 7:26 PM Japin Li <japinli@hotmail.com> wrote:\nOn Wed, 16 Nov 2022 at 11:15, Ted Yu <yuzhihong@gmail.com> wrote:\n> On Tue, Nov 15, 2022 at 7:12 PM Japin Li <japinli@hotmail.com> wrote:\n>> After some rethinking, I find the origin code do not have problems.\n>>\n>> If fd is NULL or fgets() returns NULL, the process exits.  Otherwise, we\n>> call\n>> pclose() to close fd.  
The code isn't straightforward, however, it is\n>> correct.Hi,Please take a look at the following:https://en.cppreference.com/w/c/io/fgets Quote: If the failure has been caused by some other error, sets the error indicator (see ferror()) on stream. The contents of the array pointed to by str are indeterminate (it may not even be null-terminated).I think we shouldn't assume that the fd doesn't need to be closed when NULL is returned from fgets().Cheers", "msg_date": "Tue, 15 Nov 2022 19:31:55 -0800", "msg_from": "Ted Yu <yuzhihong@gmail.com>", "msg_from_op": true, "msg_subject": "Re: closing file in adjust_data_dir" }, { "msg_contents": "On 16.11.22 04:31, Ted Yu wrote:\n> On Wed, 16 Nov 2022 at 11:15, Ted Yu <yuzhihong@gmail.com\n> <mailto:yuzhihong@gmail.com>> wrote:\n> > On Tue, Nov 15, 2022 at 7:12 PM Japin Li <japinli@hotmail.com\n> <mailto:japinli@hotmail.com>> wrote:\n> >> After some rethinking, I find the origin code do not have problems.\n> >>\n> >> If fd is NULL or fgets() returns NULL, the process exits. \n> Otherwise, we\n> >> call\n> >> pclose() to close fd.  The code isn't straightforward, however,\n> it is\n> >> correct.\n> \n> Hi,\n> Please take a look at the following:\n> \n> https://en.cppreference.com/w/c/io/fgets \n> <https://en.cppreference.com/w/c/io/fgets>\n> Quote: If the failure has been caused by some other error, sets the \n> /error/ indicator (see ferror() \n> <https://en.cppreference.com/w/c/io/ferror>) on |stream|. 
The contents \n> of the array pointed to by |str| are indeterminate (it may not even be \n> null-terminated).\n\nThat has nothing to do with the return value of fgets().\n\n\n\n", "msg_date": "Wed, 16 Nov 2022 09:28:27 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: closing file in adjust_data_dir" }, { "msg_contents": "On Wed, Nov 16, 2022 at 12:28 AM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> On 16.11.22 04:31, Ted Yu wrote:\n> > On Wed, 16 Nov 2022 at 11:15, Ted Yu <yuzhihong@gmail.com\n> > <mailto:yuzhihong@gmail.com>> wrote:\n> > > On Tue, Nov 15, 2022 at 7:12 PM Japin Li <japinli@hotmail.com\n> > <mailto:japinli@hotmail.com>> wrote:\n> > >> After some rethinking, I find the origin code do not have\n> problems.\n> > >>\n> > >> If fd is NULL or fgets() returns NULL, the process exits.\n> > Otherwise, we\n> > >> call\n> > >> pclose() to close fd. The code isn't straightforward, however,\n> > it is\n> > >> correct.\n> >\n> > Hi,\n> > Please take a look at the following:\n> >\n> > https://en.cppreference.com/w/c/io/fgets\n> > <https://en.cppreference.com/w/c/io/fgets>\n> > Quote: If the failure has been caused by some other error, sets the\n> > /error/ indicator (see ferror()\n> > <https://en.cppreference.com/w/c/io/ferror>) on |stream|. 
The contents\n> > of the array pointed to by |str| are indeterminate (it may not even be\n> > null-terminated).\n>\n> That has nothing to do with the return value of fgets().\n>\n> Hi, Peter:\nHere is how the return value from pclose() is handled in other places:\n\n+ if (pclose_rc != 0)\n+ {\n+ ereport(ERROR,\n\nThe above is very easy to understand.\nWhile the check in `adjust_data_dir` is somewhat harder to comprehend.\n\nI think the formation presented in patch v1 aligns with existing checks of\nthe return value from pclose().\nIt also gives a unique error message in the case that the return value from\npclose() indicates an error.\n\nCheers\n\nOn Wed, Nov 16, 2022 at 12:28 AM Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:On 16.11.22 04:31, Ted Yu wrote:\n>     On Wed, 16 Nov 2022 at 11:15, Ted Yu <yuzhihong@gmail.com\n>     <mailto:yuzhihong@gmail.com>> wrote:\n>      > On Tue, Nov 15, 2022 at 7:12 PM Japin Li <japinli@hotmail.com\n>     <mailto:japinli@hotmail.com>> wrote:\n>      >> After some rethinking, I find the origin code do not have problems.\n>      >>\n>      >> If fd is NULL or fgets() returns NULL, the process exits. \n>     Otherwise, we\n>      >> call\n>      >> pclose() to close fd.  The code isn't straightforward, however,\n>     it is\n>      >> correct.\n> \n> Hi,\n> Please take a look at the following:\n> \n> https://en.cppreference.com/w/c/io/fgets \n> <https://en.cppreference.com/w/c/io/fgets>\n> Quote: If the failure has been caused by some other error, sets the \n> /error/ indicator (see ferror() \n> <https://en.cppreference.com/w/c/io/ferror>) on |stream|. 
The contents \n> of the array pointed to by |str| are indeterminate (it may not even be \n> null-terminated).\n\nThat has nothing to do with the return value of fgets().\nHi, Peter:Here is how the return value from pclose() is handled in other places:+               if (pclose_rc != 0)+               {+                       ereport(ERROR, The above is very easy to understand.While the check in `adjust_data_dir` is somewhat harder to comprehend.I think the formation presented in patch v1 aligns with existing checks of the return value from pclose().It also gives a unique error message in the case that the return value from pclose() indicates an error.Cheers", "msg_date": "Wed, 16 Nov 2022 05:51:48 -0800", "msg_from": "Ted Yu <yuzhihong@gmail.com>", "msg_from_op": true, "msg_subject": "Re: closing file in adjust_data_dir" } ]
[ { "msg_contents": "Hello,\n\n\nI am looking at todo item (#1) /Implement DISTINCT clause in window \naggregates/ and while looking at code, I found distinct tightly coupled \nwith Agg function. Looking at another todo item(#2) /Do we really need \nso much duplicated code between Agg and WindowAgg/?  I was wondering \nwhat is general stance on this? Is #2 per-requisite for #1?\n\nIf that is not case, I assume making distinct a standalone piece should \nbe an option too?\n\nWould be glad if someone should shed light on this. Thanks.\n\n\nRegards,\nAnkit\n", "msg_date": "Wed, 16 Nov 2022 00:17:05 +0530", "msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>", "msg_from_op": true, "msg_subject": "Distinct tightly coupled with Agg" }, { "msg_contents": "\nOn 16/11/22 00:26, Tom Lane wrote:\n> Ankit Kumar Pandey <itsankitkp@gmail.com> writes:\n>> I am looking at todo item (#1) /Implement DISTINCT clause in window\n>> aggregates/ and while looking at code, I found distinct tightly coupled\n>> with Agg function. Looking at another todo item(#2) /Do we really need\n>> so much duplicated code between Agg and WindowAgg/?  I was wondering\n>> what is general stance on this? Is #2 per-requisite for #1?\n> No, I think #2 is just a general statement of annoyance. 
It'd be\n> great if someone finds a way to refactor things to improve that; but\n> seeing that window functions operate in a much different environment\n> than plain aggregates, I'm not holding my breath. It's certainly\n> not a prerequisite for any other work in the area.\n>\n> \t\t\tregards, tom lane\n\nMakes sense, thank you.\n\nRegards,\n\nAnkit\n\n\n\n", "msg_date": "Wed, 16 Nov 2022 00:20:06 +0530", "msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Distinct tightly coupled with Agg" }, { "msg_contents": "Ankit Kumar Pandey <itsankitkp@gmail.com> writes:\n> I am looking at todo item (#1) /Implement DISTINCT clause in window \n> aggregates/ and while looking at code, I found distinct tightly coupled \n> with Agg function. Looking at another todo item(#2) /Do we really need \n> so much duplicated code between Agg and WindowAgg/?  I was wondering \n> what is general stance on this? Is #2 per-requisite for #1?\n\nNo, I think #2 is just a general statement of annoyance. It'd be\ngreat if someone finds a way to refactor things to improve that; but\nseeing that window functions operate in a much different environment\nthan plain aggregates, I'm not holding my breath. It's certainly\nnot a prerequisite for any other work in the area.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 15 Nov 2022 13:56:05 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Distinct tightly coupled with Agg" } ]
[ { "msg_contents": "Hello all,\n\nAs mentioned here [1] it might be interesting to complete the returned\ninformation by version() when compiled with meson by including the\nhost_system.\n\n[1]\nhttps://www.postgresql.org/message-id/20221115195318.5v5ynapmkusgyzks%40awork3.anarazel.de\n\nRegards,\n\nJuan José Santamaría Flecha", "msg_date": "Wed, 16 Nov 2022 00:08:56 +0100", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": true, "msg_subject": "Meson add host_system to PG_VERSION_STR" }, { "msg_contents": "On Wed, Nov 16, 2022 at 12:08:56AM +0100, Juan José Santamaría Flecha wrote:\n> As mentioned here [1] it might be interesting to complete the returned\n> information by version() when compiled with meson by including the\n> host_system.\n\nThe meson build provides extra_version, which would be able to do the\nsame, no? The information would be appended to PG_VERSION_STR through\nPG_VERSION.\n--\nMichael", "msg_date": "Wed, 16 Nov 2022 09:01:04 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Meson add host_system to PG_VERSION_STR" }, { "msg_contents": "On 16.11.22 01:01, Michael Paquier wrote:\n> On Wed, Nov 16, 2022 at 12:08:56AM +0100, Juan José Santamaría Flecha wrote:\n>> As mentioned here [1] it might be interesting to complete the returned\n>> information by version() when compiled with meson by including the\n>> host_system.\n> \n> The meson build provides extra_version, which would be able to do the\n> same, no? 
The information would be appended to PG_VERSION_STR through\n> PG_VERSION.\n\nI think this is meant to achieve parity between the version strings \ngenerated by configure and by meson.\n\nPerhaps some examples before and after on different platforms could be \nshown.\n\n\n\n", "msg_date": "Wed, 16 Nov 2022 10:50:54 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Meson add host_system to PG_VERSION_STR" }, { "msg_contents": "On Wed, Nov 16, 2022 at 10:50 AM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> On 16.11.22 01:01, Michael Paquier wrote:\n> >\n> > The meson build provides extra_version, which would be able to do the\n> > same, no? The information would be appended to PG_VERSION_STR through\n> > PG_VERSION.\n>\n> I think this is meant to achieve parity between the version strings\n> generated by configure and by meson.\n>\n> Perhaps some examples before and after on different platforms could be\n> shown.\n>\n\nYes, that would make clear what the patch is trying to do. For version() we\nget:\n\nConfigure:\n PostgreSQL 16devel on x86_64-pc-linux-gnu, compiled by gcc (Debian\n6.3.0-18+deb9u1) 6.3.0 20170516, 64-bit\n\nMeson:\n PostgreSQL 16devel on x86_64, compiled by gcc-6.3.0\n\nPatched:\n PostgreSQL 16devel on x86_64-linux, compiled by gcc-6.3.0\n\nRegards,\n\nJuan José Santamaría Flecha\n\nOn Wed, Nov 16, 2022 at 10:50 AM Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:On 16.11.22 01:01, Michael Paquier wrote:> \n> The meson build provides extra_version, which would be able to do the\n> same, no?  The information would be appended to PG_VERSION_STR through\n> PG_VERSION.\n\nI think this is meant to achieve parity between the version strings \ngenerated by configure and by meson.\n\nPerhaps some examples before and after on different platforms could be \nshown.Yes, that would make clear what the patch is trying to do. 
For version() we get:Configure: PostgreSQL 16devel on x86_64-pc-linux-gnu, compiled by gcc (Debian 6.3.0-18+deb9u1) 6.3.0 20170516, 64-bitMeson: PostgreSQL 16devel on x86_64, compiled by gcc-6.3.0Patched: PostgreSQL 16devel on x86_64-linux, compiled by gcc-6.3.0Regards,Juan José Santamaría Flecha", "msg_date": "Wed, 16 Nov 2022 14:12:05 +0100", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Meson add host_system to PG_VERSION_STR" }, { "msg_contents": "Hi,\n\nOn 2022-11-16 09:01:04 +0900, Michael Paquier wrote:\n> On Wed, Nov 16, 2022 at 12:08:56AM +0100, Juan Jos� Santamar�a Flecha wrote:\n> > As mentioned here [1] it might be interesting to complete the returned\n> > information by version() when compiled with meson by including the\n> > host_system.\n>\n> The meson build provides extra_version, which would be able to do the\n> same, no? The information would be appended to PG_VERSION_STR through\n> PG_VERSION.\n\nI don't really follow: Including the operating system in PG_VERSION_STR,\nas we're doing in autoconf, seems orthogonal to extra_version? Adding linux\ninto extra_version would result in linux showing up in e.g.\nSHOW server_version;\nwhich doesn't seem right.\n\n\nI think there's a further deficiency in the PG_VERSION_STR the meson build\ngenerates - we use the build system's CPU. Autoconf shows $host, not $build.\n\n\nFor comparison, on my machine autoconf shows:\n PostgreSQL 16devel on x86_64-pc-linux-gnu, compiled by gcc-12 (Debian 12.2.0-9) 12.2.0, 64-bit\nwhereas with meson we currently end up with\n PostgreSQL 16devel on x86_64, compiled by gcc-13.0.0\n\nI still don't think it makes sense to try to copy (or invoke)\nconfig.guess. 
Particularly when targetting windows, but even just having to\nkeep updating config.guess in perpituity seems unnecessary.\n\nGiven we're looking at improving this, should we also add 32/64-bit piece?\n\nIf so, we probably should move building PG_VERSION_STR to later so we can use\nSIZEOF_VOID_P - configure.ac does that too.\n\nWith extra_version set to -andres the attached results in:\n\nPostgreSQL 16devel-andres on x86_64-linux, compiled by gcc-13.0.0, 64-bit\n\nGreetings,\n\nAndres Freund", "msg_date": "Wed, 16 Nov 2022 11:02:24 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Meson add host_system to PG_VERSION_STR" }, { "msg_contents": "On Wed, Nov 16, 2022 at 8:02 PM Andres Freund <andres@anarazel.de> wrote:\n\n>\n> Given we're looking at improving this, should we also add 32/64-bit piece?\n>\n> If so, we probably should move building PG_VERSION_STR to later so we can\n> use\n> SIZEOF_VOID_P - configure.ac does that too.\n>\n> With extra_version set to -andres the attached results in:\n>\n> PostgreSQL 16devel-andres on x86_64-linux, compiled by gcc-13.0.0, 64-bit\n>\n\nWFM.\n\nRegards,\n\nJuan José Santamaría Flecha\n\nOn Wed, Nov 16, 2022 at 8:02 PM Andres Freund <andres@anarazel.de> wrote:\nGiven we're looking at improving this, should we also add 32/64-bit piece?\n\nIf so, we probably should move building PG_VERSION_STR to later so we can use\nSIZEOF_VOID_P - configure.ac does that too.\n\nWith extra_version set to -andres the attached results in:\n\nPostgreSQL 16devel-andres on x86_64-linux, compiled by gcc-13.0.0, 64-bitWFM.Regards,Juan José Santamaría Flecha", "msg_date": "Wed, 16 Nov 2022 22:32:47 +0100", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Meson add host_system to PG_VERSION_STR" }, { "msg_contents": "On Wed, Nov 16, 2022 at 02:12:05PM +0100, Juan José Santamaría Flecha wrote:\n> On Wed, Nov 16, 2022 
at 10:50 AM Peter Eisentraut <\n> peter.eisentraut@enterprisedb.com> wrote:\n>> Perhaps some examples before and after on different platforms could be\n>> shown.\n> \n> Yes, that would make clear what the patch is trying to do. For version() we\n> get:\n> \n> Configure:\n> PostgreSQL 16devel on x86_64-pc-linux-gnu, compiled by gcc (Debian\n> 6.3.0-18+deb9u1) 6.3.0 20170516, 64-bit\n> \n> Meson:\n> PostgreSQL 16devel on x86_64, compiled by gcc-6.3.0\n> \n> Patched:\n> PostgreSQL 16devel on x86_64-linux, compiled by gcc-6.3.0\n\nAh, thanks. I was not following this point. Adding the host\ninformation for consistency makes sense, indeed.\n--\nMichael", "msg_date": "Thu, 17 Nov 2022 11:35:52 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Meson add host_system to PG_VERSION_STR" }, { "msg_contents": "On Thu, Nov 17, 2022 at 3:35 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n>\n> Ah, thanks. I was not following this point. Adding the host\n> information for consistency makes sense, indeed.\n>\n\nI've added an entry [1] in the commitfest so we don't miss this subject.\n\n[1] https://commitfest.postgresql.org/41/4057/\n\nRegards,\n\nJuan José Santamaría Flecha\n\nOn Thu, Nov 17, 2022 at 3:35 AM Michael Paquier <michael@paquier.xyz> wrote:\nAh, thanks.  I was not following this point.  
Adding the host\ninformation for consistency makes sense, indeed.I've added an entry [1] in the commitfest so we don't miss this subject.[1] https://commitfest.postgresql.org/41/4057/Regards,Juan José Santamaría Flecha", "msg_date": "Fri, 9 Dec 2022 14:53:10 +0100", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Meson add host_system to PG_VERSION_STR" }, { "msg_contents": "Hi,\n\nOn 2022-12-09 14:53:10 +0100, Juan Jos� Santamar�a Flecha wrote:\n> I've added an entry [1] in the commitfest so we don't miss this subject.\n\nI indeed had forgotten. Pushed now.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 9 Dec 2022 08:58:20 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Meson add host_system to PG_VERSION_STR" } ]
[ { "msg_contents": "Hello,\n\nWe've accidentally found a subtle bug introduced by\n\ncommit 9d9c02ccd1aea8e9131d8f4edb21bf1687e40782\nAuthor: David Rowley\nDate: Fri Apr 8 10:34:36 2022 +1200\n\n Teach planner and executor about monotonic window funcs\n\n\nOn a 32-bit system Valgrind reports use-after-free when running the \n\"window\" test:\n\n==35487== Invalid read of size 4\n==35487== at 0x48398A4: memcpy (vg_replace_strmem.c:1035)\n==35487== by 0x1A2902: fill_val (heaptuple.c:287)\n==35487== by 0x1A2902: heap_fill_tuple (heaptuple.c:336)\n==35487== by 0x1A3C29: heap_form_minimal_tuple (heaptuple.c:1412)\n==35487== by 0x3D4555: tts_virtual_copy_minimal_tuple (execTuples.c:290)\n==35487== by 0x72FC33: ExecCopySlotMinimalTuple (tuptable.h:473)\n==35487== by 0x72FC33: tuplesort_puttupleslot (tuplesortvariants.c:610)\n==35487== by 0x403463: ExecSort (nodeSort.c:153)\n==35487== by 0x3D0C8E: ExecProcNodeFirst (execProcnode.c:464)\n==35487== by 0x40AF09: ExecProcNode (executor.h:259)\n==35487== by 0x40AF09: begin_partition (nodeWindowAgg.c:1106)\n==35487== by 0x40D259: ExecWindowAgg (nodeWindowAgg.c:2125)\n==35487== by 0x3D0C8E: ExecProcNodeFirst (execProcnode.c:464)\n==35487== by 0x405E17: ExecProcNode (executor.h:259)\n==35487== by 0x405E17: SubqueryNext (nodeSubqueryscan.c:53)\n==35487== by 0x3D41C7: ExecScanFetch (execScan.c:133)\n==35487== by 0x3D41C7: ExecScan (execScan.c:199)\n==35487== Address 0xe3e8af0 is 168 bytes inside a block of size 8,192 \nalloc'd\n==35487== at 0x483463B: malloc (vg_replace_malloc.c:299)\n==35487== by 0x712B63: AllocSetContextCreateInternal (aset.c:446)\n==35487== by 0x3D82BE: CreateExprContextInternal (execUtils.c:253)\n==35487== by 0x3D84DC: CreateExprContext (execUtils.c:303)\n==35487== by 0x3D8750: ExecAssignExprContext (execUtils.c:482)\n==35487== by 0x40BC1A: ExecInitWindowAgg (nodeWindowAgg.c:2382)\n==35487== by 0x3D1232: ExecInitNode (execProcnode.c:346)\n==35487== by 0x4035E0: ExecInitSort (nodeSort.c:265)\n==35487== by 
0x3D11AB: ExecInitNode (execProcnode.c:321)\n==35487== by 0x40BD36: ExecInitWindowAgg (nodeWindowAgg.c:2432)\n==35487== by 0x3D1232: ExecInitNode (execProcnode.c:346)\n==35487== by 0x405E99: ExecInitSubqueryScan (nodeSubqueryscan.c:126)\n\n\nIt's faster to run just this test under Valgrind:\n\n\tmake installcheck-test TESTS='test_setup window'\n\n\nThis can also be reproduced on a 64-bit system by forcing int8 to be \npassed by reference:\n\n--- a/src/include/pg_config_manual.h\n+++ b/src/include/pg_config_manual.h\n@@ -82,9 +82,7 @@\n *\n * Changing this requires an initdb.\n */\n-#if SIZEOF_VOID_P >= 8\n-#define USE_FLOAT8_BYVAL 1\n-#endif\n+#undef USE_FLOAT8_BYVAL\n\n /*\n * When we don't have native spinlocks, we use semaphores to simulate \nthem.\n\n\nFuthermore, zeroing freed memory makes the test fail:\n\n--- a/src/include/utils/memdebug.h\n+++ b/src/include/utils/memdebug.h\n@@ -39,7 +39,7 @@ static inline void\n wipe_mem(void *ptr, size_t size)\n {\n VALGRIND_MAKE_MEM_UNDEFINED(ptr, size);\n- memset(ptr, 0x7F, size);\n+ memset(ptr, 0, size);\n VALGRIND_MAKE_MEM_NOACCESS(ptr, size);\n }\n\n$ cat src/test/regress/regression.diffs\ndiff -U3 \n/home/sergey/pgwork/devel/src/src/test/regress/expected/window.out \n/home/sergey/pgwork/devel/src/src/test/regress/results/window.out\n--- /home/sergey/pgwork/devel/src/src/test/regress/expected/window.out \n2022-11-03 18:26:52.203624217 +0300\n+++ /home/sergey/pgwork/devel/src/src/test/regress/results/window.out \n2022-11-16 01:47:18.494273352 +0300\n@@ -3721,7 +3721,8 @@\n -----------+-------+--------+-------------+----+----+----+----\n personnel | 5 | 3500 | 12-10-2007 | 2 | 1 | 2 | 2\n sales | 3 | 4800 | 08-01-2007 | 3 | 1 | 3 | 3\n-(2 rows)\n+ sales | 4 | 4800 | 08-08-2007 | 3 | 0 | 3 | 3\n+(3 rows)\n\n -- Tests to ensure we don't push down the run condition when it's not \nvalid to\n -- do so.\n\n\nThe failing query is:\n\nSELECT * FROM\n (SELECT *,\n count(salary) OVER (PARTITION BY depname || '') c1, -- w1\n 
row_number() OVER (PARTITION BY depname) rn, -- w2\n count(*) OVER (PARTITION BY depname) c2, -- w2\n count(*) OVER (PARTITION BY '' || depname) c3 -- w3\n FROM empsalary\n) e WHERE rn <= 1 AND c1 <= 3;\n\n\nAs far as I understand, ExecWindowAgg for the intermediate WindowAgg \nnode switches into pass-through mode, stops evaluating row_number(), and \nreturns the previous value instead. But if int8 is passed by reference, \nthe previous value stored in econtext->ecxt_aggvalues becomes a dangling \npointer when the per-output-tuple memory context is reset.\n\nAttaching a patch that makes the window test fail on a 64-bit system.\n\nBest regards,\n\n-- \nSergey Shinderuk\t\thttps://postgrespro.com/", "msg_date": "Wed, 16 Nov 2022 02:38:12 +0300", "msg_from": "Sergey Shinderuk <s.shinderuk@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Bug in row_number() optimization" }, { "msg_contents": "On Wed, Nov 16, 2022 at 7:38 AM Sergey Shinderuk <s.shinderuk@postgrespro.ru>\nwrote:\n\n> The failing query is:\n> SELECT * FROM\n> (SELECT *,\n> count(salary) OVER (PARTITION BY depname || '') c1, -- w1\n> row_number() OVER (PARTITION BY depname) rn, -- w2\n> count(*) OVER (PARTITION BY depname) c2, -- w2\n> count(*) OVER (PARTITION BY '' || depname) c3 -- w3\n> FROM empsalary\n> ) e WHERE rn <= 1 AND c1 <= 3;\n> As far as I understand, ExecWindowAgg for the intermediate WindowAgg\n> node switches into pass-through mode, stops evaluating row_number(), and\n> returns the previous value instead. But if int8 is passed by reference,\n> the previous value stored in econtext->ecxt_aggvalues becomes a dangling\n> pointer when the per-output-tuple memory context is reset.\n\n\nYeah, you're right. In this example the window function row_number()\ngoes into pass-through mode after the second evaluation because its\nrun condition does not hold true any more. 
The remaining run would just\nreturn the result from the second evaluation, which is stored in\necontext->ecxt_aggvalues[wfuncno].\n\nIf int8 is configured as pass-by-ref, the precomputed value from the\nsecond evaluation is actually located in a memory area from context\necxt_per_tuple_memory, with its pointer stored in ecxt_aggvalues. As\nthis memory context is reset once per tuple, we would be prone to wrong\nresults.\n\nI tried with memory context ecxt_per_query_memory when evaluating\nwindow function in the case where int8 is configured as pass-by-ref and\nI can see the problem vanishes. I'm using the changes as below\n\n--- a/src/backend/executor/nodeWindowAgg.c\n+++ b/src/backend/executor/nodeWindowAgg.c\n@@ -1027,8 +1027,14 @@ eval_windowfunction(WindowAggState *winstate,\nWindowStatePerFunc perfuncstate,\n {\n LOCAL_FCINFO(fcinfo, FUNC_MAX_ARGS);\n MemoryContext oldContext;\n-\n- oldContext =\nMemoryContextSwitchTo(winstate->ss.ps.ps_ExprContext->ecxt_per_tuple_memory);\n+ MemoryContext evalWfuncContext;\n+\n+#ifdef USE_FLOAT8_BYVAL\n+ evalWfuncContext =\nwinstate->ss.ps.ps_ExprContext->ecxt_per_tuple_memory;\n+#else\n+ evalWfuncContext =\nwinstate->ss.ps.ps_ExprContext->ecxt_per_query_memory;\n+#endif\n+ oldContext = MemoryContextSwitchTo(evalWfuncContext);\n\nThanks\nRichard\n\nOn Wed, Nov 16, 2022 at 7:38 AM Sergey Shinderuk <s.shinderuk@postgrespro.ru> wrote:\nThe failing query is:\nSELECT * FROM\n   (SELECT *,\n           count(salary) OVER (PARTITION BY depname || '') c1, -- w1\n           row_number() OVER (PARTITION BY depname) rn, -- w2\n           count(*) OVER (PARTITION BY depname) c2, -- w2\n           count(*) OVER (PARTITION BY '' || depname) c3 -- w3\n    FROM empsalary\n) e WHERE rn <= 1 AND c1 <= 3;\nAs far as I understand, ExecWindowAgg for the intermediate WindowAgg \nnode switches into pass-through mode, stops evaluating row_number(), and \nreturns the previous value instead. 
But if int8 is passed by reference, \nthe previous value stored in econtext->ecxt_aggvalues becomes a dangling \npointer when the per-output-tuple memory context is reset. Yeah, you're right.  In this example the window function row_number()goes into pass-through mode after the second evaluation because itsrun condition does not hold true any more.  The remaining run would justreturn the result from the second evaluation, which is stored inecontext->ecxt_aggvalues[wfuncno].If int8 is configured as pass-by-ref, the precomputed value from thesecond evaluation is actually located in a memory area from contextecxt_per_tuple_memory, with its pointer stored in ecxt_aggvalues.  Asthis memory context is reset once per tuple, we would be prone to wrongresults.I tried with memory context ecxt_per_query_memory when evaluatingwindow function in the case where int8 is configured as pass-by-ref andI can see the problem vanishes.  I'm using the changes as below--- a/src/backend/executor/nodeWindowAgg.c+++ b/src/backend/executor/nodeWindowAgg.c@@ -1027,8 +1027,14 @@ eval_windowfunction(WindowAggState *winstate, WindowStatePerFunc perfuncstate, {        LOCAL_FCINFO(fcinfo, FUNC_MAX_ARGS);        MemoryContext oldContext;--       oldContext = MemoryContextSwitchTo(winstate->ss.ps.ps_ExprContext->ecxt_per_tuple_memory);+       MemoryContext evalWfuncContext;++#ifdef USE_FLOAT8_BYVAL+       evalWfuncContext = winstate->ss.ps.ps_ExprContext->ecxt_per_tuple_memory;+#else+       evalWfuncContext = winstate->ss.ps.ps_ExprContext->ecxt_per_query_memory;+#endif+       oldContext = MemoryContextSwitchTo(evalWfuncContext);ThanksRichard", "msg_date": "Tue, 22 Nov 2022 15:44:57 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Bug in row_number() optimization" }, { "msg_contents": "On Tue, Nov 22, 2022 at 3:44 PM Richard Guo <guofenglinux@gmail.com> wrote:\n\n> On Wed, Nov 16, 2022 at 7:38 AM Sergey Shinderuk <\n> s.shinderuk@postgrespro.ru> 
wrote:\n>\n>> The failing query is:\n>> SELECT * FROM\n>> (SELECT *,\n>> count(salary) OVER (PARTITION BY depname || '') c1, -- w1\n>> row_number() OVER (PARTITION BY depname) rn, -- w2\n>> count(*) OVER (PARTITION BY depname) c2, -- w2\n>> count(*) OVER (PARTITION BY '' || depname) c3 -- w3\n>> FROM empsalary\n>> ) e WHERE rn <= 1 AND c1 <= 3;\n>> As far as I understand, ExecWindowAgg for the intermediate WindowAgg\n>> node switches into pass-through mode, stops evaluating row_number(), and\n>> returns the previous value instead. But if int8 is passed by reference,\n>> the previous value stored in econtext->ecxt_aggvalues becomes a dangling\n>> pointer when the per-output-tuple memory context is reset.\n>\n>\n> Yeah, you're right. In this example the window function row_number()\n> goes into pass-through mode after the second evaluation because its\n> run condition does not hold true any more. The remaining run would just\n> return the result from the second evaluation, which is stored in\n> econtext->ecxt_aggvalues[wfuncno].\n>\n> If int8 is configured as pass-by-ref, the precomputed value from the\n> second evaluation is actually located in a memory area from context\n> ecxt_per_tuple_memory, with its pointer stored in ecxt_aggvalues. 
As\n> this memory context is reset once per tuple, we would be prone to wrong\n> results.\n>\n\nRegarding how to fix this problem, firstly I believe we need to evaluate\nwindow functions in the per-tuple memory context, as the HEAD does.\nWhen we decide we need to go into pass-through mode, I'm thinking that\nwe can just copy out the results of the last evaluation to the per-query\nmemory context, while still storing their pointers in ecxt_aggvalues.\n\nDoes this idea work?\n\nThanks\nRichard\n\nOn Tue, Nov 22, 2022 at 3:44 PM Richard Guo <guofenglinux@gmail.com> wrote:On Wed, Nov 16, 2022 at 7:38 AM Sergey Shinderuk <s.shinderuk@postgrespro.ru> wrote:\nThe failing query is:\nSELECT * FROM\n   (SELECT *,\n           count(salary) OVER (PARTITION BY depname || '') c1, -- w1\n           row_number() OVER (PARTITION BY depname) rn, -- w2\n           count(*) OVER (PARTITION BY depname) c2, -- w2\n           count(*) OVER (PARTITION BY '' || depname) c3 -- w3\n    FROM empsalary\n) e WHERE rn <= 1 AND c1 <= 3;\nAs far as I understand, ExecWindowAgg for the intermediate WindowAgg \nnode switches into pass-through mode, stops evaluating row_number(), and \nreturns the previous value instead. But if int8 is passed by reference, \nthe previous value stored in econtext->ecxt_aggvalues becomes a dangling \npointer when the per-output-tuple memory context is reset. Yeah, you're right.  In this example the window function row_number()goes into pass-through mode after the second evaluation because itsrun condition does not hold true any more.  The remaining run would justreturn the result from the second evaluation, which is stored inecontext->ecxt_aggvalues[wfuncno].If int8 is configured as pass-by-ref, the precomputed value from thesecond evaluation is actually located in a memory area from contextecxt_per_tuple_memory, with its pointer stored in ecxt_aggvalues.  Asthis memory context is reset once per tuple, we would be prone to wrongresults. 
", "msg_date": "Thu, 24 Nov 2022 11:16:18 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Bug in row_number() optimization" }, { "msg_contents": "On 24.11.2022 06:16, Richard Guo wrote:\n> Regarding how to fix this problem, firstly I believe we need to evaluate\n> window functions in the per-tuple memory context, as the HEAD does.\n> When we decide we need to go into pass-through mode, I'm thinking that\n> we can just copy out the results of the last evaluation to the per-query\n> memory context, while still storing their pointers in ecxt_aggvalues.\n> \n> Does this idea work?\nAlthough I'm not familiar with the code, this makes sense to me.\n\nYou proposed:\n\n+#ifdef USE_FLOAT8_BYVAL\n+ evalWfuncContext = \nwinstate->ss.ps.ps_ExprContext->ecxt_per_tuple_memory;\n+#else\n+ evalWfuncContext = \nwinstate->ss.ps.ps_ExprContext->ecxt_per_query_memory;\n+#endif\n\nShouldn't we handle any pass-by-reference type the same? I suppose, a \nuser-defined window function can return some other type, not int8.\n\nBest regards,\n\n-- \nSergey Shinderuk\t\thttps://postgrespro.com/\n\n\n\n", "msg_date": "Thu, 24 Nov 2022 14:52:30 +0300", "msg_from": "Sergey Shinderuk <s.shinderuk@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Bug in row_number() optimization" }, { "msg_contents": "On Fri, 25 Nov 2022 at 00:52, Sergey Shinderuk\n<s.shinderuk@postgrespro.ru> wrote:\n> Shouldn't we handle any pass-by-reference type the same? 
I suppose, a\n> user-defined window function can return some other type, not int8.\n\nThanks for reporting this and to you and Richard for working on a fix.\n\nI've just looked at it and it seems that valgrind is complaining\nbecause a tuple formed by an upper-level WindowAgg contains a pointer\nto free'd memory due to the byref type and eval_windowaggregates() not\nhaving been executed to fill in ecxt_aggvalues and ecxt_aggnulls on\nthe lower-level WindowAgg.\n\nSince upper-level WindowAggs cannot reference values calculated in\nsome lower-level WindowAgg, why can't we just NULLify the pointers\ninstead? See attached.\n\nIt is possible to have a monotonic window function that does not\nreturn int8. Technically something like MAX(text_col) OVER (PARTITION\nBY somecol ORDER BY text_col) is monotonically increasing, it's just\nthat I didn't add a support function to tell the planner about that.\nSomeone could come along in the future and suggest we do that and show\nus some convincing use case. So whatever the fix, it cannot assume\nthe window function's return type is int8.\n\nDavid", "msg_date": "Fri, 25 Nov 2022 12:34:29 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Bug in row_number() optimization" }, { "msg_contents": "On Fri, Nov 25, 2022 at 7:34 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> Since upper-level WindowAggs cannot reference values calculated in\n> some lower-level WindowAgg, why can't we just NULLify the pointers\n> instead? See attached.\n\n\nVerified the problem is fixed with this patch. I'm not familiar with\nthe WindowAgg execution codes. As far as I understand, this patch works\nbecause we set ecxt_aggnulls to true, making it a NULL value. And the\ntop-level WindowAgg node's \"Filter\" is strict so that it can filter out\nall the tuples that don't match the intermediate WindowAgg node's run\ncondition. 
So I find the comments about \"WindowAggs above us cannot\nreference the result of another WindowAgg\" confusing. But maybe I'm\nmissing something.\n\nThanks\nRichard", "msg_date": "Fri, 25 Nov 2022 11:00:27 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Bug in row_number() optimization" }, { "msg_contents": "On Fri, 25 Nov 2022 at 16:00, Richard Guo <guofenglinux@gmail.com> wrote:\n> Verified the problem is fixed with this patch. I'm not familiar with\n> the WindowAgg execution codes. As far as I understand, this patch works\n> because we set ecxt_aggnulls to true, making it a NULL value. And the\n> top-level WindowAgg node's \"Filter\" is strict so that it can filter out\n> all the tuples that don't match the intermediate WindowAgg node's run\n> condition. So I find the comments about \"WindowAggs above us cannot\n> reference the result of another WindowAgg\" confusing. But maybe I'm\n> missing something.\n\n
But maybe I'm\n> missing something.\n\nThere are two different pass-through modes that the WindowAgg can move\ninto when it detects that the run condition is no longer true:\n\n1) WINDOWAGG_PASSTHROUGH and\n2) WINDOWAGG_PASSTHROUGH_STRICT\n\n#2 is used when the WindowAgg is the top-level one in this query\nlevel. Remember we'll need multiple WindowAgg nodes when there are\nmultiple different windows to evaluate. The reason that we need #1 is\nthat if there are multiple WindowAggs, then the top-level one (or just\nany WindowAgg above it) might need all the rows from a lower-level\nWindowAgg. For example:\n\nSELECT * FROM (SELECT row_number() over(order by id) rn, sum(qty) over\n(order by date) qty from t) t where rn <= 10;\n\nif the \"order by id\" window is evaluated first, we can't stop\noutputting rows when rn <= 10 is no longer true as the \"order by date\"\nwindow might need those. In this case, once rn <= 10 is no longer\ntrue, the WindowAgg for that window would go into\nWINDOWAGG_PASSTHROUGH. This means we can stop window func evaluation\non any additional rows. The final query will never see rn==11, so we\ndon't need to generate that.\n\nThe problem is that once the \"order by id\" window stops evaluating the\nwindow funcs, if the window result is byref, then we leave junk in the\naggregate output columns. Since we continue to output rows from that\nWindowAgg for the top-level \"order by date\" window, we don't want to\nform tuples with free'd memory.\n\nSince nothing in the query will ever seen rn==11 and beyond, there's\nno need to put anything in that part of the output tuple. We can just\nmake it an SQL NULL.\n\nWhat I mean by \"WindowAggs above us cannot reference the result of\nanother WindowAgg\" is that the evaluation of sum(qty) over (order by\ndate) can't see the \"rn\" column. SQL does not allow it. 
If it did,\nthat would have to look something like:\n\nSELECT * FROM (SELECT SUM(row_number() over (order by id)) over (order\nby date) qty from t); -- not valid SQL\n\nWINDOWAGG_PASSTHROUGH_STRICT not only does not evaluate window funcs,\nit also does not even bother to store tuples in the tuple store. In\nthis case there's no higher-level WindowAgg that will need these\ntuples, so we can just read our sub-node until we find the next\npartition, or stop when there's no PARTITION BY clause.\n\nJust thinking of the patch a bit more, what I wrote ends up\ncontinually zeroing the values and marking the columns as NULL. Likely\nwe can just do this once when we do: winstate->status =\nWINDOWAGG_PASSTHROUGH; I'll test that out and make sure it works.\n\nDavid\n\n\n", "msg_date": "Fri, 25 Nov 2022 16:26:00 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Bug in row_number() optimization" }, { "msg_contents": "On Fri, Nov 25, 2022 at 11:26 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> There are two different pass-through modes that the WindowAgg can move\n> into when it detects that the run condition is no longer true:\n>\n> 1) WINDOWAGG_PASSTHROUGH and\n> 2) WINDOWAGG_PASSTHROUGH_STRICT\n>\n> #2 is used when the WindowAgg is the top-level one in this query\n> level. Remember we'll need multiple WindowAgg nodes when there are\n> multiple different windows to evaluate. The reason that we need #1 is\n> that if there are multiple WindowAggs, then the top-level one (or just\n> any WindowAgg above it) might need all the rows from a lower-level\n> WindowAgg.\n\n\nThanks for the explanation! I think now I understand pass-through modes\nmuch better.\n\n\n> What I mean by \"WindowAggs above us cannot reference the result of\n> another WindowAgg\" is that the evaluation of sum(qty) over (order by\n> date) can't see the \"rn\" column. SQL does not allow it.\n\n\nI think I get your point. 
Yeah, the 'rn' column is not needed for the\nevaluation of WindowAggs above. But ISTM it might be needed to evaluate\nthe quals of WindowAggs above. Such as in the plan below\n\nexplain (costs off) SELECT * FROM\n (SELECT\n count(salary) OVER (PARTITION BY depname || '') c1, -- w1\n row_number() OVER (PARTITION BY depname) rn -- w2\n FROM empsalary\n) e WHERE rn <= 1;\n QUERY PLAN\n-------------------------------------------------------------------\n Subquery Scan on e\n -> WindowAgg\n Filter: ((row_number() OVER (?)) <= 1)\n -> Sort\n Sort Key: (((empsalary.depname)::text || ''::text))\n -> WindowAgg\n Run Condition: (row_number() OVER (?) <= 1)\n -> Sort\n Sort Key: empsalary.depname\n -> Seq Scan on empsalary\n(10 rows)\n\nThe 'rn' column is calculated in the lower-level WindowAgg, and it is\nneeded to evaluate the 'Filter' of the upper-level WindowAgg. In\npass-through mode, the lower-level WindowAgg would not be evaluated any\nmore, so we need to mark the 'rn' column to something that can false the\n'Filter'. Considering the 'Filter' is a strict function, marking it as\nNULL would do. I think this is why this patch works.\n\n\n> Just thinking of the patch a bit more, what I wrote ends up\n> continually zeroing the values and marking the columns as NULL. Likely\n> we can just do this once when we do: winstate->status =\n> WINDOWAGG_PASSTHROUGH;\n\n\nYes, I also think we can do this only once when we go into pass-through\nmode.\n\nThanks\nRichard\n\nOn Fri, Nov 25, 2022 at 11:26 AM David Rowley <dgrowleyml@gmail.com> wrote:\nThere are two different pass-through modes that the WindowAgg can move\ninto when it detects that the run condition is no longer true:\n\n1) WINDOWAGG_PASSTHROUGH and\n2) WINDOWAGG_PASSTHROUGH_STRICT\n\n#2 is used when the WindowAgg is the top-level one in this query\nlevel. Remember we'll need multiple WindowAgg nodes when there are\nmultiple different windows to evaluate.  
The reason that we need #1 is\nthat if there are multiple WindowAggs, then the top-level one (or just\nany WindowAgg above it) might need all the rows from a lower-level\nWindowAgg.  Thanks for the explanation!  I think now I understand pass-through modesmuch better. \nWhat I mean by \"WindowAggs above us cannot reference the result of\nanother WindowAgg\" is that the evaluation of sum(qty) over (order by\ndate) can't see the \"rn\" column. SQL does not allow it.  I think I get your point.  Yeah, the 'rn' column is not needed for theevaluation of WindowAggs above.  But ISTM it might be needed to evaluatethe quals of WindowAggs above.  Such as in the plan belowexplain (costs off) SELECT * FROM   (SELECT           count(salary) OVER (PARTITION BY depname || '') c1, -- w1           row_number() OVER (PARTITION BY depname) rn -- w2    FROM empsalary) e WHERE rn <= 1;                            QUERY PLAN------------------------------------------------------------------- Subquery Scan on e   ->  WindowAgg         Filter: ((row_number() OVER (?)) <= 1)         ->  Sort               Sort Key: (((empsalary.depname)::text || ''::text))               ->  WindowAgg                     Run Condition: (row_number() OVER (?) <= 1)                     ->  Sort                           Sort Key: empsalary.depname                           ->  Seq Scan on empsalary(10 rows)The 'rn' column is calculated in the lower-level WindowAgg, and it isneeded to evaluate the 'Filter' of the upper-level WindowAgg.  Inpass-through mode, the lower-level WindowAgg would not be evaluated anymore, so we need to mark the 'rn' column to something that can false the'Filter'.  Considering the 'Filter' is a strict function, marking it asNULL would do.  I think this is why this patch works. \nJust thinking of the patch a bit more, what I wrote ends up\ncontinually zeroing the values and marking the columns as NULL. 
", "msg_date": "Fri, 25 Nov 2022 20:46:11 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Bug in row_number() optimization" }, { "msg_contents": "On 25.11.2022 15:46, Richard Guo wrote:\n> Considering the 'Filter' is a strict function, marking it as\n> NULL would do.  I think this is why this patch works.\n\nWhat about user-defined operators? I created my own <= operator for int8 \nwhich returns true on null input, and put it in a btree operator class. \nWith my operator I get:\n\n depname | empno | salary | enroll_date | c1 | rn | c2 | c3\n-----------+-------+--------+-------------+----+----+----+----\n personnel | 5 | 3500 | 2007-12-10 | 2 | 1 | 2 | 2\n sales | 3 | 4800 | 2007-08-01 | 3 | 1 | 3 | 3\n sales | 4 | 4800 | 2007-08-08 | 3 | | | 3\n(3 rows)\n\nAdmittedly, it's weird that (null <= 1) evaluates to true. But does it \nviolate the contract of the btree operator class or something? Didn't \nfind a clear answer in the docs.\n\n-- \nSergey Shinderuk\t\thttps://postgrespro.com/\n\n\n\n", "msg_date": "Fri, 25 Nov 2022 19:01:09 +0300", "msg_from": "Sergey Shinderuk <s.shinderuk@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Bug in row_number() optimization" }, { "msg_contents": "Sergey Shinderuk <s.shinderuk@postgrespro.ru> writes:\n> What about user-defined operators? I created my own <= operator for int8 \n> which returns true on null input, and put it in a btree operator class. \n> Admittedly, it's weird that (null <= 1) evaluates to true. But does it \n> violate the contract of the btree operator class or something? 
Didn't \n> find a clear answer in the docs.\n\nIt's pretty unlikely that this would work during an actual index scan.\nI'm fairly sure that btree (and other index AMs) have hard-wired\nassumptions that index operators are strict.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 25 Nov 2022 11:19:09 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bug in row_number() optimization" }, { "msg_contents": "On Sat, 26 Nov 2022 at 05:19, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Sergey Shinderuk <s.shinderuk@postgrespro.ru> writes:\n> > What about user-defined operators? I created my own <= operator for int8\n> > which returns true on null input, and put it in a btree operator class.\n> > Admittedly, it's weird that (null <= 1) evaluates to true. But does it\n> > violate the contract of the btree operator class or something? Didn't\n> > find a clear answer in the docs.\n>\n> It's pretty unlikely that this would work during an actual index scan.\n> I'm fairly sure that btree (and other index AMs) have hard-wired\n> assumptions that index operators are strict.\n\nIf we're worried about that then we could just restrict this\noptimization to only work with strict quals.\n\nThe proposal to copy the datums into the query context does not seem\nto me to be a good idea. If there are a large number of partitions\nthen it sounds like we'll leak lots of memory. 
We could invent some\npartition context that we reset after each partition, but that's\nprobably more complexity than it would be worth.\n\nI've attached a draft patch to move the code to nullify the aggregate\nresults so that's only done once per partition and adjusted the\nplanner to limit this to strict quals.\n\nDavid", "msg_date": "Mon, 28 Nov 2022 13:23:16 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Bug in row_number() optimization" }, { "msg_contents": "On 28.11.2022 03:23, David Rowley wrote:\n> On Sat, 26 Nov 2022 at 05:19, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>\n>> Sergey Shinderuk <s.shinderuk@postgrespro.ru> writes:\n>>> What about user-defined operators? I created my own <= operator for int8\n>>> which returns true on null input, and put it in a btree operator class.\n>>> Admittedly, it's weird that (null <= 1) evaluates to true. But does it\n>>> violate the contract of the btree operator class or something? Didn't\n>>> find a clear answer in the docs.\n>>\n>> It's pretty unlikely that this would work during an actual index scan.\n>> I'm fairly sure that btree (and other index AMs) have hard-wired\n>> assumptions that index operators are strict.\n> \n> If we're worried about that then we could just restrict this\n> optimization to only work with strict quals.\n\nNot sure this is necessary if btree operators must be strict anyway.\n\n\n> The proposal to copy the datums into the query context does not seem\n> to me to be a good idea. If there are a large number of partitions\n> then it sounds like we'll leak lots of memory. 
We could invent some\n> partition context that we reset after each partition, but that's\n> probably more complexity than it would be worth.\n\nAh, good point.\n\n\n> I've attached a draft patch to move the code to nullify the aggregate\n> results so that's only done once per partition and adjusted the\n> planner to limit this to strict quals.\n\nNot quite sure that we don't need to do anything for the \nWINDOWAGG_PASSTHROUGH_STRICT case. Although, we won't return any more \ntuples for the current partition, we still call ExecProject with \ndangling pointers. Is it okay?\n\n\n+ if (!func_strict(opexpr->opfuncid))\n+ return false;\n\nShould return true instead?\n\n-- \nSergey Shinderuk\t\thttps://postgrespro.com/\n\n\n\n", "msg_date": "Mon, 28 Nov 2022 12:59:27 +0300", "msg_from": "Sergey Shinderuk <s.shinderuk@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Bug in row_number() optimization" }, { "msg_contents": "On Mon, Nov 28, 2022 at 5:59 PM Sergey Shinderuk <s.shinderuk@postgrespro.ru>\nwrote:\n\n> Not quite sure that we don't need to do anything for the\n> WINDOWAGG_PASSTHROUGH_STRICT case. Although, we won't return any more\n> tuples for the current partition, we still call ExecProject with\n> dangling pointers. Is it okay?\n\n\nAFAIU once we go into WINDOWAGG_PASSTHROUGH_STRICT we will spool all the\nremaining tuples in the current partition without storing them and then\nmove to the next partition if available and become WINDOWAGG_RUN again\nor become WINDOWAGG_DONE if there are no further partitions. It seems\nwe would not have chance to see the dangling pointers.\n\n\n> + if (!func_strict(opexpr->opfuncid))\n> + return false;\n>\n> Should return true instead?\n\n\nYeah, you're right. This should be a thinko.\n\nThanks\nRichard
", "msg_date": "Thu, 1 Dec 2022 16:18:22 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Bug in row_number() optimization" }, { "msg_contents": "On 01.12.2022 11:18, Richard Guo wrote:\n> \n> On Mon, Nov 28, 2022 at 5:59 PM Sergey Shinderuk \n> <s.shinderuk@postgrespro.ru <mailto:s.shinderuk@postgrespro.ru>> wrote:\n> \n> Not quite sure that we don't need to do anything for the\n> WINDOWAGG_PASSTHROUGH_STRICT case. Although, we won't return any more\n> tuples for the current partition, we still call ExecProject with\n> dangling pointers. Is it okay?\n> \n> AFAIU once we go into WINDOWAGG_PASSTHROUGH_STRICT we will spool all the\n> remaining tuples in the current partition without storing them and then\n> move to the next partition if available and become WINDOWAGG_RUN again\n> or become WINDOWAGG_DONE if there are no further partitions. 
It seems\n> we would not have chance to see the dangling pointers.\n\nMaybe I'm missing something, but the previous call to spool_tuples() \nmight have read extra tuples (if the tuplestore spilled to disk), and \nafter switching to WINDOWAGG_PASSTHROUGH_STRICT mode we nevertheless \nwould loop through these extra tuples and call ExecProject if only to \nincrement winstate->currentpos.\n\n-- \nSergey Shinderuk\t\thttps://postgrespro.com/\n\n\n\n", "msg_date": "Thu, 1 Dec 2022 14:21:47 +0300", "msg_from": "Sergey Shinderuk <s.shinderuk@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Bug in row_number() optimization" }, { "msg_contents": "On Fri, 2 Dec 2022 at 00:21, Sergey Shinderuk\n<s.shinderuk@postgrespro.ru> wrote:\n> Maybe I'm missing something, but the previous call to spool_tuples()\n> might have read extra tuples (if the tuplestore spilled to disk), and\n> after switching to WINDOWAGG_PASSTHROUGH_STRICT mode we nevertheless\n> would loop through these extra tuples and call ExecProject if only to\n> increment winstate->currentpos.\n\nThe tuples which are spooled in the WindowAgg node are the ones from\nthe WindowAgg's subnode. Since these don't contain the results of the\nWindowFunc, then I don't think there's any issue with what's stored in\nany of the spooled tuples.\n\nWhat matters is what we pass along to the node that's reading from the\nWindowAgg. If we NULL out the memory where we store the WindowFunc\n(and maybe an Aggref) results then the ExecProject in ExecWindowAgg()\nwill no longer fill the WindowAgg's output slot with the address of\nfree'd memory (or some stale byval value which has lingered for byval\nreturn type WindowFuncs).\n\nSince the patch I sent sets the context's ecxt_aggnulls to true, it\nmeans that when we do the ExecProject(), the EEOP_WINDOW_FUNC in\nExecInterpExpr (or the JIT equivalent) will put an SQL NULL in the\n*output* slot for the WindowAgg node. 
The same is true for\nEEOP_AGGREFs as the WindowAgg node that we are running in\nWINDOWAGG_PASSTHROUGH mode could also contain normal aggregate\nfunctions, not just WindowFuncs.\n\nDavid\n\n\n", "msg_date": "Mon, 5 Dec 2022 17:11:46 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Bug in row_number() optimization" }, { "msg_contents": "On Mon, 28 Nov 2022 at 22:59, Sergey Shinderuk\n<s.shinderuk@postgrespro.ru> wrote:\n>\n> On 28.11.2022 03:23, David Rowley wrote:\n> > On Sat, 26 Nov 2022 at 05:19, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> It's pretty unlikely that this would work during an actual index scan.\n> >> I'm fairly sure that btree (and other index AMs) have hard-wired\n> >> assumptions that index operators are strict.\n> >\n> > If we're worried about that then we could just restrict this\n> > optimization to only work with strict quals.\n>\n> Not sure this is necessary if btree operators must be strict anyway.\n\nI'd rather see the func_strict() test in there. You've already\ndemonstrated you can get wrong results with a non-strict operator. I'm\nnot disputing that it sounds like a broken operator class or not. I\njust want to ensure we don't leave any holes open for this\noptimisation to return incorrect results.\n\nDavid\n\n\n", "msg_date": "Mon, 5 Dec 2022 17:16:53 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Bug in row_number() optimization" }, { "msg_contents": "On Thu, 1 Dec 2022 at 21:18, Richard Guo <guofenglinux@gmail.com> wrote:\n>> + if (!func_strict(opexpr->opfuncid))\n>> + return false;\n>>\n>> Should return true instead?\n>\n>\n> Yeah, you're right. This should be a thinko.\n\nYeah, oops. That's wrong.\n\nI've adjusted that in the attached.\n\nI'm keen to move along and push the fix for this bug. 
If there are no\nobjections to the method in the attached and also adding the\nrestriction to limit the optimization to only working with strict\nOpExprs, then I'm going to push this, likely about 24 hours from now.\n\nDavid", "msg_date": "Mon, 5 Dec 2022 17:39:37 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Bug in row_number() optimization" } ]
[ { "msg_contents": "The attached patch is a contrib module to set login restrictions on users with \ntoo many authentication failure. The administrator could manage several GUC \nparameters to control the login restrictions which are listed below.\n- set the wait time when password authentication fails.\n- allow the wait time grows when users of the same IP consecutively logon failed.\n- set the maximum authentication failure number from the same user. The system \nwill prevent a user who gets too many authentication failures from entering the\ndatabase.\n\n\nI hope this will be useful to future development.\nThanks.\n-----\n\n\nzhcheng@ceresdata.com", "msg_date": "Wed, 16 Nov 2022 11:37:25 +0800 (GMT+08:00)", "msg_from": "成之焕 <zhcheng@ceresdata.com>", "msg_from_op": true, "msg_subject": "contrib: auth_delay module" }, { "msg_contents": "成之焕 <zhcheng@ceresdata.com> writes:\n> The attached patch is a contrib module to set login restrictions on users with\n> too many authentication failure. The administrator could manage several GUC\n> parameters to control the login restrictions which are listed below.\n> - set the wait time when password authentication fails.\n> - allow the wait time grows when users of the same IP consecutively logon failed.\n> - set the maximum authentication failure number from the same user. The system\n> will prevent a user who gets too many authentication failures from entering the\n> database.\n\nI'm not yet forming an opinion on whether this is useful enough\nto accept. 
However, I wonder why you chose to add this functionality\nto auth_delay instead of making a new, independent module.\nIt seems fairly unrelated to what auth_delay does, and the\nnewly-created requirement that the module be preloaded might\npossibly break some existing use-case for auth_delay.\n\nAlso, a patch that lacks user documentation and has no code comments to\nspeak of seems unlikely to draw serious review.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 17 Nov 2022 17:37:51 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: contrib: auth_delay module" }, { "msg_contents": "On Thu, Nov 17, 2022 at 05:37:51PM -0500, Tom Lane wrote:\n> 成之焕 <zhcheng@ceresdata.com> writes:\n> > The attached patch is a contrib module to set login restrictions on users with\n> > too many authentication failure. The administrator could manage several GUC\n> > parameters to control the login restrictions which are listed below.\n> > - set the wait time when password authentication fails.\n> > - allow the wait time grows when users of the same IP consecutively logon failed.\n> > - set the maximum authentication failure number from the same user. The system\n> > will prevent a user who gets too many authentication failures from entering the\n> > database.\n>\n> I'm not yet forming an opinion on whether this is useful enough\n> to accept.\n\nI'm not sure that doing that on the backend side is really a great idea, an\nattacker will still be able to exhaust available connection slots.\n\nIf your instance is reachable from some untrusted network (which already sounds\nscary), it's much easier to simply configure something like fail2ban to provide\nthe same feature in a more efficient way. 
You can even block access to other\nservices too while at it.\n\nNote that there's also an extension to log failed connection attempts on an\nalternate file with a fixed simple format if you're worried about your regular\nlogs are too verbose.\n\n\n", "msg_date": "Fri, 18 Nov 2022 15:13:01 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: contrib: auth_delay module" } ]
[ { "msg_contents": "\nOn Wed, 16 Nov 2022 at 12:19, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Japin Li <japinli@hotmail.com> writes:\n>> Hi, hackers,\n>\n> ITYM pgsql-hackers, this is off-topic here.\n>\n\nSorry for typo the email address.\n\n>> When I'm reviewing patch [1], I find there is a memory leak in\n>> adjust_data_dir(), the cmd was allocated by psprintf(), but forget\n>> releasing.\n>\n> Can't get excited about it in pg_ctl; that program won't run\n> long enough for anybody to notice.\n>\n\nYeah, it won't run a long time. I find that the memory of my_exec_path\nwas released, so I think we also should do release on cmd. IMO, this is\na bit confused when should we do release the memory of variables for\nshort lifetime?\n\n[Here is the origin contents which I send a wrong mail-list]\n\nHi, hackers,\n\nWhen I'm reviewing patch [1], I find there is a memory leak in\nadjust_data_dir(), the cmd was allocated by psprintf(), but forget\nreleasing.\n\n[1] https://www.postgresql.org/message-id/CALte62y3yZpHNFnYVz1uACaFbmb6go9fyeRaO5uHF5XaxtarbA%40mail.gmail.com\n\ndiff --git a/src/bin/pg_ctl/pg_ctl.c b/src/bin/pg_ctl/pg_ctl.c\nindex ceab603c47..ace2d676fc 100644\n--- a/src/bin/pg_ctl/pg_ctl.c\n+++ b/src/bin/pg_ctl/pg_ctl.c\n@@ -2159,6 +2159,7 @@ adjust_data_dir(void)\n \t\twrite_stderr(_(\"%s: could not determine the data directory using command \\\"%s\\\"\\n\"), progname, cmd);\n \t\texit(1);\n \t}\n+\tfree(cmd);\n \tfree(my_exec_path);\n \n \t/* strip trailing newline and carriage return */\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Wed, 16 Nov 2022 13:21:33 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": true, "msg_subject": "Re: Memory leak in adjust_data_dir" } ]
[ { "msg_contents": "Hi,\n\n\n\nA quick question on merge regress-test\n\n\n\nhttps://github.com/postgres/postgres/blob/REL_15_STABLE/src/test/regress/expected/merge.out#L846\n\n\n\nshould there be an ERROR or comment needs a fix? What's the expected behavior?\n\n\n\nRegards\n\nTeja", "msg_date": "Wed, 16 Nov 2022 05:46:04 +0000", "msg_from": "Teja Mupparti <Tejeswar.Mupparti@microsoft.com>", "msg_from_op": true, "msg_subject": "MERGE regress test" }, { "msg_contents": "On 2022-Nov-16, Teja Mupparti wrote:\n\n> A quick question on merge regress-test\n> \n> https://github.com/postgres/postgres/blob/REL_15_STABLE/src/test/regress/expected/merge.out#L846\n> \n> should there be an ERROR or comment needs a fix? What's the expected behavior?\n\nHmm, good find. As I recall, I was opposed to the idea of throwing an\nerror if the WHEN expression writes to the database, and the previous\nimplementation had some hole, so I just ripped it out after discussing\nit; but I evidently failed to notice this test case about it.\n\nHowever, I can't find any mailing list discussion about this point.\nMaybe I just asked Simon off-list about it.\n\nIMO just deleting that test is a sufficient fix.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\nTom: There seems to be something broken here.\nTeodor: I'm in sackcloth and ashes... 
Fixed.\n http://archives.postgresql.org/message-id/482D1632.8010507@sigaev.ru\n\n\n", "msg_date": "Wed, 16 Nov 2022 18:37:14 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: MERGE regress test" }, { "msg_contents": "On 2022-Nov-16, Alvaro Herrera wrote:\n\n> Hmm, good find. As I recall, I was opposed to the idea of throwing an\n> error if the WHEN expression writes to the database, and the previous\n> implementation had some hole, so I just ripped it out after discussing\n> it; but I evidently failed to notice this test case about it.\n\nAh, I found out what happened, and my memory as usual is betraying me.\nThis was changed before I was involved with the patch at all: Pavan\nchanged it between his v18[1] and v19[2]:\n\n if (action->whenqual)\n {\n- int64 startWAL = GetXactWALBytes();\n- bool qual = ExecQual(action->whenqual, econtext);\n-\n- /*\n- * SQL Standard says that WHEN AND conditions must not\n- * write to the database, so check we haven't written\n- * any WAL during the test. Very sensible that is, since\n- * we can end up evaluating some tests multiple times if\n- * we have concurrent activity and complex WHEN clauses.\n- *\n- * XXX If we had some clear form of functional labelling\n- * we could use that, if we trusted it.\n- */\n- if (startWAL < GetXactWALBytes())\n- ereport(ERROR,\n- (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n- errmsg(\"cannot write to database within WHEN AND condition\")));\n\nThis is what Peter Geoghegan had to say about it at the time:\n\n> This needs to go. Apart from the fact that GetXactWALBytes() is buggy\n> (it returns int64 for the unsigned type XLogRecPtr), the whole idea\n> just seems unnecessary. I don't see why this is any different to using\n> a volatile function in a regular UPDATE.\n\nPavan just forgot to remove the test. 
I'll do so now.\n\n[1] https://postgr.es/m/CABOikdPFCcgp7=zoN4M=y0TefW4Q9dPAU+Oy5jN5A+hWYdnvNg@mail.gmail.com\n[2] https://postgr.es/m/CABOikdOUoaXt1H885TC_cA1LoErEejqdVDZqG62rQkiZPPyg0Q@mail.gmail.com\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"No tengo por qué estar de acuerdo con lo que pienso\"\n (Carlos Caszeli)\n\n\n", "msg_date": "Fri, 18 Nov 2022 20:08:05 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: MERGE regress test" }, { "msg_contents": "On 2022-Nov-18, Alvaro Herrera wrote:\n\n> Pavan just forgot to remove the test. I'll do so now.\n\nDone now. Thanks for reporting.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"La conclusión que podemos sacar de esos estudios es que\nno podemos sacar ninguna conclusión de ellos\" (Tanenbaum)\n\n\n", "msg_date": "Tue, 22 Nov 2022 11:29:08 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: MERGE regress test" } ]
[ { "msg_contents": "Hi,\n\nA customer seems to have run into $subject. Here's a reproducer they shared:\n\nCREATE TABLE test (id integer, category integer, rate numeric);\nINSERT INTO test\nSELECT x.id,\n y.category,\n random() * 10 AS rate\nFROM generate_series(1, 1000000) AS x(id)\nINNER JOIN generate_series(1, 25) AS y(category)\n ON 1 = 1;\nSELECT * FROM crosstab('SELECT id, category, rate FROM test ORDER BY\n1, 2') AS final_result(id integer, \"1\" numeric, \"2\" numeric, \"3\"\nnumeric, \"4\" numeric, \"5\" numeric);\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\nThe connection to the server was lost. Attempting reset: Failed.\nTime: 106095.766 ms (01:46.096)\n!?> \\q\n\nWith the following logged:\n\nLOG: server process (PID 121846) was terminated by signal 9: Killed\nDETAIL: Failed process was running: SELECT * FROM crosstab('SELECT\nid, category, rate FROM test ORDER BY 1, 2') AS final_result(id\ninteger, \"1\" numeric, \"2\" numeric, \"3\" numeric, \"4\" numeric, \"5\"\nnumeric);\n\nThe problem seems to be spi_printtup() continuing to allocate memory\nto expand _SPI_current->tuptable to store the result of crosstab()'s\ninput query that's executed using:\n\n /* Retrieve the desired rows */\n ret = SPI_execute(sql, true, 0);\n\nNote that this asks SPI to retrieve and store *all* result rows of the\nquery in _SPI_current->tuptable, and if there happen to be so many\nrows, as in the case of above example, spi_printtup() ends up asking\nfor a bit too much memory.\n\nThe easiest fix for this seems to be for crosstab() to use open a\ncursor (SPI_cursor_open) and fetch the rows in batches\n(SPI_cursor_fetch) rather than all in one go. I have implemented that\nin the attached. 
Maybe the patch should address other functions that\npotentially have the same problem.\n\nI also wondered about fixing this by making _SPI_current->tuptable use\na tuplestore that can spill to disk as its backing store rather than a\nplain C HeapTuple array, but haven't checked how big of a change that\nwould be; SPI_tuptable is referenced in many places across the tree.\nThough I suspect that idea has enough merits to give that a try\nsomeday.\n\nThoughts on whether this should be fixed and the fix be back-patched?\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 16 Nov 2022 16:47:18 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "out of memory in crosstab()" }, { "msg_contents": "On 11/16/22 02:47, Amit Langote wrote:\n> A customer seems to have run into $subject. Here's a reproducer they shared:\n\n> With the following logged:\n> \n> LOG: server process (PID 121846) was terminated by signal 9: Killed\n\nThat's the Linux OOM killer. Was this running in a container or under \nsystemd with memory.limit_in_bytes set? If so, perhaps they need a \nhigher setting.\n\n\n> The problem seems to be spi_printtup() continuing to allocate memory\n> to expand _SPI_current->tuptable to store the result of crosstab()'s\n> input query that's executed using:\n> \n> /* Retrieve the desired rows */\n> ret = SPI_execute(sql, true, 0);\n> \n> Note that this asks SPI to retrieve and store *all* result rows of the\n> query in _SPI_current->tuptable, and if there happen to be so many\n> rows, as in the case of above example, spi_printtup() ends up asking\n> for a bit too much memory.\n\ncheck\n\n> The easiest fix for this seems to be for crosstab() to use open a\n> cursor (SPI_cursor_open) and fetch the rows in batches\n> (SPI_cursor_fetch) rather than all in one go. I have implemented that\n> in the attached. 
Maybe the patch should address other functions that\n> potentially have the same problem.\n\nSeems reasonable. I didn't look that closely at the patch, but I do \nthink that there needs to be some justification for the selected batch \nsize and/or make it configurable.\n\n> I also wondered about fixing this by making _SPI_current->tuptable use\n> a tuplestore that can spill to disk as its backing store rather than a\n> plain C HeapTuple array, but haven't checked how big of a change that\n> would be; SPI_tuptable is referenced in many places across the tree.\n> Though I suspect that idea has enough merits to give that a try\n> someday.\n\nSeems like a separate patch at the very least\n\n> Thoughts on whether this should be fixed and the fix be back-patched?\n\n-1 on backpatching -- this is not a bug, and the changes are non-trivial\n\nJoe\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Wed, 16 Nov 2022 07:56:24 -0500", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: out of memory in crosstab()" } ]
[ { "msg_contents": "Hi Hackers,\n\nThis is Myo Wai Thant.\nI found out that there is a mistake written in executor/README file.\n\nThe actions of MERGE command can be specified as follows: INSERT, UPDATE, DELETE and DO NOTHING.\nHowever, in the README file, the ‘UPDATE’ word is described 2 times instead of ‘DELETE’.\n\nTherefore, I attached the patch file which fix this word usage.\nIt would be great if you could take a look at it.\n\nThank you.\nBest Regards,\nMyo Wai Thant", "msg_date": "Wed, 16 Nov 2022 08:37:12 +0000", "msg_from": "\"Waithant Myo (Fujitsu)\" <myo.waithant@fujitsu.com>", "msg_from_op": true, "msg_subject": "Fix the README file for MERGE command" }, { "msg_contents": "On Wed, Nov 16, 2022 at 4:37 PM Waithant Myo (Fujitsu) <\nmyo.waithant@fujitsu.com> wrote:\n\n> The actions of MERGE command can be specified as follows: INSERT, UPDATE,\n> DELETE and DO NOTHING.\n>\n> However, in the README file, the ‘UPDATE’ word is described 2 times\n> instead of ‘DELETE’.\n>\n>\n>\n> Therefore, I attached the patch file which fix this word usage.\n>\n> It would be great if you could take a look at it.\n>\n\nApparently this is a typo. Good catch! +1.\n\nThanks\nRichard\n\nOn Wed, Nov 16, 2022 at 4:37 PM Waithant Myo (Fujitsu) <myo.waithant@fujitsu.com> wrote: \nThe actions of MERGE command can be specified as follows: INSERT, UPDATE, DELETE and DO NOTHING.\nHowever, in the README file, the ‘UPDATE’ word is described 2 times instead of ‘DELETE’.\n \nTherefore, I attached the patch file which fix this word usage.\nIt would be great if you could take a look at it. Apparently this is a typo.  Good catch! 
+1.ThanksRichard", "msg_date": "Thu, 17 Nov 2022 09:31:01 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix the README file for MERGE command" }, { "msg_contents": "Hi Richard,\r\n\r\nThank you for your time.\r\n\r\nBest Regards,\r\nMyo Wai Thant\r\nFrom: Richard Guo <guofenglinux@gmail.com>\r\nSent: Thursday, November 17, 2022 10:31 AM\r\nTo: Myo, Waithant/Myo W. <myo.waithant@fujitsu.com>\r\nCc: pgsql-hackers@lists.postgresql.org\r\nSubject: Re: Fix the README file for MERGE command\r\n\r\n\r\nOn Wed, Nov 16, 2022 at 4:37 PM Waithant Myo (Fujitsu) <myo.waithant@fujitsu.com<mailto:myo.waithant@fujitsu.com>> wrote:\r\nThe actions of MERGE command can be specified as follows: INSERT, UPDATE, DELETE and DO NOTHING.\r\nHowever, in the README file, the ‘UPDATE’ word is described 2 times instead of ‘DELETE’.\r\n\r\nTherefore, I attached the patch file which fix this word usage.\r\nIt would be great if you could take a look at it.\r\n\r\nApparently this is a typo. Good catch! +1.\r\n\r\nThanks\r\nRichard\r\n\n\n\n\n\n\n\n\n\nHi Richard,\n \nThank you for your time.\n \nBest Regards,\nMyo Wai Thant\n\nFrom: Richard Guo <guofenglinux@gmail.com>\r\n\nSent: Thursday, November 17, 2022 10:31 AM\nTo: Myo, Waithant/Myo\nW. <myo.waithant@fujitsu.com>\nCc: pgsql-hackers@lists.postgresql.org\nSubject: Re: Fix the README file for MERGE command\n\n \n\n\n \n\n\n\nOn Wed, Nov 16, 2022 at 4:37 PM Waithant Myo (Fujitsu) <myo.waithant@fujitsu.com> wrote: \n\n\n\n\n\nThe actions of MERGE command can be specified as follows: INSERT, UPDATE, DELETE and DO NOTHING.\nHowever, in the README file, the\r\n‘UPDATE’ word is described 2 times instead of\r\n‘DELETE’.\n \nTherefore, I attached the patch file which fix this word usage.\nIt would be great if you could take a look at it.\n\n\n\n\n\n \n\n\nApparently this is a typo.  Good catch! 
+1.\n\r\nThanks\r\nRichard", "msg_date": "Thu, 17 Nov 2022 05:16:47 +0000", "msg_from": "\"Waithant Myo (Fujitsu)\" <myo.waithant@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Fix the README file for MERGE command" }, { "msg_contents": "> On 17 Nov 2022, at 02:31, Richard Guo <guofenglinux@gmail.com> wrote:\n> On Wed, Nov 16, 2022 at 4:37 PM Waithant Myo (Fujitsu) <myo.waithant@fujitsu.com <mailto:myo.waithant@fujitsu.com>> wrote: \n> Therefore, I attached the patch file which fix this word usage.\n> \n> It would be great if you could take a look at it.\n> \n> Apparently this is a typo. Good catch! +1.\n\n\nAgreed. I've applied this down to v15 where MERGE was introduced. Thanks!\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 17 Nov 2022 10:12:34 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Fix the README file for MERGE command" } ]
[ { "msg_contents": "Hi All,\nFunctions like lappend_*() in list.c do not modify the second\nargument. So it can be qualified as const. Any reason why we don't do\nthat? Is it because the target pointer ptr_value is not const\nqualified?\n\nIn my code, I am using lappend() and passing it the output of\npq_getmsgstring() which returns const char *. The list is used to just\ncollect these pointers to be scanned later a few times within the same\nfunction. So there is no possibility of freeing or changing area\nwithin the StringInfo. So the coding practice though questionable, is\nsafe and avoids unnecessary pallocs. But SonarQube does complain about\nit.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Wed, 16 Nov 2022 16:04:54 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "const qualifier for list APIs" }, { "msg_contents": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> writes:\n> Functions like lappend_*() in list.c do not modify the second\n> argument. So it can be qualified as const. Any reason why we don't do\n> that? Is it because the target pointer ptr_value is not const\n> qualified?\n\nIt would be a lie in many (most?) cases, wherever somebody later pulls the\npointer out of the list without applying \"const\" to it. So I can't see\nthat adding \"const\" there would be an improvement.\n\n> So the coding practice though questionable, is\n> safe and avoids unnecessary pallocs. But SonarQube does complain about\n> it.\n\nMaybe an explicit cast to (void *) would shut it up.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 16 Nov 2022 09:45:48 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: const qualifier for list APIs" } ]
[ { "msg_contents": "Hi!\n\n\nPROBLEM\n\nOur customer stumble onto the next behaviour of the Postgres cluster: if\ndisk space is exhausted, Postgres continues to\nwork until WAL can be successfully written. Thus, upon disk space\nexhaustion, clients will get an “ERROR: could not\nextend file “base/XXXXX/XXXXX”: No space left on device” messages and\ntransactions will be aborted. But the cluster\ncontinues to work for a quite some time.\n\nThis behaviour of the PostgreSQL, of course, is perfectly legit. Cluster\njust translate OS error to the user and can do\nnothing about it, expecting space may be available later.\n\nOn the other hand, users continues to send more data and having more and\nmore transactions to be aborted.\n\nThere are next possible ways to diagnose described situation:\n —external monitoring system;\n —log analysis;\n —create/drop table and analyse results.\n\nEach one have advantages and disadvantages. I'm not going to dive deeper\nhere, if you don't mind.\n\nThe customer, mentioned above, in this particular case, would be glad to be\nable to have a mechanism to stop the cluster.\nAgain, in this concrete case.\n\n\nPROPOSAL\n\nMy proposal is to add a tablespace option in order to be able to configure\nwhich behaviour is appropriate for a\nparticular user. I've decided to call this option “on_no_space” for now. If\nanyone has a better naming for this feature,\nplease, report.\n\nSo, the idea is to add both GUC and tablespace option “on_no_space”. The\ntablespace option defines the behaviour of the\ncluster for a particular tablespace in “on_no_space” situation. 
The GUC\ndefines the default value of tablespace option.\n\nPatch is posted as PoC is attached.\n\nHere's what it looks like:\n===============================================================================================\n== Create 100Mb disk\n$ dd if=/dev/zero of=/tmp/foo.img bs=100M count=1\n$ mkfs.ext4 /tmp/foo.img\n$ mkdir /tmp/foo\n$ sudo mount -t ext4 -o loop /tmp/foo.img /tmp/foo\n$ sudo chown -R orlov:orlov /tmp/foo\n===============================================================================================\n== Into psql\npostgres=# CREATE TABLESPACE foo LOCATION '/tmp/foo' WITH\n(on_no_space=fatal);\nCREATE TABLESPACE\npostgres=# \\db+\n List of tablespaces\n Name | Owner | Location | Access privileges | Options |\n Size | Description\n------------+-------+----------+-------------------+---------------------+---------+-------------\n foo | orlov | /tmp/foo | | {on_no_space=fatal} |\n0 bytes |\n...\n\npostgres=# CREATE TABLE bar(qux int, quux text) WITH (autovacuum_enabled =\nfalse) TABLESPACE foo;\nCREATE TABLE\npostgres=# INSERT INTO bar(qux, quux) SELECT id, md5(id::text) FROM\ngenerate_series(1, 10000000) AS id;\nFATAL: could not extend file \"pg_tblspc/16384/PG_16_202211121/5/16385\": No\nspace left on device\nHINT: Check free disk space.\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. Attempting reset: Succeeded.\n===============================================================================================\n\n\nCAVEATS\n\nAgain, I've posted this patch as a PoC. This is not a complete realization\nof described functionality. 
AFAICS, there are\nnext problems:\n - I have to put get_tablespace_elevel call in RelationGetBufferForTuple in\norder to tablespace in cache; overwise,\n cache miss in get_tablespace triggers assertion failing in lock.c:887\n(Assert(\"!IsRelationExtensionLockHeld\")).\n This assertion was added by commit 15ef6ff4 (see [0] for details).\n - What error should be when mdextend called not to insert a tuple into a\nheap (WAL applying, for example)?\n\nMaybe, adding just GUC without ability to customize certain tablespaces to\ndefine \"out of disk space\" behaviour is enough?\nI would appreciate it if you give your opinions on a subject.\n\n-- \nBest regards,\nMaxim Orlov.\n\n[0]\nhttps://www.postgresql.org/message-id/flat/CAD21AoCmT3cFQUN4aVvzy5chw7DuzXrJCbrjTU05B%2BSs%3DGn1LA%40mail.gmail.com", "msg_date": "Wed, 16 Nov 2022 15:59:09 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": true, "msg_subject": "[PoC] configurable out of disk space elog level" }, { "msg_contents": "Hi, Maxim!\n> My proposal is to add a tablespace option in order to be able to configure which behaviour is appropriate for a\n> particular user. I've decided to call this option “on_no_space” for now. If anyone has a better naming for this feature,\n> please, report.\n>\n> So, the idea is to add both GUC and tablespace option “on_no_space”. The tablespace option defines the behaviour of the\n> cluster for a particular tablespace in “on_no_space” situation. 
The GUC defines the default value of tablespace option.\n\nI suppose there can be a kind of attack with this feature i.e.\n\n- If someone already has his own tablespace he can do:\nALTER TABLESPACE my SET on_no_space=fatal; // This needs tablespace\nownership, not superuser permission.\n- Then fill up his own db with garbage to fill his tablespace.\n- Then all the db cluster will go fatal, even if the other users'\ntablespaces are almost free.\n\nIf this can be avoided, I think the patch can be useful.\n\nRegards,\nPavel Borisov.\n\n\n", "msg_date": "Wed, 16 Nov 2022 17:35:50 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] configurable out of disk space elog level" }, { "msg_contents": "Hi,\n\nOn 2022-11-16 15:59:09 +0300, Maxim Orlov wrote:\n> Patch is posted as PoC is attached.\n\n> --- a/src/backend/storage/smgr/md.c\n> +++ b/src/backend/storage/smgr/md.c\n> @@ -40,6 +40,7 @@\n> #include \"storage/sync.h\"\n> #include \"utils/hsearch.h\"\n> #include \"utils/memutils.h\"\n> +#include \"utils/spccache.h\"\n> \n> /*\n> *\tThe magnetic disk storage manager keeps track of open file\n> @@ -479,14 +480,16 @@ mdextend(SMgrRelation reln, ForkNumber forknum, BlockNumber blocknum,\n> \n> \tif ((nbytes = FileWrite(v->mdfd_vfd, buffer, BLCKSZ, seekpos, WAIT_EVENT_DATA_FILE_EXTEND)) != BLCKSZ)\n> \t{\n> +\t\tint elevel = get_tablespace_elevel(reln->smgr_rlocator.locator.spcOid);\n> +\n\nYou can't do catalog access below the bufmgr.c layer. It could lead to all\nkinds of nastiness, including potentially recursing back to md.c. Even leaving\nthat aside, we can't do catalog accesses in all kinds of environments that\nthis currently is active in - most importantly it's affecting the startup\nprocess. 
We don't do catalog accesses in the startup process, and even if we\nwere to do so, we couldn't unconditionally because the catalog might not even\nbe consistent at this point (nor is it guaranteed that the wal_level even\nallows to access catalogs during recovery).\n\nI'm not convinced by the usecase in the first place, but purely technically I\nthink it's not viable to make this a tablespace option.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 16 Nov 2022 09:41:10 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PoC] configurable out of disk space elog level" }, { "msg_contents": "On Wed, 16 Nov 2022 at 20:41, Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n> You can't do catalog access below the bufmgr.c layer. It could lead to all\n> kinds of nastiness, including potentially recursing back to md.c. Even\n> leaving\n>\nYep, this is my biggest concern. It turns out, that the way to make such a\nfeature is to use just GUC for all tablespaces or\nforward elevel \"from above\".\n\n\n> that aside, we can't do catalog accesses in all kinds of environments that\n> this currently is active in - most importantly it's affecting the startup\n> process. We don't do catalog accesses in the startup process, and even if\n> we\n> were to do so, we couldn't unconditionally because the catalog might not\n> even\n> be consistent at this point (nor is it guaranteed that the wal_level even\n> allows to access catalogs during recovery).\n>\nYep, that is why I do use in get_tablespace_elevel:\n+ /*\n+ * Use GUC level only in normal mode.\n+ */\n+ if (!IsNormalProcessingMode())\n+ return ERROR;\n\nAnyway, I appreciate the opinion, thank you!\n\n-- \nBest regards,\nMaxim Orlov.\n\nOn Wed, 16 Nov 2022 at 20:41, Andres Freund <andres@anarazel.de> wrote:Hi,\nYou can't do catalog access below the bufmgr.c layer. It could lead to all\nkinds of nastiness, including potentially recursing back to md.c. 
Even leavingYep, this is my biggest concern. It turns out, that the way to make such a feature is to use just GUC for all tablespaces or forward elevel \"from above\".  \nthat aside, we can't do catalog accesses in all kinds of environments that\nthis currently is active in - most importantly it's affecting the startup\nprocess. We don't do catalog accesses in the startup process, and even if we\nwere to do so, we couldn't unconditionally because the catalog might not even\nbe consistent at this point (nor is it guaranteed that the wal_level even\nallows to access catalogs during recovery).Yep, that is why I do use in get_tablespace_elevel:+       /*+        * Use GUC level only in normal mode.+        */+       if (!IsNormalProcessingMode())+               return ERROR;Anyway, I appreciate the opinion, thank you!-- Best regards,Maxim Orlov.", "msg_date": "Thu, 17 Nov 2022 14:40:02 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PoC] configurable out of disk space elog level" }, { "msg_contents": "On 2022-11-17 14:40:02 +0300, Maxim Orlov wrote:\n> On Wed, 16 Nov 2022 at 20:41, Andres Freund <andres@anarazel.de> wrote:\n> > that aside, we can't do catalog accesses in all kinds of environments that\n> > this currently is active in - most importantly it's affecting the startup\n> > process. 
We don't do catalog accesses in the startup process, and even if\n> > we\n> > were to do so, we couldn't unconditionally because the catalog might not\n> > even\n> > be consistent at this point (nor is it guaranteed that the wal_level even\n> > allows to access catalogs during recovery).\n> >\n> Yep, that is why I do use in get_tablespace_elevel:\n> + /*\n> + * Use GUC level only in normal mode.\n> + */\n> + if (!IsNormalProcessingMode())\n> + return ERROR;\n> \n> Anyway, I appreciate the opinion, thank you!\n\nThe startup process is in normal processing mode.\n\n\n", "msg_date": "Thu, 17 Nov 2022 07:45:20 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PoC] configurable out of disk space elog level" }, { "msg_contents": "On Wed, Nov 16, 2022 at 7:59 AM Maxim Orlov <orlovmg@gmail.com> wrote:\n> The customer, mentioned above, in this particular case, would be glad to be able to have a mechanism to stop the cluster.\n> Again, in this concrete case.\n>\n> My proposal is to add a tablespace option in order to be able to configure which behaviour is appropriate for a\n> particular user. I've decided to call this option “on_no_space” for now. If anyone has a better naming for this feature,\n> please, report.\n\nI don't think this is a good feature to add to PostgreSQL. First, it's\nunclear why stopping the cluster is a desirable behavior. It doesn't\nstop the user transactions from failing; it just makes them fail in\nsome other way. Now it is of course perfectly legitimate for a\nparticular user to want that particular behavior anyway, but there are\na bunch of other things that a user could equally legitimately want to\ndo, like page the DBA or trigger a failover or kill off sessions that\nare using large temporary relations or whatever. And, equally, there\nare many other operating system errors to which a user could want the\ndatabase system to respond in similar ways. 
For example, someone might\nwant any given one of those treatments when an I/O error occurs\nwriting to the data directory, or a read-only filesystem error, or a\npermission denied error.\n\nHaving a switch for one particular kind of error (out of many that\ncould possibly occur) that triggers one particular coping strategy\n(out of many that could possibly be used) seems far too specific a\nthing to add as a core feature. And even if we had something much more\ngeneral, I'm not sure why that should go into the database rather than\nbeing implemented outside it. After all, nothing at all prevents the\nuser from scanning the database logs for \"out of space\" errors and\nshutting down the database if any are found. While you're at it, you\ncould make your monitoring script also check the free space on the\nrelevant partition using statfs() and page somebody if the utilization\ngoes above 95% or whatever threshold you like, which would probably\navoid service outages much more effectively than $SUBJECT.\n\nI just can't see much real benefit in putting this logic inside the database.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 17 Nov 2022 14:55:28 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PoC] configurable out of disk space elog level" }, { "msg_contents": "> I don't think this is a good feature to add to PostgreSQL. First, it's\n> unclear why stopping the cluster is a desirable behavior. It doesn't\n> stop the user transactions from failing; it just makes them fail in\n> some other way. Now it is of course perfectly legitimate for a\n> particular user to want that particular behavior anyway, but there are\n> a bunch of other things that a user could equally legitimately want to\n> do, like page the DBA or trigger a failover or kill off sessions that\n> are using large temporary relations or whatever. 
And, equally, there\n> are many other operating system errors to which a user could want the\n> database system to respond in similar ways. For example, someone might\n> want any given one of those treatments when an I/O error occurs\n> writing to the data directory, or a read-only filesystem error, or a\n> permission denied error.\n>\n> Having a switch for one particular kind of error (out of many that\n> could possibly occur) that triggers one particular coping strategy\n> (out of many that could possibly be used) seems far too specific a\n> thing to add as a core feature. And even if we had something much more\n> general, I'm not sure why that should go into the database rather than\n> being implemented outside it. After all, nothing at all prevents the\n> user from scanning the database logs for \"out of space\" errors and\n> shutting down the database if any are found.\n\n\nYes, you are right. Actually, this is one of possible ways to deal with\ndescribed situation I\nmentioned above. And if I would deal with such a task, I would make it via\nlog monitoring.\nThe question was: \"could we be more general here?\". Probably, not.\n\n\n> While you're at it, you\n> could make your monitoring script also check the free space on the\n> relevant partition using statfs() and page somebody if the utilization\n> goes above 95% or whatever threshold you like, which would probably\n> avoid service outages much more effectively than $SUBJECT.\n>\n> I just can't see much real benefit in putting this logic inside the\n> database.\n>\n\nOK, I got it. Thanks for your thoughts!\n\n-- \nBest regards,\nMaxim Orlov.\n\n\nI don't think this is a good feature to add to PostgreSQL. First, it's\nunclear why stopping the cluster is a desirable behavior. It doesn't\nstop the user transactions from failing; it just makes them fail in\nsome other way. 
Now it is of course perfectly legitimate for a\nparticular user to want that particular behavior anyway, but there are\na bunch of other things that a user could equally legitimately want to\ndo, like page the DBA or trigger a failover or kill off sessions that\nare using large temporary relations or whatever. And, equally, there\nare many other operating system errors to which a user could want the\ndatabase system to respond in similar ways. For example, someone might\nwant any given one of those treatments when an I/O error occurs\nwriting to the data directory, or a read-only filesystem error, or a\npermission denied error.\n\nHaving a switch for one particular kind of error (out of many that\ncould possibly occur) that triggers one particular coping strategy\n(out of many that could possibly be used) seems far too specific a\nthing to add as a core feature. And even if we had something much more\ngeneral, I'm not sure why that should go into the database rather than\nbeing implemented outside it. After all, nothing at all prevents the\nuser from scanning the database logs for \"out of space\" errors and\nshutting down the database if any are found.  Yes, you are right. Actually, this is one of possible ways to deal with described situation Imentioned above. And if I would deal with such a task, I would make it via log monitoring.The question was: \"could we be more general here?\". Probably, not. While you're at it, you\ncould make your monitoring script also check the free space on the\nrelevant partition using statfs() and page somebody if the utilization\ngoes above 95% or whatever threshold you like, which would probably\navoid service outages much more effectively than $SUBJECT.\n\nI just can't see much real benefit in putting this logic inside the database.OK, I got it. Thanks for your thoughts! 
-- Best regards,Maxim Orlov.", "msg_date": "Fri, 18 Nov 2022 18:16:04 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PoC] configurable out of disk space elog level" }, { "msg_contents": "On Thu, 17 Nov 2022 at 14:56, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> Having a switch for one particular kind of error (out of many that\n> could possibly occur) that triggers one particular coping strategy\n> (out of many that could possibly be used) seems far too specific a\n> thing to add as a core feature. And even if we had something much more\n> general, I'm not sure why that should go into the database rather than\n> being implemented outside it. After all, nothing at all prevents the\n> user from scanning the database logs for \"out of space\" errors and\n> shutting down the database if any are found. While you're at it, you\n> could make your monitoring script also check the free space on the\n> relevant partition using statfs() and page somebody if the utilization\n> goes above 95% or whatever threshold you like, which would probably\n> avoid service outages much more effectively than $SUBJECT.\n\nI have often thought we report a lot of errors to the user as\ntransaction errors that database admins are often going to feel they\nwould rather treat as system-wide errors. Often the error the user\nsees seems like a very low level error with no context that they can't\ndo anythign about. This seems to be an example of that.\n\nI don't really have a good solution for it but I do think most users\nwould rather deal with these errors at a higher level than individual\nqueries from individual clients. Out of disk space, hardware errors,\nout of memory, etc they would rather handle in one centralized place\nas a global condition. 
You can work around that with\nmiddleware/libraries/monitoring but it's kind of working around the\ndatabase giving you the information at the wrong time and place for\nyour needs.\n\n\n", "msg_date": "Tue, 22 Nov 2022 22:50:19 -0500", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: [PoC] configurable out of disk space elog level" } ]
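The external-monitoring approach discussed in the thread above (scan the logs, check free space on the relevant partition, and page somebody above a threshold) can be sketched in C. This is only an illustration of the idea, not part of any proposed patch: the use of statvfs() instead of statfs(), the reliance on f_bavail, the threshold handling, and the function names are all assumptions made here.

```c
#include <sys/statvfs.h>

/*
 * Percentage of the filesystem containing "path" that is in use, or
 * -1.0 on error.  f_bavail (blocks available to unprivileged users) is
 * used so the result matches what a non-root monitoring script sees.
 */
static double
fs_used_percent(const char *path)
{
    struct statvfs sv;

    if (statvfs(path, &sv) != 0 || sv.f_blocks == 0)
        return -1.0;
    return 100.0 * (1.0 - (double) sv.f_bavail / (double) sv.f_blocks);
}

/*
 * True if utilization of the filesystem under "path" exceeds
 * "threshold" percent, i.e. the point at which a monitoring script
 * might page the DBA, trigger a failover, or stop the cluster.
 */
static int
fs_over_threshold(const char *path, double threshold)
{
    double  used = fs_used_percent(path);

    return used >= 0.0 && used > threshold;
}
```

A cron job calling, say, fs_over_threshold("/var/lib/postgresql", 95.0) covers the out-of-space case (and any threshold the user likes) without a new core GUC, which is the point made in the thread.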
[ { "msg_contents": "Hello!\n\nThe previous discussion was here:\nhttps://www.postgresql.org/message-id/flat/b570c367-ba38-95f3-f62d-5f59b9808226%40inbox.ru\n\n>On 15.11.2022 04:59, Tom Lane wrote:\n>> \"Anton A. Melnikov\" <aamelnikov@inbox.ru> writes:\n>> \n>> Additionally\n>> i've tried to reduce overall number of nodes previously\n>> used in this test in a similar way.\n> \n> Optimizing existing tests isn't an answer to that. We could\n> install those optimizations without adding a new test case.\n\nHere is a separate patch for the node usage optimization mentioned above.\nIt decreases the CPU usage during 100_bugs.pl by about 30%.\n\nThere are also some experimental data: 100_bugs-CPU-usage.txt\n\n\nWould be glad for any comments and concerns.\n\nWith best regards,\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Wed, 16 Nov 2022 17:41:16 +0300", "msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>", "msg_from_op": true, "msg_subject": "Make a 100_bugs.pl test more faster." }, { "msg_contents": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru> writes:\n> Here is a separate patch for the node usage optimization mentioned above.\n> It decreases the CPU usage during 100_bugs.pl by about 30%.\n\nHmm ... as written, this isn't testing the same thing, because you\ndidn't disable the FOR ALL TABLES publications created in the earlier\nsteps, so we're redundantly syncing more publications in the later\nones. Cleaning those up seems to make it a little faster yet,\nso pushed with that adjustment.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 16 Nov 2022 12:37:46 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Make a 100_bugs.pl test more faster." } ]
[ { "msg_contents": "Since 2fe3bdbd691a, initdb has been failing on malleefowl:\n\nperforming post-bootstrap initialization ... sh: locale: not found\n2022-11-15 23:48:44.288 EST [10436] FATAL: could not execute command \n\"locale -a\": command not found\n2022-11-15 23:48:44.288 EST [10436] STATEMENT: SELECT \npg_import_system_collations('pg_catalog');\n\nThat's precisely the kind of thing this patch was supposed to catch, but \nobviously it's not good that initdb is now failing.\n\nFirst of all, is this a standard installation of this OS, or is perhaps \nsomething incomplete, broken, or unusual about the current OS installation?\n\n\n", "msg_date": "Wed, 16 Nov 2022 19:59:09 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "locale -a missing on Alpine Linux?" }, { "msg_contents": "## Peter Eisentraut (peter.eisentraut@enterprisedb.com):\n\n> First of all, is this a standard installation of this OS, or is perhaps \n> something incomplete, broken, or unusual about the current OS installation?\n\nAlpine uses musl libc, on which you need package musl-locales to get\na /usr/bin/locale.\nhttps://pkgs.alpinelinux.org/package/edge/community/x86/musl-locales\nhttps://musl.libc.org/\n\nRegards,\nChristoph\n\n-- \nSpare Space\n\n\n", "msg_date": "Wed, 16 Nov 2022 20:21:20 +0100", "msg_from": "Christoph Moench-Tegeder <cmt@burggraben.net>", "msg_from_op": false, "msg_subject": "Re: locale -a missing on Alpine Linux?" }, { "msg_contents": "Christoph Moench-Tegeder <cmt@burggraben.net> writes:\n> ## Peter Eisentraut (peter.eisentraut@enterprisedb.com):\n>> First of all, is this a standard installation of this OS, or is perhaps \n>> something incomplete, broken, or unusual about the current OS installation?\n\n> Alpine uses musl libc, on which you need package musl-locales to get\n> a /usr/bin/locale.\n> https://pkgs.alpinelinux.org/package/edge/community/x86/musl-locales\n\nAh. 
And that also shows that if you didn't install that package,\nyou don't have any locales either, except presumably C/POSIX.\n\nSo probably we should treat failure of the locale command as okay\nand just press on with no non-built-in locales.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 16 Nov 2022 14:25:28 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: locale -a missing on Alpine Linux?" }, { "msg_contents": "On 16.11.22 20:25, Tom Lane wrote:\n> Christoph Moench-Tegeder <cmt@burggraben.net> writes:\n>> ## Peter Eisentraut (peter.eisentraut@enterprisedb.com):\n>>> First of all, is this a standard installation of this OS, or is perhaps\n>>> something incomplete, broken, or unusual about the current OS installation?\n> \n>> Alpine uses musl libc, on which you need package musl-locales to get\n>> a /usr/bin/locale.\n>> https://pkgs.alpinelinux.org/package/edge/community/x86/musl-locales\n> \n> Ah. And that also shows that if you didn't install that package,\n> you don't have any locales either, except presumably C/POSIX.\n> \n> So probably we should treat failure of the locale command as okay\n> and just press on with no non-built-in locales.\n\nThat's basically what we had before, so I have just reverted that part \nof my original patch.\n\n\n\n", "msg_date": "Thu, 17 Nov 2022 12:21:09 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: locale -a missing on Alpine Linux?" 
}, { "msg_contents": "\nOn 2022-11-16 We 14:25, Tom Lane wrote:\n> Christoph Moench-Tegeder <cmt@burggraben.net> writes:\n>> ## Peter Eisentraut (peter.eisentraut@enterprisedb.com):\n>>> First of all, is this a standard installation of this OS, or is perhaps \n>>> something incomplete, broken, or unusual about the current OS installation?\n>> Alpine uses musl libc, on which you need package musl-locales to get\n>> a /usr/bin/locale.\n>> https://pkgs.alpinelinux.org/package/edge/community/x86/musl-locales\n> Ah. And that also shows that if you didn't install that package,\n> you don't have any locales either, except presumably C/POSIX.\n>\n> So probably we should treat failure of the locale command as okay\n> and just press on with no non-built-in locales.\n\n\nmalleefowl is a docker instance (docker images are mostly what Alpine is\nused for). It would be extremely easy to recreate the image and add in\nmusl-locales, but maybe we should just leave it as it is to test the\ncase where the command isn't available.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 17 Nov 2022 08:46:21 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: locale -a missing on Alpine Linux?" 
[ { "msg_contents": "Hi Robert,\n\nIn 9c08aea6a you introduce the block-by-block strategy for creating a\ncopy of the database. In the main loop, this utilizes this call:\n\nbuf = ReadBufferWithoutRelcache(rlocator, MAIN_FORKNUM, blkno,\nRBM_NORMAL, bstrategy, false);\n\nHere, the last parameter is \"false\" for the permanence factor of this\nrelation. Since we know that pg_class is in fact a permanent\nrelation, this ends up causing issues for the TDE patches that I am\nworking on updating, due using the opposite value when calculating the\npage's IV and thus failing the decryption when trying to create a\ndatabase based on template0.\n\nIs there a reason why this needs to be \"false\" here? I recognize that\nthis routine is accessing the table outside of a formal connection, so\nthere might be more subtle effects that I am not aware of. If so this\nshould be documented. If it's an oversight, I think we should change\nto be \"true\" to match the actual permanence state of the relation.\n\nI did test changing it to true and didn't notice any adverse effects\nin `make installcheck-world`, but let me know if there is more to this\nstory than meets the eye.\n\n(I did review the original discussion at\nhttps://www.postgresql.org/message-id/flat/CA%2BTgmoYtcdxBjLh31DLxUXHxFVMPGzrU5_T%3DCYCvRyFHywSBUQ%40mail.gmail.com\nand did not notice any discussion of this specific parameter choice.)\n\nThanks,\n\nDavid\n\n\n", "msg_date": "Wed, 16 Nov 2022 15:16:51 -0600", "msg_from": "David Christensen <david.christensen@crunchydata.com>", "msg_from_op": true, "msg_subject": "ScanSourceDatabasePgClass" } ]
[ { "msg_contents": "Hi,\n\nI am working on polishing my patch to make CI use sanitizers. Unfortunately\nusing -fsanitize=alignment,undefined causes tests to fail on 32bit builds.\n\nhttps://cirrus-ci.com/task/5092504471601152\nhttps://api.cirrus-ci.com/v1/artifact/task/5092504471601152/testrun/build-32/testrun/recovery/022_crash_temp_files/log/022_crash_temp_files_node_crash.log\n\n../src/backend/storage/lmgr/proc.c:1173:2: runtime error: member access within misaligned address 0xf4019e54 for type 'struct PGPROC', which requires 8 byte alignment\n0xf4019e54: note: pointer points here\n e0 0d 09 f4 54 9e 01 f4 54 9e 01 f4 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00\n ^\n==65203==Using libbacktrace symbolizer.\n #0 0x57076f46 in ProcSleep ../src/backend/storage/lmgr/proc.c:1173\n #1 0x57054cf7 in WaitOnLock ../src/backend/storage/lmgr/lock.c:1859\n #2 0x57058e4f in LockAcquireExtended ../src/backend/storage/lmgr/lock.c:1101\n #3 0x57058f82 in LockAcquire ../src/backend/storage/lmgr/lock.c:752\n #4 0x57051bb8 in XactLockTableWait ../src/backend/storage/lmgr/lmgr.c:702\n #5 0x569c31b3 in _bt_doinsert ../src/backend/access/nbtree/nbtinsert.c:225\n #6 0x569cff09 in btinsert ../src/backend/access/nbtree/nbtree.c:200\n #7 0x569ac19d in index_insert ../src/backend/access/index/indexam.c:193\n #8 0x56c72af6 in ExecInsertIndexTuples ../src/backend/executor/execIndexing.c:416\n #9 0x56d014c7 in ExecInsert ../src/backend/executor/nodeModifyTable.c:1065\n...\n\n\nI can reproduce this locally.\n\nAt first I thought the problem was caused by:\n46d6e5f5679 Display the time when the process started waiting for the lock, in pg_locks, take 2\n\nas pg_atomic_uint64 is 8 byte aligned on x86 - otherwise one gets into\nterrible terrible performance territory because atomics can be split across\ncachelines - but 46d6e5f5679 didn't teach ProcGlobalShmemSize() /\nInitProcGlobal() that allocations need to be aligned to a larger\nsize. 
However, we've made ShmemAllocRaw() use cacheline alignment, which\nshould suffice. And indeed - ProcGlobal->allProcs is aligned correctly, and\nsizeof(PGPROC) % 8 == 0. It doesn't seem great to rely on that, but ...\n\n\nPrinting out *proc in proc.c:1173 seems indicates clearly that it's not a\nvalid proc for some reason.\n\n(gdb) p myHeldLocks\n$26 = 0\n(gdb) p lock->waitProcs\n$27 = {links = {prev = 0xf33c4b5c, next = 0xf33c4b5c}, size = 0}\n(gdb) p &(waitQueue->links)\n$29 = (SHM_QUEUE *) 0xf33c4b5c\n(gdb) p proc\n$30 = (PGPROC *) 0xf33c4b5c\n\nAfaict the problem is that\n\t\tproc = (PGPROC *) &(waitQueue->links);\n\nis a gross gross hack - this isn't actually a PGPROC, it's pointing to an\nSHM_QUEUE, but *not* one embedded in PGPROC. It kinda works because ->links\nis at offset 0 in PGPROC, which means that\n\tSHMQueueInsertBefore(&(proc->links), &(MyProc->links));\nwill turn &proc->links back into waitQueue->links. Which we then can enqueue\nagain.\n\nI don't see the point of this hack, even leaving ubsan's valid complaints\naside. Why bother having this, sometimes, fake PGPROC pointer when we could\njust use a SHM_QUEUE* to determine the insertion point?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 16 Nov 2022 17:42:30 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "ubsan fails on 32bit builds" }, { "msg_contents": "Hi,\n\nOn 2022-11-16 17:42:30 -0800, Andres Freund wrote:\n> Afaict the problem is that\n> \t\tproc = (PGPROC *) &(waitQueue->links);\n> \n> is a gross gross hack - this isn't actually a PGPROC, it's pointing to an\n> SHM_QUEUE, but *not* one embedded in PGPROC. It kinda works because ->links\n> is at offset 0 in PGPROC, which means that\n> \tSHMQueueInsertBefore(&(proc->links), &(MyProc->links));\n> will turn &proc->links back into waitQueue->links. Which we then can enqueue\n> again.\n> \n> I don't see the point of this hack, even leaving ubsan's valid complaints\n> aside. 
Why bother having this, sometimes, fake PGPROC pointer when we could\n> just use a SHM_QUEUE* to determine the insertion point?\n\nAs done in the attached patch. With this ubsan passes both on 32bit and 64bit.\n\nGreetings,\n\nAndres Freund", "msg_date": "Wed, 16 Nov 2022 23:28:21 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: ubsan fails on 32bit builds" }, { "msg_contents": "On Wed, Nov 16, 2022 at 8:42 PM Andres Freund <andres@anarazel.de> wrote:\n> Afaict the problem is that\n> proc = (PGPROC *) &(waitQueue->links);\n>\n> is a gross gross hack - this isn't actually a PGPROC, it's pointing to an\n> SHM_QUEUE, but *not* one embedded in PGPROC. It kinda works because ->links\n> is at offset 0 in PGPROC, which means that\n> SHMQueueInsertBefore(&(proc->links), &(MyProc->links));\n> will turn &proc->links back into waitQueue->links. Which we then can enqueue\n> again.\n\nNot that I object to a targeted fix, but it's been 10 years since\nslist and dlist were committed, and we really ought to eliminate\nSHM_QUEUE entirely in favor of using those. It's basically an\nopen-coded implementation of something for which we now have a\ntoolkit. Not that it's impossible to make this kind of mistake with a\ntoolkit, but in general open-coding the same logic in multiple places\nincreases the risk of bugs.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 17 Nov 2022 14:20:47 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ubsan fails on 32bit builds" }, { "msg_contents": "Hi,\n\nOn 2022-11-17 14:20:47 -0500, Robert Haas wrote:\n> On Wed, Nov 16, 2022 at 8:42 PM Andres Freund <andres@anarazel.de> wrote:\n> > Afaict the problem is that\n> > proc = (PGPROC *) &(waitQueue->links);\n> >\n> > is a gross gross hack - this isn't actually a PGPROC, it's pointing to an\n> > SHM_QUEUE, but *not* one embedded in PGPROC. 
It kinda works because ->links\n> > is at offset 0 in PGPROC, which means that\n> > SHMQueueInsertBefore(&(proc->links), &(MyProc->links));\n> > will turn &proc->links back into waitQueue->links. Which we then can enqueue\n> > again.\n> \n> Not that I object to a targeted fix\n\nShould we backpatch this fix? Likely this doesn't cause active breakage\noutside of 32bit builds under ubsan, but that's not an unreasonable thing to\nwant to do in the backbranches.\n\n\n> but it's been 10 years since\n> slist and dlist were committed, and we really ought to eliminate\n> SHM_QUEUE entirely in favor of using those. It's basically an\n> open-coded implementation of something for which we now have a\n> toolkit. Not that it's impossible to make this kind of mistake with a\n> toolkit, but in general open-coding the same logic in multiple places\n> increases the risk of bugs.\n\nAgreed. I had started on a set of patches for some of the SHM_QUEUE uses, but\nsomehow we ended up a bit stuck on the naming of dlist_delete variant that\nafterwards zeroes next/prev so we can replace SHMQueueIsDetached() uses.\n\nShould probably find and rebase those patches...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 17 Nov 2022 12:13:04 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: ubsan fails on 32bit builds" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-11-17 14:20:47 -0500, Robert Haas wrote:\n>> Not that I object to a targeted fix\n\n> Should we backpatch this fix? Likely this doesn't cause active breakage\n> outside of 32bit builds under ubsan, but that's not an unreasonable thing to\n> want to do in the backbranches.\n\n+1 for backpatching what you showed.\n\n>> but it's been 10 years since\n>> slist and dlist were committed, and we really ought to eliminate\n>> SHM_QUEUE entirely in favor of using those.\n\n> Agreed. 
I had started on a set of patches for some of the SHM_QUEUE uses, but\n> somehow we ended up a bit stuck on the naming of dlist_delete variant that\n> afterwards zeroes next/prev so we can replace SHMQueueIsDetached() uses.\n\nAlso +1, but of course for HEAD only.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 17 Nov 2022 15:15:04 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ubsan fails on 32bit builds" }, { "msg_contents": "On Fri, Nov 18, 2022 at 9:13 AM Andres Freund <andres@anarazel.de> wrote:\n> Agreed. I had started on a set of patches for some of the SHM_QUEUE uses, but\n> somehow we ended up a bit stuck on the naming of dlist_delete variant that\n> afterwards zeroes next/prev so we can replace SHMQueueIsDetached() uses.\n>\n> Should probably find and rebase those patches...\n\nhttps://www.postgresql.org/message-id/flat/20200211042229.msv23badgqljrdg2%40alap3.anarazel.de\n\n\n", "msg_date": "Fri, 18 Nov 2022 10:13:09 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ubsan fails on 32bit builds" } ]
[ { "msg_contents": "Hi.\n\nPG13+. Assume we have two identical queries with no arguments (as a plain\ntext, e.g. passed to PQexec - NOT to PQexecParams!):\n\n- one with \"a=X AND b IN(...)\"\n- and one with \"a=X and b=ANY('{...}')\n\nThe question: is it guaranteed that the planner will always choose\nidentical plans for them (or, at least, the plan for ANY will not match an\nexisting index worse than the plan with IN)? In my experiments it shows\nthat the answer is \"yes\", but I don't know the PG internals to make sure\nthat it's true in ALL situations.\n\nAssumptions:\n\n- The number of values in IN/ANY is of medium cardinality (say, 10-100\nvalues)\n- Again, all those values are static; no parameters are involved; plain\nsimple SQL as a text\n- There is also another column \"a\" which is compared against a constant\n(\"a\" is of 1000x lower cardinality than \"b\" to make it interesting), and an\nindex on (a, b)\n\nExample:\n\ncreate table test(a bigint, b bigint);\ncreate index test_idx on test(a, b);\n\ntruncate test;\ninsert into test(a, b) select round(s/10000), s from generate_series(1,\n1000000) s;\n\n# explain analyze select * from test where *a=10 and b in(1,2,3)*;\n----------------------------------------------------------------------------\n Index Only Scan using test_idx on test (cost=0.42..13.31 rows=1 width=16)\n Index Cond: ((a = 10) AND (b = ANY ('{1,2,3}'::bigint[])))\n\n# explain analyze select * from test where *a=10 and b=any('{1,2,3}')*;\n----------------------------------------------------------------------------\n Index Only Scan using test_idx on test (cost=0.42..13.31 rows=1 width=16)\n Index Cond: ((a = 10) AND (b = ANY ('{1,2,3}'::bigint[])))\n\nIt shows exactly the same plan here. *Would it always be the same, or it\nmay be different?* (E.g. 
my worry is that for IN variant, the planner can\nuse the statistics against the actual values in that IN parentheses, whilst\nwhen using ANY('{...}'), I can imagine that in some circumstances, it would\nignore the actual values within the literal and instead build a \"generic\"\nplan which may not utilize the index properly; is it the case?)\n\nP.S.\nA slightly correlated question was raised on StackOverflow, e.g.\nhttps://stackoverflow.com/questions/34627026/in-vs-any-operator-in-postgresql\n- but it wasn't about this particular usecase, they were discussing more\ncomplicated things, like when each element of an IN/ANY clause is actually\na pair of elements (which makes the planner go crazy in some\ncircumstances). My question is way simpler, I don't deal with clauses like\n\"IN((1,2),(3,4),(5,6))\" etc.; it's all about the plain 2-column selects.\nUnfortunately, it's super-hard to find more info about this question,\nbecause both \"in\" and \"any\" are stop-words in search engines, so they don't\nshow good answers, including any of the PG mail list archives.\n\nP.S.S.\nWhy does the answer matter? Because for \"IN(1,2,3)\" case, e.g.\npg_stat_statements generalizes the query to \"IN($1,$2,$3)\" and doesn't\ncoalesce multiple queries into one, whilst for \"=ANY('{1,2,3}')\", it\ncoalesces them all to \"=ANY($1)\". Having those 1,2,3,... of a different\ncardinality all the time, the logs/stats are flooded with the useless\nvariants of the same query basically. It also applies to e.g. logging to\nDatadog which normalizes the queries. We'd love to use \"=ANY(...)\" variant\neverywhere and never use IN() anymore, but are scared of getting some\nunexpected regressions.", "msg_date": "Wed, 16 Nov 2022 19:40:13 -0800", "msg_from": "Dmitry Koterov <dmitry.koterov@gmail.com>", "msg_from_op": true, "msg_subject": "Is the plan for IN(1,2,3) always the same as for =ANY('{1,2,3}') when\n using PQexec with no params?" }, { "msg_contents": "Dmitry Koterov <dmitry.koterov@gmail.com> writes:\n> PG13+. 
Assume we have two identical queries with no arguments (as a plain\n> text, e.g. passed to PQexec - NOT to PQexecParams!):\n\n> - one with \"a=X AND b IN(...)\"\n> - and one with \"a=X and b=ANY('{...}')\n\n> The question: is it guaranteed that the planner will always choose\n> identical plans for them (or, at least, the plan for ANY will not match an\n> existing index worse than the plan with IN)?\n\nThis depends greatly on what \"...\" represents. But if it's a list\nof constants, they're probably equivalent. transformAExprIn()\noffers some caveats:\n\n * We try to generate a ScalarArrayOpExpr from IN/NOT IN, but this is only\n * possible if there is a suitable array type available. If not, we fall\n * back to a boolean condition tree with multiple copies of the lefthand\n * expression. Also, any IN-list items that contain Vars are handled as\n * separate boolean conditions, because that gives the planner more scope\n * for optimization on such clauses.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 16 Nov 2022 22:51:20 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Is the plan for IN(1,2,3) always the same as for =ANY('{1,2,3}')\n when using PQexec with no params?" }, { "msg_contents": "Thanks Tom!\n\nIt sounds like for multi-value IN/ANY they act the same way, as you\nmentioned. *But I found a difference in plans for the single-value\nvariant.*\n\nImagine we have a btree(a, b) index. Compare two queries for the one-element\nuse case:\n\n1. a='aaa' AND b=ANY('{bbb}')\n2. a='aaa' AND b IN('bbb')\n\nThey may produce different plans: IN() always coalesces to field='aaa' in\nthe plan, whilst =ANY() always remains =ANY(). This causes PG to choose a\n\"post-filtering\" plan sometimes:\n\n1. For =ANY: Index Cond: (a='aaa'); Filter: b=ANY('{bbb}')\n2. For IN(): Index Cond: (a='aaa') AND (b='bbb')\n\nDo you think that this difference is significant? 
Or maybe something is off\nin the planner? Should it treat them differently by design? Is this\nintended?\n\nBelow is an example screenshot from the production database with real data.\nWe see that IN(20) is literally the same as =20 (green marker), whilst\n=any('{20}') causes PG to use post-filtering. (The cardinality of data in\nthe \"type\" field is low, just several unique values there in the entire table,\nso it probably doesn't make a big difference whether post-filtering is\nused or not, but anyway, the difference between IN() and =ANY looks a\nlittle scary. The index is \"btree (cred_id, external_id, type)\".)\n\n[image: image.png]\n\nThanks!\n\nOn Wed, Nov 16, 2022 at 7:51 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Dmitry Koterov <dmitry.koterov@gmail.com> writes:\n> > PG13+. Assume we have two identical queries with no arguments (as a plain\n> > text, e.g. passed to PQexec - NOT to PQexecParams!):\n>\n> > - one with \"a=X AND b IN(...)\"\n> > - and one with \"a=X and b=ANY('{...}')\n>\n> > The question: is it guaranteed that the planner will always choose\n> > identical plans for them (or, at least, the plan for ANY will not match an\n> > existing index worse than the plan with IN)?\n>\n> This depends greatly on what \"...\" represents. But if it's a list\n> of constants, they're probably equivalent. transformAExprIn()\n> offers some caveats:\n>\n> * We try to generate a ScalarArrayOpExpr from IN/NOT IN, but this is only\n> * possible if there is a suitable array type available. If not, we fall\n> * back to a boolean condition tree with multiple copies of the lefthand\n> * expression. 
Also, any IN-list items that contain Vars are handled as\n> * separate boolean conditions, because that gives the planner more scope\n> * for optimization on such clauses.\n>\n> regards, tom lane\n>", "msg_date": "Tue, 6 Dec 2022 20:45:30 -0800", "msg_from": "Dmitry Koterov <dmitry.koterov@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is the plan for IN(1,2,3) always the same as for =ANY('{1,2,3}')\n when using PQexec with no params?" } ]
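The normalization asymmetry described in the P.S.S. earlier in this thread, IN(1,2,3) becoming IN($1,$2,$3) while =ANY('{1,2,3}') collapses to =ANY($1), follows from each IN element being its own constant whereas the quoted array is a single literal. The toy normalizer below makes that visible; it is drastically simplified compared to PostgreSQL's real query jumbling and only recognizes bare integer literals and complete single-quoted literals.

```c
#include <ctype.h>
#include <stdio.h>
#include <string.h>

/*
 * Replace each constant in "sql" with $1, $2, ... roughly the way
 * pg_stat_statements displays normalized query text.  A quoted array
 * such as '{1,2,3}' is one constant, so =ANY('{1,2,3}') collapses to a
 * single $n, while IN(1,2,3) keeps one placeholder per element.
 */
static void
toy_normalize(const char *sql, char *out, size_t outsz)
{
    size_t  i = 0, o = 0, len = strlen(sql);
    int     n = 0;

    while (i < len && o + 8 < outsz)
    {
        if (sql[i] == '\'')
        {
            size_t  j = i + 1;          /* swallow the whole literal */

            while (j < len && sql[j] != '\'')
                j++;
            o += (size_t) snprintf(out + o, outsz - o, "$%d", ++n);
            i = j + 1;
        }
        else if (isdigit((unsigned char) sql[i]) &&
                 (i == 0 || (!isalnum((unsigned char) sql[i - 1]) &&
                             sql[i - 1] != '_')))
        {
            while (i < len && isdigit((unsigned char) sql[i]))
                i++;
            o += (size_t) snprintf(out + o, outsz - o, "$%d", ++n);
        }
        else
            out[o++] = sql[i++];
    }
    out[o] = '\0';
}
```

Running the two example queries through it shows why pg_stat_statements (and log-normalizing tools such as Datadog) coalesce every =ANY('{...}') call into one entry but keep a separate entry per IN-list arity.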
[ { "msg_contents": "Hi, hackers\n\nI found some typos about xl_running_xacts in comments.\nAttached is a patch to fix them.\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.", "msg_date": "Thu, 17 Nov 2022 15:06:38 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": true, "msg_subject": "Typo for xl_running_xacts" }, { "msg_contents": "> On 17 Nov 2022, at 08:06, Japin Li <japinli@hotmail.com> wrote:\n\n> I found some typos about xl_running_xacts in comments.\n> Attached is a patch to fix them.\n\nThanks, applied!\n\n> -\t * might look that we could use xl_running_xact's ->xids information to\n> +\t * might look that we could use xl_running_xacts->xids information to\n\nI'm not a native English speaker, but since xl_running_xacts refers to a name\nin this case and not an indication of plural possession, ending with a 's is\ncorrect even if it ends with an 's', so I instead changed this to\n\"xl_running_xacts's\".\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 17 Nov 2022 09:22:09 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Typo for xl_running_xacts" }, { "msg_contents": "\nOn Thu, 17 Nov 2022 at 16:22, Daniel Gustafsson <daniel@yesql.se> wrote:\n>> On 17 Nov 2022, at 08:06, Japin Li <japinli@hotmail.com> wrote:\n>\n>> I found some typos about xl_running_xacts in comments.\n>> Attached is a patch to fix them.\n>\n> Thanks, applied!\n>\n>> -\t * might look that we could use xl_running_xact's ->xids information to\n>> +\t * might look that we could use xl_running_xacts->xids information to\n>\n> I'm not a native English speaker, but since xl_running_xacts refers to a name\n> in this case and not an indication of plural possession, ending with a 's is\n> correct even if it ends with an 's', so I instead changed this to\n> \"xl_running_xacts's\".\n\nThanks, I found other places that use \"xl_running_xacts'\",\nsuch as \"xl_running_xacts' oldestRunningXid\" in 
snapbuild.c.\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.", "msg_date": "Thu, 17 Nov 2022 16:47:44 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": true, "msg_subject": "Re: Typo for xl_running_xacts" } ]
[ { "msg_contents": "I noticed an odd behavior today in pg_stat_statements query\nnormalization for queries called from SQL-language functions. If I\nhave three functions that call an essentially identical query (the\nfunctions are only marked SECURITY DEFINER to prevent inlining):\n\nmaciek=# create or replace function f1(f1param text) returns text\nlanguage sql as 'select f1param' security definer;\nCREATE FUNCTION\nmaciek=# create or replace function f2(f2param text) returns text\nlanguage sql as 'select f2param' security definer;\nCREATE FUNCTION\nmaciek=# create or replace function f3(text) returns text language sql\nas 'select $1' security definer;\nCREATE FUNCTION\n\nand I have pg_stat_statements.track = 'all', so that queries called\nfrom functions are tracked, these all end up with the same query id in\npg_stat_statements, but the query text includes the parameter name (if\none is referenced in the query in the function). E.g., if I call f1\nfirst, then f2 and f3, I get:\n\nmaciek=# select queryid, query, calls from pg_stat_statements where\nqueryid = 6741491046520556186;\n queryid | query | calls\n---------------------+----------------+-------\n 6741491046520556186 | select f1param | 3\n(1 row)\n\nIf I call f3 first, then f2 and f1, I get\n\nmaciek=# select queryid, query, calls from pg_stat_statements where\nqueryid = 6741491046520556186;\n queryid | query | calls\n---------------------+-----------+-------\n 6741491046520556186 | select $1 | 3\n(1 row)\n\nI understand that the query text may be captured differently for\ndifferent queries that map to the same id, but it seems confusing that\nparameter names referenced in queries called from functions are not\nnormalized away, since they're not germane to the query execution\nitself, and the context of the function is otherwise stripped away by\nthis point. 
I would expect that all three of these queries end up in\npg_stat_statements with the query text \"select $1\".Thoughts?\n\nThanks,\nMaciek\n\n\n", "msg_date": "Wed, 16 Nov 2022 23:26:09 -0800", "msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>", "msg_from_op": true, "msg_subject": "Odd behavior with pg_stat_statements and queries called from SQL\n functions" }, { "msg_contents": "Hi,\n\nOn Wed, Nov 16, 2022 at 11:26:09PM -0800, Maciek Sakrejda wrote:\n> I noticed an odd behavior today in pg_stat_statements query\n> normalization for queries called from SQL-language functions. If I\n> have three functions that call an essentially identical query (the\n> functions are only marked SECURITY DEFINER to prevent inlining):\n>\n> maciek=# create or replace function f1(f1param text) returns text\n> language sql as 'select f1param' security definer;\n> CREATE FUNCTION\n> maciek=# create or replace function f2(f2param text) returns text\n> language sql as 'select f2param' security definer;\n> CREATE FUNCTION\n> maciek=# create or replace function f3(text) returns text language sql\n> as 'select $1' security definer;\n> CREATE FUNCTION\n> [...]\n> maciek=# select queryid, query, calls from pg_stat_statements where\n> queryid = 6741491046520556186;\n> queryid | query | calls\n> ---------------------+----------------+-------\n> 6741491046520556186 | select f1param | 3\n> (1 row)\n>\n> If I call f3 first, then f2 and f1, I get\n>\n> maciek=# select queryid, query, calls from pg_stat_statements where\n> queryid = 6741491046520556186;\n> queryid | query | calls\n> ---------------------+-----------+-------\n> 6741491046520556186 | select $1 | 3\n> (1 row)\n>\n> I understand that the query text may be captured differently for\n> different queries that map to the same id, but it seems confusing that\n> parameter names referenced in queries called from functions are not\n> normalized away, since they're not germane to the query execution\n> itself, and the context of the function is 
otherwise stripped away by\n> this point. I would expect that all three of these queries end up in\n> pg_stat_statements with the query text \"select $1\".Thoughts?\n\nNone of those queries actually contain any constant, so the query text is just\nsaved as-is in all the versions.\n\nI'm not sure that doing normalization for parameters would give way better\nresults. It's true that a parameter name can change between different\nfunctions running the exact same statements, but is it really likely to happen?\nAnd what if the two functions have different number of parameters in different\norders? $1 could mean different things in different cases, and good luck\nfinding out which one it is. At least with the parameter name you have a\nchance to figure out what the parameter was exactly.\n\n\n", "msg_date": "Thu, 17 Nov 2022 17:14:26 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Odd behavior with pg_stat_statements and queries called from SQL\n functions" }, { "msg_contents": "On Thu, Nov 17, 2022 at 2:44 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> Hi,\n>\n> On Wed, Nov 16, 2022 at 11:26:09PM -0800, Maciek Sakrejda wrote:\n> > I noticed an odd behavior today in pg_stat_statements query\n> > normalization for queries called from SQL-language functions. 
If I\n> > have three functions that call an essentially identical query (the\n> > functions are only marked SECURITY DEFINER to prevent inlining):\n> >\n> > maciek=# create or replace function f1(f1param text) returns text\n> > language sql as 'select f1param' security definer;\n> > CREATE FUNCTION\n> > maciek=# create or replace function f2(f2param text) returns text\n> > language sql as 'select f2param' security definer;\n> > CREATE FUNCTION\n> > maciek=# create or replace function f3(text) returns text language sql\n> > as 'select $1' security definer;\n> > CREATE FUNCTION\n> > [...]\n> > maciek=# select queryid, query, calls from pg_stat_statements where\n> > queryid = 6741491046520556186;\n> > queryid | query | calls\n> > ---------------------+----------------+-------\n> > 6741491046520556186 | select f1param | 3\n> > (1 row)\n> >\n> > If I call f3 first, then f2 and f1, I get\n> >\n> > maciek=# select queryid, query, calls from pg_stat_statements where\n> > queryid = 6741491046520556186;\n> > queryid | query | calls\n> > ---------------------+-----------+-------\n> > 6741491046520556186 | select $1 | 3\n> > (1 row)\n> >\n> > I understand that the query text may be captured differently for\n> > different queries that map to the same id, but it seems confusing that\n> > parameter names referenced in queries called from functions are not\n> > normalized away, since they're not germane to the query execution\n> > itself, and the context of the function is otherwise stripped away by\n> > this point. I would expect that all three of these queries end up in\n> > pg_stat_statements with the query text \"select $1\".Thoughts?\n>\n> None of those queries actually contain any constant, so the query text is just\n> saved as-is in all the versions.\n>\n> I'm not sure that doing normalization for parameters would give way better\n> results. 
It's true that a parameter name can change between different\n> functions running the exact same statements, but is it really likely to happen?\n\nMultiple functions running the same query is quite possible. I am\nwondering why it took so long to identify this behaviour.\n\n> And what if the two functions have different number of parameters in different\n> orders? $1 could mean different things in different cases, and good luck\n> finding out which one it is. At least with the parameter name you have a\n> chance to figure out what the parameter was exactly.\n>\nReporting one of the parameters as-is is a problem, yes. Can the\nparameters be converted into some normalized form, like replacing\nparameters with ? (or some constant string indicating a parameter)\neverywhere?\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Thu, 17 Nov 2022 15:00:23 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Odd behavior with pg_stat_statements and queries called from SQL\n functions" } ]
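The normalization idea floated in the thread above can be sketched in a small, self-contained C program. This is only a toy illustration under stated assumptions: the helper names (`normalize_params`, `is_param_token`), the whitespace tokenizing, and the caller-supplied parameter-name list are all invented for this example — the real pg_stat_statements fingerprints the parse tree (the "query jumble") rather than rewriting text. The sketch shows the intended effect: every parameter reference, whether by name or by `$N` position, is rewritten to a fixed `?` placeholder, so the three textually different statements from the thread collapse to one normalized form.

```c
#include <assert.h>
#include <ctype.h>
#include <stdio.h>
#include <string.h>

/* Return 1 if "tok" is a parameter reference: either a $N positional
 * reference, or a name found in the NULL-terminated "param_names" list
 * (standing in for the SQL function's signature). */
static int
is_param_token(const char *tok, const char *const *param_names)
{
    if (tok[0] == '$')
    {
        const char *p = tok + 1;

        if (*p == '\0')
            return 0;
        for (; *p; p++)
            if (!isdigit((unsigned char) *p))
                return 0;
        return 1;
    }
    for (; *param_names; param_names++)
        if (strcmp(tok, *param_names) == 0)
            return 1;
    return 0;
}

/* Copy "query" into "out", replacing every parameter reference with "?". */
void
normalize_params(const char *query, const char *const *param_names,
                 char *out, size_t outlen)
{
    char        buf[256];
    char       *tok;
    size_t      used = 0;

    out[0] = '\0';
    snprintf(buf, sizeof(buf), "%s", query);
    for (tok = strtok(buf, " "); tok && used < outlen; tok = strtok(NULL, " "))
    {
        const char *piece = is_param_token(tok, param_names) ? "?" : tok;
        int         n = snprintf(out + used, outlen - used,
                                 used > 0 ? " %s" : "%s", piece);

        if (n < 0)
            break;
        used += (size_t) n;
    }
}
```

With this scheme, `select f1param`, `select f2param`, and `select $1` all normalize to `select ?`, matching the single queryid they already share — at the cost Julien notes above: the stored text no longer tells you which parameter was referenced.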
[ { "msg_contents": "Hi,\n\nWhen I was reviewing one of the patches, I found a typo in the\ncomments of SH_LOOKUP and SH_LOOKUP_HASH. I felt \"lookup up\" should\nhave been \"lookup\".\nAttached a patch to modify it.\n\nRegards,\nVignesh", "msg_date": "Thu, 17 Nov 2022 15:26:14 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Typo in SH_LOOKUP and SH_LOOKUP_HASH comments" }, { "msg_contents": "> On 17 Nov 2022, at 10:56, vignesh C <vignesh21@gmail.com> wrote:\n\n> When I was reviewing one of the patches, I found a typo in the\n> comments of SH_LOOKUP and SH_LOOKUP_HASH. I felt \"lookup up\" should\n> have been \"lookup\".\n> Attached a patch to modify it.\n\nI agree with that, applied to HEAD. Thanks!\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 17 Nov 2022 13:18:36 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Typo in SH_LOOKUP and SH_LOOKUP_HASH comments" }, { "msg_contents": "On Thu, 17 Nov 2022 at 17:48, Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 17 Nov 2022, at 10:56, vignesh C <vignesh21@gmail.com> wrote:\n>\n> > When I was reviewing one of the patches, I found a typo in the\n> > comments of SH_LOOKUP and SH_LOOKUP_HASH. I felt \"lookup up\" should\n> > have been \"lookup\".\n> > Attached a patch to modify it.\n>\n> I agree with that, applied to HEAD. Thanks!\n\nThanks for pushing this change.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Fri, 18 Nov 2022 06:37:04 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Typo in SH_LOOKUP and SH_LOOKUP_HASH comments" } ]
[ { "msg_contents": "Hi,\nI was looking at commit aca992040951c7665f1701cd25d48808eda7a809\n\nI think the check of msg after the switch statement is not necessary. The\nvariable msg is used afterward.\nIf there is (potential) missing case in switch statement, the compiler\nwould warn.\n\nHow about removing the check ?\n\nThanks\n\ndiff --git a/src/backend/commands/dropcmds.c\nb/src/backend/commands/dropcmds.c\nindex db906f530e..55996940eb 100644\n--- a/src/backend/commands/dropcmds.c\n+++ b/src/backend/commands/dropcmds.c\n@@ -518,9 +518,6 @@ does_not_exist_skipping(ObjectType objtype, Node\n*object)\n\n /* no default, to let compiler warn about missing case */\n }\n- if (!msg)\n- elog(ERROR, \"unrecognized object type: %d\", (int) objtype);\n-\n if (!args)\n ereport(NOTICE, (errmsg(msg, name)));\n else", "msg_date": "Thu, 17 Nov 2022 04:12:47 -0800", "msg_from": "Ted Yu <yuzhihong@gmail.com>", "msg_from_op": true, "msg_subject": "redundant check of msg in does_not_exist_skipping" }, { "msg_contents": "\nOn Thu, 17 Nov 2022 at 20:12, Ted Yu <yuzhihong@gmail.com> wrote:\n> Hi,\n> I was looking at commit aca992040951c7665f1701cd25d48808eda7a809\n>\n> I think the check of msg after the switch 
statement is not necessary. The\n> variable msg is used afterward.\n> If there is (potential) missing case in switch statement, the compiler\n> would warn.\n>\n> How about removing the check ?\n>\n\nI think we cannot remove the check, for example, if objtype is OBJECT_OPFAMILY,\nand schema_does_not_exist_skipping() returns true, so the msg stays NULL,\nand if we remove this check, a segfault might occur in ereport().\n\n case OBJECT_OPFAMILY:\n {\n List *opfname = list_copy_tail(castNode(List, object), 1);\n\n if (!schema_does_not_exist_skipping(opfname, &msg, &name))\n {\n msg = gettext_noop(\"operator family \\\"%s\\\" does not exist for access method \\\"%s\\\", skipping\");\n name = NameListToString(opfname);\n args = strVal(linitial(castNode(List, object)));\n }\n }\n break;\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Thu, 17 Nov 2022 23:06:22 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: redundant check of msg in does_not_exist_skipping" }, { "msg_contents": "\nOn Thu, 17 Nov 2022 at 23:06, Japin Li <japinli@hotmail.com> wrote:\n> On Thu, 17 Nov 2022 at 20:12, Ted Yu <yuzhihong@gmail.com> wrote:\n>> Hi,\n>> I was looking at commit aca992040951c7665f1701cd25d48808eda7a809\n>>\n>> I think the check of msg after the switch statement is not necessary. 
The\n>> variable msg is used afterward.\n>> If there is (potential) missing case in switch statement, the compiler\n>> would warn.\n>>\n>> How about removing the check ?\n>>\n>\n> I think we cannot remove the check, for example, if objtype is OBJECT_OPFAMILY,\n> and schema_does_not_exist_skipping() returns true, so the msg stays NULL,\n> and if we remove this check, a segfault might occur in ereport().\n>\n> case OBJECT_OPFAMILY:\n> {\n> List *opfname = list_copy_tail(castNode(List, object), 1);\n>\n> if (!schema_does_not_exist_skipping(opfname, &msg, &name))\n> {\n> msg = gettext_noop(\"operator family \\\"%s\\\" does not exist for access method \\\"%s\\\", skipping\");\n> name = NameListToString(opfname);\n> args = strVal(linitial(castNode(List, object)));\n> }\n> }\n> break;\n\nSorry, I didn't look into schema_does_not_exist_skipping() at first; after\nlooking into that function and the other helpers, every path out of the\nswitch leaves msg non-NULL, so we can remove this check safely.\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Thu, 17 Nov 2022 23:19:26 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: redundant check of msg in does_not_exist_skipping" }, { "msg_contents": "Japin Li <japinli@hotmail.com> writes:\n> On Thu, 17 Nov 2022 at 23:06, Japin Li <japinli@hotmail.com> wrote:\n>> I think we cannot remove the check, for example, if objtype is OBJECT_OPFAMILY,\n>> and schema_does_not_exist_skipping() returns true, so the msg stays NULL,\n>> and if we remove this check, a segfault might occur in ereport().\n\n> Sorry, I didn't look into schema_does_not_exist_skipping() at first; after\n> looking into that function and the other helpers, every path out of the\n> switch leaves msg non-NULL, so we can remove this check safely.\n\nThis is a completely bad idea. 
If it takes that level of analysis\nto see that msg can't be null, we should leave the test in place.\nAny future modification of either this code or what it calls could\nbreak the conclusion.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 17 Nov 2022 10:55:47 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: redundant check of msg in does_not_exist_skipping" }, { "msg_contents": "On Thu, Nov 17, 2022 at 10:56 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> This is a completely bad idea. If it takes that level of analysis\n> to see that msg can't be null, we should leave the test in place.\n> Any future modification of either this code or what it calls could\n> break the conclusion.\n\n+1. Also, even if the check were quite obviously useless, it's cheap\ninsurance. It's difficult to believe that it hurts performance in any\nmeasurable way. If anything, we would benefit from having more sanity\nchecks in our code, rather than fewer.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 17 Nov 2022 15:04:57 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: redundant check of msg in does_not_exist_skipping" }, { "msg_contents": "\n\n> On Nov 18, 2022, at 4:04 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Thu, Nov 17, 2022 at 10:56 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> This is a completely bad idea. If it takes that level of analysis\n>> to see that msg can't be null, we should leave the test in place.\n>> Any future modification of either this code or what it calls could\n>> break the conclusion.\n> \n> +1. Also, even if the check were quite obviously useless, it's cheap\n> insurance. It's difficult to believe that it hurts performance in any\n> measurable way. If anything, we would benefit from having more sanity\n> checks in our code, rather than fewer.\n> \n\nThanks for the explanation! 
Got it.\n\n\n\n", "msg_date": "Fri, 18 Nov 2022 00:10:25 +0000", "msg_from": "Li Japin <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: redundant check of msg in does_not_exist_skipping" } ]
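The "cheap insurance" pattern Tom Lane and Robert Haas defend in the thread above can be shown with a compact, self-contained C sketch. The enum and function names here are hypothetical (this is not the actual dropcmds.c code): the switch deliberately has no `default:` label so the compiler can warn about an unhandled enum member at build time, while the post-switch NULL check stays in place as a run-time guard for any path that slips through anyway.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical object kinds, standing in for ObjectType. */
typedef enum
{
    OBJ_TABLE,
    OBJ_INDEX,
    OBJ_FUTURE_THING
} obj_type;

/* Map an object kind to its "does not exist, skipping" message. */
const char *
missing_object_msg(obj_type t)
{
    const char *msg = NULL;

    switch (t)
    {
        case OBJ_TABLE:
            msg = "table \"%s\" does not exist, skipping";
            break;
        case OBJ_INDEX:
            msg = "index \"%s\" does not exist, skipping";
            break;
        case OBJ_FUTURE_THING:
            /* simulates a path that forgets to set msg */
            break;

            /* no default, to let the compiler warn about missing cases */
    }

    /*
     * The "cheap insurance" guard: even though analysis may show msg is
     * always set today, a future case (or a helper that leaves msg unset)
     * fails loudly here instead of passing NULL further down.
     */
    if (!msg)
        return "unrecognized object type";
    return msg;
}
```

The guard costs one pointer comparison per call, which is why the thread concludes it is not worth removing even when a careful reading shows it can never fire.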
[ { "msg_contents": "Hi Hackers,\n\nwhile testing the developer settings of PSQL (14.5) I came across this\nissue:\n\npostgres=# CREATE UNLOGGED TABLE stats (\npostgres(# pg_hash BIGINT NOT NULL,\npostgres(# category TEXT NOT NULL,\npostgres(# PRIMARY KEY (pg_hash, category)\npostgres(# );\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\n\nChecking the stack trace I found this:\nProgram received signal SIGSEGV, Segmentation fault.\n0x0000000000ab6662 in smgrwrite (reln=0x0, forknum=INIT_FORKNUM,\nblocknum=0, buffer=0x2b5eec0 \"\", skipFsync=true)\n at\n/opt/postgresql-src/debug-build/../src/backend/storage/smgr/smgr.c:526\n526 smgrsw[reln->smgr_which].smgr_write(reln, forknum, blocknum,\n(gdb) bt\n#0 0x0000000000ab6662 in smgrwrite (reln=0x0, forknum=INIT_FORKNUM,\nblocknum=0, buffer=0x2b5eec0 \"\", skipFsync=true) at\n/opt/postgresql-src/debug-build/../src/backend/storage/smgr/smgr.c:526\n#1 0x000000000056991b in btbuildempty (index=0x7fe60ac9be60) at\n/opt/postgresql-src/debug-build/../src/backend/access/nbtree/nbtree.c:166\n#2 0x0000000000623ad9 in index_build (heapRelation=0x7fe60ac9c078,\nindexRelation=0x7fe60ac9be60, indexInfo=0x2b4c330, isreindex=false,\nparallel=true) at\n/opt/postgresql-src/debug-build/../src/backend/catalog/index.c:3028\n#3 0x0000000000621886 in index_create (heapRelation=0x7fe60ac9c078,\nindexRelationName=0x2b4c448 \"stats_pkey\", indexRelationId=16954,\nparentIndexRelid=0, parentConstraintId=0, relFileNode=0,\nindexInfo=0x2b4c330, indexColNames=0x2b4bee8, accessMethodObjectId=403,\ntableSpaceId=0, collationObjectId=0x2b4c560, classObjectId=0x2b4c580,\ncoloptions=0x2b4c5a0, reloptions=0,\n flags=3, constr_flags=0, allow_system_table_mods=false,\nis_internal=false, constraintId=0x7ffef5cc4a7c) at\n/opt/postgresql-src/debug-build/../src/backend/catalog/index.c:1232\n#4 
0x000000000074af6e in DefineIndex (relationId=16949, stmt=0x2b527a0,\nindexRelationId=0, parentIndexId=0, parentConstraintId=0,\nis_alter_table=false, check_rights=true, check_not_in_use=true,\nskip_build=false, quiet=false) at\n/opt/postgresql-src/debug-build/../src/backend/commands/indexcmds.c:1164\n#5 0x0000000000ac8d78 in ProcessUtilitySlow (pstate=0x2b49230,\npstmt=0x2b48fe8, queryString=0x2a71650 \"CREATE UNLOGGED TABLE stats (\\n\n pg_hash BIGINT NOT NULL,\\n category TEXT NOT NULL,\\n PRIMARY KEY\n(pg_hash, category)\\n);\", context=PROCESS_UTILITY_SUBCOMMAND, params=0x0,\nqueryEnv=0x0, dest=0xe9ceb0 <donothingDR>, qc=0x0)\n at /opt/postgresql-src/debug-build/../src/backend/tcop/utility.c:1535\n#6 0x0000000000ac6637 in standard_ProcessUtility (pstmt=0x2b48fe8,\nqueryString=0x2a71650 \"CREATE UNLOGGED TABLE stats (\\n pg_hash BIGINT\nNOT NULL,\\n category TEXT NOT NULL,\\n PRIMARY KEY (pg_hash,\ncategory)\\n);\", readOnlyTree=false, context=PROCESS_UTILITY_SUBCOMMAND,\nparams=0x0, queryEnv=0x0, dest=0xe9ceb0 <donothingDR>, qc=0x0)\n at /opt/postgresql-src/debug-build/../src/backend/tcop/utility.c:1066\n#7 0x0000000000ac548b in ProcessUtility (pstmt=0x2b48fe8,\nqueryString=0x2a71650 \"CREATE UNLOGGED TABLE stats (\\n pg_hash BIGINT\nNOT NULL,\\n category TEXT NOT NULL,\\n PRIMARY KEY (pg_hash,\ncategory)\\n);\", readOnlyTree=false, context=PROCESS_UTILITY_SUBCOMMAND,\nparams=0x0, queryEnv=0x0, dest=0xe9ceb0 <donothingDR>, qc=0x0)\n at /opt/postgresql-src/debug-build/../src/backend/tcop/utility.c:527\n#8 0x0000000000ac7e5e in ProcessUtilitySlow (pstate=0x2b52b10,\npstmt=0x2a72d28, queryString=0x2a71650 \"CREATE UNLOGGED TABLE stats (\\n\n pg_hash BIGINT NOT NULL,\\n category TEXT NOT NULL,\\n PRIMARY KEY\n(pg_hash, category)\\n);\", context=PROCESS_UTILITY_TOPLEVEL, params=0x0,\nqueryEnv=0x0, dest=0x2a72df8, qc=0x7ffef5cc6c10)\n at /opt/postgresql-src/debug-build/../src/backend/tcop/utility.c:1244\n#9 0x0000000000ac6637 in standard_ProcessUtility 
(pstmt=0x2a72d28,\nqueryString=0x2a71650 \"CREATE UNLOGGED TABLE stats (\\n pg_hash BIGINT\nNOT NULL,\\n category TEXT NOT NULL,\\n PRIMARY KEY (pg_hash,\ncategory)\\n);\", readOnlyTree=false, context=PROCESS_UTILITY_TOPLEVEL,\nparams=0x0, queryEnv=0x0, dest=0x2a72df8, qc=0x7ffef5cc6c10)\n at /opt/postgresql-src/debug-build/../src/backend/tcop/utility.c:1066\n#10 0x0000000000ac548b in ProcessUtility (pstmt=0x2a72d28,\nqueryString=0x2a71650 \"CREATE UNLOGGED TABLE stats (\\n pg_hash BIGINT\nNOT NULL,\\n category TEXT NOT NULL,\\n PRIMARY KEY (pg_hash,\ncategory)\\n);\", readOnlyTree=false, context=PROCESS_UTILITY_TOPLEVEL,\nparams=0x0, queryEnv=0x0, dest=0x2a72df8, qc=0x7ffef5cc6c10)\n at /opt/postgresql-src/debug-build/../src/backend/tcop/utility.c:527\n#11 0x0000000000ac4aad in PortalRunUtility (portal=0x2b06bf0,\npstmt=0x2a72d28, isTopLevel=true, setHoldSnapshot=false, dest=0x2a72df8,\nqc=0x7ffef5cc6c10) at\n/opt/postgresql-src/debug-build/../src/backend/tcop/pquery.c:1155\n#12 0x0000000000ac3b57 in PortalRunMulti (portal=0x2b06bf0,\nisTopLevel=true, setHoldSnapshot=false, dest=0x2a72df8, altdest=0x2a72df8,\nqc=0x7ffef5cc6c10) at\n/opt/postgresql-src/debug-build/../src/backend/tcop/pquery.c:1312\n#13 0x0000000000ac306f in PortalRun (portal=0x2b06bf0,\ncount=9223372036854775807, isTopLevel=true, run_once=true, dest=0x2a72df8,\naltdest=0x2a72df8, qc=0x7ffef5cc6c10) at\n/opt/postgresql-src/debug-build/../src/backend/tcop/pquery.c:788\n#14 0x0000000000abdfad in exec_simple_query (query_string=0x2a71650 \"CREATE\nUNLOGGED TABLE stats (\\n pg_hash BIGINT NOT NULL,\\n category TEXT NOT\nNULL,\\n PRIMARY KEY (pg_hash, category)\\n);\") at\n/opt/postgresql-src/debug-build/../src/backend/tcop/postgres.c:1213\n#15 0x0000000000abd1fb in PostgresMain (argc=1, argv=0x7ffef5cc6e50,\ndbname=0x2a9cb90 \"postgres\", username=0x2a9cb68 \"host_user\") at\n/opt/postgresql-src/debug-build/../src/backend/tcop/postgres.c:4496\n#16 0x00000000009c2b4a in BackendRun (port=0x2a964c0) 
at\n/opt/postgresql-src/debug-build/../src/backend/postmaster/postmaster.c:4530\n#17 0x00000000009c2074 in BackendStartup (port=0x2a964c0) at\n/opt/postgresql-src/debug-build/../src/backend/postmaster/postmaster.c:4252\n#18 0x00000000009c0e27 in ServerLoop () at\n/opt/postgresql-src/debug-build/../src/backend/postmaster/postmaster.c:1745\n#19 0x00000000009be275 in PostmasterMain (argc=3, argv=0x2a6add0) at\n/opt/postgresql-src/debug-build/../src/backend/postmaster/postmaster.c:1417\n#20 0x0000000000896dc3 in main (argc=3, argv=0x2a6add0) at\n/opt/postgresql-src/debug-build/../src/backend/main/main.c:209\n\nThe error does not appear if the table is not defined as UNLOGGED, or if\nthe primary key is not compound.\nIs it that the specific developer option is not used by the community to\nrun tests?\n\nKind regards,\n\n-- \nSpiros\n(ServiceNow)", "msg_date": "Thu, 17 Nov 2022 14:23:27 +0200", "msg_from": "Spyridon Dimitrios Agathos <spyridon.dimitrios.agathos@gmail.com>", "msg_from_op": true, "msg_subject": "CREATE UNLOGGED TABLE seq faults when debug_discard_caches=1" }, { "msg_contents": "Spyridon Dimitrios Agathos <spyridon.dimitrios.agathos@gmail.com> writes:\n> while testing the developer settings of PSQL (14.5) I came across this\n> issue:\n\n> postgres=# CREATE UNLOGGED TABLE stats (\n> postgres(# pg_hash BIGINT NOT NULL,\n> postgres(# category TEXT NOT NULL,\n> postgres(# PRIMARY KEY (pg_hash, category)\n> postgres(# );\n> server closed the connection unexpectedly\n\nHmm ... confirmed in the v14 branch, but v15 and HEAD are fine,\nevidently as a result of commit f10f0ae42 having replaced this\nunprotected use of index->rd_smgr.\n\nI wonder whether we ought to back-patch f10f0ae42. We could\nleave the RelationOpenSmgr macro in existence to avoid unnecessary\nbreakage of extension code, but stop using it within our own code.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 17 Nov 2022 11:24:29 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: CREATE UNLOGGED TABLE seq faults when debug_discard_caches=1" }, { "msg_contents": "I wrote:\n> I wonder whether we ought to back-patch f10f0ae42. We could\n> leave the RelationOpenSmgr macro in existence to avoid unnecessary\n> breakage of extension code, but stop using it within our own code.\n\nConcretely, about like this for v14 (didn't look at the older\nbranches yet).\n\nI'm not sure whether to recommend that outside extensions switch to using\nRelationGetSmgr in pre-v15 branches. 
If they do, they run a risk\nof compile failure should they be built against old back-branch\nheaders. Once compiled, though, they'd work against any minor release\n(since RelationGetSmgr is static inline, not something in the core\nbackend). So maybe that'd be good enough, and keeping their code in\nsync with what they need for v15 would be worth something.\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 17 Nov 2022 12:51:28 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: CREATE UNLOGGED TABLE seq faults when debug_discard_caches=1" }, { "msg_contents": "Hi Tom,\n\nBack-patching but keeping RelationOpenSmgr() for extensions sounds \nreasonable.\n\nOn a different note: are we frequently running our tests suites with \ndebug_discard_caches=1 enabled?\nIt doesn't seem like. I just ran make check with debug_discard_caches=1 on\n\n- latest master: everything passes.\n- version 14.5: fails in create_index, create_index_spgist, create_view.\n\nSo the buggy code path is at least covered by the tests. But it seems \nlike we could have found it earlier by regularly running with \ndebug_discard_caches=1.\n\n--\nDavid Geier\n(ServiceNow)\n\nOn 11/17/22 18:51, Tom Lane wrote:\n> I wrote:\n>> I wonder whether we ought to back-patch f10f0ae42. We could\n>> leave the RelationOpenSmgr macro in existence to avoid unnecessary\n>> breakage of extension code, but stop using it within our own code.\n> Concretely, about like this for v14 (didn't look at the older\n> branches yet).\n>\n> I'm not sure whether to recommend that outside extensions switch to using\n> RelationGetSmgr in pre-v15 branches. If they do, they run a risk\n> of compile failure should they be built against old back-branch\n> headers. Once compiled, though, they'd work against any minor release\n> (since RelationGetSmgr is static inline, not something in the core\n> backend). So maybe that'd be good enough, and keeping their code in\n> sync with what they need for v15 would be worth something.\n>\n> \t\t\tregards, tom lane\n>\n\n\n", "msg_date": "Fri, 18 Nov 2022 13:43:31 +0100", "msg_from": "David Geier <geidav.pg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: CREATE UNLOGGED TABLE seq faults when debug_discard_caches=1" }, { "msg_contents": "David Geier <geidav.pg@gmail.com> writes:\n> On a different note: are we frequently running our tests suites with \n> debug_discard_caches=1 enabled?\n> It doesn't seem like.\n\nHmm. Buildfarm members avocet and trilobite are supposed to be\ndoing that, but their runtimes of late put the lie to it.\nConfiguration option got lost somewhere?\n\nprion is running with -DRELCACHE_FORCE_RELEASE -DCATCACHE_FORCE_RELEASE,\nwhich I would have thought would be enough to catch this, but I guess\nnot.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 18 Nov 2022 09:43:55 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: CREATE UNLOGGED TABLE seq faults when debug_discard_caches=1" }, { "msg_contents": "\n\n
So maybe that'd be good enough, and keeping their code in\n> sync with what they need for v15 would be worth something.\n>\n> \t\t\tregards, tom lane\n>\n\n\n", "msg_date": "Fri, 18 Nov 2022 13:43:31 +0100", "msg_from": "David Geier <geidav.pg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: CREATE UNLOGGED TABLE seq faults when debug_discard_caches=1" }, { "msg_contents": "David Geier <geidav.pg@gmail.com> writes:\n> On a different note: are we frequently running our tests suites with \n> debug_discard_caches=1 enabled?\n> It doesn't seem like.\n\nHmm. Buildfarm members avocet and trilobite are supposed to be\ndoing that, but their runtimes of late put the lie to it.\nConfiguration option got lost somewhere?\n\nprion is running with -DRELCACHE_FORCE_RELEASE -DCATCACHE_FORCE_RELEASE,\nwhich I would have thought would be enough to catch this, but I guess\nnot.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 18 Nov 2022 09:43:55 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: CREATE UNLOGGED TABLE seq faults when debug_discard_caches=1" }, { "msg_contents": "\n\nOn 11/18/22 15:43, Tom Lane wrote:\n> David Geier <geidav.pg@gmail.com> writes:\n>> On a different note: are we frequently running our tests suites with \n>> debug_discard_caches=1 enabled?\n>> It doesn't seem like.\n> \n> Hmm. Buildfarm members avocet and trilobite are supposed to be\n> doing that, but their runtimes of late put the lie to it.\n> Configuration option got lost somewhere?\n> \n\nYup, my bad - I forgot to tweak CPPFLAGS when upgrading the buildfarm\nclient to v12. 
Fixed, next run should be with\n\n  CPPFLAGS => '-DCLOBBER_CACHE_ALWAYS',\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 18 Nov 2022 15:53:53 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: CREATE UNLOGGED TABLE seq faults when debug_discard_caches=1" }, { "msg_contents": "Hi hackers,\n\nMr Lane, thank you for backporting this also to version 13. It seems\nto be occurring in the wild (without debug_discard_caches) for real\nusers too when doing a lot of \"CREATE INDEX i ON\nunlogged_table_truncated_after_crash (x,y)\" which sometimes (rarely)\nresults in SIGSEGV (signal 11). I've reproduced it also on 13.9 recently thanks\nto \"break *btbuildempty / call InvalidateSystemCaches()\".\n\nI'm leaving a partial stack trace so that others might find it (note the:\nsmgrwrite reln=0x0):\n\n#0 smgrwrite (reln=0x0, forknum=INIT_FORKNUM, blocknum=0,\nbuffer=0xeef828 \"\", skipFsync=true) at smgr.c:516\n#1 0x00000000004e5492 in btbuildempty (index=0x7f201fc3c7e0) at nbtree.c:178\n#2 0x00000000005417f4 in index_build\n(heapRelation=heapRelation@entry=0x7f201fc49dd0,\nindexRelation=indexRelation@entry=0x7f201fc3c7e0,\nindexInfo=indexInfo@entry=0x1159dd8,\n#3 0x0000000000542838 in index_create\n(heapRelation=heapRelation@entry=0x7f201fc49dd0,\nindexRelationName=indexRelationName@entry=0x1159f38 \"xxxxxxxx\",\nindexRelationId=yyyyy..)\n#4 0x00000000005db9c8 in DefineIndex\n(relationId=relationId@entry=1804880199, stmt=stmt@entry=0xf2fab8,\nindexRelationId=indexRelationId@entry=0,\nparentIndexId=parentIndexId@entry=0\n\n-Jakub Wartak.\n\nOn Fri, Nov 25, 2022 at 9:48 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n>\n>\n> On 11/18/22 15:43, Tom Lane wrote:\n> > David Geier <geidav.pg@gmail.com> writes:\n> >> On a different note: are we frequently running our tests suites with\n> >> debug_discard_caches=1 enabled?\n> >> It doesn't seem
like.\n> >\n> > Hmm. Buildfarm members avocet and trilobite are supposed to be\n> > doing that, but their runtimes of late put the lie to it.\n> > Configuration option got lost somewhere?\n> >\n>\n> Yup, my bad - I forgot to tweak CPPFLAGS when upgrading the buildfarm\n> client to v12. Fixed, next run should be with\n>\n> CPPFLAGS => '-DCLOBBER_CACHE_ALWAYS',\n>\n>\n> regards\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n>\n>\n>\n\n\n", "msg_date": "Fri, 25 Nov 2022 09:57:24 +0100", "msg_from": "Jakub Wartak <jakub.wartak@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: CREATE UNLOGGED TABLE seq faults when debug_discard_caches=1" } ]
[ { "msg_contents": "Hi,\n\npg_stat_bgwriter view currently reports checkpointer stats as well. It\nis that way because historically checkpointer was part of bgwriter\nuntil the commits 806a2ae and bf405ba, that went into PG 9.2,\nseparated them out. I think it is time for us to separate checkpointer\nstats to its own view. I discussed it in another thread [1] and it\nseems like there's an unequivocal agreement for the proposal.\n\nI'm attaching a patch introducing a new pg_stat_checkpointer view,\nwith this change the pg_stat_bgwriter view only focuses on bgwriter\nrelated stats. The patch does mostly mechanical changes. I'll add it\nto CF in a bit.\n\nThoughts?\n\n[1]\nhttps://www.postgresql.org/message-id/flat/20221116181433.y2hq2pirtbqmmndt%40awork3.anarazel.de#b873a4bd7d8d7ec70750a7047db33f56\nhttps://www.postgresql.org/message-id/CA%2BTgmoYCu6RpuJ3cZz0e7cZSfaVb%3Dcr6iVcgGMGd5dxX0MYNRA%40mail.gmail.com\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 17 Nov 2022 18:21:41 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Introduce a new view for checkpointer related stats" }, { "msg_contents": "Hi,\n\nOn 11/17/22 1:51 PM, Bharath Rupireddy wrote:\n> Hi,\n> \n> pg_stat_bgwriter view currently reports checkpointer stats as well. It\n> is that way because historically checkpointer was part of bgwriter\n> until the commits 806a2ae and bf405ba, that went into PG 9.2,\n> separated them out. I think it is time for us to separate checkpointer\n> stats to its own view. I discussed it in another thread [1] and it\n> seems like there's an unequivocal agreement for the proposal.\n> \n> I'm attaching a patch introducing a new pg_stat_checkpointer view,\n> with this change the pg_stat_bgwriter view only focuses on bgwriter\n> related stats. The patch does mostly mechanical changes. 
I'll add it\n> to CF in a bit.\n> \n> Thoughts?\n\n+1 for the dedicated view.\n\n+ <para>\n+ The <structname>pg_stat_checkpointer</structname> view will always have a\n+ single row, containing global data for the cluster.\n+ </para>\n\nwhat about \"containing data about checkpointer activity of the cluster\"? (to provide more \"details\" (even if that seems obvious given the name of the view) and be consistent with the pg_stat_wal description too).\nAnd if it makes sense to you, While at it, why not go for \"containing data about bgwriter activity of the cluster\" for pg_stat_bgwriter too?\n\n+CREATE VIEW pg_stat_checkpointer AS\n+ SELECT\n+ pg_stat_get_timed_checkpoints() AS checkpoints_timed,\n+ pg_stat_get_requested_checkpoints() AS checkpoints_req,\n+ pg_stat_get_checkpoint_write_time() AS checkpoint_write_time,\n+ pg_stat_get_checkpoint_sync_time() AS checkpoint_sync_time,\n+ pg_stat_get_buf_written_checkpoints() AS buffers_checkpoint,\n+ pg_stat_get_buf_written_backend() AS buffers_backend,\n+ pg_stat_get_buf_fsync_backend() AS buffers_backend_fsync,\n+ pg_stat_get_checkpointer_stat_reset_time() AS stats_reset;\n\nI don't think we should keep the checkpoints_ prefix (or _checkpoint suffix) in the column names now that they belong to a dedicated view (also the pg_stat_bgwriter view's columns don't have a\nbgwriter prefix/suffix).\n\nAnd while at it, I'm not sure the wal_ suffix in pg_stat_wal make sense too.\n\nThe idea is to have consistent naming between the views and their columns: I'd vote without prefix/suffix.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 22 Nov 2022 08:55:58 +0100", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Introduce a new view for checkpointer related stats" }, { "msg_contents": "On Tue, Nov 22, 2022 at 1:26 PM Drouvot, 
Bertrand\n<bertranddrouvot.pg@gmail.com> wrote:\n>\n> Hi,\n>\n> On 11/17/22 1:51 PM, Bharath Rupireddy wrote:\n> > Hi,\n> >\n> > pg_stat_bgwriter view currently reports checkpointer stats as well. It\n> > is that way because historically checkpointer was part of bgwriter\n> > until the commits 806a2ae and bf405ba, that went into PG 9.2,\n> > separated them out. I think it is time for us to separate checkpointer\n> > stats to its own view. I discussed it in another thread [1] and it\n> > seems like there's an unequivocal agreement for the proposal.\n> >\n> > I'm attaching a patch introducing a new pg_stat_checkpointer view,\n> > with this change the pg_stat_bgwriter view only focuses on bgwriter\n> > related stats. The patch does mostly mechanical changes. I'll add it\n> > to CF in a bit.\n> >\n> > Thoughts?\n>\n> +1 for the dedicated view.\n>\n> + <para>\n> + The <structname>pg_stat_checkpointer</structname> view will always have a\n> + single row, containing global data for the cluster.\n> + </para>\n>\n> what about \"containing data about checkpointer activity of the cluster\"? (to provide more \"details\" (even if that seems obvious given the name of the view) and be consistent with the pg_stat_wal description too).\n> And if it makes sense to you, While at it, why not go for \"containing data about bgwriter activity of the cluster\" for pg_stat_bgwriter too?\n\nNice catch. 
Modified.\n\n> +CREATE VIEW pg_stat_checkpointer AS\n> + SELECT\n> + pg_stat_get_timed_checkpoints() AS checkpoints_timed,\n> + pg_stat_get_requested_checkpoints() AS checkpoints_req,\n> + pg_stat_get_checkpoint_write_time() AS checkpoint_write_time,\n> + pg_stat_get_checkpoint_sync_time() AS checkpoint_sync_time,\n> + pg_stat_get_buf_written_checkpoints() AS buffers_checkpoint,\n> + pg_stat_get_buf_written_backend() AS buffers_backend,\n> + pg_stat_get_buf_fsync_backend() AS buffers_backend_fsync,\n> + pg_stat_get_checkpointer_stat_reset_time() AS stats_reset;\n>\n> I don't think we should keep the checkpoints_ prefix (or _checkpoint suffix) in the column names now that they belong to a dedicated view (also the pg_stat_bgwriter view's columns don't have a\n> bgwriter prefix/suffix).\n>\n> And while at it, I'm not sure the wal_ suffix in pg_stat_wal make sense too.\n>\n> The idea is to have consistent naming between the views and their columns: I'd vote without prefix/suffix.\n\n-1. If the prefix is removed, some column names become unreadable -\ntimed, requested, write_time, sync_time, buffers. We might think of\nrenaming those columns to something more readable, I tend to not do\nthat as it can break largely the application/service layer/monitoring\ntools, of course even with the new pg_stat_checkpointer view, we can't\navoid that, however the changes are less i.e. 
replace pg_stat_bgwriter\nwith the new view.\n\nI'm attaching the v2 patch for further review.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 22 Nov 2022 18:08:28 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Introduce a new view for checkpointer related stats" }, { "msg_contents": "Hi,\n\nOn 2022-11-22 18:08:28 +0530, Bharath Rupireddy wrote:\n> diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql\n> index 2d8104b090..131d949dfb 100644\n> --- a/src/backend/catalog/system_views.sql\n> +++ b/src/backend/catalog/system_views.sql\n> @@ -1105,18 +1105,22 @@ CREATE VIEW pg_stat_archiver AS\n> \n> CREATE VIEW pg_stat_bgwriter AS\n> SELECT\n> - pg_stat_get_bgwriter_timed_checkpoints() AS checkpoints_timed,\n> - pg_stat_get_bgwriter_requested_checkpoints() AS checkpoints_req,\n> - pg_stat_get_checkpoint_write_time() AS checkpoint_write_time,\n> - pg_stat_get_checkpoint_sync_time() AS checkpoint_sync_time,\n> - pg_stat_get_bgwriter_buf_written_checkpoints() AS buffers_checkpoint,\n> pg_stat_get_bgwriter_buf_written_clean() AS buffers_clean,\n> pg_stat_get_bgwriter_maxwritten_clean() AS maxwritten_clean,\n> - pg_stat_get_buf_written_backend() AS buffers_backend,\n> - pg_stat_get_buf_fsync_backend() AS buffers_backend_fsync,\n> pg_stat_get_buf_alloc() AS buffers_alloc,\n> pg_stat_get_bgwriter_stat_reset_time() AS stats_reset;\n> \n> +CREATE VIEW pg_stat_checkpointer AS\n> + SELECT\n> + pg_stat_get_timed_checkpoints() AS checkpoints_timed,\n> + pg_stat_get_requested_checkpoints() AS checkpoints_req,\n> + pg_stat_get_checkpoint_write_time() AS checkpoint_write_time,\n> + pg_stat_get_checkpoint_sync_time() AS checkpoint_sync_time,\n> + pg_stat_get_buf_written_checkpoints() AS buffers_checkpoint,\n> + pg_stat_get_buf_written_backend() AS buffers_backend,\n> + 
pg_stat_get_buf_fsync_backend() AS buffers_backend_fsync,\n> + pg_stat_get_checkpointer_stat_reset_time() AS stats_reset;\n\n\nI think we should consider deprecating the pg_stat_bgwriter columns but\nleaving them in place for a few years. New stuff should only be added to\npg_stat_checkpointer, but we don't need to break old monitoring queries.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 22 Nov 2022 12:53:09 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Introduce a new view for checkpointer related stats" }, { "msg_contents": "On Wed, Nov 23, 2022 at 2:23 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2022-11-22 18:08:28 +0530, Bharath Rupireddy wrote:\n> >\n> > CREATE VIEW pg_stat_bgwriter AS\n> > SELECT\n> > - pg_stat_get_bgwriter_timed_checkpoints() AS checkpoints_timed,\n> > - pg_stat_get_bgwriter_requested_checkpoints() AS checkpoints_req,\n> > - pg_stat_get_checkpoint_write_time() AS checkpoint_write_time,\n> > - pg_stat_get_checkpoint_sync_time() AS checkpoint_sync_time,\n> > - pg_stat_get_bgwriter_buf_written_checkpoints() AS buffers_checkpoint,\n> > pg_stat_get_bgwriter_buf_written_clean() AS buffers_clean,\n> > pg_stat_get_bgwriter_maxwritten_clean() AS maxwritten_clean,\n> > - pg_stat_get_buf_written_backend() AS buffers_backend,\n> > - pg_stat_get_buf_fsync_backend() AS buffers_backend_fsync,\n> > pg_stat_get_buf_alloc() AS buffers_alloc,\n> > pg_stat_get_bgwriter_stat_reset_time() AS stats_reset;\n>\n>\n> I think we should consider deprecating the pg_stat_bgwriter columns but\n> leaving them in place for a few years. New stuff should only be added to\n> pg_stat_checkpointer, but we don't need to break old monitoring queries.\n\nMay I know what it means to deprecate pg_stat_bgwriter columns? 
Are\nyou suggesting to add deprecation warnings to corresponding functions\npg_stat_get_bgwriter_buf_written_clean(),\npg_stat_get_bgwriter_maxwritten_clean(), pg_stat_get_buf_alloc() and\npg_stat_get_bgwriter_stat_reset_time() and in the docs? And eventually\ndo away with the bgwriter stats and the file pgstat_bgwriter.c? Aren't\nthe bgwriter stats buf_written_clean, maxwritten_clean and buf_alloc\nuseful?\n\nI think we need to discuss the pg_stat_bgwriter deprecation separately\nindependent of the patch here, no?\n\nPS: I noticed some discussion here\nhttps://www.postgresql.org/message-id/20221121003815.qnwlnz2lhkow2e5w%40awork3.anarazel.de,\nI haven't spent enough time on it.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 23 Nov 2022 11:39:43 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Introduce a new view for checkpointer related stats" }, { "msg_contents": "Hi,\n\nOn 2022-11-23 11:39:43 +0530, Bharath Rupireddy wrote:\n> On Wed, Nov 23, 2022 at 2:23 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > On 2022-11-22 18:08:28 +0530, Bharath Rupireddy wrote:\n> > >\n> > > CREATE VIEW pg_stat_bgwriter AS\n> > > SELECT\n> > > - pg_stat_get_bgwriter_timed_checkpoints() AS checkpoints_timed,\n> > > - pg_stat_get_bgwriter_requested_checkpoints() AS checkpoints_req,\n> > > - pg_stat_get_checkpoint_write_time() AS checkpoint_write_time,\n> > > - pg_stat_get_checkpoint_sync_time() AS checkpoint_sync_time,\n> > > - pg_stat_get_bgwriter_buf_written_checkpoints() AS buffers_checkpoint,\n> > > pg_stat_get_bgwriter_buf_written_clean() AS buffers_clean,\n> > > pg_stat_get_bgwriter_maxwritten_clean() AS maxwritten_clean,\n> > > - pg_stat_get_buf_written_backend() AS buffers_backend,\n> > > - pg_stat_get_buf_fsync_backend() AS buffers_backend_fsync,\n> > > pg_stat_get_buf_alloc() AS 
buffers_alloc,\n> > > pg_stat_get_bgwriter_stat_reset_time() AS stats_reset;\n> >\n> >\n> > I think we should consider deprecating the pg_stat_bgwriter columns but\n> > leaving them in place for a few years. New stuff should only be added to\n> > pg_stat_checkpointer, but we don't need to break old monitoring queries.\n> \n> May I know what it means to deprecate pg_stat_bgwriter columns?\n\nAdd a note to the docs saying that the columns will be removed.\n\n\n> Are\n> you suggesting to add deprecation warnings to corresponding functions\n> pg_stat_get_bgwriter_buf_written_clean(),\n> pg_stat_get_bgwriter_maxwritten_clean(), pg_stat_get_buf_alloc() and\n> pg_stat_get_bgwriter_stat_reset_time() and in the docs?\n\nI'm thinking of the checkpoint related columns in pg_stat_bgwriter.\n\nIf we move, rather than duplicate, the pg_stat_bgwriter columns to\npg_stat_checkpointer, everyone will have to update their monitoring scripts\nwhen upgrading and will need to add version dependency if they monitor\nmultiple versions. If we instead keep the duplicated columns in\npg_stat_bgwriter for 5 years, users can reloy on pg_stat_checkpointer in all\nsupported versions.\n\nTo be clear, it isn't a very heavy burden for users to make these\nadjustments. But if it only costs us a few lines to keep the old columns for a\nbit, that seems worth it.\n\n\n> And eventually do away with the bgwriter stats and the file\n> pgstat_bgwriter.c? 
Aren't the bgwriter stats buf_written_clean,\n> maxwritten_clean and buf_alloc useful?\n\nCorrect, I don't think we should remove those.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 25 Nov 2022 15:02:48 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Introduce a new view for checkpointer related stats" }, { "msg_contents": "On Sat, Nov 26, 2022 at 4:32 AM Andres Freund <andres@anarazel.de> wrote:\n>\n\nThanks Andres for reviewing.\n\n> > May I know what it means to deprecate pg_stat_bgwriter columns?\n>\n> Add a note to the docs saying that the columns will be removed.\n\nDone.\n\n> > Are\n> > you suggesting to add deprecation warnings to corresponding functions\n> > pg_stat_get_bgwriter_buf_written_clean(),\n> > pg_stat_get_bgwriter_maxwritten_clean(), pg_stat_get_buf_alloc() and\n> > pg_stat_get_bgwriter_stat_reset_time() and in the docs?\n>\n> I'm thinking of the checkpoint related columns in pg_stat_bgwriter.\n\nAdded note in the docs alongside each deprecated pg_stat_bgwriter's\ncheckpoint related column.\n\n> If we move, rather than duplicate, the pg_stat_bgwriter columns to\n> pg_stat_checkpointer, everyone will have to update their monitoring scripts\n> when upgrading and will need to add version dependency if they monitor\n> multiple versions. If we instead keep the duplicated columns in\n> pg_stat_bgwriter for 5 years, users can reloy on pg_stat_checkpointer in all\n> supported versions.\n\nAgree. However, it's a bit difficult to keep track of deprecated\nthings and come back after 5 years to clean them up unless \"some\"\npostgres hacker/developer/user notices it again. Perhaps, adding a\nseparate section, say 'Deprecated and To-Be-Removed, in\nhttps://wiki.postgresql.org/wiki/Main_Page is a good idea.\n\n> To be clear, it isn't a very heavy burden for users to make these\n> adjustments. 
But if it only costs us a few lines to keep the old columns for a\n> bit, that seems worth it.\n\nYes.\n\nI'm attaching the v3 patch for further review.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 28 Nov 2022 16:08:18 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Introduce a new view for checkpointer related stats" }, { "msg_contents": "On Tue, Nov 22, 2022 at 3:53 PM Andres Freund <andres@anarazel.de> wrote:\n> I think we should consider deprecating the pg_stat_bgwriter columns but\n> leaving them in place for a few years. New stuff should only be added to\n> pg_stat_checkpointer, but we don't need to break old monitoring queries.\n\nI vote to just remove them. I think that most people won't update\ntheir queries until they are forced to do so. I don't think it\nmatters very much when we force them to do that.\n\nOur track record in following through on deprecations is pretty bad\ntoo, which is another consideration.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 28 Nov 2022 12:58:48 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Introduce a new view for checkpointer related stats" }, { "msg_contents": "On Mon, Nov 28, 2022 at 11:29 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Nov 22, 2022 at 3:53 PM Andres Freund <andres@anarazel.de> wrote:\n> > I think we should consider deprecating the pg_stat_bgwriter columns but\n> > leaving them in place for a few years. New stuff should only be added to\n> > pg_stat_checkpointer, but we don't need to break old monitoring queries.\n>\n> I vote to just remove them. I think that most people won't update\n> their queries until they are forced to do so. 
I don't think it\n> matters very much when we force them to do that.\n>\n> Our track record in following through on deprecations is pretty bad\n> too, which is another consideration.\n\nHm. I'm fine with either way. Even if we remove the checkpointer\ncolumns from pg_stat_bgwriter, the changes that one needs to do are so\nminimal and straightforward because the column names aren't changed,\njust the view name.\n\nHaving said that, I don't have a strong opinion here. I'll leave it to\nthe other hacker's opinion and/or committer's discretion.\n\nFWIW - here's the v2 patch that gets rid of checkpointer columns from\npg_stat_bgwriter\nhttps://www.postgresql.org/message-id/CALj2ACX8jFET1C3bs_edz_8JRcMg5nz8Y7ryjGaCsfnVpAYoVQ%40mail.gmail.com\nand here's the v3 patch that deprecates\nhttps://www.postgresql.org/message-id/CALj2ACUjEvPQYGJHmH2FrAj1gmvHskNrOeNUr7xnwjtkYVZvEQ%40mail.gmail.com.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 29 Nov 2022 13:05:55 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Introduce a new view for checkpointer related stats" }, { "msg_contents": "On Mon, Nov 28, 2022 at 11:29 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Nov 22, 2022 at 3:53 PM Andres Freund <andres@anarazel.de> wrote:\n> > I think we should consider deprecating the pg_stat_bgwriter columns but\n> > leaving them in place for a few years. New stuff should only be added to\n> > pg_stat_checkpointer, but we don't need to break old monitoring queries.\n>\n> I vote to just remove them. I think that most people won't update\n> their queries until they are forced to do so. 
I don't think it\n> matters very much when we force them to do that.\n>\n\n+1.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 29 Nov 2022 18:22:29 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Introduce a new view for checkpointer related stats" }, { "msg_contents": "Hi,\n\nOn 11/28/22 6:58 PM, Robert Haas wrote:\n> On Tue, Nov 22, 2022 at 3:53 PM Andres Freund <andres@anarazel.de> wrote:\n>> I think we should consider deprecating the pg_stat_bgwriter columns but\n>> leaving them in place for a few years. New stuff should only be added to\n>> pg_stat_checkpointer, but we don't need to break old monitoring queries.\n> \n> I vote to just remove them. I think that most people won't update\n> their queries until they are forced to do so. I don't think it\n> matters very much when we force them to do that.\n> \n> Our track record in following through on deprecations is pretty bad\n> too, which is another consideration.\n> \n\nSame point of view.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 29 Nov 2022 15:05:58 +0100", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Introduce a new view for checkpointer related stats" }, { "msg_contents": "On Mon, 28 Nov 2022 at 13:00, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Nov 22, 2022 at 3:53 PM Andres Freund <andres@anarazel.de> wrote:\n\n> I vote to just remove them. I think that most people won't update\n> their queries until they are forced to do so. I don't think it\n> matters very much when we force them to do that.\n\nI would tend to agree.\n\nIf we wanted to have a deprecation period I think the smooth way to do\nit would be to introduce two new functions/views with the new split.\nThen mark the entire old view as deprecated. 
That way there isn't a\nmix of new and old columns in the same view/function.\n\nI don't think it's really necessary to do that here but there'll\nprobably be instances where it would be worth doing.\n\n-- \ngreg\n\n\n", "msg_date": "Tue, 29 Nov 2022 17:29:12 -0500", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: Introduce a new view for checkpointer related stats" }, { "msg_contents": "Hi,\n\nOn 2022-11-28 12:58:48 -0500, Robert Haas wrote:\n> On Tue, Nov 22, 2022 at 3:53 PM Andres Freund <andres@anarazel.de> wrote:\n> > I think we should consider deprecating the pg_stat_bgwriter columns but\n> > leaving them in place for a few years. New stuff should only be added to\n> > pg_stat_checkpointer, but we don't need to break old monitoring queries.\n> \n> I vote to just remove them. I think that most people won't update\n> their queries until they are forced to do so.\n\nSeems most agree with that... WFM.\n\nBut:\n\n\n> I don't think it matters very much when we force them to do that.\n\nI don't think that's true. If we remove the columns when the last version\nwithout pg_stat_checkpointer has gone out of support, users don't need to have\nversion switches in their monitoring setups.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 29 Nov 2022 16:31:30 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Introduce a new view for checkpointer related stats" }, { "msg_contents": "On Wed, Nov 30, 2022 at 6:01 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-11-28 12:58:48 -0500, Robert Haas wrote:\n> > On Tue, Nov 22, 2022 at 3:53 PM Andres Freund <andres@anarazel.de> wrote:\n> > > I think we should consider deprecating the pg_stat_bgwriter columns but\n> > > leaving them in place for a few years. New stuff should only be added to\n> > > pg_stat_checkpointer, but we don't need to break old monitoring queries.\n> >\n> > I vote to just remove them. 
I think that most people won't update\n> > their queries until they are forced to do so.\n>\n> Seems most agree with that... WFM.\n\nThanks. I'm attaching the v2 patch from upthread again here as we all\nagree to remove checkpointer columns from pg_stat_bgwriter view and\nhave them in the new view pg_stat_checkpointer.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 30 Nov 2022 12:04:33 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Introduce a new view for checkpointer related stats" }, { "msg_contents": "Hi,\n\nOn 11/30/22 7:34 AM, Bharath Rupireddy wrote:\n> On Wed, Nov 30, 2022 at 6:01 AM Andres Freund <andres@anarazel.de> wrote:\n>>\n>> Hi,\n>>\n>> On 2022-11-28 12:58:48 -0500, Robert Haas wrote:\n>>> On Tue, Nov 22, 2022 at 3:53 PM Andres Freund <andres@anarazel.de> wrote:\n>>>> I think we should consider deprecating the pg_stat_bgwriter columns but\n>>>> leaving them in place for a few years. New stuff should only be added to\n>>>> pg_stat_checkpointer, but we don't need to break old monitoring queries.\n>>>\n>>> I vote to just remove them. I think that most people won't update\n>>> their queries until they are forced to do so.\n>>\n>> Seems most agree with that... WFM.\n> \n> Thanks. 
I'm attaching the v2 patch from upthread again here as we all\n> agree to remove checkpointer columns from pg_stat_bgwriter view and\n> have them in the new view pg_stat_checkpointer.\n> \n\n+CREATE VIEW pg_stat_checkpointer AS\n+ SELECT\n+ pg_stat_get_timed_checkpoints() AS checkpoints_timed,\n+ pg_stat_get_requested_checkpoints() AS checkpoints_req,\n+ pg_stat_get_checkpoint_write_time() AS checkpoint_write_time,\n+ pg_stat_get_checkpoint_sync_time() AS checkpoint_sync_time,\n+ pg_stat_get_buf_written_checkpoints() AS buffers_checkpoint,\n+ pg_stat_get_buf_written_backend() AS buffers_backend,\n+ pg_stat_get_buf_fsync_backend() AS buffers_backend_fsync,\n+ pg_stat_get_checkpointer_stat_reset_time() AS stats_reset;\n+\n\nI still think that having checkpoints_ prefix in a pg_stat_checkpointer view sounds \"weird\" (made sense when they were part of pg_stat_bgwriter)\n\nmaybe we could have something like this instead?\n\n+CREATE VIEW pg_stat_checkpointer AS\n+ SELECT\n+ pg_stat_get_timed_checkpoints() AS num_timed,\n+ pg_stat_get_requested_checkpoints() AS num_req,\n+ pg_stat_get_checkpoint_write_time() AS total_write_time,\n+ pg_stat_get_checkpoint_sync_time() AS total_sync_time,\n+ pg_stat_get_buf_written_checkpoints() AS buffers_checkpoint,\n+ pg_stat_get_buf_written_backend() AS buffers_backend,\n+ pg_stat_get_buf_fsync_backend() AS buffers_backend_fsync,\n+ pg_stat_get_checkpointer_stat_reset_time() AS stats_reset;\n+\n\nThat's a nit in any case and the patch LGTM.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 30 Nov 2022 08:13:14 +0100", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Introduce a new view for checkpointer related stats" }, { "msg_contents": "On Wed, Nov 30, 2022 at 12:44 PM Drouvot, Bertrand\n<bertranddrouvot.pg@gmail.com> wrote:\n>\n> +CREATE VIEW 
pg_stat_checkpointer AS\n> + SELECT\n> + pg_stat_get_timed_checkpoints() AS checkpoints_timed,\n> + pg_stat_get_requested_checkpoints() AS checkpoints_req,\n> + pg_stat_get_checkpoint_write_time() AS checkpoint_write_time,\n> + pg_stat_get_checkpoint_sync_time() AS checkpoint_sync_time,\n> + pg_stat_get_buf_written_checkpoints() AS buffers_checkpoint,\n> + pg_stat_get_buf_written_backend() AS buffers_backend,\n> + pg_stat_get_buf_fsync_backend() AS buffers_backend_fsync,\n> + pg_stat_get_checkpointer_stat_reset_time() AS stats_reset;\n> +\n>\n> I still think that having checkpoints_ prefix in a pg_stat_checkpointer view sounds \"weird\" (made sense when they were part of pg_stat_bgwriter)\n>\n> maybe we could have something like this instead?\n>\n> +CREATE VIEW pg_stat_checkpointer AS\n> + SELECT\n> + pg_stat_get_timed_checkpoints() AS num_timed,\n> + pg_stat_get_requested_checkpoints() AS num_req,\n> + pg_stat_get_checkpoint_write_time() AS total_write_time,\n> + pg_stat_get_checkpoint_sync_time() AS total_sync_time,\n> + pg_stat_get_buf_written_checkpoints() AS buffers_checkpoint,\n> + pg_stat_get_buf_written_backend() AS buffers_backend,\n> + pg_stat_get_buf_fsync_backend() AS buffers_backend_fsync,\n> + pg_stat_get_checkpointer_stat_reset_time() AS stats_reset;\n> +\n\nI don't have a strong opinion about changing column names. However, if\nwe were to change it, I prefer to use names that\nPgStat_CheckpointerStats has. 
BTW, that's what\nPgStat_BgWriterStats/pg_stat_bgwriter and\nPgStat_ArchiverStats/pg_stat_archiver uses.\ntypedef struct PgStat_CheckpointerStats\n{\n PgStat_Counter timed_checkpoints;\n PgStat_Counter requested_checkpoints;\n PgStat_Counter checkpoint_write_time; /* times in milliseconds */\n PgStat_Counter checkpoint_sync_time;\n PgStat_Counter buf_written_checkpoints;\n PgStat_Counter buf_written_backend;\n PgStat_Counter buf_fsync_backend;\n TimestampTz stat_reset_timestamp;\n} PgStat_CheckpointerStats;\n\n> That's a nit in any case and the patch LGTM.\n\nThanks.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 30 Nov 2022 17:15:08 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Introduce a new view for checkpointer related stats" }, { "msg_contents": "On Wed, Nov 30, 2022 at 5:15 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> I don't have a strong opinion about changing column names. However, if\n> we were to change it, I prefer to use names that\n> PgStat_CheckpointerStats has. BTW, that's what\n> PgStat_BgWriterStats/pg_stat_bgwriter and\n> PgStat_ArchiverStats/pg_stat_archiver uses.\n\nAfter thinking about this a while, I convinced myself to change the\ncolumn names to be a bit more meaningful. I still think having\ncheckpoints in the column names is needed because it also has other\nbackend related columns. 
I'm attaching the v4 patch for further\nreview.\nCREATE VIEW pg_stat_checkpointer AS\n SELECT\n pg_stat_get_timed_checkpoints() AS timed_checkpoints,\n pg_stat_get_requested_checkpoints() AS requested_checkpoints,\n pg_stat_get_checkpoint_write_time() AS checkpoint_write_time,\n pg_stat_get_checkpoint_sync_time() AS checkpoint_sync_time,\n pg_stat_get_buf_written_checkpoints() AS buffers_written_checkpoints,\n pg_stat_get_buf_written_backend() AS buffers_written_backend,\n pg_stat_get_buf_fsync_backend() AS buffers_fsync_backend,\n pg_stat_get_checkpointer_stat_reset_time() AS stats_reset;\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 2 Dec 2022 11:20:00 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Introduce a new view for checkpointer related stats" }, { "msg_contents": "On Thu, Dec 1, 2022 at 9:50 PM Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n> On Wed, Nov 30, 2022 at 5:15 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > I don't have a strong opinion about changing column names. However, if\n> > we were to change it, I prefer to use names that\n> > PgStat_CheckpointerStats has. BTW, that's what\n> > PgStat_BgWriterStats/pg_stat_bgwriter and\n> > PgStat_ArchiverStats/pg_stat_archiver uses.\n>\n> After thinking about this a while, I convinced myself to change the\n> column names to be a bit more meaningful. I still think having\n> checkpoints in the column names is needed because it also has other\n> backend related columns. 
I'm attaching the v4 patch for further\n> review.\n> CREATE VIEW pg_stat_checkpointer AS\n>     SELECT\n>         pg_stat_get_timed_checkpoints() AS timed_checkpoints,\n>         pg_stat_get_requested_checkpoints() AS requested_checkpoints,\n>         pg_stat_get_checkpoint_write_time() AS checkpoint_write_time,\n>         pg_stat_get_checkpoint_sync_time() AS checkpoint_sync_time,\n>         pg_stat_get_buf_written_checkpoints() AS\n> buffers_written_checkpoints,\n>         pg_stat_get_buf_written_backend() AS buffers_written_backend,\n>         pg_stat_get_buf_fsync_backend() AS buffers_fsync_backend,\n>         pg_stat_get_checkpointer_stat_reset_time() AS stats_reset;\n>\n\nIMO, “buffers_written_checkpoints” is confusing. What do you think?\n\n> --\n> Bharath Rupireddy\n> PostgreSQL Contributors Team\n> RDS Open Source Databases\n> Amazon Web Services: https://aws.amazon.com\n>
", "msg_date": "Thu, 1 Dec 2022 23:24:46 -0800", "msg_from": "sirisha chamarthi <sirichamarthi22@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Introduce a new view for checkpointer related stats" }, { "msg_contents": "On Fri, Dec 2, 2022 at 12:54 PM sirisha chamarthi\n<sirichamarthi22@gmail.com> wrote:\n>\n> On Thu, Dec 1, 2022 at 9:50 PM Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n>>\n>> On Wed, Nov 30, 2022 at 5:15 PM Bharath Rupireddy\n>> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>> >\n>> > I don't have a strong opinion about changing column names. However, if\n>> > we were to change it, I prefer to use names that\n>> > PgStat_CheckpointerStats has. BTW, that's what\n>> > PgStat_BgWriterStats/pg_stat_bgwriter and\n>> > PgStat_ArchiverStats/pg_stat_archiver uses.\n>>\n>> After thinking about this a while, I convinced myself to change the\n>> column names to be a bit more meaningful. I still think having\n>> checkpoints in the column names is needed because it also has other\n>> backend related columns. 
I'm attaching the v4 patch for further\n>> review.\n>> CREATE VIEW pg_stat_checkpointer AS\n>> SELECT\n>> pg_stat_get_timed_checkpoints() AS timed_checkpoints,\n>> pg_stat_get_requested_checkpoints() AS requested_checkpoints,\n>> pg_stat_get_checkpoint_write_time() AS checkpoint_write_time,\n>> pg_stat_get_checkpoint_sync_time() AS checkpoint_sync_time,\n>> pg_stat_get_buf_written_checkpoints() AS buffers_written_checkpoints,\n>> pg_stat_get_buf_written_backend() AS buffers_written_backend,\n>> pg_stat_get_buf_fsync_backend() AS buffers_fsync_backend,\n>> pg_stat_get_checkpointer_stat_reset_time() AS stats_reset;\n>\n>\n> IMO, “buffers_written_checkpoints” is confusing. What do you think?\n\nThanks. We can be \"more and more\" meaningful by naming\nbuffers_written_by_checkpoints, buffers_written_by_backend,\nbuffers_fsync_by_backend. However, I don't think that's a good idea\nhere as names get too long.\n\nHaving said that, I'll leave it to the committer's discretion.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 2 Dec 2022 13:00:25 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Introduce a new view for checkpointer related stats" }, { "msg_contents": "Hi,\n\nOn 12/2/22 6:50 AM, Bharath Rupireddy wrote:\n> On Wed, Nov 30, 2022 at 5:15 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>>\n>> I don't have a strong opinion about changing column names. However, if\n>> we were to change it, I prefer to use names that\n>> PgStat_CheckpointerStats has. BTW, that's what\n>> PgStat_BgWriterStats/pg_stat_bgwriter and\n>> PgStat_ArchiverStats/pg_stat_archiver uses.\n> \n> After thinking about this a while, I convinced myself to change the\n> column names to be a bit more meaningful. 
I still think having\n> checkpoints in the column names is needed because it also has other\n> backend related columns. I'm attaching the v4 patch for further\n> review.\n> CREATE VIEW pg_stat_checkpointer AS\n> SELECT\n> pg_stat_get_timed_checkpoints() AS timed_checkpoints,\n> pg_stat_get_requested_checkpoints() AS requested_checkpoints,\n> pg_stat_get_checkpoint_write_time() AS checkpoint_write_time,\n> pg_stat_get_checkpoint_sync_time() AS checkpoint_sync_time,\n> pg_stat_get_buf_written_checkpoints() AS buffers_written_checkpoints,\n> pg_stat_get_buf_written_backend() AS buffers_written_backend,\n> pg_stat_get_buf_fsync_backend() AS buffers_fsync_backend,\n> pg_stat_get_checkpointer_stat_reset_time() AS stats_reset;\n> \n\nThanks!\n\nPatch LGTM, marking it as Ready for Committer.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 2 Dec 2022 08:36:38 +0100", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Introduce a new view for checkpointer related stats" }, { "msg_contents": "The patch looks good to me.\n\nThanks & Regards,\nNitin Jadhav\n\nOn Fri, Dec 2, 2022 at 11:20 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, Nov 30, 2022 at 5:15 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > I don't have a strong opinion about changing column names. However, if\n> > we were to change it, I prefer to use names that\n> > PgStat_CheckpointerStats has. BTW, that's what\n> > PgStat_BgWriterStats/pg_stat_bgwriter and\n> > PgStat_ArchiverStats/pg_stat_archiver uses.\n>\n> After thinking about this a while, I convinced myself to change the\n> column names to be a bit more meaningful. I still think having\n> checkpoints in the column names is needed because it also has other\n> backend related columns. 
I'm attaching the v4 patch for further\n> review.\n> CREATE VIEW pg_stat_checkpointer AS\n> SELECT\n> pg_stat_get_timed_checkpoints() AS timed_checkpoints,\n> pg_stat_get_requested_checkpoints() AS requested_checkpoints,\n> pg_stat_get_checkpoint_write_time() AS checkpoint_write_time,\n> pg_stat_get_checkpoint_sync_time() AS checkpoint_sync_time,\n> pg_stat_get_buf_written_checkpoints() AS buffers_written_checkpoints,\n> pg_stat_get_buf_written_backend() AS buffers_written_backend,\n> pg_stat_get_buf_fsync_backend() AS buffers_fsync_backend,\n> pg_stat_get_checkpointer_stat_reset_time() AS stats_reset;\n>\n> --\n> Bharath Rupireddy\n> PostgreSQL Contributors Team\n> RDS Open Source Databases\n> Amazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 13 Dec 2022 16:14:12 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Introduce a new view for checkpointer related stats" }, { "msg_contents": "On Fri, Dec 02, 2022 at 08:36:38AM +0100, Drouvot, Bertrand wrote:\n> Patch LGTM, marking it as Ready for Committer.\n\nUnfortunately, this patch no longer applies. 
Bharath, would you mind\nposting a rebased version?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 20 Jan 2023 16:09:50 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Introduce a new view for checkpointer related stats" }, { "msg_contents": "On Fri, Dec 2, 2022 at 1:07 PM Drouvot, Bertrand\n<bertranddrouvot.pg@gmail.com> wrote:\n>\n> Patch LGTM, marking it as Ready for Committer.\n\nHad to rebase, attached v5 patch for further consideration.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sat, 21 Jan 2023 05:56:06 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Introduce a new view for checkpointer related stats" }, { "msg_contents": "On Sat, Jan 21, 2023 at 5:56 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Fri, Dec 2, 2022 at 1:07 PM Drouvot, Bertrand\n> <bertranddrouvot.pg@gmail.com> wrote:\n> >\n> > Patch LGTM, marking it as Ready for Committer.\n>\n> Had to rebase, attached v5 patch for further consideration.\n\nOne more rebase due to 28e626bd (pgstat: Infrastructure for more\ndetailed IO statistics). 
PSA v6 patch.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 9 Feb 2023 12:21:51 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Introduce a new view for checkpointer related stats" }, { "msg_contents": "Hi,\n\nOn 2023-02-09 12:21:51 +0530, Bharath Rupireddy wrote:\n> @@ -1105,18 +1105,22 @@ CREATE VIEW pg_stat_archiver AS\n> \n> CREATE VIEW pg_stat_bgwriter AS\n> SELECT\n> - pg_stat_get_bgwriter_timed_checkpoints() AS checkpoints_timed,\n> - pg_stat_get_bgwriter_requested_checkpoints() AS checkpoints_req,\n> - pg_stat_get_checkpoint_write_time() AS checkpoint_write_time,\n> - pg_stat_get_checkpoint_sync_time() AS checkpoint_sync_time,\n> - pg_stat_get_bgwriter_buf_written_checkpoints() AS buffers_checkpoint,\n> pg_stat_get_bgwriter_buf_written_clean() AS buffers_clean,\n> pg_stat_get_bgwriter_maxwritten_clean() AS maxwritten_clean,\n> - pg_stat_get_buf_written_backend() AS buffers_backend,\n> - pg_stat_get_buf_fsync_backend() AS buffers_backend_fsync,\n> pg_stat_get_buf_alloc() AS buffers_alloc,\n> pg_stat_get_bgwriter_stat_reset_time() AS stats_reset;\n> \n> +CREATE VIEW pg_stat_checkpointer AS\n> + SELECT\n> + pg_stat_get_timed_checkpoints() AS timed_checkpoints,\n> + pg_stat_get_requested_checkpoints() AS requested_checkpoints,\n> + pg_stat_get_checkpoint_write_time() AS checkpoint_write_time,\n> + pg_stat_get_checkpoint_sync_time() AS checkpoint_sync_time,\n> + pg_stat_get_buf_written_checkpoints() AS buffers_written_checkpoints,\n> + pg_stat_get_buf_written_backend() AS buffers_written_backend,\n> + pg_stat_get_buf_fsync_backend() AS buffers_fsync_backend,\n> + pg_stat_get_checkpointer_stat_reset_time() AS stats_reset;\n> +\n\nI don't think the backend written stats belong more accurately in\npg_stat_checkpointer than pg_stat_bgwriter.\n\n\nI continue to be worried about breaking just 
about any postgres monitoring\nsetup.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 8 Feb 2023 23:03:49 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Introduce a new view for checkpointer related stats" }, { "msg_contents": "On Thu, Feb 9, 2023 at 12:33 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n\nThanks for looking at this.\n\n> On 2023-02-09 12:21:51 +0530, Bharath Rupireddy wrote:\n> > @@ -1105,18 +1105,22 @@ CREATE VIEW pg_stat_archiver AS\n> >\n> > CREATE VIEW pg_stat_bgwriter AS\n> > SELECT\n> > - pg_stat_get_bgwriter_timed_checkpoints() AS checkpoints_timed,\n> > - pg_stat_get_bgwriter_requested_checkpoints() AS checkpoints_req,\n> > - pg_stat_get_checkpoint_write_time() AS checkpoint_write_time,\n> > - pg_stat_get_checkpoint_sync_time() AS checkpoint_sync_time,\n> > - pg_stat_get_bgwriter_buf_written_checkpoints() AS buffers_checkpoint,\n> > pg_stat_get_bgwriter_buf_written_clean() AS buffers_clean,\n> > pg_stat_get_bgwriter_maxwritten_clean() AS maxwritten_clean,\n> > - pg_stat_get_buf_written_backend() AS buffers_backend,\n> > - pg_stat_get_buf_fsync_backend() AS buffers_backend_fsync,\n> > pg_stat_get_buf_alloc() AS buffers_alloc,\n> > pg_stat_get_bgwriter_stat_reset_time() AS stats_reset;\n> >\n> > +CREATE VIEW pg_stat_checkpointer AS\n> > + SELECT\n> > + pg_stat_get_timed_checkpoints() AS timed_checkpoints,\n> > + pg_stat_get_requested_checkpoints() AS requested_checkpoints,\n> > + pg_stat_get_checkpoint_write_time() AS checkpoint_write_time,\n> > + pg_stat_get_checkpoint_sync_time() AS checkpoint_sync_time,\n> > + pg_stat_get_buf_written_checkpoints() AS buffers_written_checkpoints,\n> > + pg_stat_get_buf_written_backend() AS buffers_written_backend,\n> > + pg_stat_get_buf_fsync_backend() AS buffers_fsync_backend,\n> > + pg_stat_get_checkpointer_stat_reset_time() AS stats_reset;\n> > +\n>\n> I don't think the backend written stats belong more accurately in\n> 
pg_stat_checkpointer than pg_stat_bgwriter.\n\nWe accumulate buffers_written_backend and buffers_fsync_backend of all\nbackends under checkpointer stats to show the aggregated results to\nthe users. I think this is correct because the checkpointer is the one\nthat processes fsync requests (of course, backends themselves can\nfsync when needed, that's what the buffers_fsync_backend shows),\nwhereas the bgwriter doesn't process fsync requests, IIUC.\n\n> I continue to be worried about breaking just about any postgres monitoring\n> setup.\n\nHm. Yes, it requires minimal and straightforward changes in monitoring\nscripts. Please note that we separated out bgwriter and checkpointer\nin v9.2 12 years ago but we haven't had a chance to separate the stats\nso far. We might do it at some point of time, IMHO this is that time.\n\nWe did away with promote_trigger_file (cd4329d) very recently. The\nagreement was that the changes required to move on to other mechanisms\nof promotion are minimal, hence we didn't want it to be first\ndeprecated and then removed.\n\nFrom the discussion upthread, it looks like Robert, Amit, Bertrand,\nGreg and myself are in favour of not having a deprecated version but\nmoving them to the new pg_stat_checkpointer view.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 9 Feb 2023 19:00:00 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Introduce a new view for checkpointer related stats" }, { "msg_contents": "Hi,\n\nOn 2023-02-09 19:00:00 +0530, Bharath Rupireddy wrote:\n> On Thu, Feb 9, 2023 at 12:33 PM Andres Freund <andres@anarazel.de> wrote:\n> > On 2023-02-09 12:21:51 +0530, Bharath Rupireddy wrote:\n> > > @@ -1105,18 +1105,22 @@ CREATE VIEW pg_stat_archiver AS\n> > >\n> > > CREATE VIEW pg_stat_bgwriter AS\n> > > SELECT\n> > > - pg_stat_get_bgwriter_timed_checkpoints() AS 
checkpoints_timed,\n> > > - pg_stat_get_bgwriter_requested_checkpoints() AS checkpoints_req,\n> > > - pg_stat_get_checkpoint_write_time() AS checkpoint_write_time,\n> > > - pg_stat_get_checkpoint_sync_time() AS checkpoint_sync_time,\n> > > - pg_stat_get_bgwriter_buf_written_checkpoints() AS buffers_checkpoint,\n> > > pg_stat_get_bgwriter_buf_written_clean() AS buffers_clean,\n> > > pg_stat_get_bgwriter_maxwritten_clean() AS maxwritten_clean,\n> > > - pg_stat_get_buf_written_backend() AS buffers_backend,\n> > > - pg_stat_get_buf_fsync_backend() AS buffers_backend_fsync,\n> > > pg_stat_get_buf_alloc() AS buffers_alloc,\n> > > pg_stat_get_bgwriter_stat_reset_time() AS stats_reset;\n> > >\n> > > +CREATE VIEW pg_stat_checkpointer AS\n> > > + SELECT\n> > > + pg_stat_get_timed_checkpoints() AS timed_checkpoints,\n> > > + pg_stat_get_requested_checkpoints() AS requested_checkpoints,\n> > > + pg_stat_get_checkpoint_write_time() AS checkpoint_write_time,\n> > > + pg_stat_get_checkpoint_sync_time() AS checkpoint_sync_time,\n> > > + pg_stat_get_buf_written_checkpoints() AS buffers_written_checkpoints,\n> > > + pg_stat_get_buf_written_backend() AS buffers_written_backend,\n> > > + pg_stat_get_buf_fsync_backend() AS buffers_fsync_backend,\n> > > + pg_stat_get_checkpointer_stat_reset_time() AS stats_reset;\n> > > +\n> >\n> > I don't think the backend written stats belong more accurately in\n> > pg_stat_checkpointer than pg_stat_bgwriter.\n> \n> We accumulate buffers_written_backend and buffers_fsync_backend of all\n> backends under checkpointer stats to show the aggregated results to\n> the users. 
I think this is correct because the checkpointer is the one\n> that processes fsync requests (of course, backends themselves can\n> fsync when needed, that's what the buffers_fsync_backend shows),\n> whereas bgwriter doesn't perform IIUC.\n\nThat's true for buffers_fsync_backend, but not true for\nbuffers_backend/buffers_written_backend.\n\nThat isn't tied to checkpointer.\n\n\nI think if we end up breaking compat, we should just drop that column. The\npg_stat_io patch from Melanie, which I hope to finish committing by tomorrow,\nprovides that in a more useful way, in a less confusing place.\n\nI'm not sure it's worth having buffers_fsync_backend in pg_stat_checkpointer\nin that case. You can get nearly the same information from pg_stat_io as well\n(except fsyncs for SLRUs that couldn't be put into the queue, which you'd not\nsee right now - hard to believe that ever happens at a relelvant frequency).\n\n\n> > I continue to be worried about breaking just about any postgres monitoring\n> > setup.\n> \n> Hm. Yes, it requires minimal and straightforward changes in monitoring\n> scripts. Please note that we separated out bgwriter and checkpointer\n> in v9.2 12 years ago but we haven't had a chance to separate the stats\n> so far. We might do it at some point of time, IMHO this is that time.\n\n> We did away with promote_trigger_file (cd4329d) very recently. The\n> agreement was that the changes required to move on to other mechanisms\n> of promotion are minimal, hence we didn't want it to be first\n> deprecated and then removed.\n\nThat's not really comparable, because we have had pg_ctl promote for a long\ntime. You can use it across all supported versions. pg_promote() is nearly\nthere as well. 
Whereas there's no way to use same query across all versions.\n\nIME there also exist a lot more hand-rolled monitoring setups\nthan hand-rolled automatic promotion setups.\n\n\n> From the discussion upthread, it looks like Robert, Amit, Bertrand,\n> Greg and myself are in favour of not having a deprecated version but\n> moving them to the new pg_stat_checkpointer view.\n\nYep, and I think you are all wrong, and that this is just going to cause\nunnecessary pain :). I'm not going to try to prevent the patch from going in\nbecause of this, just to be clear.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 9 Feb 2023 16:46:04 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Introduce a new view for checkpointer related stats" }, { "msg_contents": "On Thu, Feb 09, 2023 at 04:46:04PM -0800, Andres Freund wrote:\n> I think if we end up breaking compat, we should just drop that\n> column.\n\nIndeed.\n\n> Yep, and I think you are all wrong, and that this is just going to cause\n> unnecessary pain :). I'm not going to try to prevent the patch from going in\n> because of this, just to be clear.\n\nCatalog attributes have faced a lot of renames across the years, with\nthe same potential of breakages for monitoring tools. I am not saying\nthat all of them are justified, but we have usually done so because it\nmakes sense to reshape things in the way they are now, thinking\nlong-term. Splitting pg_stat_bgwriter into two views does not strike\nme as something that bad, TBH, because it becomes clearer which stats\nare attached to which process (bgwriter or checkpointer). 
(Note: I\nhave not checked in details the stats switching to the new view and\nhow pertinent each choice is.)\n--\nMichael", "msg_date": "Fri, 10 Feb 2023 14:59:13 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Introduce a new view for checkpointer related stats" }, { "msg_contents": "On Fri, Feb 10, 2023 at 6:16 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> That's true for buffers_fsync_backend, but not true for\n> buffers_backend/buffers_written_backend.\n>\n> That isn't tied to checkpointer.\n>\n> I think if we end up breaking compat, we should just drop that column. The\n> pg_stat_io patch from Melanie, which I hope to finish committing by tomorrow,\n> provides that in a more useful way, in a less confusing place.\n\nOn Fri, Feb 10, 2023 at 11:29 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Feb 09, 2023 at 04:46:04PM -0800, Andres Freund wrote:\n> > I think if we end up breaking compat, we should just drop that\n> > column.\n>\n> Indeed.\n\nYeah, pg_stat_io is a better place to track the backend IO stats. I\nremoved buffers_backend, please see the attached 0001 patch.\n\nOn Fri, Feb 10, 2023 at 6:16 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> I'm not sure it's worth having buffers_fsync_backend in pg_stat_checkpointer\n> in that case. You can get nearly the same information from pg_stat_io as well\n> (except fsyncs for SLRUs that couldn't be put into the queue, which you'd not\n> see right now - hard to believe that ever happens at a relelvant frequency).\n\nI think it'd be better to move the SLRU fsync stats during checkpoints\nto the pg_stat_slru view, then it can be a one-stop view to track all\nSLRU IO stats. This lets us remove buffers_fsync_backend too, please\nsee the attached 0002 patch. However, one metric we might miss is the\nnumber of times checkpointer missed to absorb the fsync requests. 
If\nneeded, this metric can be added to the new pg_stat_checkpointer view.\n\n> > Yep, and I think you are all wrong, and that this is just going to cause\n> > unnecessary pain :). I'm not going to try to prevent the patch from going in\n> > because of this, just to be clear.\n>\n> Catalog attributes have faced a lot of renames across the years, with\n> the same potential of breakages for monitoring tools. I am not saying\n> that all of them are justified, but we have usually done so because it\n> makes sense to reshape things in the way they are now, thinking\n> long-term. Splitting pg_stat_bgwriter into two views does not strike\n> me as something that bad, TBH, because it becomes clearer which stats\n> are attached to which process (bgwriter or checkpointer). (Note: I\n> have not checked in details the stats switching to the new view and\n> how pertinent each choice is.)\n\nThanks. FWIW, I've attached the patch introducing pg_stat_checkpointer\nas 0003 here.\n\nPlease review the attached v7 patch set.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 10 Feb 2023 22:00:00 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Introduce a new view for checkpointer related stats" }, { "msg_contents": "On Fri, Feb 10, 2023 at 10:00 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Thanks. FWIW, I've attached the patch introducing pg_stat_checkpointer\n> as 0003 here.\n>\n> Please review the attached v7 patch set.\n\nNeeded a rebase. 
Please review the attached v8 patch set.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 13 Feb 2023 11:31:03 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Introduce a new view for checkpointer related stats" }, { "msg_contents": "On Mon, Feb 13, 2023 at 11:31:03AM +0530, Bharath Rupireddy wrote:\n> Needed a rebase. Please review the attached v8 patch set.\n\nI was looking at this patch, and got a few comments.\n\nFWIW, I kind of agree with the feeling of Bertrand upthread that using\n\"checkpoint_\" in the attribute names for the new view is globally\ninconsistent. After 0003, we get: \n=# select attname from pg_attribute\n where attrelid = 'pg_stat_checkpointer'::regclass\n and attnum > 0;\n attname\n-----------------------------\n timed_checkpoints\n requested_checkpoints\n checkpoint_write_time\n checkpoint_sync_time\n buffers_written_checkpoints\n stats_reset\n(6 rows)\n=# select attname from pg_attribute\n where attrelid = 'pg_stat_bgwriter'::regclass and\n attnum > 0;\n attname\n------------------\n buffers_clean\n maxwritten_clean\n buffers_alloc\n stats_reset\n(4 rows)\n\nThe view for the bgwriter does not do that. I'd suggest to use\nfunctions that are named as pg_stat_get_checkpoint_$att with shorter\n$atts. It is true that \"timed\" is a bit confusing, because it refers\nto a number of checkpoints, and that can be read as a time value (?).\nSo how about num_timed? 
And for the others num_requested and\nbuffers_written?\n\n+ * Unlike the checkpoint fields, reqquests related fields are protected by \n\ns/reqquests/requests/.\n\n SlruSyncFileTag(SlruCtl ctl, const FileTag *ftag, char *path)\n {\n+\tSlruShared\tshared = ctl->shared;\n \tint\t\t\tfd;\n \tint\t\t\tsave_errno;\n \tint\t\t\tresult;\n \n+\t/* update the stats counter of flushes */\n+\tpgstat_count_slru_flush(shared->slru_stats_idx);\n\nWhy is that in 0002? Isn't that something we should treat as a bug\nfix of its own, even backpatching it to make sure that the flush\nrequests for individual commit_ts, multixact and clog files are\ncounted in the stats?\n\nSaying that, I am OK with moving ahead with 0001 and 0002 to remove\nbuffers_backend_fsync and buffers_backend from pg_stat_bgwriter, but\nit is better to merge them into a single commit. It helps with 0003\nand this would encourage the use of pg_stat_io that does a better job\noverall with more field granularity.\n--\nMichael", "msg_date": "Thu, 26 Oct 2023 10:59:58 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Introduce a new view for checkpointer related stats" }, { "msg_contents": "On Thu, Oct 26, 2023 at 7:30 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> I was looking at this patch, and got a few comments.\n\nThanks.\n\n> The view for the bgwriter does not do that. I'd suggest to use\n> functions that are named as pg_stat_get_checkpoint_$att with shorter\n> $atts. It is true that \"timed\" is a bit confusing, because it refers\n> to a number of checkpoints, and that can be read as a time value (?).\n> So how about num_timed? And for the others num_requested and\n> buffers_written?\n\n+1. 
PSA v9-0003.\n\n> + * Unlike the checkpoint fields, reqquests related fields are protected by\n>\n> s/reqquests/requests/.\n\nFixed.\n\n> SlruSyncFileTag(SlruCtl ctl, const FileTag *ftag, char *path)\n> {\n> + SlruShared shared = ctl->shared;\n> int fd;\n> int save_errno;\n> int result;\n>\n> + /* update the stats counter of flushes */\n> + pgstat_count_slru_flush(shared->slru_stats_idx);\n>\n> Why is that in 0002? Isn't that something we should treat as a bug\n> fix of its own, even backpatching it to make sure that the flush\n> requests for individual commit_ts, multixact and clog files are\n> counted in the stats?\n\n+1. I included the fix in a separate patch 0002 here.\n\n> Saying that, I am OK with moving ahead with 0001 and 0002 to remove\n> buffers_backend_fsync and buffers_backend from pg_stat_bgwriter, but\n> it is better to merge them into a single commit. It helps with 0003\n> and this would encourage the use of pg_stat_io that does a better job\n> overall with more field granularity.\n\nI merged v8 0001 and 0002 into one single patch, PSA v9-0001.\n\nPSA v9 patch set.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 26 Oct 2023 22:55:00 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Introduce a new view for checkpointer related stats" }, { "msg_contents": "On Thu, Oct 26, 2023 at 10:55:00PM +0530, Bharath Rupireddy wrote:\n> On Thu, Oct 26, 2023 at 7:30 AM Michael Paquier <michael@paquier.xyz> wrote:\n>> Why is that in 0002? Isn't that something we should treat as a bug\n>> fix of its own, even backpatching it to make sure that the flush\n>> requests for individual commit_ts, multixact and clog files are\n>> counted in the stats?\n> \n> +1. I included the fix in a separate patch 0002 here.\n\nHmm. 
As per the existing call of pgstat_count_slru_flush() in\nSimpleLruWriteAll(), routine called SimpleLruFlush() until ~13 and\ndee663f78439, an incrementation of 1 for slru_stats_idx refers to all\nthe flushes for all the dirty data of this SLRU in a single pass.\nThis addition actually means that we would now increment the counter\nfor each sync request, changing its meaning. Perhaps there is an\nargument for changing the meaning of this parameter to be based on the\nnumber of flush requests completed, but if that were to happen it\nwould be better to remove pgstat_count_slru_flush() in\nSimpleLruWriteAll(), I guess, or just aggregate this counter once,\nwhich would be cheaper.\n\n>> Saying that, I am OK with moving ahead with 0001 and 0002 to remove\n>> buffers_backend_fsync and buffers_backend from pg_stat_bgwriter, but\n>> it is better to merge them into a single commit. It helps with 0003\n>> and this would encourage the use of pg_stat_io that does a better job\n>> overall with more field granularity.\n> \n> I merged v8 0001 and 0002 into one single patch, PSA v9-0001.\n\nv9-0001 is OK, so I have applied it.\n\nv9-0003 is mostly a mechanical change, and it passes the eye test. I\nhave spotted two nits.\n\n+CREATE VIEW pg_stat_checkpointer AS\n+ SELECT\n+ pg_stat_get_checkpointer_num_timed() AS num_timed,\n+ pg_stat_get_checkpointer_num_requested() AS num_requested,\n+ pg_stat_get_checkpointer_write_time() AS write_time,\n+ pg_stat_get_checkpointer_sync_time() AS sync_time,\n+ pg_stat_get_checkpointer_buffers_written() AS buffers_written,\n+ pg_stat_get_checkpointer_stat_reset_time() AS stats_reset;\n\nOkay with this layer. 
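As a quick illustration (not part of the patch itself; it assumes the view and the column names proposed above land unchanged), the new view would be queried like:

```sql
-- Illustrative only: relies on the pg_stat_checkpointer view as proposed
-- in the patch quoted above; adjust if the final commit renames columns.
SELECT num_timed, num_requested, write_time, sync_time,
       buffers_written, stats_reset
FROM pg_stat_checkpointer;
```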
I am wondering if others have opinions to\nshare about these names for the attributes of pg_stat_checkpointer,\nthough.\n\n- single row, containing global data for the cluster.\n+ single row, containing data about the bgwriter of the cluster.\n\n\"bgwriter\" is used in one place of the docs, so perhaps \"background\nwriter\" is better here?\n\nThe error message generated for an incorrect target needs to be\nupdated in pg_stat_reset_shared():\n=# select pg_stat_reset_shared('checkpointe');\nERROR: 22023: unrecognized reset target: \"checkpointe\"\nHINT: Target must be \"archiver\", \"bgwriter\", \"io\", \"recovery_prefetch\", or \"wal\".\n--\nMichael", "msg_date": "Fri, 27 Oct 2023 11:33:07 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Introduce a new view for checkpointer related stats" }, { "msg_contents": "On Fri, Oct 27, 2023 at 8:03 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> Hmm. As per the existing call of pgstat_count_slru_flush() in\n> SimpleLruWriteAll(), routine called SimpleLruFlush() until ~13 and\n> dee663f78439, an incrementation of 1 for slru_stats_idx refers to all\n> the flushes for all the dirty data of this SLRU in a single pass.\n>\n> This addition actually means that we would now increment the counter\n> for each sync request, changing its meaning. Perhaps there is an\n> argument for changing the meaning of this parameter to be based on the\n> number of flush requests completed, but if that were to happen it\n> would be better to remove pgstat_count_slru_flush() in\n> SimpleLruWriteAll(), I guess, or just aggregate this counter once,\n> which would be cheaper.\n\nRight. Interestingly, there are 2 SLRU flush related wait events\nWAIT_EVENT_SLRU_SYNC (\"Waiting for SLRU data to reach durable storage\nfollowing a page write\") and WAIT_EVENT_SLRU_FLUSH_SYNC (\"Waiting for\nSLRU data to reach durable storage during a checkpoint or database\nshutdown\"). 
And, we're counting the SLRU flushes in two of these\nplaces into one single stat variable. These two events look confusing\nand may be useful if shown in an aggregated way.\n\nA possible way is to move existing pgstat_count_slru_flush in\nSimpleLruWriteAll closer to pg_fsync and WAIT_EVENT_SLRU_SYNC in\nSlruPhysicalWritePage, remove WAIT_EVENT_SLRU_FLUSH_SYNC completely,\nuse WAIT_EVENT_SLRU_SYNC in SlruSyncFileTag and count the flushes in\nSlruSyncFileTag. This aggregated way is much simpler IMV.\n\nAnother possible way is to have separate stat variables for each of\nthe SLRU flushes WAIT_EVENT_SLRU_SYNC and WAIT_EVENT_SLRU_FLUSH_SYNC\nand expose them separately in pg_stat_slru. I don't like this\napproach.\n\n> v9-0003 is mostly a mechanical change, and it passes the eye test.\n\nThanks. Indeed it contains mechanical changes.\n\n> I have spotted two nits.\n>\n> +CREATE VIEW pg_stat_checkpointer AS\n> + SELECT\n> + pg_stat_get_checkpointer_num_timed() AS num_timed,\n> + pg_stat_get_checkpointer_num_requested() AS num_requested,\n> + pg_stat_get_checkpointer_write_time() AS write_time,\n> + pg_stat_get_checkpointer_sync_time() AS sync_time,\n> + pg_stat_get_checkpointer_buffers_written() AS buffers_written,\n> + pg_stat_get_checkpointer_stat_reset_time() AS stats_reset;\n>\n> Okay with this layer. I am wondering if others have opinions to\n> share about these names for the attributes of pg_stat_checkpointer,\n> though.\n\nI think these column names are a good fit in this context as we can't\njust call timed/requested/write/sync and having \"checkpoint\" makes the\ncolumn names long unnecessarily. FWIW, I see some of the user-exposed\nfield names starting with num_* (num_nonnulls, num_nulls, num_lwlocks,\nnum_transactions).\n\n> - single row, containing global data for the cluster.\n> + single row, containing data about the bgwriter of the cluster.\n>\n> \"bgwriter\" is used in one place of the docs, so perhaps \"background\n> writer\" is better here?\n\n+1. 
Changed in the attached v10-0001.\n\n> The error message generated for an incorrect target needs to be\n> updated in pg_stat_reset_shared():\n> =# select pg_stat_reset_shared('checkpointe');\n> ERROR: 22023: unrecognized reset target: \"checkpointe\"\n> HINT: Target must be \"archiver\", \"bgwriter\", \"io\", \"recovery_prefetch\", or \"wal\".\n\n+1. Changed in the attached v10-001. FWIW, having a test case in\nstats.sql emitting this error message and hint would have helped here.\nIf okay, I can add one.\n\nPS: I'll park the SLRU flush related patch aside until the approach is\nfinalized. I'm posting the pg_stat_checkpointer patch as v10-0001.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 27 Oct 2023 10:23:34 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Introduce a new view for checkpointer related stats" }, { "msg_contents": "On Fri, Oct 27, 2023 at 10:23:34AM +0530, Bharath Rupireddy wrote:\n> A possible way is to move existing pgstat_count_slru_flush in\n> SimpleLruWriteAll closer to pg_fsync and WAIT_EVENT_SLRU_SYNC in\n> SlruPhysicalWritePage, remove WAIT_EVENT_SLRU_FLUSH_SYNC completely,\n> use WAIT_EVENT_SLRU_SYNC in SlruSyncFileTag and count the flushes in\n> SlruSyncFileTag. This aggregated way is much simpler IMV.\n> \n> Another possible way is to have separate stat variables for each of\n> the SLRU flushes WAIT_EVENT_SLRU_SYNC and WAIT_EVENT_SLRU_FLUSH_SYNC\n> and expose them separately in pg_stat_slru. I don't like this\n> approach.\n\nThis touches an area covered by a different patch, registered in this\ncommit fest as well:\nhttps://www.postgresql.org/message-id/CAMm1aWb18EpT0whJrjG+-nyhNouXET6ZUw0pNYYAe+NezpvsAA@mail.gmail.com\n\nSo perhaps we'd better move the discussion there. 
The patch posted\nthere is going to need a rebase anyway once the split with\npg_stat_checkpointer is introduced.\n\n>> The error message generated for an incorrect target needs to be\n>> updated in pg_stat_reset_shared():\n>> =# select pg_stat_reset_shared('checkpointe');\n>> ERROR: 22023: unrecognized reset target: \"checkpointe\"\n>> HINT: Target must be \"archiver\", \"bgwriter\", \"io\", \"recovery_prefetch\", or \"wal\".\n> \n> +1. Changed in the attached v10-001. FWIW, having a test case in\n> stats.sql emitting this error message and hint would have helped here.\n> If okay, I can add one.\n> \n> PS: I'll park the SLRU flush related patch aside until the approach is\n> finalized. I'm posting the pg_stat_checkpointer patch as v10-0001.\n\nThanks. That seems OK. I don't have the wits to risk my weekend on\nbuildfarm failures if any, so that will have to wait a bit.\n--\nMichael", "msg_date": "Fri, 27 Oct 2023 14:02:48 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Introduce a new view for checkpointer related stats" }, { "msg_contents": "On Fri, Oct 27, 2023 at 10:32 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Oct 27, 2023 at 10:23:34AM +0530, Bharath Rupireddy wrote:\n> > A possible way is to move existing pgstat_count_slru_flush in\n> > SimpleLruWriteAll closer to pg_fsync and WAIT_EVENT_SLRU_SYNC in\n> > SlruPhysicalWritePage, remove WAIT_EVENT_SLRU_FLUSH_SYNC completely,\n> > use WAIT_EVENT_SLRU_SYNC in SlruSyncFileTag and count the flushes in\n> > SlruSyncFileTag. This aggregated way is much simpler IMV.\n> >\n> > Another possible way is to have separate stat variables for each of\n> > the SLRU flushes WAIT_EVENT_SLRU_SYNC and WAIT_EVENT_SLRU_FLUSH_SYNC\n> > and expose them separately in pg_stat_slru. 
I don't like this\n> > approach.\n>\n> This touches an area covered by a different patch, registered in this\n> commit fest as well:\n> https://www.postgresql.org/message-id/CAMm1aWb18EpT0whJrjG+-nyhNouXET6ZUw0pNYYAe+NezpvsAA@mail.gmail.com\n>\n> So perhaps we'd better move the discussion there. The patch posted\n> there is going to need a rebase anyway once the split with\n> pg_stat_checkpointer is introduced.\n\nYeah, I'll re-look at the SLRU stuff and the other thread next week.\n\n> > finalized. I'm posting the pg_stat_checkpointer patch as v10-0001.\n>\n> Thanks. That seems OK. I don't have the wits to risk my weekend on\n> buildfarm failures if any, so that will have to wait a bit.\n\nHm, okay :)\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 27 Oct 2023 11:04:35 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Introduce a new view for checkpointer related stats" }, { "msg_contents": "On Fri, Oct 27, 2023 at 10:23:34AM +0530, Bharath Rupireddy wrote:\n> +1. Changed in the attached v10-001. FWIW, having a test case in\n> stats.sql emitting this error message and hint would have helped here.\n> If okay, I can add one.\n\nThat may be something to do. At least it was missed on this thread,\neven if we don't add a new pgstat shared type every day.\n\n> PS: I'll park the SLRU flush related patch aside until the approach is\n> finalized. 
I'm posting the pg_stat_checkpointer patch as v10-0001.\n\n+-- Test that reset_shared with checkpointer specified as the stats type works\n+SELECT stats_reset AS checkpointer_reset_ts FROM pg_stat_checkpointer \\gset\n+SELECT pg_stat_reset_shared('checkpointer');\n+SELECT stats_reset > :'checkpointer_reset_ts'::timestamptz FROM pg_stat_checkpointer;\n+SELECT stats_reset AS checkpointer_reset_ts FROM pg_stat_checkpointer \\gset\n\nNote that you have forgotten to update the test of\npg_stat_reset_shared(NULL) to check for the value of\ncheckpointer_reset_ts. I've added an extra SELECT to check that for\npg_stat_checkpointer, and applied v8.\n--\nMichael", "msg_date": "Mon, 30 Oct 2023 09:49:38 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Introduce a new view for checkpointer related stats" }, { "msg_contents": "On Mon, Oct 30, 2023 at 6:19 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Oct 27, 2023 at 10:23:34AM +0530, Bharath Rupireddy wrote:\n> > +1. Changed in the attached v10-001. FWIW, having a test case in\n> > stats.sql emitting this error message and hint would have helped here.\n> > If okay, I can add one.\n>\n> That may be something to do. At least it was missed on this thread,\n> even if we don't add a new pgstat shared type every day.\n\nRight. Adding test coverage for the error-case is no bad IMV\n(https://coverage.postgresql.org/src/backend/utils/adt/pgstatfuncs.c.gcov.html).\nHere's the attached 0001 patch for that.\n\n> > PS: I'll park the SLRU flush related patch aside until the approach is\n> > finalized. 
I'm posting the pg_stat_checkpointer patch as v10-0001.\n>\n> +-- Test that reset_shared with checkpointer specified as the stats type works\n> +SELECT stats_reset AS checkpointer_reset_ts FROM pg_stat_checkpointer \\gset\n> +SELECT pg_stat_reset_shared('checkpointer');\n> +SELECT stats_reset > :'checkpointer_reset_ts'::timestamptz FROM pg_stat_checkpointer;\n> +SELECT stats_reset AS checkpointer_reset_ts FROM pg_stat_checkpointer \\gset\n>\n> Note that you have forgotten to update the test of\n> pg_stat_reset_shared(NULL) to check for the value of\n> checkpointer_reset_ts. I've added an extra SELECT to check that for\n> pg_stat_checkpointer, and applied v8.\n\nOh, thanks for taking care of this. While at it, I noticed that\nthere's no coverage for pg_stat_reset_shared('recovery_prefetch') and\nXLogPrefetchResetStats()\nhttps://coverage.postgresql.org/src/backend/access/transam/xlogprefetcher.c.gcov.html.\nMost of the recovery_prefetch code is covered with recovery/streaming\nrelated tests, but the reset stats part is missing. So, I've added\ncoverage for it in the attached 0002.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 30 Oct 2023 11:59:10 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Introduce a new view for checkpointer related stats" }, { "msg_contents": "On Mon, Oct 30, 2023 at 11:59:10AM +0530, Bharath Rupireddy wrote:\n> Oh, thanks for taking care of this. While at it, I noticed that\n> there's no coverage for pg_stat_reset_shared('recovery_prefetch') and\n> XLogPrefetchResetStats()\n> https://coverage.postgresql.org/src/backend/access/transam/xlogprefetcher.c.gcov.html.\n> Most of the recovery_prefetch code is covered with recovery/streaming\n> related tests, but the reset stats part is missing. So, I've added\n> coverage for it in the attached 0002.\n\nIndeed, good catch. 
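For reference, the shape of such a reset test is roughly the following sketch, mirroring the checkpointer test quoted earlier (the committed version in stats.sql may differ):

```sql
-- Sketch only; assumes pg_stat_recovery_prefetch exposes stats_reset,
-- mirroring the pg_stat_checkpointer reset test quoted above.
SELECT stats_reset AS recovery_prefetch_reset_ts FROM pg_stat_recovery_prefetch \gset
SELECT pg_stat_reset_shared('recovery_prefetch');
SELECT stats_reset > :'recovery_prefetch_reset_ts'::timestamptz FROM pg_stat_recovery_prefetch;
```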
I didn't notice the hole in the coverage reports.\nI have merged 0001 and 0002 together, and applied them as of\n5b2147d9fcc1.\n--\nMichael", "msg_date": "Tue, 31 Oct 2023 07:43:14 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Introduce a new view for checkpointer related stats" } ]
[ { "msg_contents": "Hi,\n\n Recently, while reading the code, I found that when calling DefineIndex\n from ProcessUtilitySlow, is_alter_table will be set true if\n this statement came from expandTableLikeClause.\n\n I checked the code of DefineIndex; there are only two places that use\n is_alter_table:\n 1. the function index_check_primary_key\n 2. print a debug log on what the statement is\n\n For 1, since we are doing create table xxx (like yyy including\n indexes), we are sure that the check relationHasPrimaryKey in the\n function index_check_primary_key will be satisfied because we are just\n creating the new table.\n\n For 2, I do not think it will mislead the user if we print it as\n CreateStmt.\n\n Based on the above, I think we can always pass a false value\n for is_alter_table when DefineIndex is called from\n ProcessUtilitySlow.\n\n Here I attach a patch. Any ideas?\n Thanks a lot.\n\nBest,\nZhenghua Lyu", "msg_date": "Thu, 17 Nov 2022 21:08:42 +0800", "msg_from": "=?UTF-8?B?5q2j5Y2O5ZCV?= <kainwen@gmail.com>", "msg_from_op": true, "msg_subject": "Don't treate IndexStmt like AlterTable when DefineIndex is called\n from ProcessUtilitySlow." }, { "msg_contents": "=?UTF-8?B?5q2j5Y2O5ZCV?= <kainwen@gmail.com> writes:\n> Recently, while reading the code, I found that when calling DefineIndex\n> from ProcessUtilitySlow, is_alter_table will be set true if\n> this statement came from expandTableLikeClause.\n\nYeah.\n\n> Based on the above, I think we can always pass a false value\n> for is_alter_table when DefineIndex is called from\n> ProcessUtilitySlow.\n\nWhy do you think this is an improvement? Even if it's correct,\nthe code savings is so negligible that I'm not sure I want to\nexpend brain cells on figuring out whether it's correct. 
The\ncomment you want to remove does not suggest that it's optional\nwhich value we should pass, so I think the burden of proof\nis to show that this patch is okay not that somebody else\nhas to demonstrate that it isn't.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 17 Nov 2022 17:26:52 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Don't treate IndexStmt like AlterTable when DefineIndex is called\n from ProcessUtilitySlow." }, { "msg_contents": "Hi, thanks a lot for your reply!\n\n> Why do you think this is an improvement?\n\nI hit the issue in Greenplum Database (Massively Parallel Postgres);\nthe MPP architecture is that the coordinator dispatches statements to segments.\nThe dispatch logic is quite different for AlterTable and CreateTableLike:\n\n* alter table: for each sub command, it will not dispatch; later it will\ndispatch\n the alter table statement as a whole.\n* for create table like statement, like `create table t (like t1 including\nindexes);`\n this statement's 2nd stmt has to be dispatched to segments, but now it is\ntreated\n as alter table, so the dispatch logic is broken for this case in Greenplum.\n\nI looked into the issue, and Greenplum Database wants to stay aligned with\nupstream\nas much as possible. That is why I ask if we can force it to false.\n\nBest,\nZhenghua Lyu\n\n\nTom Lane <tgl@sss.pgh.pa.us> 于2022年11月18日周五 06:26写道:\n\n> =?UTF-8?B?5q2j5Y2O5ZCV?= <kainwen@gmail.com> writes:\n> > Recently, while reading the code, I found that when calling DefineIndex\n> > from ProcessUtilitySlow, is_alter_table will be set true if\n> > this statement came from expandTableLikeClause.\n>\n> Yeah.\n>\n> > Based on the above, I think we can always pass a false value\n> > for is_alter_table when DefineIndex is called from\n> > ProcessUtilitySlow.\n>\n> Why do you think this is an improvement? Even if it's correct,\n> the code savings is so negligible that I'm not sure I want to\n> expend brain cells on figuring out whether it's correct. 
The\n> comment you want to remove does not suggest that it's optional\n> which value we should pass, so I think the burden of proof\n> is to show that this patch is okay not that somebody else\n> has to demonstrate that it isn't.\n>\n> regards, tom lane\n>\n", "msg_date": "Fri, 18 Nov 2022 07:31:18 +0800", "msg_from": "=?UTF-8?B?5q2j5Y2O5ZCV?= <kainwen@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Don't treate IndexStmt like AlterTable when DefineIndex is called\n from ProcessUtilitySlow." } ]
[ { "msg_contents": "Hi,\n\n\nI have an incorrect behavior with pg_dump prior PG version 15. With \nPostgreSQL 15, thanks to commit e3fcbbd623b9ccc16cdbda374654d91a4727d173 \nthe problem is gone but for older versions it persists with locks on \npartitioned tables.\n\n\nWhen we try to dump a database where a table is locked, pg_dump waits \nuntil the lock is released, this is expected. Now if the table is \nexcluded from the dump using the -T option, obviously pg_dump is not \nconcerned by the lock. Unfortunately this is not the case when the table \nis partitioned because of the call to pg_get_partkeydef(), pg_get_expr() \nin the query generated in getTables().  Here is the use case to reproduce.\n\n\nIn a psql session execute:\n\n BEGIN;\n\n LOCK TABLE measurement;\n\nthen run a pg_dump command excluding the measurement partitions:\n\n     pg_dump -d test -T \"measurement*\" > /dev/null\n\nit will not end until the lock on the partition is released.\n\nI think the problem is the same if we use a schema exclusion where the \npartitioned table is locked.\n\n\nIs it possible to consider a backport fix? If yes, would adding the \ntable/schema filters in the query generated in getTables() be enough, or \ndo you think of a kind of backport of commit \ne3fcbbd623b9ccc16cdbda374654d91a4727d173 ?\n\n\nBest regards,\n\n-- \nGilles Darold", "msg_date": "Thu, 17 Nov 2022 17:43:38 +0100", "msg_from": "Gilles Darold <gilles@darold.net>", "msg_from_op": true, "msg_subject": "[BUG] pg_dump blocked" }, { "msg_contents": "Gilles Darold <gilles@darold.net> writes:\n> I have an incorrect behavior with pg_dump prior PG version 15. With \n> PostgreSQL 15, thanks to commit e3fcbbd623b9ccc16cdbda374654d91a4727d173 \n> the problem is gone but for older versions it persists with locks on \n> partitioned tables.\n\nI didn't want to back-patch e3fcbbd62 at the time, but it's probably aged\nlong enough now to be safe to back-patch. If we do anything here,\nit should be to back-patch the whole thing, else we've only partially\nfixed the issue.\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 17 Nov 2022 11:59:18 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [BUG] pg_dump blocked" }, { "msg_contents": "Le 17/11/2022 à 17:59, Tom Lane a écrit :\n> Gilles Darold <gilles@darold.net> writes:\n>> I have an incorrect behavior with pg_dump prior PG version 15. 
With\n>> PostgreSQL 15, thanks to commit e3fcbbd623b9ccc16cdbda374654d91a4727d173\n>> the problem is gone but for older versions it persists with locks on\n>> partitioned tables.\n> I didn't want to back-patch e3fcbbd62 at the time, but it's probably aged\n> long enough now to be safe to back-patch. If we do anything here,\n> it should be to back-patch the whole thing, else we've only partially\n> fixed the issue.\n\n\nI can handle this work.\n\n-- \nGilles Darold\n\n\n\n", "msg_date": "Thu, 17 Nov 2022 18:03:55 +0100", "msg_from": "Gilles Darold <gilles@darold.net>", "msg_from_op": true, "msg_subject": "Re: [BUG] pg_dump blocked" }, { "msg_contents": "Le 17/11/2022 à 17:59, Tom Lane a écrit :\n> Gilles Darold <gilles@darold.net> writes:\n>> I have an incorrect behavior with pg_dump prior PG version 15. With\n>> PostgreSQL 15, thanks to commit e3fcbbd623b9ccc16cdbda374654d91a4727d173\n>> the problem is gone but for older versions it persists with locks on\n>> partitioned tables.\n> I didn't want to back-patch e3fcbbd62 at the time, but it's probably aged\n> long enough now to be safe to back-patch. If we do anything here,\n> it should be to back-patch the whole thing, else we've only partially\n> fixed the issue.\n\n\nHere are the different patches for the PostgreSQL versions from 11 \nto 14, they should apply on the corresponding stable branches. The \npatches only concern the move of the unsafe functions, \npg_get_partkeydef() and pg_get_expr(). 
They should all apply without \nproblem on their respective branch, the pg_dump tap regression tests passed \non all versions.\n\nRegards,\n\n-- \nGilles Darold", "msg_date": "Sat, 19 Nov 2022 07:30:59 +0100", "msg_from": "Gilles Darold <gilles@darold.net>", "msg_from_op": true, "msg_subject": "Re: [BUG] pg_dump blocked" }, { "msg_contents": "Gilles Darold <gilles@darold.net> writes:\n> Le 17/11/2022 à 17:59, Tom Lane a écrit :\n>> I didn't want to back-patch e3fcbbd62 at the time, but it's probably aged\n>> long enough now to be safe to back-patch. If we do anything here,\n>> it should be to back-patch the whole thing, else we've only partially\n>> fixed the issue.\n\n> Here are the different patches for the PostgreSQL versions from 11 \n> to 14, they should apply on the corresponding stable branches.\n\nReviewed and pushed --- thanks for doing the legwork!\n\nTrawling the commit log, I found the follow-on patch 3e6e86abc,\nwhich fixed another issue of the same kind. I back-patched that\ntoo.\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 19 Nov 2022 12:02:32 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [BUG] pg_dump blocked" } ]
[ { "msg_contents": "The following documentation comment has been logged on the website:\n\nPage: https://www.postgresql.org/docs/15/logical-replication-row-filter.html\nDescription:\n\nThere are several things missing here and some of them I found to be highly\nimportant:\r\n1. How can I find why a logical replication failed. Currently I only can see\nit \"does nothing\" in pg_stat_subscriptions.\r\n2. In case of copying the existing data: how can I find which tables or\npartitions were processed and which are on the processing queue (while\nmonitoring I have observed no specific order or rule).\r\n3. In case of copying the existing data there is no option to update the row\nbased on the Primary Key if it already exists at the destination. The COPY\nonly fails.\r\n4. Is it possible to restart an interrupted logical replication. If yes,\nthen how? Taking in account the already existing data!!!\r\n\r\nIMHO there are some big functionality features still missing, but they can\nbe added.\r\n\r\nThank you!", "msg_date": "Thu, 17 Nov 2022 17:32:01 +0000", "msg_from": "PG Doc comments form <noreply@postgresql.org>", "msg_from_op": true, "msg_subject": "Logical replication missing information" }, { "msg_contents": "On Fri, Nov 18, 2022 at 4:50 AM PG Doc comments form\n<noreply@postgresql.org> wrote:\n>\n> The following documentation comment has been logged on the website:\n>\n> Page: https://www.postgresql.org/docs/15/logical-replication-row-filter.html\n> Description:\n\nHi,\n\nFYI - I have forwarded this post to the hacker's list, where I think\nit will receive more attention.\n\nI am not sure why that (above) page was cited -- the section \"31.3 Row\nFilters\" is specifically about row filtering, whereas the items you\nreported seem unrelated to row filters, but are generic for all\nLogical Replication.\n\n>\n> There are several things missing here and some of them I found to be highly\n> important:\n> 1. How can I find why a logical replication failed. 
Currently I only can see\n> it \"does nothing\" in pg_stat_subscriptions.\n\nThere should be logs reporting any replication conflicts etc. See [1]\nfor example logs. See also the answer for #2 below.\n\n> 2. In case of copying the existing data: how can I find which tables or\n> partitions were processed and which are on the processing queue (while\n> monitoring I have observed no specific order or rule).\n\nThere is no predictable processing queue or order - The initial\ntablesyncs might be happening in multiple asynchronous processes\naccording to the GUC max_sync_workers_per_subscription [2].\n\nBelow I show examples of replicating two tables (tab1 and tab2).\n\n~~\n\n From the logs you should see which table syncs have completed OK:\n\ne.g. (the initial copy is all good)\ntest_sub=# CREATE SUBSCRIPTION sub1 CONNECTION 'host=localhost\ndbname=test_pub' PUBLICATION pub1;\nNOTICE: created replication slot \"sub1\" on publisher\nCREATE SUBSCRIPTION\ntest_sub=# 2022-11-23 12:23:18.501 AEDT [27961] LOG: logical\nreplication apply worker for subscription \"sub1\" has started\n2022-11-23 12:23:18.513 AEDT [27963] LOG: logical replication table\nsynchronization worker for subscription \"sub1\", table \"tab1\" has\nstarted\n2022-11-23 12:23:18.524 AEDT [27965] LOG: logical replication table\nsynchronization worker for subscription \"sub1\", table \"tab2\" has\nstarted\n2022-11-23 12:23:18.593 AEDT [27963] LOG: logical replication table\nsynchronization worker for subscription \"sub1\", table \"tab1\" has\nfinished\n2022-11-23 12:23:18.611 AEDT [27965] LOG: logical replication table\nsynchronization worker for subscription \"sub1\", table \"tab2\" has\nfinished\n\ne.g. 
(where there is conflict in table tab2)\ntest_sub=# CREATE SUBSCRIPTION sub1 CONNECTION 'host=localhost\ndbname=test_pub' PUBLICATION pub1;\nNOTICE: created replication slot \"sub1\" on publisher\nCREATE SUBSCRIPTION\ntest_sub=# 2022-11-23 12:40:56.794 AEDT [23401] LOG: logical\nreplication apply worker for subscription \"sub1\" has started\n2022-11-23 12:40:56.808 AEDT [23403] LOG: logical replication table\nsynchronization worker for subscription \"sub1\", table \"tab1\" has\nstarted\n2022-11-23 12:40:56.819 AEDT [23405] LOG: logical replication table\nsynchronization worker for subscription \"sub1\", table \"tab2\" has\nstarted\n2022-11-23 12:40:56.890 AEDT [23405] ERROR: duplicate key value\nviolates unique constraint \"tab2_pkey\"\n2022-11-23 12:40:56.890 AEDT [23405] DETAIL: Key (id)=(1) already exists.\n2022-11-23 12:40:56.890 AEDT [23405] CONTEXT: COPY tab2, line 1\n2022-11-23 12:40:56.891 AEDT [3233] LOG: background worker \"logical\nreplication worker\" (PID 23405) exited with exit code 1\n2022-11-23 12:40:56.902 AEDT [23403] LOG: logical replication table\nsynchronization worker for subscription \"sub1\", table \"tab1\" has\nfinished\n...\n\n~~\n\nAlternatively, you can use some SQL query to discover which tables of\nthe subscription had attained a READY state. The READY state (denoted\nby 'r') means that the initial COPY was completed ok. The table\nreplication state is found in the 'srsubstate' column. See [3]\n\ne.g. (the initial copy is all good)\ntest_sub=# select\nsr.srsubid,sr.srrelid,s.subname,ut.relname,sr.srsubstate from\npg_statio_user_tables ut, pg_subscription_rel sr, pg_subscription s\nwhere ut.relid = sr.srrelid and s.oid=sr.srsubid;\n srsubid | srrelid | subname | relname | srsubstate\n---------+---------+---------+---------+------------\n 16418 | 16409 | sub1 | tab1 | r\n 16418 | 16402 | sub1 | tab2 | r\n(2 rows)\n\ne.g. 
(where it has a conflict in table tab2, so it did not get to READY state)\ntest_sub=# select\nsr.srsubid,sr.srrelid,s.subname,ut.relname,sr.srsubstate from\npg_statio_user_tables ut, pg_subscription_rel sr, pg_subscription s\nwhere ut.relid = sr.srrelid and s.oid=sr.srsubid;2022-11-23\n12:41:37.686 AEDT [24501] LOG: logical replication table\nsynchronization worker for subscription \"sub1\", table \"tab2\" has\nstarted\n2022-11-23 12:41:37.774 AEDT [24501] ERROR: duplicate key value\nviolates unique constraint \"tab2_pkey\"\n2022-11-23 12:41:37.774 AEDT [24501] DETAIL: Key (id)=(1) already exists.\n2022-11-23 12:41:37.774 AEDT [24501] CONTEXT: COPY tab2, line 1\n2022-11-23 12:41:37.775 AEDT [3233] LOG: background worker \"logical\nreplication worker\" (PID 24501) exited with exit code 1\n\n srsubid | srrelid | subname | relname | srsubstate\n---------+---------+---------+---------+------------\n 16423 | 16409 | sub1 | tab1 | r\n 16423 | 16402 | sub1 | tab2 | d\n\n> 3. In case of copying the existing data there is no option to update the row\n> based on the Primary Key if it already exists at the destination. The COPY\n> only fails.\n\nYes, the conflicts section [1] describes this -- \"A conflict will\nproduce an error and will stop the replication; it must be resolved\nmanually by the user.\"\n\n> 4. Is it possible to restart an interrupted logical replication. If yes,\n> then how? Taking in account the already existing data!!!\n\nYou can use the SUBSCRIPTION copy_data=false parameter to avoid\nre-copying initial data. But this applies to all tables of the\nsubscription so if you have a situation where there are some tables\ncopied and some not copied then you might have to either truncate the\ntables and start again, or you might want to create additional\ntemporary subscriptions with appropriate copy_data=true/false\nparameter. 
I guess the best course of action depends if you had\nconflicts with 1 or 2 tables or 10000 tables.\n\n>\n> IMHO there are some big functionality features still missing, but they can\n> be added.\n>\n\nI am not sure if there is missing functionality, but perhaps there is\nsome information that is harder to find than it ought to be, so I\nwould like to help first address that part.\n\n------\n[1] conflicts. https://www.postgresql.org/docs/current/logical-replication-conflicts.html\n[2] max_sync_workers_per_subscription.\nhttps://www.postgresql.org/docs/current/runtime-config-replication.html\n[3] srsubstate.\nhttps://www.postgresql.org/docs/current/catalog-pg-subscription-rel.html\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 23 Nov 2022 13:44:49 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical replication missing information" }, { "msg_contents": "Hello!Thank you very much, the information you just sent to me is very, very valuable!\nBest regards, Cristi Boboc \n\n On Wednesday, November 23, 2022 at 04:45:09 AM GMT+2, Peter Smith <smithpb2250@gmail.com> wrote: \n \n On Fri, Nov 18, 2022 at 4:50 AM PG Doc comments form\n<noreply@postgresql.org> wrote:\n>\n> The following documentation comment has been logged on the website:\n>\n> Page: https://www.postgresql.org/docs/15/logical-replication-row-filter.html\n> Description:\n\nHi,\n\nFYI - I have forwarded this post to the hacker's list, where I think\nit will receive more attention.\n\nI am not sure why that (above) page was cited -- the section \"31.3 Row\nFilters\" is specifically about row filtering, whereas the items you\nreported seem unrelated to row filters, but are generic for all\nLogical Replication.\n\n>\n> There are several things missing here and some of them I found to be highly\n> important:\n> 1. How can I find why a logical replication failed. 
Currently I only can see\n> it \"does nothing\" in pg_stat_subscriptions.\n\nThere should be logs reporting any replication conflicts etc. See [1]\nfor example logs. See also the answer for #2 below.\n\n> 2. In case of copying the existing data: how can I find which tables or\n> partitions were processed and which are on the processing queue (while\n> monitoring I have observed no specific order or rule).\n\nThere is no predictable processing queue or order - The initial\ntablesyncs might be happening in multiple asynchronous processes\naccording to the GUC max_sync_workers_per_subscription [2].\n\nBelow I show examples of replicating two tables (tab1 and tab2).\n\n~~\n\n From the logs you should see which table syncs have completed OK:\n\ne.g. (the initial copy is all good)\ntest_sub=# CREATE SUBSCRIPTION sub1 CONNECTION 'host=localhost\ndbname=test_pub' PUBLICATION pub1;\nNOTICE:  created replication slot \"sub1\" on publisher\nCREATE SUBSCRIPTION\ntest_sub=# 2022-11-23 12:23:18.501 AEDT [27961] LOG:  logical\nreplication apply worker for subscription \"sub1\" has started\n2022-11-23 12:23:18.513 AEDT [27963] LOG:  logical replication table\nsynchronization worker for subscription \"sub1\", table \"tab1\" has\nstarted\n2022-11-23 12:23:18.524 AEDT [27965] LOG:  logical replication table\nsynchronization worker for subscription \"sub1\", table \"tab2\" has\nstarted\n2022-11-23 12:23:18.593 AEDT [27963] LOG:  logical replication table\nsynchronization worker for subscription \"sub1\", table \"tab1\" has\nfinished\n2022-11-23 12:23:18.611 AEDT [27965] LOG:  logical replication table\nsynchronization worker for subscription \"sub1\", table \"tab2\" has\nfinished\n\ne.g. 
(where there is conflict in table tab2)\ntest_sub=# CREATE SUBSCRIPTION sub1 CONNECTION 'host=localhost\ndbname=test_pub' PUBLICATION pub1;\nNOTICE:  created replication slot \"sub1\" on publisher\nCREATE SUBSCRIPTION\ntest_sub=# 2022-11-23 12:40:56.794 AEDT [23401] LOG:  logical\nreplication apply worker for subscription \"sub1\" has started\n2022-11-23 12:40:56.808 AEDT [23403] LOG:  logical replication table\nsynchronization worker for subscription \"sub1\", table \"tab1\" has\nstarted\n2022-11-23 12:40:56.819 AEDT [23405] LOG:  logical replication table\nsynchronization worker for subscription \"sub1\", table \"tab2\" has\nstarted\n2022-11-23 12:40:56.890 AEDT [23405] ERROR:  duplicate key value\nviolates unique constraint \"tab2_pkey\"\n2022-11-23 12:40:56.890 AEDT [23405] DETAIL:  Key (id)=(1) already exists.\n2022-11-23 12:40:56.890 AEDT [23405] CONTEXT:  COPY tab2, line 1\n2022-11-23 12:40:56.891 AEDT [3233] LOG:  background worker \"logical\nreplication worker\" (PID 23405) exited with exit code 1\n2022-11-23 12:40:56.902 AEDT [23403] LOG:  logical replication table\nsynchronization worker for subscription \"sub1\", table \"tab1\" has\nfinished\n...\n\n~~\n\nAlternatively, you can use some SQL query to discover which tables of\nthe subscription had attained a READY state. The READY state (denoted\nby 'r') means that the initial COPY was completed ok. The table\nreplication state is found in the 'srsubstate' column. See [3]\n\ne.g. (the initial copy is all good)\ntest_sub=# select\nsr.srsubid,sr.srrelid,s.subname,ut.relname,sr.srsubstate from\npg_statio_user_tables ut, pg_subscription_rel sr, pg_subscription s\nwhere ut.relid = sr.srrelid and s.oid=sr.srsubid;\n srsubid | srrelid | subname | relname | srsubstate\n---------+---------+---------+---------+------------\n  16418 |  16409 | sub1    | tab1    | r\n  16418 |  16402 | sub1    | tab2    | r\n(2 rows)\n\ne.g. 
(where it has a conflict in table tab2, so it did not get to READY state)\ntest_sub=# select\nsr.srsubid,sr.srrelid,s.subname,ut.relname,sr.srsubstate from\npg_statio_user_tables ut, pg_subscription_rel sr, pg_subscription s\nwhere ut.relid = sr.srrelid and s.oid=sr.srsubid;2022-11-23\n12:41:37.686 AEDT [24501] LOG:  logical replication table\nsynchronization worker for subscription \"sub1\", table \"tab2\" has\nstarted\n2022-11-23 12:41:37.774 AEDT [24501] ERROR:  duplicate key value\nviolates unique constraint \"tab2_pkey\"\n2022-11-23 12:41:37.774 AEDT [24501] DETAIL:  Key (id)=(1) already exists.\n2022-11-23 12:41:37.774 AEDT [24501] CONTEXT:  COPY tab2, line 1\n2022-11-23 12:41:37.775 AEDT [3233] LOG:  background worker \"logical\nreplication worker\" (PID 24501) exited with exit code 1\n\n srsubid | srrelid | subname | relname | srsubstate\n---------+---------+---------+---------+------------\n  16423 |  16409 | sub1    | tab1    | r\n  16423 |  16402 | sub1    | tab2    | d\n\n> 3. In case of copying the existing data there is no option to update the row\n> based on the Primary Key if it already exists at the destination. The COPY\n> only fails.\n\nYes, the conflicts section [1] describes this --  \"A conflict will\nproduce an error and will stop the replication; it must be resolved\nmanually by the user.\"\n\n> 4. Is it possible to restart an interrupted logical replication. If yes,\n> then how? Taking in account the already existing data!!!\n\nYou can use the SUBSCRIPTION copy_data=false parameter to avoid\nre-copying initial data. But this applies to all tables of the\nsubscription so if you have a situation where there are some tables\ncopied and some not copied then you might have to either truncate the\ntables and start again, or you might want to create additional\ntemporary subscriptions with appropriate copy_data=true/false\nparameter. 
I guess the best course of action depends if you had\nconflicts with 1 or 2 tables or 10000 tables.\n\n>\n> IMHO there are some big functionality features still missing, but they can\n> be added.\n>\n\nI am not sure if there is missing functionality, but perhaps there is\nsome information that is harder to find than it ought to be, so I\nwould like to help first address that part.\n\n------\n[1] conflicts. https://www.postgresql.org/docs/current/logical-replication-conflicts.html\n[2] max_sync_workers_per_subscription.\nhttps://www.postgresql.org/docs/current/runtime-config-replication.html\n[3] srsubstate.\nhttps://www.postgresql.org/docs/current/catalog-pg-subscription-rel.html\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n", "msg_date": "Wed, 23 Nov 2022 09:03:05 +0000 (UTC)", "msg_from": "Boboc Cristi <bobocc@yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Logical replication missing information" } ]
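The copy_data approach Peter describes for question #4 can be sketched roughly as below. This is a hedged sketch only: the subscription and publication names (sub1, sub1_fixup, pub1, pub1_missing_tables) and the connection string are hypothetical, and the right recipe depends on which tables had already reached the READY ('r') state before the interruption.

```sql
-- Keep the data of tables that were already copied: re-create the main
-- subscription without re-running the initial COPY for any table.
CREATE SUBSCRIPTION sub1
    CONNECTION 'host=localhost dbname=test_pub'
    PUBLICATION pub1
    WITH (copy_data = false);

-- For tables whose initial copy never completed, one option Peter mentions
-- is a temporary subscription with copy_data = true (the default) on a
-- publication covering only those tables; drop it once they reach 'r'.
CREATE SUBSCRIPTION sub1_fixup
    CONNECTION 'host=localhost dbname=test_pub'
    PUBLICATION pub1_missing_tables
    WITH (copy_data = true);
```

As noted above, a table whose COPY failed part-way may first need a TRUNCATE on the subscriber before retrying, and the srsubstate query shown earlier can be used to check which tables still need the fixup.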
[ { "msg_contents": "I happened to notice that the link to Hunspell in our documentation goes to the\nhunspell sourceforge page last updated in 2015. The project has since moved to\nGithub [0] with hunspell.sourceforge.net redirecting there, I'm not sure\nexactly when but Wikipedia updated their link entry in 2016 [1] so it seems\nabout time we do too.\n\nUnless objected to I'll apply the attached to master with the doc/ hunk to all\nsupported branches.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n[0] https://hunspell.github.io/\n[1] https://en.wikipedia.org/w/index.php?title=Hunspell&diff=704306462&oldid=697230645", "msg_date": "Thu, 17 Nov 2022 22:16:19 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Outdated url to Hunspell" } ]
[ { "msg_contents": "Hi,\n\nI wonder why the walreceiver didn't start in\n008_min_recovery_point_node_3.log here:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mamba&dt=2022-11-16%2023%3A13%3A38\n\nThere was the case of commit 8acd8f86, but that involved a deadlocked\npostmaster whereas this one still handled a shutdown request.\n\n\n", "msg_date": "Fri, 18 Nov 2022 10:54:27 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Strange failure on mamba" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> I wonder why the walreceiver didn't start in\n> 008_min_recovery_point_node_3.log here:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mamba&dt=2022-11-16%2023%3A13%3A38\n\nmamba has been showing intermittent failures in various replication\ntests since day one. My guess is that it's slow enough to be\nparticularly subject to the signal-handler race conditions that we\nknow exist in walreceivers and elsewhere. (Now, it wasn't any faster\nin its previous incarnation as a macOS critter. But maybe modern\nNetBSD has different scheduler behavior than ancient macOS and that\ncontributes somehow. Or maybe there's some other NetBSD weirdness\nin here.)\n\nI've tried to reproduce manually, without much success :-(\n\nLike many of its other failures, there's a suggestive postmaster\nlog entry at the very end:\n\n2022-11-16 19:45:53.851 EST [2036:4] LOG: received immediate shutdown request\n2022-11-16 19:45:58.873 EST [2036:5] LOG: issuing SIGKILL to recalcitrant children\n2022-11-16 19:45:58.881 EST [2036:6] LOG: database system is shut down\n\nSo some postmaster child is stuck somewhere where it's not responding\nto SIGQUIT. While it's not unreasonable to guess that that's a\nwalreceiver, there's no hard evidence of it here. I've been wondering\nif it'd be worth patching the postmaster so that it's a bit more verbose\nabout which children it had to SIGKILL. 
I've also wondered about\nchanging the SIGKILL to SIGABRT in hopes of reaping a core file that\ncould be investigated.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 17 Nov 2022 17:08:09 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Strange failure on mamba" }, { "msg_contents": "On Fri, Nov 18, 2022 at 11:08 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > I wonder why the walreceiver didn't start in\n> > 008_min_recovery_point_node_3.log here:\n> > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mamba&dt=2022-11-16%2023%3A13%3A38\n>\n> mamba has been showing intermittent failures in various replication\n> tests since day one. My guess is that it's slow enough to be\n> particularly subject to the signal-handler race conditions that we\n> know exist in walreceivers and elsewhere. (Now, it wasn't any faster\n> in its previous incarnation as a macOS critter. But maybe modern\n> NetBSD has different scheduler behavior than ancient macOS and that\n> contributes somehow. Or maybe there's some other NetBSD weirdness\n> in here.)\n>\n> I've tried to reproduce manually, without much success :-(\n>\n> Like many of its other failures, there's a suggestive postmaster\n> log entry at the very end:\n>\n> 2022-11-16 19:45:53.851 EST [2036:4] LOG: received immediate shutdown request\n> 2022-11-16 19:45:58.873 EST [2036:5] LOG: issuing SIGKILL to recalcitrant children\n> 2022-11-16 19:45:58.881 EST [2036:6] LOG: database system is shut down\n>\n> So some postmaster child is stuck somewhere where it's not responding\n> to SIGQUIT. While it's not unreasonable to guess that that's a\n> walreceiver, there's no hard evidence of it here. I've been wondering\n> if it'd be worth patching the postmaster so that it's a bit more verbose\n> about which children it had to SIGKILL. 
I've also wondered about\n> changing the SIGKILL to SIGABRT in hopes of reaping a core file that\n> could be investigated.\n\nI wonder if it's a runtime variant of the other problem. We do\nload_file(\"libpqwalreceiver\", false) before unblocking signals but\nmaybe don't resolve the symbols until calling them, or something like\nthat...\n\n\n", "msg_date": "Fri, 18 Nov 2022 11:35:10 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Strange failure on mamba" }, { "msg_contents": "On Fri, Nov 18, 2022 at 11:35 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> I wonder if it's a runtime variant of the other problem. We do\n> load_file(\"libpqwalreceiver\", false) before unblocking signals but\n> maybe don't resolve the symbols until calling them, or something like\n> that...\n\nHmm, no, I take that back. A key ingredient was that a symbol was\nbeing resolved inside the signal handler, which is a postmaster-only\nthing.\n\n\n", "msg_date": "Fri, 18 Nov 2022 11:47:48 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Strange failure on mamba" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Fri, Nov 18, 2022 at 11:08 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> mamba has been showing intermittent failures in various replication\n>> tests since day one.\n\n> I wonder if it's a runtime variant of the other problem. We do\n> load_file(\"libpqwalreceiver\", false) before unblocking signals but\n> maybe don't resolve the symbols until calling them, or something like\n> that...\n\nYeah, that or some other NetBSD bug could be the explanation, too.\nWithout a stack trace it's hard to have any confidence about it,\nbut I've been unable to reproduce the problem outside the buildfarm.\n(Which is a familiar refrain. 
I wonder what it is about the buildfarm\nenvironment that makes it act different from the exact same code running\non the exact same machine.)\n\nSo I'd like to have some way to make the postmaster send SIGABRT instead\nof SIGKILL in the buildfarm environment. The lowest-tech way would be\nto drive that off some #define or other. We could scale it up to a GUC\nperhaps. Adjacent to that, I also wonder whether SIGABRT wouldn't be\nmore useful than SIGSTOP for the existing SendStop half-a-feature ---\nthe idea that people should collect cores manually seems mighty\nlast-century.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 17 Nov 2022 17:47:50 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Strange failure on mamba" }, { "msg_contents": "Hi,\n\nOn 2022-11-17 17:47:50 -0500, Tom Lane wrote:\n> Yeah, that or some other NetBSD bug could be the explanation, too.\n> Without a stack trace it's hard to have any confidence about it,\n> but I've been unable to reproduce the problem outside the buildfarm.\n> (Which is a familiar refrain. I wonder what it is about the buildfarm\n> environment that makes it act different from the exact same code running\n> on the exact same machine.)\n> \n> So I'd like to have some way to make the postmaster send SIGABRT instead\n> of SIGKILL in the buildfarm environment. The lowest-tech way would be\n> to drive that off some #define or other. We could scale it up to a GUC\n> perhaps. Adjacent to that, I also wonder whether SIGABRT wouldn't be\n> more useful than SIGSTOP for the existing SendStop half-a-feature ---\n> the idea that people should collect cores manually seems mighty\n> last-century.\n\nI suspect that having a GUC would be a good idea. I needed something similar\nrecently, debugging an occasional hang in the AIO patchset. 
I first tried\nsomething like your #define approach and it did cause a problematic flood of\ncore files.\n\nI ended up using libbacktrace to generate useful backtraces (vs what\nbacktrace_symbols() generates) when receiving SIGQUIT. I didn't do the legwork\nto make it properly signal safe, but it'd be doable afaiu.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 17 Nov 2022 19:25:23 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Strange failure on mamba" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-11-17 17:47:50 -0500, Tom Lane wrote:\n>> So I'd like to have some way to make the postmaster send SIGABRT instead\n>> of SIGKILL in the buildfarm environment. The lowest-tech way would be\n>> to drive that off some #define or other. We could scale it up to a GUC\n>> perhaps. Adjacent to that, I also wonder whether SIGABRT wouldn't be\n>> more useful than SIGSTOP for the existing SendStop half-a-feature ---\n>> the idea that people should collect cores manually seems mighty\n>> last-century.\n\n> I suspect that having a GUC would be a good idea. I needed something similar\n> recently, debugging an occasional hang in the AIO patchset. I first tried\n> something like your #define approach and it did cause a problematic flood of\n> core files.\n\nYeah, the main downside of such a thing is the risk of lots of core files\naccumulating over repeated crashes. Nonetheless, I think it'll be a\nuseful debugging aid. Here's a proposed patch. (I took the opportunity\nto kill off the long-since-unimplemented Reinit switch, too.)\n\nOne thing I'm not too clear on is if we want to send SIGABRT to the child\ngroups (ie, SIGABRT grandchild processes too). 
I made signal_child do\nso here, but perhaps it's overkill.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 18 Nov 2022 13:48:14 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Sending SIGABRT to child processes (was Re: Strange failure on mamba)" }, { "msg_contents": "I wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> I suspect that having a GUC would be a good idea. I needed something similar\n>> recently, debugging an occasional hang in the AIO patchset. I first tried\n>> something like your #define approach and it did cause a problematic flood of\n>> core files.\n\n> Yeah, the main downside of such a thing is the risk of lots of core files\n> accumulating over repeated crashes. Nonetheless, I think it'll be a\n> useful debugging aid. Here's a proposed patch. (I took the opportunity\n> to kill off the long-since-unimplemented Reinit switch, too.)\n\nHearing no complaints, I've pushed this and reconfigured mamba to use\nsend_abort_for_kill. Once I've got a core file or two to look at,\nI'll try to figure out what's going on there.\n\n> One thing I'm not too clear on is if we want to send SIGABRT to the child\n> groups (ie, SIGABRT grandchild processes too). I made signal_child do\n> so here, but perhaps it's overkill.\n\nAfter further thought, we do have to SIGABRT the grandchildren too,\nor they won't shut down promptly. 
I think there might be a small\nrisk of some programs trapping SIGABRT and doing something other than\nwhat we want; but since this is only a debug aid that's probably\ntolerable.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 21 Nov 2022 12:04:21 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Sending SIGABRT to child processes (was Re: Strange failure on\n mamba)" }, { "msg_contents": "I wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n>> On Fri, Nov 18, 2022 at 11:08 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> mamba has been showing intermittent failures in various replication\n>>> tests since day one.\n\n>> I wonder if it's a runtime variant of the other problem. We do\n>> load_file(\"libpqwalreceiver\", false) before unblocking signals but\n>> maybe don't resolve the symbols until calling them, or something like\n>> that...\n\n> Yeah, that or some other NetBSD bug could be the explanation, too.\n> Without a stack trace it's hard to have any confidence about it,\n> but I've been unable to reproduce the problem outside the buildfarm.\n\nThanks to commit 51b5834cd I've now been able to capture some info\nfrom mamba's last couple of failures [1][2]. Sure enough, what is\nhappening is that postmaster children are getting stuck in recursive\nrtld symbol resolution. 
A couple of the stack traces I collected are\n\n#0 0xfdeede4c in ___lwp_park60 () from /usr/libexec/ld.elf_so\n#1 0xfdee3e08 in _rtld_exclusive_enter () from /usr/libexec/ld.elf_so\n#2 0xfdee59e4 in dlopen () from /usr/libexec/ld.elf_so\n#3 0x01e54ed0 in internal_load_library (\n libname=libname@entry=0xfd74cc88 \"/home/buildfarm/bf-data/HEAD/pgsql.build/tmp_install/home/buildfarm/bf-data/HEAD/inst/lib/postgresql/libpqwalreceiver.so\") at dfmgr.c:239\n#4 0x01e55c78 in load_file (filename=<optimized out>, restricted=<optimized out>) at dfmgr.c:156\n#5 0x01c5ba24 in WalReceiverMain () at walreceiver.c:292\n#6 0x01c090f8 in AuxiliaryProcessMain (auxtype=auxtype@entry=WalReceiverProcess) at auxprocess.c:161\n#7 0x01c10970 in StartChildProcess (type=WalReceiverProcess) at postmaster.c:5310\n#8 0x01c123ac in MaybeStartWalReceiver () at postmaster.c:5475\n#9 MaybeStartWalReceiver () at postmaster.c:5468\n#10 sigusr1_handler (postgres_signal_arg=<optimized out>) at postmaster.c:5131\n#11 <signal handler called>\n#12 0xfdee6b44 in _rtld_symlook_obj () from /usr/libexec/ld.elf_so\n#13 0xfdee6fc0 in _rtld_symlook_list () from /usr/libexec/ld.elf_so\n#14 0xfdee7644 in _rtld_symlook_default () from /usr/libexec/ld.elf_so\n#15 0xfdee795c in _rtld_find_symdef () from /usr/libexec/ld.elf_so\n#16 0xfdee7ad0 in _rtld_find_plt_symdef () from /usr/libexec/ld.elf_so\n#17 0xfdee1918 in _rtld_bind () from /usr/libexec/ld.elf_so\n#18 0xfdee1dc0 in _rtld_bind_secureplt_start () from /usr/libexec/ld.elf_so\nBacktrace stopped: frame did not save the PC\n\n#0 0xfdeede4c in ___lwp_park60 () from /usr/libexec/ld.elf_so\n#1 0xfdee3e08 in _rtld_exclusive_enter () from /usr/libexec/ld.elf_so\n#2 0xfdee4ba4 in _rtld_exit () from /usr/libexec/ld.elf_so\n#3 0xfd54ea74 in __cxa_finalize () from /usr/lib/libc.so.12\n#4 0xfd54e354 in exit () from /usr/lib/libc.so.12\n#5 0x01c963c0 in proc_exit (code=code@entry=0) at ipc.c:152\n#6 0x01c056e4 in AutoVacLauncherShutdown () at autovacuum.c:853\n#7 
0x01c071dc in AutoVacLauncherMain (argv=0x0, argc=0) at autovacuum.c:800\n#8 0x01c07694 in StartAutoVacLauncher () at autovacuum.c:416\n#9 0x01c11d3c in reaper (postgres_signal_arg=<optimized out>) at postmaster.c:3038\n#10 <signal handler called>\n#11 0xfdee6f64 in _rtld_symlook_list () from /usr/libexec/ld.elf_so\n#12 0xfdee7644 in _rtld_symlook_default () from /usr/libexec/ld.elf_so\n#13 0xfdee795c in _rtld_find_symdef () from /usr/libexec/ld.elf_so\n#14 0xfdee7ad0 in _rtld_find_plt_symdef () from /usr/libexec/ld.elf_so\n#15 0xfdee1918 in _rtld_bind () from /usr/libexec/ld.elf_so\n#16 0xfdee1dc0 in _rtld_bind_secureplt_start () from /usr/libexec/ld.elf_so\nBacktrace stopped: frame did not save the PC\n\nwhich is pretty much just the same thing we were seeing before\ncommit 8acd8f869 :-(\n\nNow, we certainly cannot think that these are occurring early in\npostmaster startup. In the wake of 8acd8f869, we should expect\nthat there's no further need to call rtld_bind at all in the\npostmaster, but seemingly that's not so. It's very frustrating\nthat the backtrace stops where it does :-(. It's also strange\nthat we're apparently running with signals enabled whereever\nit is that rtld_bind is getting called from. Could it be that\nsigaction is failing to install the requested signal mask, so\nthat one postmaster signal handler is interrupting another?\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mamba&dt=2022-11-24%2021%3A45%3A29\n[2] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mamba&dt=2022-11-29%2020%3A50%3A36\n\n\n", "msg_date": "Tue, 29 Nov 2022 20:44:34 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Strange failure on mamba" }, { "msg_contents": "On Wed, Nov 30, 2022 at 2:44 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Now, we certainly cannot think that these are occurring early in\n> postmaster startup. 
In the wake of 8acd8f869, we should expect\n> that there's no further need to call rtld_bind at all in the\n> postmaster, but seemingly that's not so. It's very frustrating\n> that the backtrace stops where it does :-(. It's also strange\n> that we're apparently running with signals enabled whereever\n> it is that rtld_bind is getting called from. Could it be that\n> sigaction is failing to install the requested signal mask, so\n> that one postmaster signal handler is interrupting another?\n\nAdd in some code that does sigaction(0, NULL, &mask) to read the\ncurrent mask and assert that it's blocked as expected in the handlers?\nStart the postmaster in gdb with a break on _rtld_bind to find all the\nplaces that reach it (unexpectedly)?\n\n\n", "msg_date": "Wed, 30 Nov 2022 15:43:20 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Strange failure on mamba" }, { "msg_contents": "On Wed, Nov 30, 2022 at 3:43 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> sigaction(0, NULL, &mask)\n\ns/sigaction/sigprocmask/\n\n\n", "msg_date": "Wed, 30 Nov 2022 15:45:36 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Strange failure on mamba" }, { "msg_contents": "Hi,\n\nOn 2022-11-29 20:44:34 -0500, Tom Lane wrote:\n> Thanks to commit 51b5834cd I've now been able to capture some info\n> from mamba's last couple of failures [1][2]. Sure enough, what is\n> happening is that postmaster children are getting stuck in recursive\n> rtld symbol resolution. 
A couple of the stack traces I collected are\n> \n> #0 0xfdeede4c in ___lwp_park60 () from /usr/libexec/ld.elf_so\n> #1 0xfdee3e08 in _rtld_exclusive_enter () from /usr/libexec/ld.elf_so\n> #2 0xfdee59e4 in dlopen () from /usr/libexec/ld.elf_so\n> #3 0x01e54ed0 in internal_load_library (\n> libname=libname@entry=0xfd74cc88 \"/home/buildfarm/bf-data/HEAD/pgsql.build/tmp_install/home/buildfarm/bf-data/HEAD/inst/lib/postgresql/libpqwalreceiver.so\") at dfmgr.c:239\n> #4 0x01e55c78 in load_file (filename=<optimized out>, restricted=<optimized out>) at dfmgr.c:156\n> #5 0x01c5ba24 in WalReceiverMain () at walreceiver.c:292\n> #6 0x01c090f8 in AuxiliaryProcessMain (auxtype=auxtype@entry=WalReceiverProcess) at auxprocess.c:161\n> #7 0x01c10970 in StartChildProcess (type=WalReceiverProcess) at postmaster.c:5310\n> #8 0x01c123ac in MaybeStartWalReceiver () at postmaster.c:5475\n> #9 MaybeStartWalReceiver () at postmaster.c:5468\n> #10 sigusr1_handler (postgres_signal_arg=<optimized out>) at postmaster.c:5131\n> #11 <signal handler called>\n> #12 0xfdee6b44 in _rtld_symlook_obj () from /usr/libexec/ld.elf_so\n> #13 0xfdee6fc0 in _rtld_symlook_list () from /usr/libexec/ld.elf_so\n> #14 0xfdee7644 in _rtld_symlook_default () from /usr/libexec/ld.elf_so\n> #15 0xfdee795c in _rtld_find_symdef () from /usr/libexec/ld.elf_so\n> #16 0xfdee7ad0 in _rtld_find_plt_symdef () from /usr/libexec/ld.elf_so\n> #17 0xfdee1918 in _rtld_bind () from /usr/libexec/ld.elf_so\n> #18 0xfdee1dc0 in _rtld_bind_secureplt_start () from /usr/libexec/ld.elf_so\n> Backtrace stopped: frame did not save the PC\n\nDo you have any idea why the stack can't be unwound further here? Is it\npossibly indicative of a corrupted stack? I guess we'd need to dig into the\nthe netbsd libc code :(\n\n\n> which is pretty much just the same thing we were seeing before\n> commit 8acd8f869 :->\n\nWhat libraries is postgres linked against? 
I don't know whether -z now only\naffects the \"top-level\" dependencies of postgres, or also the dependencies of\nshared libraries that haven't been built with -z now. The only dependencies\nthat I could see being relevant are libintl and openssl.\n\nYou could try if anything changes if you set LD_BIND_NOW, that should trigger\n\"recursive\" dependencies to be loaded eagerly as well.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 29 Nov 2022 21:42:25 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Strange failure on mamba" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-11-29 20:44:34 -0500, Tom Lane wrote:\n>> Backtrace stopped: frame did not save the PC\n\n> Do you have any idea why the stack can't be unwound further here? Is it\n> possibly indicative of a corrupted stack? I guess we'd need to dig into\n> the netbsd libc code :(\n\nI did do some digging in that area previously when we were seeing this\non HPPA, and determined that the assembly code in that area was not\nbothering to establish a standard stack frame, for no very obvious\nreason :-(. I haven't studied their equivalent PPC code, but apparently\nit's equally cavalier. I recall trying to hack the HPPA code to make\nit set up the stack frame correctly, without success, but I didn't\ntry very hard. Maybe I'll have a go at that on the PPC side.\n\n> What libraries is postgres linked against? I don't know whether -z now only\n> affects the \"top-level\" dependencies of postgres, or also the dependencies of\n> shared libraries that haven't been built with -z now. The only dependencies\n> that I could see being relevant are libintl and openssl.\n\nHmm. mamba is using both --enable-nls and --with-openssl, but\nI can't see a reason why the postmaster would be interacting with\nOpenSSL post-startup in test cases that don't use SSL. 
Perhaps\nlibintl is doing something it shouldn't?\n\n> You could try if anything changes if you set LD_BIND_NOW, that should trigger\n> \"recursive\" dependencies to be loaded eagerly as well.\n\nGoogling LD_BIND_NOW suggests that that's a Linux thing; do you know that\nit should have an effect on NetBSD?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 30 Nov 2022 00:55:42 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Strange failure on mamba" }, { "msg_contents": "Hi,\n\nOn 2022-11-29 20:44:34 -0500, Tom Lane wrote:\n> It's also strange that we're apparently running with signals enabled\n> whereever it is that rtld_bind is getting called from. Could it be that\n> sigaction is failing to install the requested signal mask, so that one\n> postmaster signal handler is interrupting another?\n\nThis made me look at pqsignal_pm() / pqsignal() and realize that we wouldn't\neven notice if it failed, because they just return SIG_ERR and callers don't\ncheck. I don't think that's a likely to be related, but theoretically it could\nlead to some odd situations.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 29 Nov 2022 22:06:47 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Strange failure on mamba" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-11-29 20:44:34 -0500, Tom Lane wrote:\n>> It's also strange that we're apparently running with signals enabled\n>> whereever it is that rtld_bind is getting called from. Could it be that\n>> sigaction is failing to install the requested signal mask, so that one\n>> postmaster signal handler is interrupting another?\n\n> This made me look at pqsignal_pm() / pqsignal() and realize that we wouldn't\n> even notice if it failed, because they just return SIG_ERR and callers don't\n> check. 
I don't think that's a likely to be related, but theoretically it could\n> lead to some odd situations.\n\nYeah, I noticed that just now too. But if sigaction() failed,\nthe signal handler wouldn't get installed at all, which'd lead\nto different and more-obvious symptoms. So I doubt that that's\nwhat happened.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 30 Nov 2022 01:15:15 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Strange failure on mamba" }, { "msg_contents": "Hi,\n\nOn 2022-11-30 00:55:42 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > What libraries is postgres linked against? I don't know whether -z now only\n> > affects the \"top-level\" dependencies of postgres, or also the dependencies of\n> > shared libraries that haven't been built with -z now. The only dependencies\n> > that I could see being relevant are libintl and openssl.\n> \n> Hmm. mamba is using both --enable-nls and --with-openssl, but\n> I can't see a reason why the postmaster would be interacting with\n> OpenSSL post-startup in test cases that don't use SSL. Perhaps\n> libintl is doing something it shouldn't?\n\nWe do call into openssl in postmaster, via RandomCancelKey(). 
But we should\nhave signals masked at that point, so it shouldn't matter.\n\n\n> > You could try if anything changes if you set LD_BIND_NOW, that should trigger\n> > \"recursive\" dependencies to be loaded eagerly as well.\n> \n> Googling LD_BIND_NOW suggests that that's a Linux thing; do you know that\n> it should have an effect on NetBSD?\n\nI'm not at all sure it does, but I did see it listed in\nhttps://man.netbsd.org/ld.elf_so.1\n\n LD_BIND_NOW If defined immediate binding of Procedure Link Table\n (PLT) entries is performed instead of the default lazy\n method.\n\nso I assumed it would do the same as on linux.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 29 Nov 2022 22:31:50 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Strange failure on mamba" }, { "msg_contents": "Hi,\n\nOn 2022-11-29 22:31:50 -0800, Andres Freund wrote:\n> On 2022-11-30 00:55:42 -0500, Tom Lane wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> > > What libraries is postgres linked against? I don't know whether -z now only\n> > > affects the \"top-level\" dependencies of postgres, or also the dependencies of\n> > > shared libraries that haven't been built with -z now. The only dependencies\n> > > that I could see being relevant are libintl and openssl.\n> > \n> > Hmm. mamba is using both --enable-nls and --with-openssl, but\n> > I can't see a reason why the postmaster would be interacting with\n> > OpenSSL post-startup in test cases that don't use SSL. Perhaps\n> > libintl is doing something it shouldn't?\n> \n> We do call into openssl in postmaster, via RandomCancelKey(). 
But we should\n> have signals masked at that point, so it shouldn't matter.\n\nOpenssl does some muckery with signal masks on ppc (and a few others archs,\nbut not x86), but I don't immediately see it conflicting with our code:\n\nhttps://github.com/openssl/openssl/blob/master/crypto/ppccap.c#L275\n\nIt should also already have been executed by the time we accept connections,\ndue to the __attribute__ ((constructor)).\n\n\nI didn't check where netbsd gets libcrypto and whether it does something\ndifferent than upstream openssl...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 29 Nov 2022 22:55:49 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Strange failure on mamba" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-11-30 00:55:42 -0500, Tom Lane wrote:\n>> Googling LD_BIND_NOW suggests that that's a Linux thing; do you know that\n>> it should have an effect on NetBSD?\n\n> I'm not at all sure it does, but I did see it listed in\n> https://man.netbsd.org/ld.elf_so.1\n> LD_BIND_NOW If defined immediate binding of Procedure Link Table\n> (PLT) entries is performed instead of the default lazy\n> method.\n\nI checked the source code, and learned that (1) yes, rtld does pay\nattention to this, and (2) the documentation lies: it has to be not\nonly defined, but nonempty, to get any effect.\n\nAlso, I dug into my stuck processes some more, and I have to take\nback the claim that this is happening later than postmaster startup.\nAll the stuck children are ones that either are launched on request\nfrom the startup process, or are launched as soon as we get the\ntermination report for the startup process. So it's plausible that\nthe problem is happening during the postmaster's first select()\nwait. 
I then got dirty with the assembly code, and found out that\nwhere the stack trace stops is an attempt to resolve this call:\n\n 0xfd6f7a48 <__select50+76>: bl 0xfd700ed0 <0000803c.got2.plt_pic32._sys___select50>\n\nwhich is inside libpthread.so and is trying to call something in libc.so.\nSo we successfully got to the select() function from PostmasterMain, but\nthat has a non-prelinked call to someplace else, and kaboom.\n\nIn short, looks like Andres' theory is right. It means that 8acd8f869\ndidn't actually fix anything, though it reduced the probability of the\nfailure by reducing the number of vulnerable PLT-indirect calls.\n\nI've adjusted mamba to set LD_BIND_NOW=1 in its environment.\nI've verified that that causes the call inside __select50\nto get resolved before we reach main(), so I'm hopeful that\nit will cure the issue. But it'll probably be a few weeks\nbefore we can be sure.\n\nDon't have a good idea about a non-band-aid fix. Perhaps we\nshould revert 8acd8f869 altogether, but then what? Even if\nsomebody comes up with a rewrite to avoid doing interesting\nstuff in the postmaster's signal handlers, we surely wouldn't\nrisk back-patching it.\n\nIt's possible that doing nothing is okay, at least in the\nshort term. It's probably nigh impossible to hit this\nissue on modern multi-CPU hardware. 
Or perhaps we could revive\nthe idea of having postmaster.c do one dummy select() call\nbefore it unblocks signals.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 30 Nov 2022 18:33:06 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Strange failure on mamba" }, { "msg_contents": "Hi,\n\nOn 2022-11-30 18:33:06 -0500, Tom Lane wrote:\n> Also, I dug into my stuck processes some more, and I have to take\n> back the claim that this is happening later than postmaster startup.\n> All the stuck children are ones that either are launched on request\n> from the startup process, or are launched as soon as we get the\n> termination report for the startup process. So it's plausible that\n> the problem is happening during the postmaster's first select()\n> wait. I then got dirty with the assembly code, and found out that\n> where the stack trace stops is an attempt to resolve this call:\n> \n> 0xfd6f7a48 <__select50+76>: bl 0xfd700ed0 <0000803c.got2.plt_pic32._sys___select50>\n> \n> which is inside libpthread.so and is trying to call something in libc.so.\n> So we successfully got to the select() function from PostmasterMain, but\n> that has a non-prelinked call to someplace else, and kaboom.\n\nThis whole area just seems quite broken in netbsd :(.\n\nWe're clearly doing stuff in a signal handler that we really shouldn't, but\nnot being able to call any functions implemented in libc, even if they're\nasync signal safe (as e.g. select is) means signals are basically not\nusable. Afaict this basically means that signals are *never* safe on netbsd,\nas long as there's a single external function call in a signal handler.\n\n\n\n> I've adjusted mamba to set LD_BIND_NOW=1 in its environment.\n> I've verified that that causes the call inside __select50\n> to get resolved before we reach main(), so I'm hopeful that\n> it will cure the issue. 
But it'll probably be a few weeks\n> before we can be sure.\n> \n> Don't have a good idea about a non-band-aid fix.\n\nIt's also a band aid, but perhaps a bit more reliable: We could link\nstatically to libc and libpthread.\n\nAnother approach could be to iterate over the loaded shared libraries during\npostmaster startup and force symbols to be resolved. IIRC there's functions\nthat'd allow that. But it seems like a lot of work to work around an OS bug.\n\n\n> Perhaps we should revert 8acd8f869 altogether, but then what?\n\nFWIW, I think we should consider using those flags everywhere for the backend\n- they make copy-on-write more effective and decrease connection overhead a\nbit, because otherwise each backend process does the same symbol resolutions\nagain and again, dirtying memory post-fork.\n\n\n> Even if somebody comes up with a rewrite to avoid doing interesting stuff in\n> the postmaster's signal handlers, we surely wouldn't risk back-patching it.\n\nWould that actually fix anything, given netbsd's brokenness? If we used a\nlatch like mechanism, the signal handler would still use functions in libc. So\npostmaster could deadlock, at least during the first execution of a signal\nhandler? So I think 8acd8f869 continues to be important...\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 30 Nov 2022 16:19:57 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Strange failure on mamba" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-11-30 18:33:06 -0500, Tom Lane wrote:\n>> Even if somebody comes up with a rewrite to avoid doing interesting stuff in\n>> the postmaster's signal handlers, we surely wouldn't risk back-patching it.\n\n> Would that actually fix anything, given netbsd's brokenness? If we used a\n> latch like mechanism, the signal handler would still use functions in libc. So\n> postmaster could deadlock, at least during the first execution of a signal\n> handler? 
So I think 8acd8f869 continues to be important...\n\nI agree that \"-z now\" is a good idea for performance reasons, but\nwhat we're seeing is that it's only a partial fix for netbsd's issue,\nsince it doesn't apply to shared libraries that the postmaster pulls\nin.\n\nI'm not sure about your thesis that things are fundamentally broken.\nIt does seem like if a signal handler does SetLatch then that could\nrequire PLT resolution, and if it interrupts something else doing\nPLT resolution then we have a problem. But if it were a live\nproblem then we'd have seen instances outside of the postmaster's\nselect() wait, and we haven't.\n\nI'm kind of inclined to band-aid that select() call as previously\nsuggested, and see where we end up.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 30 Nov 2022 19:36:09 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Strange failure on mamba" } ]
[ { "msg_contents": "Patch: Global Unique Index \n\n\n\n“Global unique index” in our definition is a unique index on a partitioned table that can ensure cross-partition uniqueness using a non-partition key. This work is inspired by this email thread, “Proposal: Global Index” started back in 2019 (Link below). My colleague David and I took a different approach to implement the feature that ensures uniqueness constraint spanning multiple partitions. We achieved this mainly by using application logics without heavy modification to current Postgres’s partitioned table/index structure. In other words, a global unique index and a regular partitioned index are essentially the same in terms of their storage structure except that one can do cross-partition uniqueness check, the other cannot.\n\n\n\nhttps://www.postgresql.org/message-id/CALtqXTcurqy1PKXzP9XO%3DofLLA5wBSo77BnUnYVEZpmcA3V0ag%40mail.gmail.com\n\n\n\n- Patch -\n\nThe attached patches were generated based on commit `85d8b30724c0fd117a683cc72706f71b28463a05` on master branch.\n\n\n\n- Benefits of global unique index -\n\n1. Ensure uniqueness spanning all partitions using a non-partition key column\n\n2. Allow user to create a unique index on a non-partition key column without the need to include partition key (current Postgres enforces this)\n\n3. Query performance increase when using a single unique non-partition key column\n\n\n\n\n\n- Supported Features -\n\n1. Global unique index is supported only on btree index type\n\n2. Global unique index is useful only when created on a partitioned table.\n\n3. Cross-partition uniqueness check with CREATE UNIQUE INDEX in serial and parallel mode\n\n4. Cross-partition uniqueness check with ATTACH in serial and parallel mode\n\n5. Cross-partition uniqueness check when INSERT and UPDATE\n\n\n\n\n\n- Not-supported Features -\n\n1. 
Global uniqueness check with sub-partition tables is not yet supported as we do not have an immediate use case and it may involve major changes in the current implementation\n\n\n\n\n\n- Global unique index syntax -\n\nA global unique index can be created with \"GLOBAL\" and \"UNIQUE\" clauses in a \"CREATE INDEX\" statement run against a partitioned table. For example,\n\n\n\nCREATE UNIQUE INDEX global_index ON idxpart(bid) GLOBAL;\n\n\n\n\n\n- New Relkind: RELKIND_GLOBAL_INDEX -\n\nWhen a global unique index is created on a partitioned table, its relkind is RELKIND_PARTITIONED_INDEX (I). This is the same as creating a regular index. Then Postgres will recursively create an index on each child partition, except now the relkind will be set as RELKIND_GLOBAL_INDEX (g) instead of RELKIND_INDEX (i). This new relkind, along with the uniqueness flag, is needed for the cross-partition uniqueness check later.\n\n\n\n\n\n- Create a global unique index -\n\nTo create a regular unique index on a partitioned table, Postgres has to perform a heap scan and sorting on every child partition. The uniqueness check happens during the sorting phase and will raise an error if multiple tuples with the same index key are sorted together. To achieve the global uniqueness check, we make Postgres perform the sorting after all of the child partitions have been scanned instead of in the \"sort per partition\" fashion. In other words, the sorting only happens once at the very end, and it sorts the tuples coming from all the partitions and therefore can ensure global uniqueness.\n\n\n\nIn the parallel index build case, the idea is the same, except that the tuples will be put into a shared file set (temp files) on disk instead of in memory to ensure other workers can share the sort results.
At the end of the very last partition build, we make Postgres take over all the temp files and perform a final merge sort to ensure global uniqueness.\n\n\n\nExample:\n\n\n\n> CREATE TABLE gidx_part(a int, b int, c text) PARTITION BY RANGE (a);\n\n> CREATE TABLE gidx_part1 PARTITION OF gidx_part FOR VALUES FROM (0) to (10);\n\n> CREATE TABLE gidx_part2 PARTITION OF gidx_part FOR VALUES FROM (10) to (20);\n\n> INSERT INTO gidx_part values(5, 5, 'test');\n\n> INSERT INTO gidx_part values(15, 5, 'test');\n\n> CREATE UNIQUE INDEX global_unique_idx ON gidx_part(b) GLOBAL;\n\nERROR:  could not create unique index \"gidx_part1_b_idx\"\n\nDETAIL:  Key (b)=(5) is duplicated.\n\n\n\n\n\n- INSERT and UPDATE -\n\nFor every new tuple inserted or updated, Postgres attempts to fetch the same tuple from current partition to determine if a duplicate already exists. In the global unique index case, we make Postgres attempt to fetch the same tuple from other partitions as well as the current partition. If a duplicate is found, global uniqueness is violated and an error is raised.\n\n\n\nExample:\n\n\n\n> CREATE TABLE gidx_part (a int, b int, c text) PARTITION BY RANGE (a);\n\n> CREATE TABLE gidx_part1 partition of gidx_part FOR VALUES FROM (0) TO (10);\n\n> CREATE TABLE gidx_part2 partition of gidx_part FOR VALUES FROM (10) TO (20);\n\n> CREATE UNIQUE INDEX global_unique_idx ON gidx_part USING BTREE(b) GLOBAL;\n\n> INSERT INTO gidx_part values(5, 5, 'test');\n\n> INSERT INTO gidx_part values(15, 5, 'test');\n\nERROR:  duplicate key value violates unique constraint \"gidx_part1_b_idx\"\n\nDETAIL:  Key (b)=(5) already exists.\n\n\n\n\n\n- ATTACH -\n\nThe new partition-to-be may already contain a regular unique index or contain no index at all. If it has no index, Postgres will create a similar index for it upon ATTACH. 
If the partitioned table has a global unique index, a new global unique index is automatically created on the partition-to-be upon ATTACH, and it will run a global uniqueness check between all current partitions and the partition-to-be.\n\n\n\nIf the partition-to-be already contains a regular unique index, Postgres will change its relkind from RELKIND_INDEX to RELKIND_GLOBAL_INDEX and run a global uniqueness check between all current partitions and the partition-to-be. No new index is created in this case.\n\n\n\nIf a duplicate record is found, global uniqueness is violated and an error is raised.\n\n\n\n\n\n- DETACH -\n\nSince we retain the same partitioned structure, detaching a partition with a global unique index is straightforward. Upon DETACH, Postgres will change its relkind from RELKIND_GLOBAL_INDEX to RELKIND_INDEX and remove the inheritance relationship as usual.\n\n\n\n\n\n- Optimizer, query planning and vacuum -\n\nSince no major modification is done on the global unique index's structure and storage, it works in the same way as a regular partitioned index. No major change is required on the optimizer, planner and vacuum process as they should work in the same way as with a regular index.\n\n\n\n\n\n- REINDEX -\n\nA global unique index can be reindexed normally just like a regular index. No cross-partition uniqueness check is performed while a global unique index is being rebuilt. This is okay as long as it acquires an exclusive lock on the index relation.\n\n\n\n\n\n- Benchmark Result -\n\nUsing pgbench with 200 partitions running SELECT and READ-WRITE tests with a unique non-partition key, we observe orders of magnitude higher TPS compared to a regular unique index built with the partition key restriction (multicolumn index).\n\n\n\n\n\n- TODOs -\n\nSince this is a POC patch, there are several TODOs related to user experience, such as:\n\n\n\n    1. Raise error when user uses CREATE UNIQUE INDEX with ON ONLY clause\n\n    2. 
Raise error when user tries to create a global unique index directly on a child partition\n\n    3. ... maybe more\n\n\n\nWe will work on these at a later time.\n\n\n\nthank you\n\n\n\nPlease let us know your thoughts or questions about the feature.\n\n\n\nAll comments are welcome and greatly appreciated!\n\n\n\nDavid and Cary\n\n\n\n============================\n\nHighGo Software Canada\n\nhttp://www.highgo.ca", "msg_date": "Thu, 17 Nov 2022 15:01:19 -0700", "msg_from": "Cary Huang <cary.huang@highgo.ca>", "msg_from_op": true, "msg_subject": "Patch: Global Unique Index" }, { "msg_contents": "Hello\nDo we need new syntax actually? I think that a global unique index can be created automatically instead of raising an error \"unique constraint on partitioned table must include all partitioning columns\"\n\nregards, Sergei\n\n\n", "msg_date": "Fri, 18 Nov 2022 12:03:53 +0300", "msg_from": "Sergei Kornilov <sk@zsrv.org>", "msg_from_op": false, "msg_subject": "Re:Patch: Global Unique Index" }, { "msg_contents": "pá 18. 11. 2022 v 10:04 odesílatel Sergei Kornilov <sk@zsrv.org> napsal:\n\n> Hello\n> Do we need new syntax actually? I think that a global unique index can be\n> created automatically instead of raising an error \"unique constraint on\n> partitioned table must include all partitioning columns\"\n>\n+1\n\nPavel\n\n\n> regards, Sergei\n>\n>\n>\n\npá 18. 11. 2022 v 10:04 odesílatel Sergei Kornilov <sk@zsrv.org> napsal:Hello\nDo we need new syntax actually? I think that a global unique index can be created automatically instead of raising an error \"unique constraint on partitioned table must include all partitioning columns\"+1Pavel\n\nregards, Sergei", "msg_date": "Fri, 18 Nov 2022 10:41:35 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Patch: Global Unique Index" }, { "msg_contents": "Sergei Kornilov <sk@zsrv.org> writes:\n> Do we need new syntax actually? 
I think that a global unique index can be created automatically instead of raising an error \"unique constraint on partitioned table must include all partitioning columns\"\n\nI'm not convinced that we want this feature at all: as far as I can see,\nit will completely destroy the benefits of making a partitioned table\nin the first place. But if we do want it, I don't think it should be\nso easy to create a global index by accident as that syntax approach\nwould make it. I think there needs to be a pretty clear YES I WANT TO\nSHOOT MYSELF IN THE FOOT clause in the command.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 18 Nov 2022 10:06:32 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patch: Global Unique Index" }, { "msg_contents": "pá 18. 11. 2022 v 16:06 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Sergei Kornilov <sk@zsrv.org> writes:\n> > Do we need new syntax actually? I think that a global unique index can\n> be created automatically instead of raising an error \"unique constraint on\n> partitioned table must include all partitioning columns\"\n>\n> I'm not convinced that we want this feature at all: as far as I can see,\n> it will completely destroy the benefits of making a partitioned table\n> in the first place. But if we do want it, I don't think it should be\n> so easy to create a global index by accident as that syntax approach\n> would make it. I think there needs to be a pretty clear YES I WANT TO\n> SHOOT MYSELF IN THE FOOT clause in the command.\n>\n\nisn't possible to have a partitioned index?\n\nhttps://www.highgo.ca/2022/10/14/global-index-a-different-approach/\n\nRegards\n\nPavel\n\n\n>\n> regards, tom lane\n>\n>\n>\n\npá 18. 11. 2022 v 16:06 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:Sergei Kornilov <sk@zsrv.org> writes:\n> Do we need new syntax actually? 
I think that a global unique index can be created automatically instead of raising an error \"unique constraint on partitioned table must include all partitioning columns\"\n\nI'm not convinced that we want this feature at all: as far as I can see,\nit will completely destroy the benefits of making a partitioned table\nin the first place.  But if we do want it, I don't think it should be\nso easy to create a global index by accident as that syntax approach\nwould make it.  I think there needs to be a pretty clear YES I WANT TO\nSHOOT MYSELF IN THE FOOT clause in the command.isn't possible to have a partitioned index?https://www.highgo.ca/2022/10/14/global-index-a-different-approach/RegardsPavel \n\n                        regards, tom lane", "msg_date": "Fri, 18 Nov 2022 16:14:44 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Patch: Global Unique Index" }, { "msg_contents": "On Thu, 17 Nov 2022 at 22:01, Cary Huang <cary.huang@highgo.ca> wrote:\n>\n> Patch: Global Unique Index\n\nLet me start by expressing severe doubt on the usefulness of such a\nfeature, but also salute your efforts to contribute.\n\n> In other words, a global unique index and a regular partitioned index are essentially the same in terms of their storage structure except that one can do cross-partition uniqueness check, the other cannot.\n\nThis is the only workable architecture, since it allows DETACH to be\nfeasible, which is essential.\n\nYou don't seem to mention that this would require a uniqueness check\non each partition. Is that correct? This would result in O(N) cost of\nuniqueness checks, severely limiting load speed. 
I notice you don't\noffer any benchmarks on load speed or the overhead associated with\nthis, which is not good if you want to be taken seriously, but at\nleast it is recoverable.\n\n(It might be necessary to specify some partitions as READ ONLY, to\nallow us to record their min/max values for the indexed cols, allowing\nus to do this more quickly.)\n\n> - Supported Features -\n> 1. Global unique index is supported only on btree index type\n\nWhy? Surely any index type that supports uniqueness is good.\n\n> - Not-supported Features -\n> 1. Global uniqueness check with Sub partition tables is not yet supported as we do not have immediate use case and it may involve majoy change in current implementation\n\nHmm, sounds like a problem. Arranging the calls recursively should work.\n\n> - Create a global unique index -\n> To create a regular unique index on a partitioned table, Postgres has to perform heap scan and sorting on every child partition. Uniqueness check happens during the sorting phase and will raise an error if multiple tuples with the same index key are sorted together. To achieve global uniqueness check, we make Postgres perform the sorting after all of the child partitions have been scanned instead of on the \"sort per partition\" fashion. 
In otherwords, the sorting only happens once at the very end and it sorts the tuples coming from all the partitions and therefore can ensure global uniqueness.\n\nMy feeling is that performance on this will suck so badly that we must\nwarn people away from it, and tell people if they want this, create\nthe index at the start and let it load.\n\nHopefully CREATE INDEX CONCURRENTLY still works.\n\nLet's see some benchmarks on this also please.\n\nYou'll need to think about progress reporting early because correctly\nreporting the progress and expected run times are likely critical for\nusability.\n\n> Example:\n>\n> > CREATE TABLE gidx_part (a int, b int, c text) PARTITION BY RANGE (a);\n> > CREATE TABLE gidx_part1 partition of gidx_part FOR VALUES FROM (0) TO (10);\n> > CREATE TABLE gidx_part2 partition of gidx_part FOR VALUES FROM (10) TO (20);\n> > CREATE UNIQUE INDEX global_unique_idx ON gidx_part USING BTREE(b) GLOBAL;\n> > INSERT INTO gidx_part values(5, 5, 'test');\n> > INSERT INTO gidx_part values(15, 5, 'test');\n> ERROR: duplicate key value violates unique constraint \"gidx_part1_b_idx\"\n> DETAIL: Key (b)=(5) already exists.\n\nWell done.\n\n> - DETACH -\n> Since we retain the same partitioned structure, detaching a partition with global unique index is straightforward. Upon DETACH, Postgres will change its relkind from RELKIND_GLOBAL_INDEX to RELKIND_INDEX and remove their inheritance relationship as usual.\n\nIt's the only way that works\n\n> - Optimizer, query planning and vacuum -\n> Since no major modification is done on global unique index's structure and storage, it works in the same way as a regular partitioned index. No major change is required to be done on optimizer, planner and vacuum process as they should work in the same way as regular index.\n\nAgreed\n\n\nMaking a prototype is a great first step.\n\nThe next step is to understand the good and the bad aspects of it, so\nyou can see what else needs to be done. 
You need to be honest and real\nabout the fact that this may not actually be desirable in practice, or\nin a restricted use case.\n\nThat means performance analysis of create, load, attach, detach,\nINSERT, SELECT, UPD/DEL and anything else that might be affected,\ntogether with algorithmic analysis of what happens for larger N and\nlarger tables.\n\nExpect many versions; take provisions for many days.\n\nBest of luck\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Mon, 21 Nov 2022 12:33:30 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Patch: Global Unique Index" }, { "msg_contents": "Hi Simon\n\n\n\nThank you so much for sharing these valuable comments and concerns to our work. We understand there is a lot of TODOs left to be done to move forward with this in a serious matter. Your comments have been very helpful and we are very grateful.\n\n\n\n> You don't seem to mention that this would require a uniqueness check\n\n> on each partition. Is that correct? This would result in O(N) cost of\n\n> uniqueness checks, severely limiting load speed. I notice you don't\n\n> offer any benchmarks on load speed or the overhead associated with\n\n> this, which is not good if you want to be taken seriously, but at\n\n> least it is recoverable.\n\n\n\nYes, during INSERT and UPDATE, the uniqueness check happens on every partition including the current one. This introduces extra look-up costs and will limit the speed significantly especially when there is a large number of partitions. This is one drawback of global unique index that needs to be optimized / improved.\n\n\n\nIn fact, all other operations such as CREATE and ATTACH that involve global uniqueness check will have certain degree of performance loss as well. 
See benchmark figures below.\n\n\n\n\n\n> (It might be necessary to specify some partitions as READ ONLY, to\n\n> allow us to record their min/max values for the indexed cols, allowing\n\n> us to do this more quickly.)\n\n\n\nThank you so much for this great suggestion. If there were an indication that some partitions have become READ ONLY, record the min/max values of their global unique indexed columns to these partitions, then we might be able to skip these partitions for uniqueness checking if the value is out of the range (min/max)? Did we understand it correctly? Could you help elaborate more?\n\n\n\n\n\n>> 1. Global unique index is supported only on btree index type\n\n>\n\n> Why? Surely any index type that supports uniqueness is good.\n\n\n\nYes, we can definitely have the same support for other index types that support UNIQUE.\n\n\n\n\n\n>> - Not-supported Features -\n\n>> 1. Global uniqueness check with Sub partition tables is not yet supported as we do not have immediate use case and it may involve major change in current implementation\n\n>\n\n> Hmm, sounds like a problem. Arranging the calls recursively should work.\n\n\n\nYes, it is a matter of rearranging the recursive calls to correctly find out all \"leaf\" partitions to be considered for global uniqueness check. So far, only the partitions in the first layer are considered.\n\n\n\n\n\n> My feeling is that performance on this will suck so badly that we must\n\n> warn people away from it, and tell people if they want this, create\n\n> the index at the start and let it load.\n\n\n\nYes, to support global unique index, extra logic needs to be run to ensure uniqueness and especially during INSERT and ATTACH where it needs to look up all involved partitions. We have benchmark figures attached below.\n\n\n\nThis is also the reason that \"global\" syntax is required so people know they really want to have this feature. 
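Returning to the READ ONLY min/max idea above, a hypothetical sketch of how such summaries could prune uniqueness probes — illustrative Python only; the class and field names are invented, and nothing here is from the patch:

```python
# Hypothetical sketch of the READ ONLY min/max pruning idea discussed
# above (not from the patch; all names are made up for illustration).
# A partition marked read-only records min/max of the indexed column;
# a uniqueness probe can skip it when the key falls outside that range.

class Partition:
    def __init__(self, keys, read_only=False):
        self.keys = set(keys)
        self.read_only = read_only
        # the summary is only trustworthy once no further writes can occur
        self.lo = min(self.keys) if read_only and self.keys else None
        self.hi = max(self.keys) if read_only and self.keys else None

def must_probe(part, key):
    """False when the min/max summary proves `key` cannot be in `part`."""
    if part.read_only and part.lo is not None:
        return part.lo <= key <= part.hi
    return True                     # writable partitions are always probed

parts = [Partition(range(0, 10), read_only=True),
         Partition(range(10, 20), read_only=True),
         Partition([])]             # the active, writable partition

# key 25 lies outside both frozen ranges: only the active partition is probed
assert [must_probe(p, 25) for p in parts] == [False, False, True]
assert must_probe(parts[0], 5)      # in-range keys still need a real probe
```

In this model the saving applies only to partitions that are genuinely frozen; any partition that can still accept writes must keep being probed.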
To help users better understand the potential performance drawbacks, should we add a warning in the documentation?\n\n\n\n\n\n> Hopefully CREATE INDEX CONCURRENTLY still works.\n\n\n\nYes, we verified this global unique index approach on Postgres 14.5 with a community CREATE INDEX CONCURRENTLY patch on partitioned table.\n\n\n\n\n\n> Let's see some benchmarks on this also please.\n\n\n\nHere is a simple 'timing' comparison between regular and global unique index on a partitioned table having 6 partitions.\n\n\n\nglobal unique index:\n\n-> 156,285ms to insert 6 million records (1 million on each partition)\n\n-> 6,592ms to delete all 6 million records\n\n-> 3,957ms to create global unique index with 6 million records pre-inserted\n\n-> 3,650ms to attach a new partition with 1 million records pre-inserted\n\n-> 17ms to detach a partition with 1 million records in it\n\n\n\nregular unique index:\n\n-> 26,007ms to insert 6 million records (1 million on each partition)\n\n-> 7,238ms to delete all  6 million records\n\n-> 2,933ms to create regular unique index with 6 million records pre-inserted\n\n-> 628ms to attach a new partition with 1 million records pre-inserted\n\n-> 17ms to detach a partition with 1 million records in it\n\n\n\nThese are the commands I use to get the numbers (switch create unique index clause between global and regular):\n\n-> \\timing on\n\n-> create table test(a int, b int, c text) partition by range (a);\n\n-> create table test1 partition of test for values from (MINVALUE) to (1000000);\n\n-> create table test2 partition of test for values from (1000000) to (2000000);\n\n-> create table test3 partition of test for values from (2000000) to (3000000);\n\n-> create table test4 partition of test for values from (3000000) to (4000000);\n\n-> create table test5 partition of test for values from (4000000) to (5000000);\n\n-> create table test6 partition of test for values from (5000000) to (6000000);\n\n-> create unique index myindex on test(b) 
global;\n\n-> insert into test values(generate_series(0,5999999), generate_series(0,5999999), 'test');\t\t/* record timing */\n\n-> delete from test;\t/* record timing */\n\n-> drop index myindex;\n\n-> insert into test values(generate_series(0,5999999), generate_series(0,5999999), 'test');\n\n-> create unique index myindex on test(b) global;\t/* record timing */\n\n-> create table test7 (a int, b int, c text);\n\n-> insert into test7 values(generate_series(6000000, 6999999), generate_series(6000000, 6999999), 'test');\n\n-> alter table test attach partition test7 for values from (6000000) TO (7000000);\t\t/* record timing */\n\n-> alter table test detach partition test7;\t\t/* record timing */\n\n\n\n\n\nAs you can see, the insert operation suffers the most performance drawbacks. In fact, it takes roughly 6 times as much time to complete the insertion, which matches the number of partitions in the test.\n\n\n\nThe Attach operation also takes roughly 6 times as much time to complete, because it has to perform uniqueness check on all 6 existing partitions to determine global uniqueness. 
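These ratios line up with the partition count — a quick arithmetic check, using only the numbers quoted above and assuming each insert probes every partition once:

```python
# Quick sanity check of the benchmark above: with 6 partitions, the
# global-unique insert path performs ~6x the uniqueness probes, so a
# ~6x slowdown on bulk insert is the expected order of magnitude.
insert_ratio = 156_285 / 26_007     # global vs regular insert of 6M rows
assert round(insert_ratio) == 6     # matches the 6 partitions in the test

# ATTACH shows the same pattern: the new partition is checked against
# every existing partition before it can be attached.
attach_ratio = 3_650 / 628
assert 5 < attach_ratio < 7         # roughly 6x as well
```

DETACH is unaffected in both runs (17ms each), since it only needs to break the inheritance link rather than re-check uniqueness.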
Detach in both cases takes the same time to complete.\n\n\n\nCreate global unique index takes 35% longer to build.\n\n\n\nWe also ran some tests for random SELECT and UPDATE using non-partition key with pgbench to compare the performance among 3 conditions: no index, regular unique index (with partition-key involved), and global unique index:\n\n\n\nTest 1: scale=100, 10 partitions, 1 million tuples/partition\n\n\n\nSELECT:\n\n-> No partitioned index:\t\t\t\ttps = 3.827886\n\n-> regular unique index:\t\t\t\ttps = 14.713099\n\n-> global unique index:\t\t\t\t\ttps = 23791.314238\n\nUPDATE mixed with SELECT:\n\n-> No partitioned index:\t\t\t\ttps = 1.926013\n\n-> regular unique index:\t\t\t\ttps = 7.087059\n\n-> global unique index:\t\t\t\t\ttps = 2253.098335\n\n\n\nTest 2: scale=1,000, 100 partitions, 1 million tuples/partition\n\n\n\nSELECT:\n\n-> No partitioned index:\t\t\t\ttps = 0.110029\n\n-> regular unique index:\t\t\t\ttps = 0.268199\n\n-> global unique index:\t\t\t\t\ttps = 2334.811682\n\nUPDATE mixed with SELECT:\n\n-> No partitioned index:\t\t\t\ttps = 0.115329\n\n-> regular unique index:\t\t\t\ttps = 0.197052\n\n-> global unique index:\t\t\t\t\ttps = 541.488621\n\n\n\nTest 3: scale=10,000, 1,000 partitions, 1 million tuples/partition\n\n\n\nSELECT:\n\n-> No partitioned index:\t\t\t\ttps = 0.011047\n\n-> regular unique index:\t\t\t\ttps = 0.036812\n\n-> global unique index:\t\t\t\t\ttps = 147.189456\n\nUPDATE mixed with SELECT:\n\n-> No partitioned index:\t\t\t\ttps = 0.008209\n\n-> regular unique index:\t\t\t\ttps = 0.054367\n\n-> global unique index:\t\t\t\t\ttps = 57.740432\n\n\n\n\n\nThank you very much and we hope this information could help clarify some concerns about this approach.\n\n\n\nDavid and Cary\n\n\n\n============================\n\nHighGo Software Canada\n\nhttp://www.highgo.ca\n\n\n\n\n\n\n\n\n\n\n\n---- On Mon, 21 Nov 2022 05:33:30 -0700  Simon Riggs  wrote ---\n\n> On Thu, 17 Nov 2022 at 22:01, Cary Huang <cary.huang@highgo.ca> 
wrote:\n\n> >\n\n> > Patch: Global Unique Index\n\n>\n\n> Let me start by expressing severe doubt on the usefulness of such a\n\n> feature, but also salute your efforts to contribute.\n\n>\n\n> > In other words, a global unique index and a regular partitioned index are essentially the same in terms of their storage structure except that one can do cross-partition uniqueness check, the other cannot.\n\n>\n\n> This is the only workable architecture, since it allows DETACH to be\n\n> feasible, which is essential.\n\n>\n\n> You don't seem to mention that this would require a uniqueness check\n\n> on each partition. Is that correct? This would result in O(N) cost of\n\n> uniqueness checks, severely limiting load speed. I notice you don't\n\n> offer any benchmarks on load speed or the overhead associated with\n\n> this, which is not good if you want to be taken seriously, but at\n\n> least it is recoverable.\n\n>\n\n> (It might be necessary to specify some partitions as READ ONLY, to\n\n> allow us to record their min/max values for the indexed cols, allowing\n\n> us to do this more quickly.)\n\n>\n\n> > - Supported Features -\n\n> > 1. Global unique index is supported only on btree index type\n\n>\n\n> Why? Surely any index type that supports uniqueness is good.\n\n>\n\n> > - Not-supported Features -\n\n> > 1. Global uniqueness check with Sub partition tables is not yet supported as we do not have immediate use case and it may involve majoy change in current implementation\n\n>\n\n> Hmm, sounds like a problem. Arranging the calls recursively should work.\n\n>\n\n> > - Create a global unique index -\n\n> > To create a regular unique index on a partitioned table, Postgres has to perform heap scan and sorting on every child partition. Uniqueness check happens during the sorting phase and will raise an error if multiple tuples with the same index key are sorted together. 
To achieve global uniqueness check, we make Postgres perform the sorting after all of the child partitions have been scanned instead of on the \"sort per partition\" fashion. In otherwords, the sorting only happens once at the very end and it sorts the tuples coming from all the partitions and therefore can ensure global uniqueness.\n\n>\n\n> My feeling is that performance on this will suck so badly that we must\n\n> warn people away from it, and tell people if they want this, create\n\n> the index at the start and let it load.\n\n>\n\n> Hopefully CREATE INDEX CONCURRENTLY still works.\n\n>\n\n> Let's see some benchmarks on this also please.\n\n>\n\n> You'll need to think about progress reporting early because correctly\n\n> reporting the progress and expected run times are likely critical for\n\n> usability.\n\n>\n\n> > Example:\n\n> >\n\n> > > CREATE TABLE gidx_part (a int, b int, c text) PARTITION BY RANGE (a);\n\n> > > CREATE TABLE gidx_part1 partition of gidx_part FOR VALUES FROM (0) TO (10);\n\n> > > CREATE TABLE gidx_part2 partition of gidx_part FOR VALUES FROM (10) TO (20);\n\n> > > CREATE UNIQUE INDEX global_unique_idx ON gidx_part USING BTREE(b) GLOBAL;\n\n> > > INSERT INTO gidx_part values(5, 5, 'test');\n\n> > > INSERT INTO gidx_part values(15, 5, 'test');\n\n> > ERROR:  duplicate key value violates unique constraint \"gidx_part1_b_idx\"\n\n> > DETAIL:  Key (b)=(5) already exists.\n\n>\n\n> Well done.\n\n>\n\n> > - DETACH -\n\n> > Since we retain the same partitioned structure, detaching a partition with global unique index is straightforward. Upon DETACH, Postgres will change its relkind from RELKIND_GLOBAL_INDEX to RELKIND_INDEX and remove their inheritance relationship as usual.\n\n>\n\n> It's the only way that works\n\n>\n\n> > - Optimizer, query planning and vacuum -\n\n> > Since no major modification is done on global unique index's structure and storage, it works in the same way as a regular partitioned index. 
No major change is required to be done on optimizer, planner and vacuum process as they should work in the same way as regular index.\n\n>\n\n> Agreed\n\n>\n\n>\n\n> Making a prototype is a great first step.\n\n>\n\n> The next step is to understand the good and the bad aspects of it, so\n\n> you can see what else needs to be done. You need to be honest and real\n\n> about the fact that this may not actually be desirable in practice, or\n\n> in a restricted use case.\n\n>\n\n> That means performance analysis of create, load, attach, detach,\n\n> INSERT, SELECT, UPD/DEL and anything else that might be affected,\n\n> together with algorithmic analysis of what happens for larger N and\n\n> larger tables.\n\n>\n\n> Expect many versions; take provisions for many days.\n\n>\n\n> Best of luck\n\n>\n\n> --\n\n> Simon Riggs                http://www.EnterpriseDB.com/\n\n>\n\n>\n\n>\n", "msg_date": "Wed, 23 Nov 2022 15:29:56 -0700", "msg_from": "Cary Huang <cary.huang@highgo.ca>", "msg_from_op": true, "msg_subject": "Re: Patch: Global Unique Index" }, { "msg_contents": "Tom Lane schrieb am 18.11.2022 um 16:06:\n>> Do we need new syntax actually? I think that a global unique index\n>> can be created automatically instead of raising an error \"unique\n>> constraint on partitioned table must include all partitioning\n>> columns\"\n>\n> I'm not convinced that we want this feature at all: as far as I can\n> see, it will completely destroy the benefits of making a partitioned\n> table in the first place. But if we do want it, I don't think it\n> should be so easy to create a global index by accident as that syntax\n> approach would make it. 
I think there needs to be a pretty clear YES\n> I WANT TO SHOOT MYSELF IN THE FOOT clause in the command.\n\nThere are many Oracle users that find global indexes useful despite\ntheir disadvantages.\n\nI have seen this mostly when the goal was to get the benefits of\npartition pruning at runtime which turned the full table scan (=Seq Scan)\non huge tables to partition scans on much smaller partitions.\nPartition wise joins were also helpful for query performance.\nThe substantially slower drop partition performance was accepted in those cases\n\nI think it would be nice to have the option in Postgres as well.\n\nI do agree however, that the global index should not be created automatically.\n\nSomething like CREATE GLOBAL [UNIQUE] INDEX ... would be a lot better\n\n\nJust my 0.05€\n\n\n", "msg_date": "Wed, 23 Nov 2022 23:42:28 +0100", "msg_from": "Thomas Kellerer <shammat@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Patch: Global Unique Index" }, { "msg_contents": "st 23. 11. 2022 v 23:42 odesílatel Thomas Kellerer <shammat@gmx.net> napsal:\n\n> Tom Lane schrieb am 18.11.2022 um 16:06:\n> >> Do we need new syntax actually? I think that a global unique index\n> >> can be created automatically instead of raising an error \"unique\n> >> constraint on partitioned table must include all partitioning\n> >> columns\"\n> >\n> > I'm not convinced that we want this feature at all: as far as I can\n> > see, it will completely destroy the benefits of making a partitioned\n> > table in the first place.  But if we do want it, I don't think it\n> > should be so easy to create a global index by accident as that syntax\n> > approach would make it. 
I think there needs to be a pretty clear YES\n> > I WANT TO SHOOT MYSELF IN THE FOOT clause in the command.\n>\n> There are many Oracle users that find global indexes useful despite\n> their disadvantages.\n>\n> I have seen this mostly when the goal was to get the benefits of\n> partition pruning at runtime which turned the full table scan (=Seq Scan)\n> on huge tables to partition scans on much smaller partitions.\n> Partition wise joins were also helpful for query performance.\n> The substantially slower drop partition performance was accepted in thos\n> cases\n>\n\n> I think it would be nice to have the option in Postgres as well.\n>\n> I do agree however, that the global index should not be created\n> automatically.\n>\n> Something like CREATE GLOBAL [UNIQUE] INDEX ... would be a lot better\n>\n\nIs it necessary to use special marks like GLOBAL if this index will be\npartitioned, and uniqueness will be ensured by repeated evaluations?\n\nOr you think so there should be really forced one relation based index?\n\nI can imagine a unique index on partitions without a special mark, that\nwill be partitioned, and a second variant classic index created over a\npartitioned table, that will be marked as GLOBAL.\n\nRegards\n\nPavel\n\n\n>\n> Just my 0.05€\n>\n>\n>\n", "msg_date": "Thu, 24 Nov 2022 07:03:24 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Patch: Global Unique Index" }, { "msg_contents": "Pavel Stehule schrieb am 24.11.2022 um 07:03:\n> There are many Oracle users that find global indexes useful despite\n> their disadvantages.\n>\n> I have seen this mostly when the goal was to get the benefits of\n> partition pruning at runtime which turned the full table scan (=Seq Scan)\n> on huge tables to partition scans on much smaller partitions.\n> Partition wise joins were also helpful for query performance.\n> The substantially slower drop partition performance was accepted in thos cases\n>\n\n> I think it would be nice to have the option in Postgres as well.\n>\n> I do agree 
however, that the global index should not be created automatically.\n>\n> Something like CREATE GLOBAL [UNIQUE] INDEX ... would be a lot better\n>\n>\n> Is it necessary to use special marks like GLOBAL if this index will\n> be partitioned, and uniqueness will be ensured by repeated\n> evaluations?\n>\n> Or you think so there should be really forced one relation based\n> index?\n>\n> I can imagine a unique index on partitions without a special mark,\n> that will be partitioned, and a second variant classic index created\n> over a partitioned table, that will be marked as GLOBAL.\n\n\nMy personal opinion is, that a global index should never be created\nautomatically.\n\nThe user should consciously decide on using a feature\nthat might have a serious impact on performance in some areas.\n\n\n\n", "msg_date": "Thu, 24 Nov 2022 16:00:59 +0100", "msg_from": "Thomas Kellerer <shammat@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Patch: Global Unique Index" }, { "msg_contents": "On Fri, Nov 18, 2022 at 3:31 AM Cary Huang <cary.huang@highgo.ca> wrote:\n>\n> Patch: Global Unique Index\n> - Optimizer, query planning and vacuum -\n> Since no major modification is done on global unique index's structure and storage, it works in the same way as a regular partitioned index. No major change is required to be done on optimizer, planner and vacuum process as they should work in the same way as regular index.\n\nIt might not need changes in the vacuum to make it work. But this can\nnot be really useful without modifying the vacuum the way it works. I\nmean currently, the indexes are also partitioned based on the table so\nwhenever we do table vacuum it's fine to do index vacuum but now you\nwill have one gigantic index and which will be vacuumed every time we\nvacuum any of the partitions. 
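To make that amplification concrete, here is a back-of-envelope model (illustrative Python only; the function and numbers are invented for the sketch and are not taken from the patch or from PostgreSQL's vacuum code):

```python
def index_pages_scanned(n_partitions, pages_per_local_index, shared_global_index):
    """Toy model: total index pages scanned while vacuuming each
    partition of a table exactly once.

    With per-partition (local) indexes, vacuuming a partition scans
    only that partition's own index.  With one shared global index,
    every per-partition vacuum must scan the whole global index,
    because dead TIDs from that partition can be anywhere in it.
    """
    if shared_global_index:
        global_index_pages = n_partitions * pages_per_local_index
        return n_partitions * global_index_pages
    return n_partitions * pages_per_local_index

# Vacuuming all 10000 partitions once:
local_cost = index_pages_scanned(10000, 100, shared_global_index=False)
shared_cost = index_pages_scanned(10000, 100, shared_global_index=True)
ratio = shared_cost // local_cost   # n_partitions-fold penalty in this toy model
```

In this model the shared layout pays an n_partitions-fold penalty in index I/O, which is the cost that the decoupled, threshold-driven index vacuuming idea mentioned below would amortize.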
So for example, if you have 10000\npartitions then by the time you vacuum the whole table (all 10000\npartitions) the global index will be vacuumed 10000 times.\n\nThere was some effort in past (though it was not concluded) about\ndecoupling the index and heap vacuuming such that instead of doing the\nindex vacuum for each partition we remember the dead tids and we only\ndo the index vacuum when we think there are enough dead items so that\nthe index vacuum makes sense[1].\n\n[1] https://www.postgresql.org/message-id/CA%2BTgmoZgapzekbTqdBrcH8O8Yifi10_nB7uWLB8ajAhGL21M6A%40mail.gmail.com\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 24 Nov 2022 20:52:16 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Patch: Global Unique Index" }, { "msg_contents": "On Thu, Nov 24, 2022 at 07:03:24AM +0100, Pavel Stehule wrote:\n> I can imagine a unique index on partitions without a special mark, that\n> will be partitioned, \n\nThat exists since v11, as long as the index keys include the partition\nkeys.\n\n> and a second variant classic index created over a partitioned table,\n> that will be marked as GLOBAL.\n\nThat's not what this patch is about, though.\n\nOn Thu, Nov 24, 2022 at 08:52:16PM +0530, Dilip Kumar wrote:\n> but now you will have one gigantic index and which will be vacuumed\n> every time we vacuum any of the partitions.\n\nThis patch isn't implemented as \"one gigantic index\", though.\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 24 Nov 2022 10:09:23 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Patch: Global Unique Index" }, { "msg_contents": " ---- On Thu, 24 Nov 2022 08:00:59 -0700 Thomas Kellerer wrote --- \n > Pavel Stehule schrieb am 24.11.2022 um 07:03:\n > > There are many Oracle users that find global indexes useful despite\n > > their disadvantages.\n > >\n > > I have seen this mostly when the goal 
was to get the benefits of\n > > partition pruning at runtime which turned the full table scan (=Seq Scan)\n > > on huge tables to partition scans on much smaller partitions.\n > > Partition wise joins were also helpful for query performance.\n > > The substantially slower drop partition performance was accepted in thos cases\n > >\n > >\n > > I think it would be nice to have the option in Postgres as well.\n > >\n > > I do agree however, that the global index should not be created automatically.\n > >\n > > Something like CREATE GLOBAL [UNIQUE] INDEX ... would be a lot better\n > >\n > >\n > > Is it necessary to use special marks like GLOBAL if this index will\n > > be partitioned, and uniqueness will be ensured by repeated\n > > evaluations?\n > >\n > > Or you think so there should be really forced one relation based\n > > index?\n > >\n > > I can imagine a unique index on partitions without a special mark,\n > > that will be partitioned, and a second variant classic index created\n > > over a partitioned table, that will be marked as GLOBAL.\n > \n > \n > My personal opinion is, that a global index should never be created\n > automatically.\n > \n > The user should consciously decide on using a feature\n > that might have a serious impact on performance in some areas.\n\n\nAgreed, if a unique index is created on non-partition key columns without including the special mark (partition key columns), it may be a mistake from user. (At least I make this mistake all the time). Current PG will give you a warning to include the partition keys, which is good. 
\n\nIf we were to automatically turn that into a global unique index, a user may end up using the feature without knowing it, and experience some performance impact (from the extra uniqueness checks across all partitions).\n\n\n\n", "msg_date": "Thu, 24 Nov 2022 11:15:39 -0700", "msg_from": "Cary Huang <cary.huang@highgo.ca>", "msg_from_op": true, "msg_subject": "Re: Patch: Global Unique Index" }, { "msg_contents": "On Fri, Nov 25, 2022 at 8:49 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Thu, Nov 24, 2022 at 9:39 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > On Thu, Nov 24, 2022 at 08:52:16PM +0530, Dilip Kumar wrote:\n> > > but now you will have one gigantic index and which will be vacuumed\n> > > every time we vacuum any of the partitions.\n> >\n> > This patch isn't implemented as \"one gigantic index\", though.\n>\n> If this patch is for supporting a global index then I expect that the\n> global index across all the partitions is going to be big. 
Anyway, my\n> point was about vacuuming the common index every time you vacuum any\n> of the partitions of the table is not the right way and that will make\n> global indexes less usable.\n\nOkay, I got your point. After seeing the details it seems instead of\nsupporting one common index it is just allowing uniqueness checks\nacross multiple index partitions. Sorry for the noise.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 25 Nov 2022 08:51:55 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Patch: Global Unique Index" }, { "msg_contents": "On Mon, Nov 21, 2022 at 12:33:30PM +0000, Simon Riggs wrote:\n> On Thu, 17 Nov 2022 at 22:01, Cary Huang <cary.huang@highgo.ca> wrote:\n> >\n> > Patch: Global Unique Index\n> \n> Let me start by expressing severe doubt on the usefulness of such a\n> feature, but also salute your efforts to contribute.\n> \n> > In other words, a global unique index and a regular partitioned index are essentially the same in terms of their storage structure except that one can do cross-partition uniqueness check, the other cannot.\n> \n> This is the only workable architecture, since it allows DETACH to be\n> feasible, which is essential.\n\nI had trouble understanding this feature so I spent some time thinking\nabout it. I don't think this is really a global unique index, meaning\nit is not one index with all the value in the index. Rather it is the\nenforcement of uniqueness across all of a partitioned table's indexes. 
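As a rough picture of what that per-insert enforcement costs, a toy Python model (invented names, not code from the patch):

```python
def probes_needed(n_partitions, conflict_in):
    """Toy model of a cross-partition uniqueness probe that scans the
    per-partition indexes in a fixed order.

    conflict_in: 0-based position of the partition holding a
    conflicting key, or None if the key exists nowhere.
    """
    if conflict_in is None:
        return n_partitions          # must rule out every partition
    return conflict_in + 1           # can stop at the first conflict

n = 100
avg_on_conflict = sum(probes_needed(n, p) for p in range(n)) / n  # about half
cost_when_unique = probes_needed(n, None)                         # all of them
```

A conflicting key is found after about half the probes on average, while a genuinely unique key pays for a probe of every partition's index.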
\nI think global indexes have a limited enough use-case that this patch's\napproach is as close as we are going to get to it in the foreseeable\nfuture.\n\nSecond, I outlined the three values of global indexes in this blog\nentry, based on a 2019 email thread:\n\n\thttps://momjian.us/main/blogs/pgblog/2020.html#July_1_2020\n\thttps://www.postgresql.org/message-id/CA+Tgmob_J2M2+QKWrhg2NjQEkMEwZNTfd7a6Ubg34fJuZPkN2g@mail.gmail.com\n\nThe three values are:\n\n\t1. The ability to reference partitioned tables as foreign keys\n\twithout requiring the partition key to be part of the foreign\n\tkey reference; Postgres 12 allows such foreign keys if they match\n\tpartition keys.\n\n\t2. The ability to add a uniqueness constraint to a partitioned\n\ttable where the unique columns are not part of the partition key.\n\n\t3. The ability to index values that only appear in a few\n\tpartitions, and are not part of the partition key.\n\nThis patch should help with #1 and #2, but not #3. The uniqueness\nguarantee allows, on average, half of the partitioned table's indexes to\nbe checked if there is a match, and all partitioned table's indexes if\nnot. This is because once you find a match, you don't need to keep\nchecking because the value is unique.\n\nLooking at the patch, I am unclear how the the patch prevents concurrent\nduplicate value insertion during the partitioned index checking. I am\nactually not sure how that can be done without locking all indexes or\ninserting placeholder entries in all indexes. (Yeah, that sounds bad,\nunless I am missing something.)\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\nEmbrace your flaws. 
They make you human, rather than perfect,\nwhich you will never be.\n\n\n", "msg_date": "Fri, 25 Nov 2022 12:48:12 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Patch: Global Unique Index" }, { "msg_contents": "Hi Bruce,\n\nThank you for helping review the patches in such detail.\n\nOn 2022-11-25 9:48 a.m., Bruce Momjian wrote:\n> Looking at the patch, I am unclear how the the patch prevents concurrent\n> duplicate value insertion during the partitioned index checking. I am\n> actually not sure how that can be done without locking all indexes or\n> inserting placeholder entries in all indexes. (Yeah, that sounds bad,\n> unless I am missing something.)\n\nFor the uniqueness check cross all partitions, we tried to follow the \nimplementation of uniqueness check on a single partition, and added a \nloop to check uniqueness on other partitions after the index tuple has \nbeen inserted to current index partition but before this index tuple has \nbeen made visible. 
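In outline, the loop just described behaves something like the following sketch (control flow only — the real implementation is C inside the btree insert path, and every name below is invented for illustration):

```python
class Entry:
    def __init__(self, xid, committed=False):
        self.xid = xid
        self.committed = committed

class Partition:
    def __init__(self):
        self.index = {}      # key -> Entry: toy per-partition unique index

def insert_unique(key, xid, own, partitions, wait_for_xact):
    """Insert `key` into `own`'s local index, then probe every other
    partition's index for a conflict before making the entry visible.

    wait_for_xact(entry, part) models an XactLockTableWait-style sleep:
    it blocks until the in-progress transaction owning `entry` commits
    or aborts (in this toy, the callback mutates or removes the entry),
    after which the partition is checked again.
    """
    if key in own.index:
        raise ValueError("duplicate key value violates unique constraint")
    own.index[key] = Entry(xid)              # inserted, not yet visible
    for part in partitions:
        if part is own:
            continue
        while True:
            entry = part.index.get(key)
            if entry is None:
                break                        # no conflict in this partition
            if entry.committed:
                del own.index[key]           # back out our own entry
                raise ValueError("duplicate key value violates unique constraint")
            wait_for_xact(entry, part)       # sleep on the other xact, re-check
    own.index[key].committed = True          # all clear: make it visible
```

A committed duplicate in any partition raises the error; an in-progress duplicate makes the inserter wait and re-check, matching the commit/abort behaviour seen in the two-console test below.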
The uniqueness check will wait `XactLockTableWait` if \nthere is a valid transaction in process, and performs the uniqueness \ncheck again after the in-process transaction finished.\n\nWe tried to simulate this duplicate value case in blow steps:\n\n1) prepare the partitioned table,\nCREATE TABLE gidx_part (a int, b int, c text) PARTITION BY RANGE (a);\nCREATE TABLE gidx_part1 partition of gidx_part FOR VALUES FROM (0) TO (10);\nCREATE TABLE gidx_part2 partition of gidx_part FOR VALUES FROM (10) TO (20);\n\n2) having two psql consoles hooked up with gdbs and set break points \nafter _bt_doinsert\n\nresult = _bt_doinsert(rel, itup, checkUnique, indexUnchanged, heapRel);\n\ninside btinsert function in nbtree.c file.\n\n3) first, execute `INSERT INTO gidx_part values(1, 1, 'test');` on \nconsole-1, and then execute `INSERT INTO gidx_part values(11, 1, \n'test');` on console-2 (expect duplicated value '1' in the 2nd column to \nbe detected),\n\nThe test results is that: console-2 query will have to wait until either \nconsole-1 committed or aborted. If console-1 committed, then console-2 \nreports duplicated value already exists; if console-1 aborted, then \nconsole-2 will report insert successfully. If there is a deadlock, then \nthe one detected this deadlock will error out to allow the other one \ncontinue.\n\nI am not quite sure if this is a proper way to deal with a deadlock in \nthis case. It would be so grateful if someone could help provide some \ncases/methods to verify this cross all partitions uniqueness.\n\nBest regards,\n\nDavid\n\n============================\nHighGo Software Canada\nwww.highgo.ca <http://www.highgo.ca>\n", "msg_date": "Fri, 25 Nov 2022 17:03:06 -0800", "msg_from": "David Zhang <david.zhang@highgo.ca>", "msg_from_op": false, "msg_subject": "Re: Patch: Global Unique Index" }, { "msg_contents": "On Fri, Nov 25, 2022 at 05:03:06PM -0800, David Zhang wrote:\n> Hi Bruce,\n> \n> Thank you for helping review the patches in such detail.\n> \n> On 2022-11-25 9:48 a.m., Bruce Momjian wrote:\n> \n> Looking at the patch, I am unclear how the the patch prevents concurrent\n> duplicate value insertion during the partitioned index checking. I am\n> actually not sure how that can be done without locking all indexes or\n> inserting placeholder entries in all indexes. (Yeah, that sounds bad,\n> unless I am missing something.)\n> \n> For the uniqueness check cross all partitions, we tried to follow the\n> implementation of uniqueness check on a single partition, and added a loop to\n> check uniqueness on other partitions after the index tuple has been inserted to\n> current index partition but before this index tuple has been made visible. The\n> uniqueness check will wait `XactLockTableWait` if there is a valid transaction\n> in process, and performs the uniqueness check again after the in-process\n> transaction finished.\n\nI can't see why this wouldn't work, but I also can't think of any cases\nwhere we do this in our code already, so it will need careful\nconsideration.\n\nWe kind of do this for UPDATE and unique key conflicts, but only for a\nsingle index entry. where we peek and sleep on pending changes, but not\nacross indexes.\n\n> I am not quite sure if this is a proper way to deal with a deadlock in this\n> case. 
It would be so grateful if someone could help provide some cases/methods\n> to verify this cross all partitions uniqueness.\n\nI assume you are using our existing deadlock detection code, and just\nsleeping in various indexes and expecting deadlock detection to happen.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\nEmbrace your flaws. They make you human, rather than perfect,\nwhich you will never be.\n\n\n", "msg_date": "Mon, 28 Nov 2022 16:28:55 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Patch: Global Unique Index" }, { "msg_contents": "On Fri, Nov 18, 2022 at 12:03:53PM +0300, Sergei Kornilov wrote:\n> Hello\n> Do we need new syntax actually? I think that a global unique index can be created automatically instead of raising an error \"unique constraint on partitioned table must include all partitioning columns\"\n\n I may suggest even more of the new syntax.\n\n If someone has to implement sequential index checking on unique\nconstraints, then it would be useful to be able to do that inde-\npendent of partitioning also.\n\nE.g. for some kinds of manual partitions or for strangely de-\nsigned datasets. Or for some of the table partitions instead for\nall of them.\n\nFor that reason, perhaps some other type of unique index -- that\nis not an index per se, but a check against a set of indexes --\ncould be added. 
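Naively modeled (hypothetical Python, far from a real implementation), such a check object is just a membership test over an arbitrary set of existing indexes:

```python
class IndexSetUniqueCheck:
    """Toy model of a uniqueness check that is not an index itself but
    is evaluated against an arbitrary set of existing indexes -- which
    need not correspond to the partitions of any one table."""

    def __init__(self, indexes):
        self.indexes = list(indexes)   # each index modeled as a set of keys

    def violates(self, key):
        return any(key in idx for idx in self.indexes)

    def insert(self, target, key):
        if self.violates(key):
            raise ValueError(f"key {key!r} already present in the index set")
        target.add(key)
```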
Or, perhaps, not an index, but an EXCLUDE con-\nstraint of that kind.\n\n\n\n\n", "msg_date": "Tue, 29 Nov 2022 15:38:41 +0300", "msg_from": "Ilya Anfimov <ilan@tzirechnoy.com>", "msg_from_op": false, "msg_subject": "Re: Patch: Global Unique Index" }, { "msg_contents": "On 11/24/22 19:15, Cary Huang wrote:\n> ---- On Thu, 24 Nov 2022 08:00:59 -0700 Thomas Kellerer wrote ---\n> > Pavel Stehule schrieb am 24.11.2022 um 07:03:\n> > > There are many Oracle users that find global indexes useful despite\n> > > their disadvantages.\n> > >\n> > > I have seen this mostly when the goal was to get the benefits of\n> > > partition pruning at runtime which turned the full table scan (=Seq Scan)\n> > > on huge tables to partition scans on much smaller partitions.\n> > > Partition wise joins were also helpful for query performance.\n> > > The substantially slower drop partition performance was accepted in thos cases\n> > >\n> > >\n> > > I think it would be nice to have the option in Postgres as well.\n> > >\n> > > I do agree however, that the global index should not be created automatically.\n> > >\n> > > Something like CREATE GLOBAL [UNIQUE] INDEX ... 
would be a lot better\n> > >\n> > >\n> > > Is it necessary to use special marks like GLOBAL if this index will\n> > > be partitioned, and uniqueness will be ensured by repeated\n> > > evaluations?\n> > >\n> > > Or you think so there should be really forced one relation based\n> > > index?\n> > >\n> > > I can imagine a unique index on partitions without a special mark,\n> > > that will be partitioned, and a second variant classic index created\n> > > over a partitioned table, that will be marked as GLOBAL.\n> >\n> >\n> > My personal opinion is, that a global index should never be created\n> > automatically.\n> >\n> > The user should consciously decide on using a feature\n> > that might have a serious impact on performance in some areas.\n> \n> \n> Agreed, if a unique index is created on non-partition key columns without including the special mark (partition key columns), it may be a mistake from user. (At least I make this mistake all the time). Current PG will give you a warning to include the partition keys, which is good.\n> \n> If we were to automatically turn that into a global unique index, user may be using the feature without knowing and experiencing some performance impacts (to account for extra uniqueness check in all partitions).\n\nI disagree. A user does not need to know that a table is partitionned, \nand if the user wants a unique constraint on the table then making them \ntype an extra word to get it is just annoying.\n-- \nVik Fearing\n\n\n\n", "msg_date": "Tue, 29 Nov 2022 13:58:21 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: Patch: Global Unique Index" }, { "msg_contents": "On Tue, 2022-11-29 at 13:58 +0100, Vik Fearing wrote:\n> I disagree.  A user does not need to know that a table is partitionned, \n> and if the user wants a unique constraint on the table then making them \n> type an extra word to get it is just annoying.\n\nHmm. 
But if I created a primary key without thinking too hard about it,\nonly to discover later that dropping old partitions has become a problem,\nI would not be too happy either.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Tue, 29 Nov 2022 17:29:04 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Patch: Global Unique Index" }, { "msg_contents": "On Fri, 25 Nov 2022 at 20:03, David Zhang <david.zhang@highgo.ca> wrote:\n>\n> Hi Bruce,\n>\n> Thank you for helping review the patches in such detail.\n>\n> On 2022-11-25 9:48 a.m., Bruce Momjian wrote:\n>\n> Looking at the patch, I am unclear how the the patch prevents concurrent\n> duplicate value insertion during the partitioned index checking. I am\n> actually not sure how that can be done without locking all indexes or\n> inserting placeholder entries in all indexes. (Yeah, that sounds bad,\n> unless I am missing something.)\n>\n> For the uniqueness check cross all partitions, we tried to follow the implementation of uniqueness check on a single partition, and added a loop to check uniqueness on other partitions after the index tuple has been inserted to current index partition but before this index tuple has been made visible. The uniqueness check will wait `XactLockTableWait` if there is a valid transaction in process, and performs the uniqueness check again after the in-process transaction finished.\n\nI think this is the key issue to discuss. The rest is all UX\nbikeshedding (which is pretty important in this case) but this is the\ncore uniqueness implementation.\n\nIf I understand correctly you're going to insert into the local index\nfor the partition using the normal btree uniqueness implementation.\nThen while holding an exclusive lock on the index do lookups on every\npartition for the new key. 
Effectively serializing inserts to the\ntable?\n\nI think the precedent here are \"exclusion constraints\" which are\ndocumented in two places in the manual:\nhttps://www.postgresql.org/docs/current/ddl-constraints.html#DDL-CONSTRAINTS-EXCLUSION\nhttps://www.postgresql.org/docs/current/sql-createtable.html#SQL-CREATETABLE-EXCLUDE\n\nThese also work by doing lookups for violating entries and don't\ndepend on any special index machinery like btree uniqueness. But I\ndon't think they need to entirely serialize inserts either so it may\nbe worth trying to figure out how they manage this to avoid imposing\nthat overhead.\n\nThere's a comment in src/backend/executor/execIndexing.c near the top\nabout them but I'm not sure it covers all the magic needed for them to\nwork...\n\n-- \ngreg\n\n\n", "msg_date": "Tue, 29 Nov 2022 17:51:49 -0500", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: Patch: Global Unique Index" }, { "msg_contents": "Greg Stark <stark@mit.edu> writes:\n> If I understand correctly you're going to insert into the local index\n> for the partition using the normal btree uniqueness implementation.\n> Then while holding an exclusive lock on the index do lookups on every\n> partition for the new key. Effectively serializing inserts to the\n> table?\n\n... not to mention creating a high probability of deadlocks between\nconcurrent insertions to different partitions. If they each\nex-lock their own partition's index before starting to look into\nother partitions' indexes, it seems like a certainty that such\ncases would fail. The rule of thumb about locking multiple objects\nis that all comers had better do it in the same order, and this\nisn't doing that.\n\nThat specific issue could perhaps be fixed by having everybody\nexamine all the indexes in the same order, inserting when you\ncome to your own partition's index and otherwise just checking\nfor conflicts. 
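This is the classic ABBA pattern, easy to see in a toy wait-for-graph model (illustrative Python, unrelated to PostgreSQL's actual deadlock detector):

```python
def has_deadlock(holds, wants):
    """Detect a cycle in the session wait-for graph.
    holds: session -> set of locks it already holds.
    wants: session -> the single lock it is now waiting for."""
    edges = {s: {t for t, held in holds.items() if t != s and wants.get(s) in held}
             for s in wants}
    def reaches(src, dst, seen=frozenset()):
        return any(n == dst or (n not in seen and reaches(n, dst, seen | {n}))
                   for n in edges.get(src, ()))
    return any(reaches(s, s) for s in edges)

# Session A ex-locks its own partition's index (idx1), then probes idx2;
# session B ex-locks idx2 first, then probes idx1: a wait-for cycle.
assert has_deadlock({"A": {"idx1"}, "B": {"idx2"}},
                    {"A": "idx2", "B": "idx1"})
# If everyone acquired in the same global order, the cycle cannot form:
assert not has_deadlock({"A": {"idx1"}, "B": set()},
                        {"A": "idx2", "B": "idx1"})
```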
But that still means serializing insertions\nacross all the partitions. And the fact that you need to lock\nall the partitions, or even just know what they all are, is\ngoing to play hob with a lot of assumptions we've made about\ndifferent partitions being independent, and about what locks\nare needed for operations like ALTER TABLE ATTACH PARTITION.\n\n(I wonder BTW what the game plan is for attaching a partition\nto a partitioned table having a global index. Won't that mean\nhaving to check every row in the new partition against every\none of the existing partitions? So much for ATTACH being fast.)\n\nI still think this is a dead end that will never get committed.\nIf folks want to put time into perhaps finding an ingenious\nway around these problems, okay; but they'd better realize that\nthere's a high probability of failure, or at least coming out\nwith something nobody will want to use.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 29 Nov 2022 18:13:56 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patch: Global Unique Index" }, { "msg_contents": "On Tue, Nov 29, 2022 at 06:13:56PM -0500, Tom Lane wrote:\n> Greg Stark <stark@mit.edu> writes:\n> > If I understand correctly you're going to insert into the local index\n> > for the partition using the normal btree uniqueness implementation.\n> > Then while holding an exclusive lock on the index do lookups on every\n> > partition for the new key. Effectively serializing inserts to the\n> > table?\n> \n> ... not to mention creating a high probability of deadlocks between\n> concurrent insertions to different partitions. If they each\n> ex-lock their own partition's index before starting to look into\n> other partitions' indexes, it seems like a certainty that such\n> cases would fail. 
The rule of thumb about locking multiple objects\n> is that all comers had better do it in the same order, and this\n> isn't doing that.\n\nI am not sure why they would need to exclusive lock anything more than\nthe unique index entry they are adding, just like UPDATE does.\n\n> I still think this is a dead end that will never get committed.\n> If folks want to put time into perhaps finding an ingenious\n> way around these problems, okay; but they'd better realize that\n> there's a high probability of failure, or at least coming out\n> with something nobody will want to use.\n\nAgreed, my earlier point was that this would need a lot of thought to\nget right since we don't do this often. The exclusion constraint is a\nclose example, though that is in a single index.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\nEmbrace your flaws. They make you human, rather than perfect,\nwhich you will never be.\n\n\n", "msg_date": "Tue, 29 Nov 2022 20:59:49 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Patch: Global Unique Index" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Tue, Nov 29, 2022 at 06:13:56PM -0500, Tom Lane wrote:\n>> ... not to mention creating a high probability of deadlocks between\n>> concurrent insertions to different partitions. If they each\n>> ex-lock their own partition's index before starting to look into\n>> other partitions' indexes, it seems like a certainty that such\n>> cases would fail. 
The rule of thumb about locking multiple objects\n>> is that all comers had better do it in the same order, and this\n>> isn't doing that.\n\n> I am not sure why they would need to exclusive lock anything more than\n> the unique index entry they are adding, just like UPDATE does.\n\nAssuming that you are inserting into index X, and you've checked\nindex Y to find that it has no conflicts, what prevents another\nbackend from inserting a conflict into index Y just after you look?\nAIUI the idea is to prevent that by continuing to hold an exclusive\nlock on the whole index Y until you've completed the insertion.\nPerhaps there's a better way to do that, but it's not what was\ndescribed.\n\nI actually think that that problem should be soluble with a\nslightly different approach. The thing that feels insoluble\nis that you can't do this without acquiring sufficient locks\nto prevent addition of new partitions while the insertion is\nin progress. That will be expensive in itself, and it will\nturn ATTACH PARTITION into a performance disaster.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 29 Nov 2022 21:16:23 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patch: Global Unique Index" }, { "msg_contents": "On Tue, Nov 29, 2022 at 09:16:23PM -0500, Tom Lane wrote:\n> Assuming that you are inserting into index X, and you've checked\n> index Y to find that it has no conflicts, what prevents another\n> backend from inserting a conflict into index Y just after you look?\n> AIUI the idea is to prevent that by continuing to hold an exclusive\n> lock on the whole index Y until you've completed the insertion.\n> Perhaps there's a better way to do that, but it's not what was\n> described.\n\nAs I understood it, you insert into index X and then scan all other\nindexes to look for a conflict --- if you find one, you abort with a\nunique index conflict. 
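That ordering -- make your own entry physically present first, scan second -- is what closes the race window. A deterministic toy check (illustrative Python; it enumerates interleavings instead of using real concurrency, and ignores visibility and locking details):

```python
from itertools import permutations

def conflicts_detected(schedule):
    """Replay a schedule of (session, op) steps.  Each session inserts
    a duplicate key into its own partition index first, then scans the
    other partition's index; a scan that sees the other session's entry
    is a detected conflict."""
    index = {"X": set(), "Y": set()}
    own = {"A": "X", "B": "Y"}
    detected = set()
    for session, op in schedule:
        other = "Y" if own[session] == "X" else "X"
        if op == "insert":
            index[own[session]].add(session)
        elif op == "scan" and index[other]:
            detected.add(session)
    return detected

ops = [("A", "insert"), ("A", "scan"), ("B", "insert"), ("B", "scan")]
valid = [s for s in permutations(ops)
         if s.index(("A", "insert")) < s.index(("A", "scan"))
         and s.index(("B", "insert")) < s.index(("B", "scan"))]
# Every legal insert-before-scan interleaving is caught by someone:
all_caught = all(conflicts_detected(s) for s in valid)
```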
Other index changes do the same.\n\nSo, for example, one session inserts into index X and then scans all\nother indexes. During the index scan, another session inserts into\nindex Y, but its scan sees the index X addition and gets a uniqueness\nconflict error.\n\n> I actually think that that problem should be soluble with a\n> slightly different approach. The thing that feels insoluble\n> is that you can't do this without acquiring sufficient locks\n> to prevent addition of new partitions while the insertion is\n> in progress. That will be expensive in itself, and it will\n> turn ATTACH PARTITION into a performance disaster.\n\nYes, that would require index locks.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\nEmbrace your flaws. They make you human, rather than perfect,\nwhich you will never be.\n\n\n", "msg_date": "Tue, 29 Nov 2022 21:42:14 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Patch: Global Unique Index" }, { "msg_contents": "On 11/29/22 17:29, Laurenz Albe wrote:\n> On Tue, 2022-11-29 at 13:58 +0100, Vik Fearing wrote:\n>> I disagree.  A user does not need to know that a table is partitionned,\n>> and if the user wants a unique constraint on the table then making them\n>> type an extra word to get it is just annoying.\n> \n> Hmm. But if I created a primary key without thinking too hard about it,\n> only to discover later that dropping old partitions has become a problem,\n> I would not be too happy either.\n\nI have not looked at this patch, but my understanding of its design is \nthe \"global\" part of the index just makes sure to check a unique index \non each partition. 
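Modeled that way (a toy Python sketch of the design as read here, not the patch itself), there is no shared structure for a dropped partition to leave garbage in:

```python
class PartitionedUnique:
    """Toy model: each partition keeps its own local unique index;
    'global' uniqueness is only an insert-time probe across them."""

    def __init__(self):
        self.local_index = {}            # partition name -> set of keys

    def attach(self, name):
        self.local_index[name] = set()

    def insert(self, name, key):
        if any(key in idx for idx in self.local_index.values()):
            raise ValueError("duplicate key")
        self.local_index[name].add(key)

    def detach(self, name):
        # The partition leaves together with its index; the remaining
        # indexes are untouched, so nothing global needs cleanup.
        return self.local_index.pop(name)
```

(ATTACH of a non-empty partition is the hard direction, as other messages in the thread discuss, since all of its existing keys would need to be probed.)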
I don't see from that how dropping old partitions \nwould be a problem.\n-- \nVik Fearing\n\n\n\n", "msg_date": "Wed, 30 Nov 2022 10:09:49 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: Patch: Global Unique Index" }, { "msg_contents": "On Wed, 2022-11-30 at 10:09 +0100, Vik Fearing wrote:\n> On 11/29/22 17:29, Laurenz Albe wrote:\n> > On Tue, 2022-11-29 at 13:58 +0100, Vik Fearing wrote:\n> > > I disagree.  A user does not need to know that a table is partitionned,\n> > > and if the user wants a unique constraint on the table then making them\n> > > type an extra word to get it is just annoying.\n> > \n> > Hmm.  But if I created a primary key without thinking too hard about it,\n> > only to discover later that dropping old partitions has become a problem,\n> > I would not be too happy either.\n> \n> I have not looked at this patch, but my understanding of its design is \n> the \"global\" part of the index just makes sure to check a unique index \n> on each partition.  I don't see from that how dropping old partitions \n> would be a problem.\n\nRight, I should have looked closer. But, according to the parallel discussion,\nATTACH PARTITION might be a problem. A global index is likely to be a footgun\none way or the other, so I think it should at least have a safety on\n(CREATE PARTITIONED GLOBAL INDEX or something).\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Wed, 30 Nov 2022 13:28:50 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Patch: Global Unique Index" }, { "msg_contents": "On Tue, 29 Nov 2022 at 21:16, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I actually think that that problem should be soluble with a\n> slightly different approach. The thing that feels insoluble\n> is that you can't do this without acquiring sufficient locks\n> to prevent addition of new partitions while the insertion is\n> in progress. 
That will be expensive in itself, and it will\n> turn ATTACH PARTITION into a performance disaster.\n\nI think there's a lot of room to manoeuvre here. This is a new feature\nthat doesn't need to be 100% complete or satisfy any existing\nstandard. There are lots of options for compromises that leave room\nfor future improvements.\n\n1) We could just say sure ATTACH is slow if you're attaching a\nnon-empty partition\n2) We could invent a concept like convalidated and let people attach a\npartition without validating the uniqueness and then validate it later\nconcurrently\n3) We could say ATTACH doesn't work now and come up with a better\nstrategy in the future\n\nAlso, don't I vaguely recall something in exclusion constraints about\nhaving some kind of in-memory \"intent\" list where you declared that\nyou're about to insert a value, you validate it doesn't violate the\nconstraint and then you're free to insert it because anyone else will\nsee your intent in memory? There might be a need for some kind of\nglobal object that only holds inserted keys long enough that other\nsessions are guaranteed to see the key in the correct index. And that\ncould maybe even be in memory rather than on disk.\n\nThis isn't a simple project but I don't think it's impossible as long\nas we keep an open mind about the requirements.\n\n\n\n--\ngreg\n\n\n", "msg_date": "Wed, 30 Nov 2022 17:30:59 -0500", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: Patch: Global Unique Index" }, { "msg_contents": "Thanks a lot for all the comments.\n\nOn 2022-11-29 3:13 p.m., Tom Lane wrote:\n> ... not to mention creating a high probability of deadlocks between\n> concurrent insertions to different partitions. If they each\n> ex-lock their own partition's index before starting to look into\n> other partitions' indexes, it seems like a certainty that such\n> cases would fail. 
The rule of thumb about locking multiple objects\n> is that all comers had better do it in the same order, and this\n> isn't doing that.\nIn the current POC patch, the deadlock is happening when backend-1 \ninserts a value to index X(partition-1), and backend-2 try to insert a \nconflict value right after backend-1 released the buffer block lock but \nbefore start to check unique on index Y(partition-2). In this case, \nbackend-1 holds ExclusiveLock on transaction-1 and waits for ShareLock \non transaction-2 , while backend-2 holds ExclusiveLock on transaction-2 \nand waits for ShareLock on transaction-1. Based on my debugging tests, \nthis only happens when backend-1 and backend-2 want to insert a conflict \nvalue. If this is true, then is it ok to either `deadlock` error out or \n`duplicated value` error out since this is a conflict value? (hopefully \nend users can handle it in a similar way). I think the probability of \nsuch deadlock has two conditions: 1) users insert a conflict value and \nplus 2) the uniqueness checking happens in the right moment (see above).\n> That specific issue could perhaps be fixed by having everybody\n> examine all the indexes in the same order, inserting when you\n> come to your own partition's index and otherwise just checking\n> for conflicts. But that still means serializing insertions\n> across all the partitions. And the fact that you need to lock\n> all the partitions, or even just know what they all are,\nHere is the main change for insertion cross-partition uniqueness check \nin `0004-support-global-unique-index-insert-and-update.patch`,\n      result = _bt_doinsert(rel, itup, checkUnique, indexUnchanged, \nheapRel);\n\n+    if (checkUnique != UNIQUE_CHECK_NO)\n+        btinsert_check_unique_gi(itup, rel, heapRel, checkUnique);\n+\n      pfree(itup);\n\nwhere, a cross-partition uniqueness check is added after the index tuple \nbtree insertion on current partition. 
The idea is to make sure other \nbackends can find out the ongoing index tuple just inserted (but before \nmarked as visible yet), and the current partition uniqueness check can \nbe skipped as it has already been checked. Based on this change, I think \nthe insertion serialization can happen in two cases: 1) two insertions \nhappen on the same buffer block (buffer lock waiting); 2) two ongoing \ninsertions with duplicated values (transaction id waiting);\n\n\n\n\n", "msg_date": "Fri, 2 Dec 2022 15:29:25 -0800", "msg_from": "David Zhang <david.zhang@highgo.ca>", "msg_from_op": false, "msg_subject": "Re: Patch: Global Unique Index" }, { "msg_contents": "On 2022-11-29 6:16 p.m., Tom Lane wrote:\n> Assuming that you are inserting into index X, and you've checked\n> index Y to find that it has no conflicts, what prevents another\n> backend from inserting a conflict into index Y just after you look?\n> AIUI the idea is to prevent that by continuing to hold an exclusive\n> lock on the whole index Y until you've completed the insertion.\n> Perhaps there's a better way to do that, but it's not what was\n> described.\nAnother main change in patch \n`0004-support-global-unique-index-insert-and-update.patch`,\n+                search_global:\n+                        stack = _bt_search(iRel, insertstate.itup_key,\n+                                           &insertstate.buf, BT_READ, \nNULL);\n+                        xwait = _bt_check_unique_gi(iRel, &insertstate,\n+                                                    hRel, checkUnique, \n&is_unique,\n+ &speculativeToken, heapRel);\n+                        if (unlikely(TransactionIdIsValid(xwait)))\n+                        {\n... ...\n+                            goto search_global;\n+                        }\n\nHere, I am trying to use `BT_READ` to require a LW_SHARED lock on the \nbuffer block if a match found using `itup_key` search key. 
The \ncross-partition uniqueness checking will wait if the index tuple \ninsertion on this buffer block has not done yet, otherwise runs the \nuniqueness check to see if there is an ongoing transaction which may \ninsert a conflict value. Once the ongoing insertion is done, it will go \nback and check again (I think it can also handle the case that a \npotential conflict index tuple was later marked as dead in the same \ntransaction). Based on this change, my test results are:\n\n1) a select-only query will not be blocked by the ongoing insertion on \nindex X\n\n2) insertion happening on index Y may wait for the buffer block lock \nwhen inserting a different value but it does not wait for the \ntransaction lock held by insertion on index X.\n\n3) when an insertion inserting a conflict value on index Y,\n     3.1) it waits for buffer block lock if the lock has been held by \nthe insertion on index X.\n     3.2) then, it waits for transaction lock until the insertion on \nindex X is done.\n\n\n\n", "msg_date": "Fri, 2 Dec 2022 16:05:08 -0800", "msg_from": "David Zhang <david.zhang@highgo.ca>", "msg_from_op": false, "msg_subject": "Re: Patch: Global Unique Index" }, { "msg_contents": "Hi!\n\nSorry to bother - but is this patch used in IvorySQL?\nHere:\nhttps://www.ivorysql.org/docs/Global%20Unique%20Index/create_global_unique_index\nAccording to syntax it definitely looks like this patch.\nThank you!\n\n\nOn Sat, Dec 3, 2022 at 3:05 AM David Zhang <david.zhang@highgo.ca> wrote:\n\n> On 2022-11-29 6:16 p.m., Tom Lane wrote:\n> > Assuming that you are inserting into index X, and you've checked\n> > index Y to find that it has no conflicts, what prevents another\n> > backend from inserting a conflict into index Y just after you look?\n> > AIUI the idea is to prevent that by continuing to hold an exclusive\n> > lock on the whole index Y until you've completed the insertion.\n> > Perhaps there's a better way to do that, but it's not what was\n> > described.\n> Another main 
change in patch\n> `0004-support-global-unique-index-insert-and-update.patch`,\n> + search_global:\n> + stack = _bt_search(iRel, insertstate.itup_key,\n> + &insertstate.buf, BT_READ,\n> NULL);\n> + xwait = _bt_check_unique_gi(iRel, &insertstate,\n> + hRel, checkUnique,\n> &is_unique,\n> + &speculativeToken, heapRel);\n> + if (unlikely(TransactionIdIsValid(xwait)))\n> + {\n> ... ...\n> + goto search_global;\n> + }\n>\n> Here, I am trying to use `BT_READ` to require a LW_SHARED lock on the\n> buffer block if a match found using `itup_key` search key. The\n> cross-partition uniqueness checking will wait if the index tuple\n> insertion on this buffer block has not done yet, otherwise runs the\n> uniqueness check to see if there is an ongoing transaction which may\n> insert a conflict value. Once the ongoing insertion is done, it will go\n> back and check again (I think it can also handle the case that a\n> potential conflict index tuple was later marked as dead in the same\n> transaction). Based on this change, my test results are:\n>\n> 1) a select-only query will not be blocked by the ongoing insertion on\n> index X\n>\n> 2) insertion happening on index Y may wait for the buffer block lock\n> when inserting a different value but it does not wait for the\n> transaction lock held by insertion on index X.\n>\n> 3) when an insertion inserting a conflict value on index Y,\n> 3.1) it waits for buffer block lock if the lock has been held by\n> the insertion on index X.\n> 3.2) then, it waits for transaction lock until the insertion on\n> index X is done.\n>\n>\n>\n>\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/\n\n", "msg_date": "Mon, 19 Dec 2022 18:51:39 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Patch: Global Unique Index" }, { "msg_contents": "On 2022-12-19 7:51 a.m., Nikita Malakhov wrote:\n> Sorry to bother - but is this patch used in IvorySQL?\n> Here:\n> https://www.ivorysql.org/docs/Global%20Unique%20Index/create_global_unique_index\n> According to syntax it definitely looks like this patch.\n\nThe global unique index is one of the features required in IvorySQL \ndevelopment. 
We want to share it to the communities to get more \nfeedback, and then hopefully we could better contribute it back to \nPostgreSQL.\n\nBest regards,\n\nDavid\n\n\n\n", "msg_date": "Tue, 27 Dec 2022 13:13:53 -0800", "msg_from": "David Zhang <david.zhang@highgo.ca>", "msg_from_op": false, "msg_subject": "Re: Patch: Global Unique Index" }, { "msg_contents": "On 2022-11-29 6:16 p.m., Tom Lane wrote:\n> Assuming that you are inserting into index X, and you've checked\n> index Y to find that it has no conflicts, what prevents another\n> backend from inserting a conflict into index Y just after you look?\n> AIUI the idea is to prevent that by continuing to hold an exclusive\n> lock on the whole index Y until you've completed the insertion.\n> Perhaps there's a better way to do that, but it's not what was\n> described.\n\nDuring inserts, global unique index patch does not acquire exclusive \nlock on the whole index Y while checking it for the uniqueness; it \nacquires a low level AccessShareLock on Y and will release after \nchecking. So while it is checking, another backend can still insert a \nduplicate in index Y. If this is the case, a \"transaction level lock\" \nwill be triggered.\n\nFor example.\n\nSay backend A inserts into index X, and checks index Y to find no \nconflict, and backend B inserts a conflict into index Y right after. In \nthis case, backend B still has to check index X for conflict and It will \nfetch a duplicate tuple that has been inserted by A, but it cannot \ndeclare a duplicate error yet. This is because the transaction inserting \nthis conflict tuple started by backend A is still in progress. At this \nmoment, backend B has to wait for backend A to commit / abort before it \ncan continue. 
This is how \"transaction level lock\" prevents concurrent \ninsert conflicts.\n\nThere is a chance of deadlock if the conflicting insertions done by A \nand B happen at roughly the same time, where both backends trigger \n\"transaction level lock\" to wait for each other to commit/abort. If this \nis the case, PG's deadlock detection code will error out one of the \nbackends.  It should be okay because it means one of the backends tries \nto insert a conflict. The purpose of global unique index is also to \nerror out backends trying to insert duplicates. In the end the effects \nare the same, it's just that the error says deadlock detected instead of \nduplicate detected.\n\nIf backend B did not insert a conflicting tuple, no transaction lock \nwait will be triggered, and therefore no deadlock will happen.\n\nRegards\nCary Huang\n-----------------------\nHighGo Software Canada\n\n\n\n\n\n", "msg_date": "Thu, 12 Jan 2023 14:37:40 -0800", "msg_from": "Cary Huang <cary.huang@highgo.ca>", "msg_from_op": true, "msg_subject": "Re: Patch: Global Unique Index" }, { "msg_contents": "On 2022-11-30 2:30 p.m., Greg Stark wrote:\n> On Tue, 29 Nov 2022 at 21:16, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I actually think that that problem should be soluble with a\n>> slightly different approach. The thing that feels insoluble\n>> is that you can't do this without acquiring sufficient locks\n>> to prevent addition of new partitions while the insertion is\n>> in progress. That will be expensive in itself, and it will\n>> turn ATTACH PARTITION into a performance disaster.\n> I think there`s a lot of room to manoeuvre here. This is a new feature\n> that doesn't need to be 100% complete or satisfy any existing\n> standard. 
There are lots of options for compromises that leave room\n> for future improvements.\n>\n> 1) We could just say sure ATTACH is slow if you're attaching an\n> non-empty partition\n> 2) We could invent a concept like convalidated and let people attach a\n> partition without validating the uniqueness and then validate it later\n> concurrently\n> 3) We could say ATTACH doesn't work now and come up with a better\n> strategy in the future\n>\n> Also, don't I vaguely recall something in exclusion constraints about\n> having some kind of in-memory \"intent\" list where you declared that\n> you're about to insert a value, you validate it doesn't violate the\n> constraint and then you're free to insert it because anyone else will\n> see your intent in memory? There might be a need for some kind of\n> global object that only holds inserted keys long enough that other\n> sessions are guaranteed to see the key in the correct index. And that\n> could maybe even be in memory rather than on disk.\n>\n> This isn't a simple project but I don't think it's impossible as long\n> as we keep an open mind about the requirements.\n\nIn the current global unique index implementation, ATTACH can be slow if \nthere are concurrent inserts happening. ATTACH tries to acquire \nshareLock on all existing partitions and partition-to-be before it scans \nand sorts them for uniqueness check. It will release them only after all \npartitions have been checked. If there are concurrent inserts, ATTACH \nhas to wait for all inserts to complete. Likewise, if ATTACH is in \nprogress, inserts have to wait as well. This is an issue now.\n\nIf we were to make ATTACH acquire a lower-level lock (AccessShareLock), \nscan a partition, and then release it, there is nothing stopping any \nconcurrent inserts from inserting a conflict right after it finishes \nchecking. This is another issue. 
There is no transaction level lock \nbeing triggered here like in the multiple concurrent inserts case.\n\nAnother email thread called \"create index concurrently on partitioned \nindex\" discusses some approaches that may be used to solve the attach \nissue here, basically to allow ATTACH PARTITION CONCURRENTLY...\n\n\nregards\n\nCary Huang\n---------------------------------\nHighGo Software Canada\n\n\n\n\n\n\n\n\n", "msg_date": "Fri, 13 Jan 2023 14:22:58 -0800", "msg_from": "Cary Huang <cary.huang@highgo.ca>", "msg_from_op": true, "msg_subject": "Re: Patch: Global Unique Index" }, { "msg_contents": "Hi!\n\nPlease advise on the status of this patch set - are there any improvements?\nIs there any work going on?\n\nThanks!\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/\n\n", "msg_date": "Fri, 24 Nov 2023 14:40:14 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Patch: Global Unique Index" } ]
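For readers skimming the thread above, a minimal SQL sketch of the gap being discussed may help. Stock PostgreSQL refuses a unique index on a partitioned table unless it includes all partitioning columns, which is exactly what the patch tries to lift; the commented-out GLOBAL form below is illustrative of the patch's direction, not syntax that shipped PostgreSQL accepts:

```sql
-- Stock PostgreSQL: per-partition btrees cannot enforce uniqueness of a
-- column that is not part of the partition key.
CREATE TABLE test (id int, description text) PARTITION BY RANGE (id);
CREATE TABLE test_p1 PARTITION OF test FOR VALUES FROM (0) TO (1000);
CREATE TABLE test_p2 PARTITION OF test FOR VALUES FROM (1000) TO (2000);

CREATE UNIQUE INDEX ON test (description);
-- ERROR:  unique constraint on partitioned table must include all
-- partitioning columns

-- With the patch applied (hypothetical syntax), each partition keeps its
-- own btree and every insert additionally probes the sibling partitions'
-- indexes for duplicates, as described in the messages above:
-- CREATE UNIQUE INDEX test_desc_gidx ON test (description) GLOBAL;
```

The probe-the-siblings behavior is what the deadlock and ATTACH PARTITION concerns in this thread are about: correctness hinges on an insert into one partition's index being visible to a concurrent uniqueness check against another partition's index.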
[ { "msg_contents": "Hi hackers,\n\nPlease find attached a patch proposal to avoid 2 calls to \npgstat_fetch_stat_tabentry_ext() in pgstat_fetch_stat_tabentry() in case \nthe relation is not a shared one and no statistics are found.\n\nThanks Andres for the suggestion done in [1].\n\n[1]: \nhttps://www.postgresql.org/message-id/20221116201202.3k74ajawyom2c3eq%40awork3.anarazel.de\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 18 Nov 2022 06:01:12 +0100", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Avoid double lookup in pgstat_fetch_stat_tabentry()" }, { "msg_contents": "On Fri, Nov 18, 2022 at 10:32 AM Drouvot, Bertrand\n<bertranddrouvot.pg@gmail.com> wrote:\n>\n> Hi hackers,\n>\n> Please find attached a patch proposal to avoid 2 calls to\n> pgstat_fetch_stat_tabentry_ext() in pgstat_fetch_stat_tabentry() in case\n> the relation is not a shared one and no statistics are found.\n>\n> Thanks Andres for the suggestion done in [1].\n>\n> [1]:\n> https://www.postgresql.org/message-id/20221116201202.3k74ajawyom2c3eq%40awork3.anarazel.de\n\n+1. The patch LGTM. However, I have a suggestion to simplify it\nfurther by getting rid of the local variable tabentry and just\nreturning pgstat_fetch_stat_tabentry_ext(IsSharedRelation(relid),\nrelid);. 
Furthermore, the pgstat_fetch_stat_tabentry() can just be a\nstatic inline function.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 18 Nov 2022 11:36:34 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Avoid double lookup in pgstat_fetch_stat_tabentry()" }, { "msg_contents": "Hi,\n\nOn 11/18/22 7:06 AM, Bharath Rupireddy wrote:\n> On Fri, Nov 18, 2022 at 10:32 AM Drouvot, Bertrand\n> <bertranddrouvot.pg@gmail.com> wrote:\n>>\n>> Hi hackers,\n>>\n>> Please find attached a patch proposal to avoid 2 calls to\n>> pgstat_fetch_stat_tabentry_ext() in pgstat_fetch_stat_tabentry() in case\n>> the relation is not a shared one and no statistics are found.\n>>\n>> Thanks Andres for the suggestion done in [1].\n>>\n>> [1]:\n>> https://www.postgresql.org/message-id/20221116201202.3k74ajawyom2c3eq%40awork3.anarazel.de\n> \n> +1. The patch LGTM. \n\nThanks for looking at it!\n\n> However, I have a suggestion to simplify it\n> further by getting rid of the local variable tabentry and just\n> returning pgstat_fetch_stat_tabentry_ext(IsSharedRelation(relid),\n> relid);. Furthermore, the pgstat_fetch_stat_tabentry() can just be a\n> static inline function.\nGood point. 
While at it, why not completely get rid of \npgstat_fetch_stat_tabentry_ext(), like in v2 the attached?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 18 Nov 2022 11:09:43 +0100", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Avoid double lookup in pgstat_fetch_stat_tabentry()" }, { "msg_contents": "On Fri, Nov 18, 2022 at 3:41 PM Drouvot, Bertrand\n<bertranddrouvot.pg@gmail.com> wrote:\n>\n> > However, I have a suggestion to simplify it\n> > further by getting rid of the local variable tabentry and just\n> > returning pgstat_fetch_stat_tabentry_ext(IsSharedRelation(relid),\n> > relid);. Furthermore, the pgstat_fetch_stat_tabentry() can just be a\n> > static inline function.\n> Good point. While at it, why not completely get rid of\n> pgstat_fetch_stat_tabentry_ext(), like in v2 the attached?\n\nHm. While it saves around 20 LOC, IsSharedRelation() is now spread\nacross, but WFM.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 18 Nov 2022 17:08:28 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Avoid double lookup in pgstat_fetch_stat_tabentry()" }, { "msg_contents": "Hi,\n\nOn 2022-11-18 11:09:43 +0100, Drouvot, Bertrand wrote:\n> > Furthermore, the pgstat_fetch_stat_tabentry() can just be a\n> > static inline function.\n\nI think that's just premature optimization for something like this. The\nfunction call overhead on accessing stats can't be a relevant factor - the\nincrease in code size is more likely to matter (but still unlikely).\n\n\n> Good point. 
While at it, why not completely get rid of\n> pgstat_fetch_stat_tabentry_ext(), like in v2 the attached?\n\n-1, I don't think spreading the IsSharedRelation() is a good idea. It costs\nmore code than it saves.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 18 Nov 2022 09:32:10 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Avoid double lookup in pgstat_fetch_stat_tabentry()" }, { "msg_contents": "Hi,\n\nOn 11/18/22 6:32 PM, Andres Freund wrote:\n> Hi,\n> \n> On 2022-11-18 11:09:43 +0100, Drouvot, Bertrand wrote:\n>>> Furthermore, the pgstat_fetch_stat_tabentry() can just be a\n>>> static inline function.\n> \n> I think that's just premature optimization for something like this. The\n> function call overhead on accessing stats can't be a relevant factor - the\n> increase in code size is more likely to matter (but still unlikely).\n> \n> \n>> Good point. While at it, why not completely get rid of\n>> pgstat_fetch_stat_tabentry_ext(), like in v2 the attached?\n> \n> -1, I don't think spreading the IsSharedRelation() is a good idea. 
It costs\n> more code than it saves.\n> \n\nGot it, please find attached V3: switching back to the initial proposal and implementing Bharath's comment (getting rid of the local variable tabentry).\n\nOut of curiosity, here are the sizes (no debug):\n\n- Current code (no patch)\n\n$ size ./src/backend/utils/adt/pgstatfuncs.o ./src/backend/utils/activity/pgstat_relation.o\n text data bss dec hex filename\n 24974 0 0 24974 618e ./src/backend/utils/adt/pgstatfuncs.o\n 7353 64 0 7417 1cf9 ./src/backend/utils/activity/pgstat_relation.o\n\n- IsSharedRelation() spreading\n\n$ size ./src/backend/utils/adt/pgstatfuncs.o ./src/backend/utils/activity/pgstat_relation.o\n text data bss dec hex filename\n 25304 0 0 25304 62d8 ./src/backend/utils/adt/pgstatfuncs.o\n 7249 64 0 7313 1c91 ./src/backend/utils/activity/pgstat_relation.o\n\n- inline function\n\n$ size ./src/backend/utils/adt/pgstatfuncs.o ./src/backend/utils/activity/pgstat_relation.o\n text data bss dec hex filename\n 25044 0 0 25044 61d4 ./src/backend/utils/adt/pgstatfuncs.o\n 7249 64 0 7313 1c91 ./src/backend/utils/activity/pgstat_relation.o\n\n- V3 attached\n\n$ size ./src/backend/utils/adt/pgstatfuncs.o ./src/backend/utils/activity/pgstat_relation.o\n text data bss dec hex filename\n 24974 0 0 24974 618e ./src/backend/utils/adt/pgstatfuncs.o\n 7323 64 0 7387 1cdb ./src/backend/utils/activity/pgstat_relation.o\n\n\nI'd vote for V3 for readability, size and \"backward compatibility\" with current code.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sat, 19 Nov 2022 09:38:26 +0100", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Avoid double lookup in pgstat_fetch_stat_tabentry()" }, { "msg_contents": "Hi,\n\nOn 2022-11-19 09:38:26 +0100, Drouvot, Bertrand wrote:\n> I'd vote for V3 for readability, size and \"backward compatibility\" with current 
code.\n\nPushed that. Thanks for the patch and evaluation.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 20 Nov 2022 11:00:12 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Avoid double lookup in pgstat_fetch_stat_tabentry()" } ]
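For context, the function refactored in this thread sits behind the SQL-level per-table statistics interface; the committed change removes only a redundant internal lookup (for a non-shared relation with no stats entry) and alters no user-visible behavior. A sketch of how that path is exercised from SQL (the table name is illustrative):

```sql
-- pg_stat_all_tables and the pg_stat_get_*() functions it is built on
-- reach the per-table stats entry via pgstat_fetch_stat_tabentry();
-- before the patch, a miss on a non-shared relation caused a second
-- lookup against the shared-relation stats.
SELECT seq_scan, idx_scan, n_live_tup
FROM pg_stat_all_tables
WHERE relname = 'pg_class';
```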
[ { "msg_contents": "Hi,\n\nI'm unable to reset stats. Please help me to fix this?\n\ntestdb => select * from pg_stat_reset_replication_slot(NULL);\nERROR: permission denied for function pg_stat_reset_replication_slot\n\nRegards,\n*Satya*\n\n", "msg_date": "Fri, 18 Nov 2022 18:38:56 +0530", "msg_from": "Satya Thirumani <satyanarayana.thirumani@gmail.com>", "msg_from_op": true, "msg_subject": "Unable to reset stats using pg_stat_reset_replication_slot" }, { "msg_contents": "This doesn't seem to fit here, but..\n\nAt Fri, 18 Nov 2022 18:38:56 +0530, Satya Thirumani <satyanarayana.thirumani@gmail.com> wrote in \n> I'm unable to reset stats. Please help me to fix this?\n> \n> testdb => select * from pg_stat_reset_replication_slot(NULL);\n> ERROR: permission denied for function pg_stat_reset_replication_slot\n\nYeah, the user doesn't seem to be allowed to do that. Only superusers\ncan do that by default.\n\nhttps://www.postgresql.org/docs/devel/monitoring-stats.html\n> This function is restricted to superusers by default, but other\n> users can be granted EXECUTE to run the function.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 21 Nov 2022 11:33:37 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Unable to reset stats using pg_stat_reset_replication_slot" } ]
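As the documentation quoted in the thread above says, the reset functions are restricted to superusers by default but EXECUTE can be granted. A minimal sketch of the fix (the role name is illustrative):

```sql
-- Run as a superuser; "app_monitor" is a hypothetical role.
GRANT EXECUTE
    ON FUNCTION pg_stat_reset_replication_slot(text)
    TO app_monitor;

-- Afterwards app_monitor can reset one slot's stats by name, or all
-- replication slot stats by passing NULL:
SELECT pg_stat_reset_replication_slot(NULL);
```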
[ { "msg_contents": "Hi all,\n\nWorking with PostgreSQL Logical Replication is just great! It helps a lot\ndoing real time replication for analytical purposes without using any other\n3rd-party service. Although all these years working as product architect of\nreporting I have noted a few requirements which are always a challenge and\nmay help enhance logical replication even better.\n\nTo the point:\nPostgreSQL14 Logical Replication allows replication of a table to another\ntable that exists in another database or even in another host. It also\nallows multiple upstream tables using the same structure to downstream into\na single table.\n*CREATE PUBLICATION pb_test FOR TABLE test*\n\nPostgreSQL15 Logical Replication allows even better replication options,\nlike selecting subsets of the columns from publisher tables. It also\nsupports plenty of options like disable_on_error etc.\n*CREATE PUBLICATION pb_test FOR TABLE test (\"id\", \"name\")*\n\nWhat it does not support is the option for defining custom column expressions,\nas keys or values, into the upstream (publication). This will give more\nflexibility into making replication from multiple upstreams into fewer\ndownstreams adding more logic. For instance, in a project for analytical\npurposes there is the need to consolidate data from multiple databases into\none and at the same time keep the origin of each replicated data\nidentified by a tenant_id column. 
In this case we also need the ability\nto define the new column as an additional key which will participate into\nthe destination table.\n\nTenant 1 table\nid serial pk\ndescription varchar\n\nTenant 2 table\nid integer pk\ndescription varchar\n\nGroup table\ntenant integer pk\nid integer pk\ndescription varchar\n\nPossible syntax to archive that\n*CREATE PUBLICATION pb_test FOR TABLE test ({value:datatype:iskey:alias}\n,\"id\", \"name\")*\n\nExample\n*CREATE PUBLICATION pb_test FOR TABLE test ({1:integer:true:tenant} ,\"id\",\n\"name\")*\n\nI suppose the column definition should exist in the publication syntax as\nthe publication should know from before the datatype and if is a key before\nbeing consumed by a subscriber which may already have the column.\n\nSo making an insert or update or delete statement into a tenant 1 database:\nINSERT INTO test (id, description) VALUES (5, 'data')\nUPDATE test SET description = 'data' WHERE id = 5\nDELETE FROM test WHERE id = 5\nWill be reflected into subscriber as the following\nINSERT INTO test (tenant, id, description) VALUES (1, 5, 'data')\nUPDATE test SET description = 'data' WHERE tenant=1 AND id = 5\nDELETE FROM test WHERE tenant=1 AND id = 5\n\nFor more clarifications please reach me at koureasstavros@gmail.com\nThanks!\n\nHi all,Working with PostgreSQL Logical Replication is just great! It helps a lot doing real time replication for analytical purposes without using any other 3d party service. Although all these years working as product architect of reporting i have noted a few requirements which are always a challenge and may help enhance logical replication even better.To the point:PostgreSQL14 Logical Replication allows replication of a table to another table that exists in another database or even in another host. 
It also allows multiple upstream tables using the same structure to downstream into a single table.CREATE PUBLICATION pb_test FOR TABLE testPostgreSQL15 Logical Replication allows even better replication options, like selecting subsets of the columns from publisher tables. It also supports plenty of options like disable_on_error etc.CREATE PUBLICATION pb_test FOR TABLE test (\"id\", \"name\")What does not support is the option for defining custom column expressions, as keys or values, into the upstream (publication). This will give more flexibility into making replication from multiple upstreams into less downstreams adding more logic. For instance, in a project for analytical purposes there is the need to consolidate data from multiple databases into one and at the same time keep the origin of each replicated data identified by a tenanant_id column. In this case we also need the ability to define the new column as an additional key which will participate into the destination table.Tenant 1 tableid serial pkdescription varcharTenant 2 tableid integer pkdescription varcharGroup tabletenant integer pkid integer pkdescription varcharPossible syntax to archive thatCREATE PUBLICATION pb_test FOR TABLE test ({value:datatype:iskey:alias} ,\"id\", \"name\")ExampleCREATE PUBLICATION pb_test FOR TABLE test ({1:integer:true:tenant} ,\"id\", \"name\")I suppose the column definition should exist in the publication syntax as the publication should know from before the datatype and if is a key before being consumed by a subscriber which may already have the column.So making an insert or update or delete statement into a tenant 1 database:INSERT INTO test (id, description) VALUES (5, 'data')UPDATE test SET description = 'data' WHERE id = 5DELETE FROM test WHERE id = 5Will be reflected into subscriber as the followingINSERT INTO test (tenant, id, description) VALUES (1, 5, 'data')UPDATE test SET description = 'data' WHERE tenant=1 AND id = 5DELETE FROM test WHERE tenant=1 AND id = 
5For more clarifications please reach me at koureasstavros@gmail.comThanks!", "msg_date": "Fri, 18 Nov 2022 15:26:25 +0200", "msg_from": "Stavros Koureas <koureasstavros@gmail.com>", "msg_from_op": true, "msg_subject": "Logical Replication Custom Column Expression" }, { "msg_contents": "On Sat, Nov 19, 2022 at 6:47 PM Stavros Koureas\n<koureasstavros@gmail.com> wrote:\n>\n> Hi all,\n>\n> Working with PostgreSQL Logical Replication is just great! It helps a lot doing real time replication for analytical purposes without using any other 3d party service. Although all these years working as product architect of reporting i have noted a few requirements which are always a challenge and may help enhance logical replication even better.\n>\n> To the point:\n> PostgreSQL14 Logical Replication allows replication of a table to another table that exists in another database or even in another host. It also allows multiple upstream tables using the same structure to downstream into a single table.\n> CREATE PUBLICATION pb_test FOR TABLE test\n>\n> PostgreSQL15 Logical Replication allows even better replication options, like selecting subsets of the columns from publisher tables. It also supports plenty of options like disable_on_error etc.\n> CREATE PUBLICATION pb_test FOR TABLE test (\"id\", \"name\")\n>\n> What does not support is the option for defining custom column expressions, as keys or values, into the upstream (publication). This will give more flexibility into making replication from multiple upstreams into less downstreams adding more logic. For instance, in a project for analytical purposes there is the need to consolidate data from multiple databases into one and at the same time keep the origin of each replicated data identified by a tenanant_id column. 
In this case we also need the ability to define the new column as an additional key which will participate in the destination table.\n>\n> Tenant 1 table\n> id serial pk\n> description varchar\n>\n> Tenant 2 table\n> id integer pk\n> description varchar\n>\n> Group table\n> tenant integer pk\n> id integer pk\n> description varchar\n>\n> Possible syntax to achieve that:\n> CREATE PUBLICATION pb_test FOR TABLE test ({value:datatype:iskey:alias} ,\"id\", \"name\")\n>\n> Example\n> CREATE PUBLICATION pb_test FOR TABLE test ({1:integer:true:tenant} ,\"id\", \"name\")\n\nI think that's a valid use case.\n\nThis looks more like a subscription option to me. In multi-subscriber\nmulti-publisher scenarios, on one subscriber a given upstream may be\ntenant 1 but on some other it could be 2. But I don't think we allow\nspecifying subscription options for a single table. AFAIU, the origin\nids are available as part of the commit record which contained this\nchange; that's how conflict resolution is supposed to know it. So\nsomehow the subscriber will need to fetch those from there and set the\ntenant.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Mon, 21 Nov 2022 17:05:15 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical Replication Custom Column Expression" }, { "msg_contents": "On Mon, Nov 21, 2022 at 5:05 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> On Sat, Nov 19, 2022 at 6:47 PM Stavros Koureas\n> <koureasstavros@gmail.com> wrote:\n> >\n> > What it does not support is the option of defining custom column expressions, as keys or values, in the upstream (publication). This would give more flexibility in replicating from multiple upstreams into fewer downstreams while adding more logic. For instance, in a project for analytical purposes there is the need to consolidate data from multiple databases into one and at the same time keep the origin of each replicated row identified by a tenant_id column. In this case we also need the ability to define the new column as an additional key which will participate in the destination table.\n> >\n> > Tenant 1 table\n> > id serial pk\n> > description varchar\n> >\n> > Tenant 2 table\n> > id integer pk\n> > description varchar\n> >\n> > Group table\n> > tenant integer pk\n> > id integer pk\n> > description varchar\n> >\n> > Possible syntax to achieve that:\n> > CREATE PUBLICATION pb_test FOR TABLE test ({value:datatype:iskey:alias} ,\"id\", \"name\")\n> >\n> > Example\n> > CREATE PUBLICATION pb_test FOR TABLE test ({1:integer:true:tenant} ,\"id\", \"name\")\n>\n> I think that's a valid use case.\n>\n> This looks more like a subscription option to me. In multi-subscriber\n> multi-publisher scenarios, on one subscriber a given upstream may be\n> tenant 1 but on some other it could be 2. But I don't think we allow\n> specifying subscription options for a single table. AFAIU, the origin\n> ids are available as part of the commit record which contained this\n> change; that's how conflict resolution is supposed to know it. So\n> somehow the subscriber will need to fetch those from there and set the\n> tenant.\n>\n\nYeah, to me also it appears that we can handle it on the subscriber\nside. We have the provision of sending origin information in proto.c.\nBut note that by default publishers won't have any origin associated\nwith a change unless someone has defined it. I think this work needs\nmore thought but sounds like an interesting feature.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 22 Nov 2022 17:52:31 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical Replication Custom Column Expression" }, { "msg_contents": "Sure, this can be implemented as a subscription option, and it will cover\nthis use case scenario as each subscriber points only to one database.\nI also have some more analytical/reporting use cases which need additions\nin logical replication; I am not sure if I need to open\ndifferent discussions for each one. All ideas are for\npublication/subscription.\n\nOn Tue, Nov 22, 2022 at 2:22 PM, Amit Kapila <\namit.kapila16@gmail.com> wrote:\n\n> On Mon, Nov 21, 2022 at 5:05 PM Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> >\n> > On Sat, Nov 19, 2022 at 6:47 PM Stavros Koureas\n> > <koureasstavros@gmail.com> wrote:\n> > >\n> > > What it does not support is the option of defining custom column\n> expressions, as keys or values, in the upstream (publication). This would\n> give more flexibility in replicating from multiple upstreams into\n> fewer downstreams while adding more logic. For instance, in a project for\n> analytical purposes there is the need to consolidate data from multiple\n> databases into one and at the same time keep the origin of each replicated\n> row identified by a tenant_id column. 
In this case we also need the\n> ability to define the new column as an additional key which will\n> participate in the destination table.\n> > >\n> > > Tenant 1 table\n> > > id serial pk\n> > > description varchar\n> > >\n> > > Tenant 2 table\n> > > id integer pk\n> > > description varchar\n> > >\n> > > Group table\n> > > tenant integer pk\n> > > id integer pk\n> > > description varchar\n> > >\n> > > Possible syntax to achieve that:\n> > > CREATE PUBLICATION pb_test FOR TABLE test\n> ({value:datatype:iskey:alias} ,\"id\", \"name\")\n> > >\n> > > Example\n> > > CREATE PUBLICATION pb_test FOR TABLE test ({1:integer:true:tenant}\n> ,\"id\", \"name\")\n> >\n> > I think that's a valid use case.\n> >\n> > This looks more like a subscription option to me. In multi-subscriber\n> > multi-publisher scenarios, on one subscriber a given upstream may be\n> > tenant 1 but on some other it could be 2. But I don't think we allow\n> > specifying subscription options for a single table. AFAIU, the origin\n> > ids are available as part of the commit record which contained this\n> > change; that's how conflict resolution is supposed to know it. So\n> > somehow the subscriber will need to fetch those from there and set the\n> > tenant.\n> >\n>\n> Yeah, to me also it appears that we can handle it on the subscriber\n> side. We have the provision of sending origin information in proto.c.\n> But note that by default publishers won't have any origin associated\n> with a change unless someone has defined it. I think this work needs\n> more thought but sounds like an interesting feature.\n>\n> --\n> With Regards,\n> Amit Kapila.\n>", "msg_date": "Tue, 22 Nov 2022 14:52:34 +0200", "msg_from": "Stavros Koureas <koureasstavros@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Logical Replication Custom Column Expression" }, { "msg_contents": "Reading more carefully what you described, I think you are interested in getting something you call origin from publishers, probably some metadata from the publications.\n\nThis identifier in those metadata may not have business value on the reporting side. The idea is to use a value which has specific meaning to the user at the end.\n\nFor example, assigning 1 for tenant 1, 2 for tenant 2, and so on; at the end, based on a dimension table which holds this mapping, the user would be able to filter the data. 
So programmatically the user can set the id value of the column, plus create the mapping table from an application, let’s say, and be able to distinguish the data.\n\nIn addition, this column should have the ability to be part of the primary key on the subscription table in order not to conflict with rows from other tenants having the same keys.\n\n> \n> On 22 Nov 2022, at 14:52, Stavros Koureas <koureasstavros@gmail.com> wrote:\n> \n> \n> Sure, this can be implemented as a subscription option, and it will cover this use case scenario as each subscriber points only to one database.\n> I also have some more analytical/reporting use cases which need additions in logical replication; I am not sure if I need to open different discussions for each one. All ideas are for publication/subscription.\n> \n> On Tue, Nov 22, 2022 at 2:22 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> On Mon, Nov 21, 2022 at 5:05 PM Ashutosh Bapat\n>> <ashutosh.bapat.oss@gmail.com> wrote:\n>> >\n>> > On Sat, Nov 19, 2022 at 6:47 PM Stavros Koureas\n>> > <koureasstavros@gmail.com> wrote:\n>> > >\n>> > > What it does not support is the option of defining custom column expressions, as keys or values, in the upstream (publication). This would give more flexibility in replicating from multiple upstreams into fewer downstreams while adding more logic. For instance, in a project for analytical purposes there is the need to consolidate data from multiple databases into one and at the same time keep the origin of each replicated row identified by a tenant_id column. In this case we also need the ability to define the new column as an additional key which will participate in the destination table.\n>> > >\n>> > > Tenant 1 table\n>> > > id serial pk\n>> > > description varchar\n>> > >\n>> > > Tenant 2 table\n>> > > id integer pk\n>> > > description varchar\n>> > >\n>> > > Group table\n>> > > tenant integer pk\n>> > > id integer pk\n>> > > description varchar\n>> > >\n>> > > Possible syntax to achieve that:\n>> > > CREATE PUBLICATION pb_test FOR TABLE test ({value:datatype:iskey:alias} ,\"id\", \"name\")\n>> > >\n>> > > Example\n>> > > CREATE PUBLICATION pb_test FOR TABLE test ({1:integer:true:tenant} ,\"id\", \"name\")\n>> >\n>> > I think that's a valid use case.\n>> >\n>> > This looks more like a subscription option to me. In multi-subscriber\n>> > multi-publisher scenarios, on one subscriber a given upstream may be\n>> > tenant 1 but on some other it could be 2. But I don't think we allow\n>> > specifying subscription options for a single table. AFAIU, the origin\n>> > ids are available as part of the commit record which contained this\n>> > change; that's how conflict resolution is supposed to know it. So\n>> > somehow the subscriber will need to fetch those from there and set the\n>> > tenant.\n>> >\n>> \n>> Yeah, to me also it appears that we can handle it on the subscriber\n>> side. We have the provision of sending origin information in proto.c.\n>> But note that by default publishers won't have any origin associated\n>> with a change unless someone has defined it. I think this work needs\n>> more thought but sounds like an interesting feature.\n>> \n>> -- \n>> With Regards,\n>> Amit Kapila.", "msg_date": "Tue, 22 Nov 2022 22:10:15 +0200", "msg_from": "Stavros Koureas <koureasstavros@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Logical Replication Custom Column Expression" }, { "msg_contents": "On Wed, Nov 23, 2022 at 7:38 AM Stavros Koureas\n<koureasstavros@gmail.com> wrote:\n>\n> Reading more carefully what you described, I think you are interested in getting something you call origin from publishers, probably some metadata from the publications.\n>\n> This identifier in those metadata may not have business value on the reporting side. The idea is to use a value which has specific meaning to the user at the end.\n>\n> For example, assigning 1 for tenant 1, 2 for tenant 2, and so on; at the end, based on a dimension table which holds this mapping, the user would be able to filter the data. So programmatically the user can set the id value of the column, plus create the mapping table from an application, let’s say, and be able to distinguish the data.\n>\n> In addition, this column should have the ability to be part of the primary key on the subscription table in order not to conflict with rows from other tenants having the same keys.\n>\n>\n\nI was wondering if a simpler syntax solution might also work here.\n\nImagine another SUBSCRIPTION parameter that indicates to write the\n*name* of the subscription to some pre-defined table column:\ne.g. CREATE SUBSCRIPTION subname FOR PUBLICATION pub_tenant_1\nCONNECTION '...' 
WITH (subscription_column);\n\nLogical Replication already allows the subscriber table to have extra\ncolumns, so you just need to manually create the extra 'subscription'\ncolumn up-front.\n\nThen...\n\n~~\n\nOn Publisher:\n\ntest_pub=# CREATE TABLE tab(id int primary key, description varchar);\nCREATE TABLE\n\ntest_pub=# INSERT INTO tab VALUES (1,'one'),(2,'two'),(3,'three');\nINSERT 0 3\n\ntest_pub=# CREATE PUBLICATION tenant1 FOR ALL TABLES;\nCREATE PUBLICATION\n\n~~\n\nOn Subscriber:\n\ntest_sub=# CREATE TABLE tab(id int, description varchar, subscription varchar);\nCREATE TABLE\n\ntest_sub=# CREATE SUBSCRIPTION sub_tenant1 CONNECTION 'host=localhost\ndbname=test_pub' PUBLICATION tenant1 WITH (subscription_column);\nCREATE SUBSCRIPTION\n\ntest_sub=# SELECT * FROM tab;\n id | description | subscription\n----+-------------+--------------\n 1 | one | sub_tenant1\n 2 | two | sub_tenant1\n 3 | three | sub_tenant1\n(3 rows)\n\n~~\n\nSubscriptions to different tenants would be named differently.\n\nAnd using other SQL you can map/filter those names however your\napplication wants.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 23 Nov 2022 10:24:33 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical Replication Custom Column Expression" }, { "msg_contents": "On Tue, Nov 22, 2022 at 6:22 PM Stavros Koureas\n<koureasstavros@gmail.com> wrote:\n>\n> Sure, this can be implemented as a subscription option, and it will cover this use case scenario as each subscriber points only to one database.\n> I also have some more analytical/reporting use-cases which need additions in logical-replication, I am not sure if I need to open different discussions for each one, all ideas are for publication/subscription.\n>\n\nI think to some extent it depends on how unique each idea is but\ninitially you may want to post here and then we can spin off different\nthreads for a discussion if required. 
Are you interested in working on\none or more of those ideas to make them a reality, or do you want others\nto pick them up based on their interest?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 23 Nov 2022 08:41:33 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical Replication Custom Column Expression" }, { "msg_contents": "On Wed, Nov 23, 2022 at 1:40 AM Stavros Koureas\n<koureasstavros@gmail.com> wrote:\n>\n> Reading more carefully what you described, I think you are interested in getting something you call origin from publishers, probably some metadata from the publications.\n>\n> This identifier in those metadata may not have business value on the reporting side. The idea is to use a value which has specific meaning to the user at the end.\n>\n> For example, assigning 1 for tenant 1, 2 for tenant 2, and so on; at the end, based on a dimension table which holds this mapping, the user would be able to filter the data. So programmatically the user can set the id value of the column, plus create the mapping table from an application, let’s say, and be able to distinguish the data.\n>\n\nIn your example, do different tenants represent different publisher\nnodes? 
If so, why can't we have a predefined column and value for the\nrequired tables on each publisher rather than having logical replication\ngenerate that value while replicating data?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 23 Nov 2022 08:49:24 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical Replication Custom Column Expression" }, { "msg_contents": "It's easy to answer this question.\n\nImagine a software company which sells the product and also offers\nreporting solutions; the ERP tables will not have this additional column in\nall the tables.\nNow the reporting department comes and needs to consolidate all that data\nfrom different databases (publishers) and create one multitenant database\nto have all the data.\nSo in an ERP like NAV or anything else you cannot suggest changing all the\ncode in all of the tables plus all functions to add one additional column\nto each table; even if that were possible, you could not work with integers\nbut would need to work with GUIDs, as this column should be predefined for each\nERP. Then joining with GUIDs in the second phase for reporting\nwill definitely slow down the performance.\n\nIn summary:\n\n 1. Cannot touch the underlying source (important)\n 2. A GUID identifier column will slow down the reporting performance\n\n\nOn Wed, Nov 23, 2022 at 5:19 AM, Amit Kapila <\namit.kapila16@gmail.com> wrote:\n\n> On Wed, Nov 23, 2022 at 1:40 AM Stavros Koureas\n> <koureasstavros@gmail.com> wrote:\n> >\n> > Reading more carefully what you described, I think you are interested in\n> getting something you call origin from publishers, probably some metadata\n> from the publications.\n> >\n> > This identifier in those metadata may not have business value on\n> the reporting side. 
The idea is to use a value which has specific meaning\n> to the user at the end.\n> >\n> > For example assigning 1 for tenant 1, 2 for tenant 2 and so one, at the\n> end based on a dimension table which holds this mapping the user would be\n> able to filter the data. So programmatically the user can set the id value\n> of the column plus creating the mapping table from an application let’s say\n> and be able to distinguish the data.\n> >\n>\n> In your example, are different tenants represent different publisher\n> nodes? If so, why can't we have a predefined column and value for the\n> required tables on each publisher rather than logical replication\n> generate that value while replicating data?\n>\n> --\n> With Regards,\n> Amit Kapila.\n>\n\nIt's easy to answer this question.Imagine that in a software company who sells the product and also offers reporting solutions, the ERP tables will not have this additional column to all the tables.Now the reporting department comes and needs to consolidate all that data from different databases (publishers) and create one multitenant database to have all the data.So in an ERP like NAV or anything else you cannot suggest change all the code to all of the tables plus all functions to add one additional column to this table, even that was possible then you cannot work with integers but you need to work with GUIDs as this column should be predefined to each ERP. 
Then joining with GUID in the second phase for reporting definitely will slow down the performance.In summary:Cannot touch the underlying source (important)GUID identifier column will slow down the reporting performanceΣτις Τετ 23 Νοε 2022 στις 5:19 π.μ., ο/η Amit Kapila <amit.kapila16@gmail.com> έγραψε:On Wed, Nov 23, 2022 at 1:40 AM Stavros Koureas\n<koureasstavros@gmail.com> wrote:\n>\n> Reading more carefully what you described, I think you are interested in getting something you call origin from publishers, probably some metadata from the publications.\n>\n> This identifier in those metadata maybe does not have business value on the reporting side. The idea is to use a value which has specific meaning to the user at the end.\n>\n> For example assigning 1 for tenant 1, 2 for tenant 2 and so one, at the end based on a dimension table which holds this mapping the user would be able to filter the data. So programmatically the user can set the id value of the column plus creating the mapping table from an application let’s say and be able to distinguish the data.\n>\n\nIn your example, are different tenants represent different publisher\nnodes? 
If so, why can't we have a predefined column and value for the\nrequired tables on each publisher rather than logical replication\ngenerate that value while replicating data?\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Wed, 23 Nov 2022 09:53:54 +0200", "msg_from": "Stavros Koureas <koureasstavros@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Logical Replication Custom Column Expression" }, { "msg_contents": "Just one correction for the subscriber\nOn Subscriber:\n\ntest_sub=# CREATE TABLE tab(id int *pkey*, description varchar,\nsubscription varchar *pkey*);\nCREATE TABLE\n\nThe subscription table should have the same primary key columns as the\npublisher plus one more.\nWe need to make sure that on update only the same origin data is\nbeing updated.\n\nΣτις Τετ 23 Νοε 2022 στις 1:24 π.μ., ο/η Peter Smith <smithpb2250@gmail.com>\nέγραψε:\n\n> On Wed, Nov 23, 2022 at 7:38 AM Stavros Koureas\n> <koureasstavros@gmail.com> wrote:\n> >\n> > Reading more carefully what you described, I think you are interested in\n> getting something you call origin from publishers, probably some metadata\n> from the publications.\n> >\n> > This identifier in those metadata maybe does not have business value on\n> the reporting side. The idea is to use a value which has specific meaning\n> to the user at the end.\n> >\n> > For example assigning 1 for tenant 1, 2 for tenant 2 and so one, at the\n> end based on a dimension table which holds this mapping the user would be\n> able to filter the data. 
So programmatically the user can set the id value
> of the column plus creating the mapping table from an application let’s say
> and be able to distinguish the data.
> >
> > In addition this column should have the ability to be part of the
> primary key on the subscription table in order to not conflict with lines
> from other tenants having the same keys.
> >
> >
>
> I was wondering if a simpler syntax solution might also work here.
>
> Imagine another SUBSCRIPTION parameter that indicates to write the
> *name* of the subscription to some pre-defined table column:
> e.g. CREATE SUBSCRIPTION subname FOR PUBLICATION pub_tenant_1
> CONNECTION '...' WITH (subscription_column);
>
> Logical Replication already allows the subscriber table to have extra
> columns, so you just need to manually create the extra 'subscription'
> column up-front.
>
> Then...
>
> ~~
>
> On Publisher:
>
> test_pub=# CREATE TABLE tab(id int primary key, description varchar);
> CREATE TABLE
>
> test_pub=# INSERT INTO tab VALUES (1,'one'),(2,'two'),(3,'three');
> INSERT 0 3
>
> test_pub=# CREATE PUBLICATION tenant1 FOR ALL TABLES;
> CREATE PUBLICATION
>
> ~~
>
> On Subscriber:
>
> test_sub=# CREATE TABLE tab(id int, description varchar, subscription
> varchar);
> CREATE TABLE
>
> test_sub=# CREATE SUBSCRIPTION sub_tenant1 CONNECTION 'host=localhost
> dbname=test_pub' PUBLICATION tenant1 WITH (subscription_column);
> CREATE SUBSCRIPTION
>
> test_sub=# SELECT * FROM tab;
> id | description | subscription
> ----+-------------+--------------
> 1 | one | sub_tenant1
> 2 | two | sub_tenant1
> 3 | three | sub_tenant1
> (3 rows)
>
> ~~
>
> Subscriptions to different tenants would be named differently.
>
> And using other SQL you can map/filter those names however your
> application wants.
>
> ------
> Kind Regards,
> Peter Smith.
> Fujitsu Australia
>", "msg_date": "Wed, 23 Nov 2022 10:00:37 +0200", "msg_from": "Stavros Koureas <koureasstavros@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Logical Replication Custom Column Expression" }, { "msg_contents": "On Wed, Nov 23, 2022 at 4:54 AM Peter Smith <smithpb2250@gmail.com> wrote:
>
> On Wed, Nov 23, 2022 at 7:38 AM Stavros Koureas
> <koureasstavros@gmail.com> wrote:
> >
> > Reading more carefully what you described, I think you are interested in getting something you call origin from publishers, probably some metadata from the publications.
> >
> > This identifier in those metadata maybe does not have business value on the reporting side. 
The idea is to use a value which has specific meaning to the user at the end.\n> >\n> > For example assigning 1 for tenant 1, 2 for tenant 2 and so one, at the end based on a dimension table which holds this mapping the user would be able to filter the data. So programmatically the user can set the id value of the column plus creating the mapping table from an application let’s say and be able to distinguish the data.\n> >\n> > In addition this column should have the ability to be part of the primary key on the subscription table in order to not conflict with lines from other tenants having the same keys.\n> >\n> >\n>\n> I was wondering if a simpler syntax solution might also work here.\n>\n> Imagine another SUBSCRIPTION parameter that indicates to write the\n> *name* of the subscription to some pre-defined table column:\n> e.g. CREATE SUBSCRIPTION subname FOR PUBLICATION pub_tenant_1\n> CONNECTION '...' WITH (subscription_column);\n>\n> Logical Replication already allows the subscriber table to have extra\n> columns, so you just need to manually create the extra 'subscription'\n> column up-front.\n>\n> Then...\n>\n> ~~\n>\n> On Publisher:\n>\n> test_pub=# CREATE TABLE tab(id int primary key, description varchar);\n> CREATE TABLE\n>\n> test_pub=# INSERT INTO tab VALUES (1,'one'),(2,'two'),(3,'three');\n> INSERT 0 3\n>\n> test_pub=# CREATE PUBLICATION tenant1 FOR ALL TABLES;\n> CREATE PUBLICATION\n>\n> ~~\n>\n> On Subscriber:\n>\n> test_sub=# CREATE TABLE tab(id int, description varchar, subscription varchar);\n> CREATE TABLE\n>\n> test_sub=# CREATE SUBSCRIPTION sub_tenant1 CONNECTION 'host=localhost\n> dbname=test_pub' PUBLICATION tenant1 WITH (subscription_column);\n> CREATE SUBSCRIPTION\n>\n> test_sub=# SELECT * FROM tab;\n> id | description | subscription\n> ----+-------------+--------------\n> 1 | one | sub_tenant1\n> 2 | two | sub_tenant1\n> 3 | three | sub_tenant1\n> (3 rows)\n>\n> ~~\n>\nThanks for the example. 
This is more concrete than just verbal description.

In this example, do all the tables that a subscription subscribes to
need that additional column or somehow the pglogical receiver will
figure out which tables have that column and populate rows
accordingly?

My further fear is that the subscriber will also need to match the
subscription column along with the rest of PK so as not to update rows
from other subscriptions.
-- 
Best Wishes,
Ashutosh Bapat", "msg_date": "Fri, 25 Nov 2022 15:50:22 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical Replication Custom Column Expression" }, { "msg_contents": "Yes, if the property is on the subscription side then it should be applied
for all the tables that the connected publication is exposing.
So if the property is enabled you should be sure that this origin column
exists to all of the tables that the publication is exposing...

Sure this is the complete idea, that the subscriber should match the PK of
origin, <previous_pkey>
As the subscription table will contain same key values from different
origins, for example:

For publisher1 database table
id pk integer | value character varying
1 | testA1
2 | testA2

For publisher2 database table
id pk integer | value character varying
1 | testB1
2 | testB2

For subscriber database table
origin pk character varying | id pk integer | value character varying
publisher1 | 1 | testA1
publisher1 | 2 | testA2
publisher2 | 1 | testB1
publisher2 | 2 | testB2

All statements INSERT, UPDATE, DELETE should always include the predicate
of the origin.

On Fri, Nov 25, 2022 at 12:21 PM, Ashutosh Bapat <
ashutosh.bapat.oss@gmail.com> wrote:

> On Wed, Nov 23, 2022 at 4:54 AM Peter Smith <smithpb2250@gmail.com> wrote:
> >
> > On Wed, Nov 23, 2022 at 7:38 AM Stavros Koureas
> > <koureasstavros@gmail.com> wrote:
> > >
> > > Reading more carefully what 
you described, I think you are interested\n> in getting something you call origin from publishers, probably some\n> metadata from the publications.\n> > >\n> > > This identifier in those metadata maybe does not have business value\n> on the reporting side. The idea is to use a value which has specific\n> meaning to the user at the end.\n> > >\n> > > For example assigning 1 for tenant 1, 2 for tenant 2 and so one, at\n> the end based on a dimension table which holds this mapping the user would\n> be able to filter the data. So programmatically the user can set the id\n> value of the column plus creating the mapping table from an application\n> let’s say and be able to distinguish the data.\n> > >\n> > > In addition this column should have the ability to be part of the\n> primary key on the subscription table in order to not conflict with lines\n> from other tenants having the same keys.\n> > >\n> > >\n> >\n> > I was wondering if a simpler syntax solution might also work here.\n> >\n> > Imagine another SUBSCRIPTION parameter that indicates to write the\n> > *name* of the subscription to some pre-defined table column:\n> > e.g. CREATE SUBSCRIPTION subname FOR PUBLICATION pub_tenant_1\n> > CONNECTION '...' 
WITH (subscription_column);\n> >\n> > Logical Replication already allows the subscriber table to have extra\n> > columns, so you just need to manually create the extra 'subscription'\n> > column up-front.\n> >\n> > Then...\n> >\n> > ~~\n> >\n> > On Publisher:\n> >\n> > test_pub=# CREATE TABLE tab(id int primary key, description varchar);\n> > CREATE TABLE\n> >\n> > test_pub=# INSERT INTO tab VALUES (1,'one'),(2,'two'),(3,'three');\n> > INSERT 0 3\n> >\n> > test_pub=# CREATE PUBLICATION tenant1 FOR ALL TABLES;\n> > CREATE PUBLICATION\n> >\n> > ~~\n> >\n> > On Subscriber:\n> >\n> > test_sub=# CREATE TABLE tab(id int, description varchar, subscription\n> varchar);\n> > CREATE TABLE\n> >\n> > test_sub=# CREATE SUBSCRIPTION sub_tenant1 CONNECTION 'host=localhost\n> > dbname=test_pub' PUBLICATION tenant1 WITH (subscription_column);\n> > CREATE SUBSCRIPTION\n> >\n> > test_sub=# SELECT * FROM tab;\n> > id | description | subscription\n> > ----+-------------+--------------\n> > 1 | one | sub_tenant1\n> > 2 | two | sub_tenant1\n> > 3 | three | sub_tenant1\n> > (3 rows)\n> >\n> > ~~\n> >\n> Thanks for the example. 
This is more concrete than just verbal description.
>
> In this example, do all the tables that a subscription subscribes to
> need that additional column or somehow the pglogical receiver will
> figure out which tables have that column and populate rows
> accordingly?
>
> My further fear is that the subscriber will also need to match the
> subscription column along with the rest of PK so as not to update rows
> from other subscriptions.
> --
> Best Wishes,
> Ashutosh Bapat
>", "msg_date": "Fri, 25 Nov 2022 12:43:46 +0200", "msg_from": "Stavros Koureas <koureasstavros@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Logical Replication Custom Column Expression" }, { "msg_contents": "On Fri, Nov 25, 2022 at 9:43 PM Stavros Koureas
<koureasstavros@gmail.com> wrote:
>
> Yes, if the property is on the subscription side then it should be applied for all the tables that the connected publication is exposing.
> So if the property is enabled you should be sure that this origin column exists to all of the tables that the publication is exposing...
>
> Sure this is the complete idea, that the subscriber should match the PK of origin, <previous_pkey>
> As the subscription table will contain same key values from different origins, for example:
>
> For publisher1 database table
> id pk integer | value character varying
> 1 | testA1
> 2 | testA2
>
> For publisher2 database table
> id pk integer | value character varying
> 1 | testB1
> 2 | testB2
>
> For subscriber database table
> origin pk character varying | id pk integer | value character varying
> publisher1 | 1 | testA1
> publisher1 | 2 | testA2
> publisher2 | 1 | testB1
> publisher2 | 2 | testB2
>
> All statements INSERT, UPDATE, DELETE should always include the predicate of the origin.
>

This sounds similar to what I had posted [1] although I was saying the
generated column value might be the *subscriber* name, not the origin
publisher name. 
(where are you getting that value from -- somehow from\nthe subscriptions' CONNECTION dbname?)\n\nAnyway, regardless of the details, please note -- my idea was really\nintended just as a discussion starting point to demonstrate that\nrequired functionality might be achieved using a simpler syntax than\nwhat had been previously suggested. But in practice there may be some\nproblems with this approach -- e.g. how will the initial tablesync\nCOPY efficiently assign these subscriber name column values?\n\n------\n[1] https://www.postgresql.org/message-id/CAHut%2BPuZowXd7Aa7t0nqjP6afHMwJarngzeMq%2BQP0vE2KKLOgQ%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n", "msg_date": "Mon, 28 Nov 2022 19:16:17 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical Replication Custom Column Expression" }, { "msg_contents": "Sure I understand and neither do I have good knowledge of what else could\nbe influenced by such a change.\nIf the value of the column is the subscriber name has no benefit to this\nidea of merging multiple upstreams with same primary keys, later you\ndescribe the \"connection dbname\", yes this could be a possibility.\nI do not fully understand that part \"how will the initial tablesync COPY\nefficiently assign these subscriber name column values?\"\nWhy is difficult that during the initial sync put everywhere the same value\nfor all rows of the same origin?\n\nΣτις Δευ 28 Νοε 2022 στις 10:16 π.μ., ο/η Peter Smith <smithpb2250@gmail.com>\nέγραψε:\n\n> On Fri, Nov 25, 2022 at 9:43 PM Stavros Koureas\n> <koureasstavros@gmail.com> wrote:\n> >\n> > Yes, if the property is on the subscription side then it should be\n> applied for all the tables that the connected publication is exposing.\n> > So if the property is enabled you should be sure that this origin column\n> exists to all of the tables that the publication is exposing...\n> >\n> > Sure this is the complete idea, that the 
subscriber should match the PK\n> of origin, <previous_pkey>\n> > As the subscription table will contain same key values from different\n> origins, for example:\n> >\n> > For publisher1 database table\n> > id pk integer | value character varying\n> > 1 | testA1\n> > 2 | testA2\n> >\n> > For publisher2 database table\n> > id pk integer | value character varying\n> > 1 | testB1\n> > 2 | testB2\n> >\n> > For subscriber database table\n> > origin pk character varying | id pk integer | value character varying\n> > publisher1 | 1 | testA1\n> > publisher1 | 2 | testA2\n> > publisher2 | 1 | testB1\n> > publisher2 | 2 | testB2\n> >\n> > All statements INSERT, UPDATE, DELETE should always include the\n> predicate of the origin.\n> >\n>\n> This sounds similar to what I had posted [1] although I was saying the\n> generated column value might be the *subscriber* name, not the origin\n> publisher name. (where are you getting that value from -- somehow from\n> the subscriptions' CONNECTION dbname?)\n>\n> Anyway, regardless of the details, please note -- my idea was really\n> intended just as a discussion starting point to demonstrate that\n> required functionality might be achieved using a simpler syntax than\n> what had been previously suggested. But in practice there may be some\n> problems with this approach -- e.g. 
how will the initial tablesync
> COPY efficiently assign these subscriber name column values?
>
> ------
> [1]
> https://www.postgresql.org/message-id/CAHut%2BPuZowXd7Aa7t0nqjP6afHMwJarngzeMq%2BQP0vE2KKLOgQ%40mail.gmail.com
>
> Kind Regards,
> Peter Smith.
> Fujitsu Australia.
>", "msg_date": "Mon, 28 Nov 2022 14:52:19 +0200", "msg_from": "Stavros Koureas <koureasstavros@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Logical Replication Custom Column Expression" }, { "msg_contents": "On Fri, Nov 25, 2022 at 4:13 PM Stavros Koureas
<koureasstavros@gmail.com> wrote:
>
> Yes, if the property is on the subscription side then it should be applied for all the tables that the connected publication is exposing.
>

That would be too restrictive - not necessarily in your application
but generally. There could be some tables where consolidating rows
with same PK from different publishers into a single row in subscriber
would be desirable. 
I think we need to enable the property for every\nsubscriber that intends to add publisher column to the desired and\nsubscribed tables. But there should be another option per table which\nwill indicate that receiver should add publisher when INSERTING row to\nthat table.\n\n\n> Sure this is the complete idea, that the subscriber should match the PK of origin, <previous_pkey>\n> As the subscription table will contain same key values from different origins, for example:\n>\n\nAnd yes, probably you need to change the way you reply to email on\nthis list. Top-posting is generally avoided. See\nhttps://wiki.postgresql.org/wiki/Mailing_Lists.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Tue, 29 Nov 2022 18:56:54 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical Replication Custom Column Expression" }, { "msg_contents": "Στις Τρί 29 Νοε 2022 στις 3:27 μ.μ., ο/η Ashutosh Bapat <\nashutosh.bapat.oss@gmail.com> έγραψε:\n> That would be too restrictive - not necessarily in your application\n> but generally. There could be some tables where consolidating rows\n> with same PK from different publishers into a single row in subscriber\n> would be desirable. I think we need to enable the property for every\n> subscriber that intends to add publisher column to the desired and\n> subscribed tables. 
But there should be another option per table which
> will indicate that receiver should add publisher when INSERTING row to
> that table.

So we are discussing the scope level of this property, if this property
will be implemented on subscriber level or on subscriber table.
In that case I am not sure how this will be implemented as currently
postgres subscribers can have multiple tables streamed from a single
publisher.
In that case we may have an additional syntax on subscriber, for example:

CREATE SUBSCRIPTION sub1 CONNECTION 'host=localhost port=5432 user=postgres
password=XXXXXX dbname=publisher1' PUBLICATION pub1 with (enabled = false,
create_slot = false, slot_name = NONE, tables = {tableA:union, tableB:none,
....});

Something like this?

> And yes, probably you need to change the way you reply to email on
> this list. Top-posting is generally avoided. See
> https://wiki.postgresql.org/wiki/Mailing_Lists.

Thanks for bringing this into the discussion :)", "msg_date": "Wed, 30 Nov 2022 10:39:26 +0200", "msg_from": "Stavros Koureas <koureasstavros@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Logical Replication Custom Column Expression" }, { "msg_contents": "On Wed, Nov 30, 2022 at 2:09 PM Stavros Koureas
<koureasstavros@gmail.com> wrote:
>
>
>
> Στις Τρί 29 Νοε 2022 στις 3:27 μ.μ., ο/η Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> έγραψε:
> > That would be too restrictive - not necessarily in your application
> > but generally. There could be some tables where consolidating rows
> > with same PK from different publishers into a single row in subscriber
> > would be desirable. I think we need to enable the property for every
> > subscriber that intends to add publisher column to the desired and
> > subscribed tables. 
But there should be another option per table which\n> > will indicate that receiver should add publisher when INSERTING row to\n> > that table.\n>\n> So we are discussing the scope level of this property, if this property will be implemented on subscriber level or on subscriber table.\n> In that case I am not sure how this will be implemented as currently postgres subscribers can have multiple tables streamed from a single publisher.\n> In that case we may have an additional syntax on subscriber, for example:\n>\n> CREATE SUBSCRIPTION sub1 CONNECTION 'host=localhost port=5432 user=postgres password=XXXXXX dbname=publisher1' PUBLICATION pub1 with (enabled = false, create_slot = false, slot_name = NONE, tables = {tableA:union, tableB:none, ....});\n>\n> Something like this?\n\nNope, I think we will need to add a table level property through table\noptions or receiver can infer it by looking at the table columns -\ne.g. existence of origin_id column or some such thing.\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Fri, 2 Dec 2022 16:27:54 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical Replication Custom Column Expression" }, { "msg_contents": ">> And yes, probably you need to change the way you reply to email on\n>> this list. Top-posting is generally avoided. 
See
>> https://wiki.postgresql.org/wiki/Mailing_Lists.

>Thanks for bringing this into the discussion :)

Thinking these days more about this topic, subscriber name is not a bad
idea, although it makes sense to be able to give your own value even on
subscriber level, for example an integer.
Having a custom integer value is better as definitely this integer will
participate later in all joins beside the tables and for sure joining with
an integer it would be quicker rather than joining on a character varying
(plus the rest of the columns).
In addition, discussing with other people and also on Stack
Overflow/DBAExchange I have found that other people think it is a great
enhancement for analytical purposes.

On Wed, Nov 30, 2022 at 10:39 AM, Stavros Koureas <
koureasstavros@gmail.com> wrote:

>
>
> On Tue, Nov 29, 2022 at 3:27 PM, Ashutosh Bapat <
> ashutosh.bapat.oss@gmail.com> wrote:
> > That would be too restrictive - not necessarily in your application
> > but generally. There could be some tables where consolidating rows
> > with same PK from different publishers into a single row in subscriber
> > would be desirable. I think we need to enable the property for every
> > subscriber that intends to add publisher column to the desired and
> > subscribed tables. 
But there should be another option per table which\n> > will indicate that receiver should add publisher when INSERTING row to\n> > that table.\n>\n> So we are discussing the scope level of this property, if this property\n> will be implemented on subscriber level or on subscriber table.\n> In that case I am not sure how this will be implemented as currently\n> postgres subscribers can have multiple tables streamed from a single\n> publisher.\n> In that case we may have an additional syntax on subscriber, for example:\n>\n> CREATE SUBSCRIPTION sub1 CONNECTION 'host=localhost port=5432\n> user=postgres password=XXXXXX dbname=publisher1' PUBLICATION pub1 with\n> (enabled = false, create_slot = false, slot_name = NONE, tables =\n> {tableA:union, tableB:none, ....});\n>\n> Something like this?\n>\n> > And yes, probably you need to change the way you reply to email on\n> > this list. Top-posting is generally avoided. See\n> > https://wiki.postgresql.org/wiki/Mailing_Lists.\n>\n> Thanks for bringing this into the discussion :)\n>\n", "msg_date": "Mon, 6 Feb 2023 19:46:01 +0200", "msg_from": "Stavros Koureas <koureasstavros@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Logical Replication Custom Column Expression" }
]
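The consolidation problem this thread circles around — several publishers streaming rows that share a primary key space into one subscriber table, in the proposed "union" mode — can be sketched outside PostgreSQL. A minimal Python illustration (the origin ids 10 and 20 and the payloads are hypothetical, not from the thread) of why the subscriber needs the publisher/origin id as part of the key:

```python
# Rows arriving at one subscriber from two publishers ("union" mode).
# Both publishers use the same primary key space, so keying by pk alone
# would make the second INSERT silently replace the first.
rows_pub1 = [(1, "alpha"), (2, "beta")]
rows_pub2 = [(1, "gamma")]

consolidated = {}
for origin_id, rows in ((10, rows_pub1), (20, rows_pub2)):
    for pk, payload in rows:
        # Composite key (origin_id, pk): rows with the same pk from
        # different publishers coexist instead of colliding.
        consolidated[(origin_id, pk)] = payload

assert len(consolidated) == 3            # no row was lost
assert consolidated[(10, 1)] == "alpha"
assert consolidated[(20, 1)] == "gamma"
```

An integer origin id, as suggested above, also keeps later analytical joins on (origin_id, pk) cheap compared to joining on a character-varying publisher name.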
[ { "msg_contents": "We realized today [1] that it's been some time since the buildfarm\nhad any debug_discard_caches (nee CLOBBER_CACHE_ALWAYS) coverage.\nSure enough, as soon as Tomas turned that back on, kaboom [2].\nThe test_oat_hooks test is failing --- it's not crashing, but\nit's emitting more NOTICE lines than the expected output includes,\nevidently as a result of the hooks getting invoked extra times\nduring cache reloads. I can reproduce that here.\n\nMaybe it was a poor design that these hooks were placed someplace\nthat's sensitive to that. I dunno. The only short-term solution\nI can think of is to force debug_discard_caches to 0 within that\ntest script, which is annoying but feasible (since that module\nonly exists in v15+).\n\nThoughts, other proposals?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/6b52e783-1b32-e723-4311-0e433a5a5a75%40enterprisedb.com\n[2] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=avocet&dt=2022-11-18%2016%3A01%3A43\n\n\n", "msg_date": "Fri, 18 Nov 2022 15:55:34 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "test/modules/test_oat_hooks vs. debug_discard_caches=1" }, { "msg_contents": "Hi,\n\nOn 2022-11-18 15:55:34 -0500, Tom Lane wrote:\n> We realized today [1] that it's been some time since the buildfarm\n> had any debug_discard_caches (nee CLOBBER_CACHE_ALWAYS) coverage.\n\nDo we know when it was covered last? I assume it's before the addition of\ntest_oat_hooks in 90efa2f5565?\n\n\n> Sure enough, as soon as Tomas turned that back on, kaboom [2].\n> The test_oat_hooks test is failing --- it's not crashing, but\n> it's emitting more NOTICE lines than the expected output includes,\n> evidently as a result of the hooks getting invoked extra times\n> during cache reloads. I can reproduce that here.\n\nDid you already look into where those additional namespace searches are coming\nfrom? 
There are case in which it is not unproblematic to have repeated\nnamespace searches due to the potential for races it opens up...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 18 Nov 2022 16:33:49 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: test/modules/test_oat_hooks vs. debug_discard_caches=1" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-11-18 15:55:34 -0500, Tom Lane wrote:\n>> The test_oat_hooks test is failing --- it's not crashing, but\n>> it's emitting more NOTICE lines than the expected output includes,\n>> evidently as a result of the hooks getting invoked extra times\n>> during cache reloads. I can reproduce that here.\n\n> Did you already look into where those additional namespace searches are coming\n> from? There are case in which it is not unproblematic to have repeated\n> namespace searches due to the potential for races it opens up...\n\nI'm not sufficiently interested in that API to dig hard for details,\nbut in a first look it seemed like the extra reports were coming\nfrom repeated executions of recomputeNamespacePath, which are\nforced after a cache invalidation by NamespaceCallback.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 18 Nov 2022 20:04:40 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: test/modules/test_oat_hooks vs. debug_discard_caches=1" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-11-18 15:55:34 -0500, Tom Lane wrote:\n>> We realized today [1] that it's been some time since the buildfarm\n>> had any debug_discard_caches (nee CLOBBER_CACHE_ALWAYS) coverage.\n\n> Do we know when it was covered last? I assume it's before the addition of\n> test_oat_hooks in 90efa2f5565?\n\nAs far as that goes: some digging in the buildfarm DB says that avocet\nlast did a CCA run on 2021-10-22 and trilobite on 2021-10-24. 
They\nwere then offline completely until 2022-02-10, and when they restarted\nthe runtimes were way too short to be CCA tests.\n\nSeems like maybe we need a little more redundancy in this bunch of\nbuildfarm animals.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 18 Nov 2022 22:10:11 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: test/modules/test_oat_hooks vs. debug_discard_caches=1" }, { "msg_contents": "\n\nOn 11/19/22 04:10, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> On 2022-11-18 15:55:34 -0500, Tom Lane wrote:\n>>> We realized today [1] that it's been some time since the buildfarm\n>>> had any debug_discard_caches (nee CLOBBER_CACHE_ALWAYS) coverage.\n> \n>> Do we know when it was covered last? I assume it's before the addition of\n>> test_oat_hooks in 90efa2f5565?\n> \n> As far as that goes: some digging in the buildfarm DB says that avocet\n> last did a CCA run on 2021-10-22 and trilobite on 2021-10-24. They\n> were then offline completely until 2022-02-10, and when they restarted\n> the runtimes were way too short to be CCA tests.\n> \n\nYeah. I'll try setting up a better monitoring / alerting to notice\nissues like this more promptly ... it's a bit tough, because IIRC the\ngap 2021-10-22 - 2022-02-10 was due to the tests running, but getting\nstuck for some reason. So it's not like the machine was off.\n\nI wonder if it'd make sense to have some simple & optional alerting\nbased on how long ago the machine reported the last result. Send e-mail\nif there was no report for a month or so would be enough.\n\n> Seems like maybe we need a little more redundancy in this bunch of\n> buildfarm animals.\n> \n\nIt's actually a bit worse than that, because both animals are on the\nsame machine. 
So avocet gets \"stuck\" -> trilobite is stuck too.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sat, 19 Nov 2022 11:34:07 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: test/modules/test_oat_hooks vs. debug_discard_caches=1" }, { "msg_contents": "\nOn 2022-11-19 Sa 05:34, Tomas Vondra wrote:\n>\n> I wonder if it'd make sense to have some simple & optional alerting\n> based on how long ago the machine reported the last result. Send e-mail\n> if there was no report for a month or so would be enough.\n\n\nThis has been part of the buildfarm for a very long time. See the alerts\nsection of the config file.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sat, 19 Nov 2022 08:51:28 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: test/modules/test_oat_hooks vs. debug_discard_caches=1" }, { "msg_contents": "On 11/19/22 14:51, Andrew Dunstan wrote:\n> \n> On 2022-11-19 Sa 05:34, Tomas Vondra wrote:\n>>\n>> I wonder if it'd make sense to have some simple & optional alerting\n>> based on how long ago the machine reported the last result. Send e-mail\n>> if there was no report for a month or so would be enough.\n> \n> \n> This has been part of the buildfarm for a very long time. See the alerts\n> section of the config file.\n> \n\nI'm aware of that, but those alerts are not quite what I was asking\nabout. 
Imagine the run gets stuck for whatever reason (like infinite\nloop somewhere), or maybe the VM fails / gets inaccessible for whatever\nreason, perhaps because of some sort of human error so that the cron\ndoes not get run ...\n\nI don't think alerting from the client would catch those cases, but\nmaybe it's a rare issue and I'm overthinking it.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sat, 19 Nov 2022 15:07:45 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: test/modules/test_oat_hooks vs. debug_discard_caches=1" }, { "msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> On 11/19/22 14:51, Andrew Dunstan wrote:\n>> On 2022-11-19 Sa 05:34, Tomas Vondra wrote:\n>>> I wonder if it'd make sense to have some simple & optional alerting\n>>> based on how long ago the machine reported the last result. Send e-mail\n>>> if there was no report for a month or so would be enough.\n\n>> This has been part of the buildfarm for a very long time. See the alerts\n>> section of the config file.\n\n> I don't think alerting from the client would catch those cases, but\n> maybe it's a rare issue and I'm overthinking it.\n\nThose alerts are sent by the buildfarm server, not the client.\n\nThat has a failure mode of its own: if an animal goes down hard,\nthe server is left with its last-seen alert setup. The only\nway to not get nagged permanently is to ask Andrew to intervene\nmanually. (Ask me how I know.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 19 Nov 2022 09:33:22 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: test/modules/test_oat_hooks vs. 
debug_discard_caches=1" }, { "msg_contents": "\nOn 2022-11-19 Sa 09:33, Tom Lane wrote:\n> Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n>> On 11/19/22 14:51, Andrew Dunstan wrote:\n>>> On 2022-11-19 Sa 05:34, Tomas Vondra wrote:\n>>>> I wonder if it'd make sense to have some simple & optional alerting\n>>>> based on how long ago the machine reported the last result. Send e-mail\n>>>> if there was no report for a month or so would be enough.\n>>> This has been part of the buildfarm for a very long time. See the alerts\n>>> section of the config file.\n>> I don't think alerting from the client would catch those cases, but\n>> maybe it's a rare issue and I'm overthinking it.\n> Those alerts are sent by the buildfarm server, not the client.\n>\n> That has a failure mode of its own: if an animal goes down hard,\n> the server is left with its last-seen alert setup. The only\n> way to not get nagged permanently is to ask Andrew to intervene\n> manually. (Ask me how I know.)\n>\n> \t\t\t\n\n\nTrue for now. The next release will have a utility command to\ndisable/enable alerts. The required server changes have already been made.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sat, 19 Nov 2022 10:30:40 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: test/modules/test_oat_hooks vs. debug_discard_caches=1" } ]
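The staleness alerting Tomas floats above ("send e-mail if there was no report for a month or so") amounts to a timestamp comparison on the server side. A Python sketch — the avocet/trilobite dates are taken from Tom's summary in the thread, while the animal list, the elver date, and the thirty-day threshold are assumptions for illustration:

```python
from datetime import datetime, timedelta

# Hypothetical "last report seen by the buildfarm server" per animal.
# avocet/trilobite dates are the last CCA runs mentioned in the thread.
last_report = {
    "avocet":    datetime(2021, 10, 22),
    "trilobite": datetime(2021, 10, 24),
    "elver":     datetime(2022, 11, 19),
}

def stale_animals(now, threshold=timedelta(days=30)):
    """Return animals whose last report is older than the threshold."""
    return sorted(name for name, seen in last_report.items()
                  if now - seen > threshold)

# By early 2022 both CCA animals would have been flagged,
# instead of the gap going unnoticed for months.
assert stale_animals(datetime(2022, 2, 10)) == ["avocet", "trilobite"]
```

Unlike the existing client-side alerts, a check like this runs entirely on the server, so it still fires when the run is stuck or the machine never phones home at all.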
[ { "msg_contents": "Fix typos and bump catversion.\n\nTypos reported by Álvaro Herrera and Erik Rijkers.\n\nCatversion bump for 3d14e171e9e2236139e8976f3309a588bcc8683b was\ninadvertently omitted.\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/2fb6154fcd769b2d4ea1226788e0ec2fc3522cb8\n\nModified Files\n--------------\ndoc/src/sgml/ref/grant.sgml | 2 +-\nsrc/bin/pg_dump/pg_dumpall.c | 1 -\nsrc/include/catalog/catversion.h | 2 +-\n3 files changed, 2 insertions(+), 3 deletions(-)", "msg_date": "Fri, 18 Nov 2022 21:18:44 +0000", "msg_from": "Robert Haas <rhaas@postgresql.org>", "msg_from_op": true, "msg_subject": "pgsql: Fix typos and bump catversion." }, { "msg_contents": "On 11/18/22 16:18, Robert Haas wrote:\n> Fix typos and bump catversion.\n> \n> Typos reported by Álvaro Herrera and Erik Rijkers.\n> \n> Catversion bump for 3d14e171e9e2236139e8976f3309a588bcc8683b was\n> inadvertently omitted.\n> \n> Branch\n> ------\n> master\n> \n> Details\n> -------\n> https://git.postgresql.org/pg/commitdiff/2fb6154fcd769b2d4ea1226788e0ec2fc3522cb8\n> \n> Modified Files\n> --------------\n> doc/src/sgml/ref/grant.sgml | 2 +-\n> src/bin/pg_dump/pg_dumpall.c | 1 -\n> src/include/catalog/catversion.h | 2 +-\n\nRishu Bagga pointed out to me offlist that this catversion bump seems \nflawed:\n\ndiff --git a/src/include/catalog/catversion.h \nb/src/include/catalog/catversion.h\nindex \nc6ef593207c227ce10b0c897379476b553974f67..b3a57136b755fed182b4518330e65786032db870 \n100644 (file)\n--- a/src/include/catalog/catversion.h\n+++ b/src/include/catalog/catversion.h\n@@ -57,6 +57,6 @@\n */\n\n /* yyyymmddN */\n-#define CATALOG_VERSION_NO 202211121\n+#define CATALOG_VERSION_NO 202211821\n\n #endif\n\nI think that should be 202211181, no? 
I am not clear on the desirable \nway to fix though :-/\n\nJoe\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Sat, 19 Nov 2022 17:10:57 -0500", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Fix typos and bump catversion." }, { "msg_contents": "Hi,\n\nOn 2022-11-19 17:10:57 -0500, Joe Conway wrote:\n> Rishu Bagga pointed out to me offlist that this catversion bump seems\n> flawed:\n> \n> diff --git a/src/include/catalog/catversion.h\n> b/src/include/catalog/catversion.h\n> index c6ef593207c227ce10b0c897379476b553974f67..b3a57136b755fed182b4518330e65786032db870\n> 100644 (file)\n> --- a/src/include/catalog/catversion.h\n> +++ b/src/include/catalog/catversion.h\n> @@ -57,6 +57,6 @@\n> */\n> \n> /* yyyymmddN */\n> -#define CATALOG_VERSION_NO 202211121\n> +#define CATALOG_VERSION_NO 202211821\n> \n> #endif\n> \n> I think that should be 202211181, no? I am not clear on the desirable way to\n> fix though :-/\n\nI think it's fine to just update to a \"normal\" catversion. We really just\nmatch for an exact match with a few exceptions (somewhere in pg_upgrade IIRC),\nand those exceptions don't really matter for some individual commit on the\ndevelopment branch.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 19 Nov 2022 14:16:18 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pgsql: Fix typos and bump catversion." }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-11-19 17:10:57 -0500, Joe Conway wrote:\n>> Rishu Bagga pointed out to me offlist that this catversion bump seems\n>> flawed:\n>> /* yyyymmddN */\n>> -#define CATALOG_VERSION_NO 202211121\n>> +#define CATALOG_VERSION_NO 202211821\n\n>> I think that should be 202211181, no? 
I am not clear on the desirable way to\n>> fix though :-/\n\n> I think it's fine to just update to a \"normal\" catversion.\n\nYeah, just fix it --- and the sooner the better, to avoid the risk\nthat this becomes a value that someone needs to care about.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 19 Nov 2022 17:24:50 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Fix typos and bump catversion." }, { "msg_contents": "On 11/19/22 17:24, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> On 2022-11-19 17:10:57 -0500, Joe Conway wrote:\n>>> Rishu Bagga pointed out to me offlist that this catversion bump seems\n>>> flawed:\n>>> /* yyyymmddN */\n>>> -#define CATALOG_VERSION_NO 202211121\n>>> +#define CATALOG_VERSION_NO 202211821\n> \n>>> I think that should be 202211181, no? I am not clear on the desirable way to\n>>> fix though :-/\n> \n>> I think it's fine to just update to a \"normal\" catversion.\n> \n> Yeah, just fix it --- and the sooner the better, to avoid the risk\n> that this becomes a value that someone needs to care about.\n\n\nWFM, done\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Sat, 19 Nov 2022 18:01:18 -0500", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Fix typos and bump catversion." } ]
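The bad value above is easy to catch mechanically: under the yyyymmddN convention, 202211821 encodes "day 82 of November". A small format check in Python (a sketch for illustration, not part of any PostgreSQL tooling):

```python
from datetime import datetime

def catversion_ok(v):
    """True if v follows the yyyymmddN convention: a real calendar
    date (first 8 digits) plus a one-digit serial number."""
    s = str(v)
    if len(s) != 9:
        return False
    try:
        datetime.strptime(s[:8], "%Y%m%d")  # rejects impossible dates
    except ValueError:
        return False
    return True

assert catversion_ok(202211121)        # the previous, well-formed value
assert not catversion_ok(202211821)    # the typo: "November 82nd"
assert catversion_ok(202211181)        # what the bump should have been
```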
[ { "msg_contents": "Hi,\n\nIs there a way to find out about new git commits that is more\nefficient and timely than running N git fetches or whatever every\nminute in a cron job? Maybe some kind of long polling where you send\nan HTTP request that says \"I think the tips of branches x, y, z are at\n111, 222, 333\" and the server responds when that ceases to be true?\n\n\n", "msg_date": "Sat, 19 Nov 2022 16:12:24 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "More efficient build farm animal wakeup?" }, { "msg_contents": "On Sat, Nov 19, 2022 at 4:13 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> Hi,\n>\n> Is there a way to find out about new git commits that is more\n> efficient and timely than running N git fetches or whatever every\n> minute in a cron job? Maybe some kind of long polling where you send\n> an HTTP request that says \"I think the tips of branches x, y, z are at\n> 111, 222, 333\" and the server responds when that ceases to be true?\n>\n\nI'm not aware of any such thing standardized for git, but it wouldn't be\nhard to build one for that (I'm talking primarily about the server side\nhere, not how to integrate that into the buildfarm side of things).\n\nWe could also set something up whereby we could fire off webhooks when\nbranches change (easy enough for registered servers in the buildfarm as we\ncan easily avoid abuse there -- it would take more work to make something\nlike that a public service, due to the risk of abuse). But that may\nactually be worse off, since I bet a lot of buildfarm animals (most even?)\nare probably sitting behind a NAT gateway of some kind, meaning consuming\nwebhooks is hard.\n\nI did something similar for how we did things on borka (using some internal\npginfra webhooks that are not available to the public at this time), but I\nhad to revert that because of issues with concurrent buildfarm runs in the\nenvironment that we had set up. 
But we are using it for the snapshots docs\nbuilder, to make sure the website for that gets updated immediately after a\ncommit on master. But the principle definitely work.\n\nAnother thing to consider would be that something like this would cause all\nbuildfarm clients to start git pull:ing down changes at more or less\nexactly the same time. Though in total that would probably still mean a lot\nless load than those that \"git pul\" very frequently today, it could\npotentially lead to some nets with lots of bf clients experiencing some\nlevel of bandwidth filling or something. Could probably be solved pretty\neasily with a random delay (which doesn't have to be long, as for most git\npulls it will be a very quick operation), just something that's worth\nconsidering.\n\ntl,tr; it's not there now, but yes if we can find a smart way for th ebf\nclients to consume it, it is something we could build and deploy fairly\neasily.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n", "msg_date": "Sat, 19 Nov 2022 13:35:42 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: More efficient build farm animal wakeup?" 
}, { "msg_contents": "On Sun, Nov 20, 2022 at 1:35 AM Magnus Hagander <magnus@hagander.net> wrote:\n> tl,tr; it's not there now, but yes if we can find a smart way for th ebf clients to consume it, it is something we could build and deploy fairly easily.\n\nCool -- it sounds a lot like you've thought about this already :-)\n\nAbout the client: currently run_branches.pl makes an HTTP request for\nthe \"branches of interest\" list. Seems like a candidate point for a\nlong poll? I don't think it'd have to be much smarter than it is\ntoday, it'd just have to POST the commits it already has, I think.\n\nPerhaps as a first step, the server could immediately report which\nbranches to bother fetching, considering the client's existing\ncommits. That'd almost always be none, but ~11.7 times per day a new\ncommit shows up, and once a year there's a new interesting branch.\nThat would avoid the need for the 6 git fetches that usually follow in\nthe common case, which admittedly might not be a change worth making\non its own. After all, the git fetches are probably quite similar\nHTTP requests themselves, except that there 6 of them, one per branch,\nand they hit the public git server instead of some hypothetical\nbuildfarm endpoint.\n\nThen you could switch to long polling by letting the client say \"if\ncurrently none, I'm prepared to wait up to X seconds for a different\nanswer\", assuming you know how to build the server side of that\n(insert magic here). Of course, you can't make it too long or your\nsession might be dropped in the badlands between client and server,\nbut that's just a reason to make X configurable. I think RFC6202 says\nthat 120 seconds probably works fine across most kinds of links, which\nmeans that you lower the total poll rate hitting the server, but--more\ninterestingly for me as a client--you minimise latency when something\nfinally happens. 
(With various keepalive tricks and/or heartbeat\nstreaming tricks you could possibly make it much higher, who knows...\nbut you'd have to set it very very low to do worse than what we're\ndoing today in total request count). Or maybe there is some existing\neasy perl library that could be used for this (joke answer: cpan\ninstall Twitter::API and follow @pg_commits).\n\nBy the way, the reason I wrote this is because I've just been\nre-establishing my animal elver. It's set to run every minute by\ncron, and spends nearly *half of each minute* running various git\ncommands when nothing is happening. Actually it's more than 6\nconnections to the server, because I see there's a fetch and an\nls-remote, so it's at least 12 (being unfamiliar with git plumbing, it\ncould be much more for all I know, and I kinda suspect so based on the\ntotal run time). Admittedly network packets take a little while to\nfly to my South Pacific location so maybe this looks less insane from\nover there.\n\nHowever, when I started this thread I was half expecting such a thing\nto exist already, somewhere, I just haven't been able to find it\nmyself... Don't other people have this problem? Maybe everybody who\nhas this problem uses webhooks (git server post commit hook opens\nconnection to client) as you mentioned, but as you also mentioned\nthat'd never fly for our topology.\n\n\n", "msg_date": "Sun, 20 Nov 2022 16:56:20 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: More efficient build farm animal wakeup?" }, { "msg_contents": "Hi,\n\nOn 2022-11-19 16:12:24 +1300, Thomas Munro wrote:\n> Is there a way to find out about new git commits that is more\n> efficient and timely than running N git fetches or whatever every\n> minute in a cron job? 
Maybe some kind of long polling where you send\n> an HTTP request that says \"I think the tips of branches x, y, z are at\n> 111, 222, 333\" and the server responds when that ceases to be true?\n\nI think a git fetch is actually ok for that - it doesn't take a whole lot of\nresources. However run_builds.pl is more heavyweight. For one, it starts one\nrun_build.pl for each branch, which each then fetches from git separately. But\nmore importantly, each run_build.pl seems to actually do a fair bit of work\nbefore discovering nothing has changed.\n\nA typical log I see:\n\nNov 20 06:08:17 bf-valgrind-v4 run_branches.pl[3289916]: Sun Nov 20 06:08:17 2022: buildfarm run for grassquit:REL_14_STABLE starting\nNov 20 06:08:17 bf-valgrind-v4 run_branches.pl[3289916]: grassquit:REL_14_STABLE [06:08:17] checking out source ...\nNov 20 06:08:20 bf-valgrind-v4 run_branches.pl[3289916]: grassquit:REL_14_STABLE [06:08:20] checking if build run needed ...\nNov 20 06:08:20 bf-valgrind-v4 run_branches.pl[3289916]: grassquit:REL_14_STABLE [06:08:20] No build required: last status = Sat Nov 19 23:54:38 2022 GMT, cur>\n\nSo we spend three seconds in the \"checking out source\" stage, just to then see\nthat nothing has actually changed.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 19 Nov 2022 22:23:35 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: More efficient build farm animal wakeup?" }, { "msg_contents": "On Sun, Nov 20, 2022 at 4:56 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> On Sun, Nov 20, 2022 at 1:35 AM Magnus Hagander <magnus@hagander.net>\n> wrote:\n> > tl,tr; it's not there now, but yes if we can find a smart way for th ebf\n> clients to consume it, it is something we could build and deploy fairly\n> easily.\n>\n> Cool -- it sounds a lot like you've thought about this already :-)\n>\n> About the client: currently run_branches.pl makes an HTTP request for\n> the \"branches of interest\" list. 
Seems like a candidate point for a\n> long poll? I don't think it'd have to be much smarter than it is\n> today, it'd just have to POST the commits it already has, I think.\n>\n\nUm, branches of interest will only pick up when it gets a new *branch*, not\na new *commit*, so I think that would be a very different problem to solve.\nAnd I don't think we have new branche *that* often...\n\n\nPerhaps as a first step, the server could immediately report which\n> branches to bother fetching, considering the client's existing\n> commits. That'd almost always be none, but ~11.7 times per day a new\n> commit shows up, and once a year there's a new interesting branch.\n> That would avoid the need for the 6 git fetches that usually follow in\n> the common case, which admittedly might not be a change worth making\n> on its own. After all, the git fetches are probably quite similar\n> HTTP requests themselves, except that there 6 of them, one per branch,\n> and they hit the public git server instead of some hypothetical\n> buildfarm endpoint.\n>\n\nAs Andres mentioned downthread, that's not a lot more lightweight than what\n\"git fetch\" does.\n\nThe thing we'd want to avoid is having to do that so much and often. And\ngetting to that is going to require modification of the buildfarm client to\nmake it more \"smart\" regardless. In particular, making it do this \"right\"\nin the face of multiple branches is probably going to be a big win.\n\n\nThen you could switch to long polling by letting the client say \"if\n> currently none, I'm prepared to wait up to X seconds for a different\n> answer\", assuming you know how to build the server side of that\n> (insert magic here). Of course, you can't make it too long or your\n> session might be dropped in the badlands between client and server,\n> but that's just a reason to make X configurable. 
I think RFC6202 says\n> that 120 seconds probably works fine across most kinds of links, which\n> means that you lower the total poll rate hitting the server, but--more\n> interestingly for me as a client--you minimise latency when something\n> finally happens. (With various keepalive tricks and/or heartbeat\n> streaming tricks you could possibly make it much higher, who knows...\n> but you'd have to set it very very low to do worse than what we're\n> doing today in total request count). Or maybe there is some existing\n> easy perl library that could be used for this (joke answer: cpan\n> install Twitter::API and follow @pg_commits).\n>\n\nI also honestly wonder how big a problem a much longer than 120 seconds\ntimeout would be in practice. Since we own both the client and the server\nin this case, we'd only be at the mercy of network equipment in between and I\nthink we're much less exposed to weirdness there than \"the average\nbrowser\". Thus, as long as it's configurable, I think we could go for\nsomething much longer by default.\n\nI'd imagine something like a\nGET https://git.postgresql.org/buildfarm-branchtips\nX-branch-master: a4adc31f69\nX-branch-REL_14_STABLE: b33283cbd3\nX-longpoll: 120\n\nFor that one it would check branch master and rel 14, and if either\nbranchtip doesn't match what was in the header, it'd return immediately\nwith a textfile that's basically\nmaster:<whateveritis>\n\nif master has changed and not REL_14.\n\nIf nothing has changed, go into longpoll for 120 seconds based on the\nheader, and if nothing at all has changed in that time, return a 304.\n\n\nWe could also use something like a websocket to just stream the changes out\nover.\n\nIn either case it would also need to change the buildfarm client to run as\na daemon rather than a cronjob I think? 
(obviously optional, we don't have\nto remove the current abilities)\n\n\n> However, when I started this thread I was half expecting such a thing\n> to exist already, somewhere, I just haven't been able to find it\n> myself... Don't other people have this problem? Maybe everybody who\n> has this problem uses webhooks (git server post commit hook opens\n> connection to client) as you mentioned, but as you also mentioned\n> that'd never fly for our topology.\n>\n\nYeah, webhook seems to be what most people use.\n\nFWIW, an implementation for us would be a small daemon that receives such\nwebhooks from our git server and redistributes it for the long polling.\nThat's still the easiest way to get the data out of git itself...\n\n//Magnus\n\n", "msg_date": "Sun, 20 Nov 2022 22:31:14 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: More efficient build farm animal wakeup?" }, { "msg_contents": "On Mon, Nov 21, 2022 at 10:31 AM Magnus Hagander <magnus@hagander.net> wrote:\n> Um, branches of interest will only pick up when it gets a new *branch*, not a new *commit*, so I think that would be a very different problem to solve. And I don't think we have new branche *that* often...\n\nSure, could be done with an extra different request you make from time\nto time or keeping the existing list. No strong opinions on that, I\nwas just observing that it could also be combined, something like:\n\nClient: I have 14@1234, 15@1234, HEAD@1234; what should I do now, boss?\nServer: You should fetch 14 (it has a new commit) and 16 (it's a new\nbranch you didn't mention).\n\n> I'd imagine something like a\n> GET https://git.postgresql.org/buildfarm-branchtips\n> X-branch-master: a4adc31f69\n> X-branch-REL_14_STABLE: b33283cbd3\n> X-longpoll: 120\n>\n> For that one it would check branch master and rel 14, and if either branchtip doesn't match what was in the header, it'd return immediately with a textfile that's basically\n> master:<whateveritis>\n>\n> if master has changed and not REL_14.\n>\n> If nothing has changed, go into longpoll for 120 seconds based on the header, and if nothing at all has changed in that time, return a 304.\n\nLGTM, that's exactly the sort of thing I was imagining.\n\n> We could also use something like a websocket to just stream the changes out over.\n\nTrue. 
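(To make the exchange proposed upthread concrete, here is a minimal Python sketch of the client side. The endpoint, the X-branch-* headers and the "branch:tip" response lines are only the hypothetical protocol from this thread -- no such service exists, and the function names are made up.)

```python
# Client-side sketch of the hypothetical buildfarm long-poll protocol.
# Header names, endpoint and response format follow the proposal in
# this thread only; nothing here corresponds to a real service.

def build_headers(branch_tips, longpoll_seconds=120):
    """Turn {'master': 'a4adc31f69', ...} into the proposed request headers."""
    headers = {"X-longpoll": str(longpoll_seconds)}
    for branch, commit in branch_tips.items():
        headers["X-branch-" + branch] = commit
    return headers

def parse_response(status, body):
    """Return {branch: new_tip} for branches that moved; {} on a 304."""
    if status == 304:  # nothing changed within the long-poll window
        return {}
    changed = {}
    for line in body.splitlines():
        branch, sep, tip = line.partition(":")
        if sep:  # only lines of the form "branch:tip"
            changed[branch.strip()] = tip.strip()
    return changed
```

A real client would send these headers in a GET, block up to the long-poll window, and fetch only the branches named in the parsed response.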
The reason I started on about long polling instead of\nwebsockets is that I was imagining that the simpler, dumber protocol\nwhere the client doesn't even really know it's participating in a new\nkind of magic would be more cromulent in ye olde perl script (no new\ncpan dependencies).\n\n> In either case it would also need to change the buildfarm client to run as a daemon rather than a cronjob I think? (obviously optional, we don't have to remove the current abilities)\n\nGiven that the point of the build farm is (these days) to test on\nweird computers and operating systems, I expect that proper 'run like\na service' support would be painful or not get done. It'd be nice if\nthere were some way to make this work with simple crontab entries...\n\n\n", "msg_date": "Mon, 21 Nov 2022 10:53:52 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: More efficient build farm animal wakeup?" }, { "msg_contents": "On Sun, Nov 20, 2022 at 2:44 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n> It might not suit your use case, but one of the things I do to reduce\n> fetch load is to run a local mirror which runs\n>\n> git fetch -q --prune\n>\n> every 5 minutes. It also runs a git daemon, and several of my animals\n> point at that.\n\nThanks. I understand now that my configuration without a local mirror\nis super inefficient (it spends the first ~25s of each minute running\ngit commands). Still, even though that can be improved by me setting\nup more stuff, I'd like something event-driven rather than short\npolling-based for lower latency.\n\n> If there's a better git API I'll be happy to try to use it.\n\nCool. Seems like we just have to invent something first...\n\nFWIW I'm also trying to chase the short polling out of cfbot. 
It\nregularly harasses the git servers at one end (could be fixed with\nthis approach), and wastes a percentage of our allotted CPU slots on\nthe other end by scheduling periodically (could be fixed with webhooks\nfrom Cirrus).\n\n\n", "msg_date": "Mon, 21 Nov 2022 11:32:19 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: More efficient build farm animal wakeup?" }, { "msg_contents": "\nOn 2022-11-20 Su 17:32, Thomas Munro wrote:\n> On Sun, Nov 20, 2022 at 2:44 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n>> It might not suit your use case, but one of the things I do to reduce\n>> fetch load is to run a local mirror which runs\n>>\n>> git fetch -q --prune\n>>\n>> every 5 minutes. It also runs a git daemon, and several of my animals\n>> point at that.\n> Thanks. I understand now that my configuration without a local mirror\n> is super inefficient (it spends the first ~25s of each minute running\n> git commands). Still, even though that can be improved by me setting\n> up more stuff, I'd like something event-driven rather than short\n> polling-based for lower latency.\n>\n>> If there's a better git API I'll be happy to try to use it.\n> Cool. Seems like we just have to invent something first...\n>\n> FWIW I'm also trying to chase the short polling out of cfbot. 
It\n> regularly harasses the git servers at one end (could be fixed with\n> this approach), and wastes a percentage of our allotted CPU slots on\n> the other end by scheduling periodically (could be fixed with webhooks\n> from Cirrus).\n\n\n\nI think I have solved most of the actual issues without getting too complex.\n\nHere's how:\n\nThe buildfarm server now creates a companion to branches_of_interest.txt\ncalled branches_of_interest.json which looks like this:\n\n[\n   {\n      \"REL_11_STABLE\" : \"140c803723\"\n   },\n   {\n      \"REL_12_STABLE\" : \"4cbcb7ed85\"\n   },\n   {\n      \"REL_13_STABLE\" : \"c13667b518\"\n   },\n   {\n      \"REL_14_STABLE\" : \"5cda142bb9\"\n   },\n   {\n      \"REL_15_STABLE\" : \"ff9d27ee2b\"\n   },\n   {\n      \"HEAD\" : \"51b5834cd5\"\n   }\n]\n\nIt updates this every time it does a git fetch, currently every 5 minutes.\n\nrun_branches.pl fetches this file instead of the plain list of branches,\nand before running run_build.pl checks if the given commit was the\nlatest one tested, and if so and a build isn't being forced, skips the\nbranch. Thus, in the case where all the branches are up to date there\nwill be no git calls whatsoever.\n\nYou can try it out by getting run_branches.pl from\n<https://raw.githubusercontent.com/PGBuildFarm/client-code/main/run_branches.pl>\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 21 Nov 2022 15:50:57 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: More efficient build farm animal wakeup?" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> The buildfarm server now creates a companion to branches_of_interest.txt\n> called branches_of_interest.json which looks like this:\n\n... 
okay ...\n\n> It updates this every time it does a git fetch, currently every 5 minutes.\n\nThat up-to-five-minute delay, on top of whatever cronjob delay one has\non one's animals, seems kind of sad. I've gotten kind of spoiled maybe\nby seeing first buildfarm results typically within 15 minutes of a push.\nBut if we're trying to improve matters in this area, this doesn't seem\nlike quite the way to go.\n\nBut it does seem like this eliminates one expense. Now that you have\nthat bit, maybe we could arrange a webhook or something that allows\nbranches_of_interest.json to get updated immediately after a push?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 21 Nov 2022 15:58:05 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: More efficient build farm animal wakeup?" }, { "msg_contents": "On Mon, Nov 21, 2022 at 9:58 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Andrew Dunstan <andrew@dunslane.net> writes:\n> > The buildfarm server now creates a companion to branches_of_interest.txt\n> > called branches_of_interest.json which looks like this:\n>\n> ... okay ...\n>\n\nYeah, it's not as efficient as something like long polling or web sockets,\nbut it is most definitely a lot simpler!\n\nIf we're going to have a lot of animals do pulls of this file every minute\nor more, it's certainly a lot better to pull this small file than to make\nmultiple git calls.\n\nIt could trivially be made even more efficient by making the request with\neither an If-None-Match or If-Modified-Since. While it's still small, that\ncuts the size approximately in half, and would allow you to skip even more\nprocessing if nothing has changed.\n\n\n> It updates this every time it does a git fetch, currently every 5 minutes.\n>\n> That up-to-five-minute delay, on top of whatever cronjob delay one has\n> on one's animals, seems kind of sad. 
I've gotten kind of spoiled maybe\n> by seeing first buildfarm results typically within 15 minutes of a push.\n> But if we're trying to improve matters in this area, this doesn't seem\n> like quite the way to go.\n>\n> But it does seem like this eliminates one expense. Now that you have\n> that bit, maybe we could arrange a webhook or something that allows\n> branches_of_interest.json to get updated immediately after a push?\n>\n\nWebhooks are definitely a lot easier to implement in between our servers\nyeah, so that shouldn't be too hard. We could use the same hooks that we\nuse for borka to build the docs, but have it just run whatever script it is\nthe buildfarm needs. I assume it's just something trivial to run there,\nAndrew?\n\n//Magnus\n", "msg_date": "Mon, 21 Nov 2022 22:20:20 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: More efficient build farm animal wakeup?" }, { "msg_contents": "On Mon, Nov 21, 2022 at 9:51 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n\n>\n> On 2022-11-20 Su 17:32, Thomas Munro wrote:\n> > On Sun, Nov 20, 2022 at 2:44 AM Andrew Dunstan <andrew@dunslane.net>\n> wrote:\n> >> It might not suit your use case, but one of the things I do to reduce\n> >> fetch load is to run a local mirror which runs\n> >>\n> >> git fetch -q --prune\n> >>\n> >> every 5 minutes. It also runs a git daemon, and several of my animals\n> >> point at that.\n> > Thanks. I understand now that my configuration without a local mirror\n> > is super inefficient (it spends the first ~25s of each minute running\n> > git commands). Still, even though that can be improved by me setting\n> > up more stuff, I'd like something event-driven rather than short\n> > polling-based for lower latency.\n> >\n> >> If there's a better git API I'll be happy to try to use it.\n> > Cool. Seems like we just have to invent something first...\n> >\n> > FWIW I'm also trying to chase the short polling out of cfbot. It\n> > regularly harasses the git servers at one end (could be fixed with\n> > this approach), and wastes a percentage of our allotted CPU slots on\n> > the other end by scheduling periodically (could be fixed with webhooks\n> > from Cirrus).\n>\n>\n>\n> I think I have solved most of the actual issues without getting too\n> complex.\n>\n> Here's how:\n>\n> The buildfarm server now creates a companion to branches_of_interest.txt\n> called branches_of_interest.json which looks like this:\n>\n> [\n> {\n> \"REL_11_STABLE\" : \"140c803723\"\n> },\n> {\n> \"REL_12_STABLE\" : \"4cbcb7ed85\"\n> },\n> {\n> \"REL_13_STABLE\" : \"c13667b518\"\n> },\n> {\n> \"REL_14_STABLE\" : \"5cda142bb9\"\n> },\n> {\n> \"REL_15_STABLE\" : \"ff9d27ee2b\"\n> },\n> {\n> \"HEAD\" : \"51b5834cd5\"\n> }\n> ]
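(As an aside, the skip-check run_branches.pl performs against a file like the one quoted above can be sketched in a few lines -- Python here for brevity, though the real client is Perl, and the function name is purely illustrative:)

```python
# Illustrative Python sketch (the actual client is Perl) of the check
# described upthread: only build a branch whose published tip differs
# from the last commit this animal tested.

import json

def branches_needing_build(json_text, last_tested):
    """json_text: the list-of-single-key-objects format shown above.
    last_tested: {branch: last commit this animal built}."""
    needed = []
    for entry in json.loads(json_text):
        for branch, tip in entry.items():
            if last_tested.get(branch) != tip:
                needed.append((branch, tip))
    return needed
```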
It\n> > regularly harasses the git servers at one end (could be fixed with\n> > this approach), and wastes a percentage of our allotted CPU slots on\n> > the other end by scheduling periodically (could be fixed with webhooks\n> > from Cirrus).\n>\n>\n>\n> I think I have solved most of the actual issues without getting too\n> complex.\n>\n> Here's how:\n>\n> The buildfarm server now creates a companion to branches_of_interest.txt\n> called branches_of_interest.json which looks like this:\n>\n> [\n> {\n> \"REL_11_STABLE\" : \"140c803723\"\n> },\n> {\n> \"REL_12_STABLE\" : \"4cbcb7ed85\"\n> },\n> {\n> \"REL_13_STABLE\" : \"c13667b518\"\n> },\n> {\n> \"REL_14_STABLE\" : \"5cda142bb9\"\n> },\n> {\n> \"REL_15_STABLE\" : \"ff9d27ee2b\"\n> },\n> {\n> \"HEAD\" : \"51b5834cd5\"\n> }\n> ]\n\n\nIs there a reason this file is a list of hashes each hash with a single\nvalue in it? Would it make more sense if it was:\n{\n \"REL_11_STABLE\": \"140c803723\",\n \"REL_12_STABLE\": \"4cbcb7ed85\",\n \"REL_13_STABLE\": \"c13667b518\",\n \"REL_14_STABLE\": \"5cda142bb9\",\n \"REL_15_STABLE\": \"ff9d27ee2b\",\n \"HEAD\": \"51b5834cd5\"\n}\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Mon, Nov 21, 2022 at 9:51 PM Andrew Dunstan <andrew@dunslane.net> wrote:\nOn 2022-11-20 Su 17:32, Thomas Munro wrote:\n> On Sun, Nov 20, 2022 at 2:44 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n>> It might not suit your use case, but one of the things I do to reduce\n>> fetch load is to run a local mirror which runs\n>>\n>>    git fetch -q --prune\n>>\n>> every 5 minutes. It also runs a git daemon, and several of my animals\n>> point at that.\n> Thanks.  I understand now that my configuration without a local mirror\n> is super inefficient (it spends the first ~25s of each minute running\n> git commands).  
Still, even though that can be improved by me setting\n> up more stuff, I'd like something event-driven rather than short\n> polling-based for lower latency.\n>\n>> If there's a better git API I'll be happy to try to use it.\n> Cool.  Seems like we just have to invent something first...\n>\n> FWIW I'm also trying to chase the short polling out of cfbot.  It\n> regularly harasses the git servers at one end (could be fixed with\n> this approach), and wastes a percentage of our allotted CPU slots on\n> the other end by scheduling periodically (could be fixed with webhooks\n> from Cirrus).\n\n\n\nI think I have solved most of the actual issues without getting too complex.\n\nHere's how:\n\nThe buildfarm server now creates a companion to branches_of_interest.txt\ncalled branches_of_interest.json which looks like this:\n\n[\n   {\n      \"REL_11_STABLE\" : \"140c803723\"\n   },\n   {\n      \"REL_12_STABLE\" : \"4cbcb7ed85\"\n   },\n   {\n      \"REL_13_STABLE\" : \"c13667b518\"\n   },\n   {\n      \"REL_14_STABLE\" : \"5cda142bb9\"\n   },\n   {\n      \"REL_15_STABLE\" : \"ff9d27ee2b\"\n   },\n   {\n      \"HEAD\" : \"51b5834cd5\"\n   }\n]Is there a reason this file is a list of hashes each hash with a single value in it? Would it make more sense if it was:{  \"REL_11_STABLE\": \"140c803723\",  \"REL_12_STABLE\": \"4cbcb7ed85\",  \"REL_13_STABLE\": \"c13667b518\",  \"REL_14_STABLE\": \"5cda142bb9\",  \"REL_15_STABLE\": \"ff9d27ee2b\",  \"HEAD\": \"51b5834cd5\"} --  Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/", "msg_date": "Mon, 21 Nov 2022 22:26:39 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: More efficient build farm animal wakeup?" }, { "msg_contents": "\nOn 2022-11-21 Mo 16:26, Magnus Hagander wrote:\n\n>\n> Is there a reason this file is a list of hashes each hash with a\n> single value in it? 
Would it make more sense if it was:\n> {\n>   \"REL_11_STABLE\": \"140c803723\",\n>   \"REL_12_STABLE\": \"4cbcb7ed85\",\n>   \"REL_13_STABLE\": \"c13667b518\",\n>   \"REL_14_STABLE\": \"5cda142bb9\",\n>   \"REL_15_STABLE\": \"ff9d27ee2b\",\n>   \"HEAD\": \"51b5834cd5\"\n> }\n>  \n\n\nNo. It's the way it is because the client relies on their being in the\nright order. JSON hashes are conceptually unordered.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 21 Nov 2022 17:27:44 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: More efficient build farm animal wakeup?" }, { "msg_contents": "On Mon, Nov 21, 2022 at 11:27 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n\n>\n> On 2022-11-21 Mo 16:26, Magnus Hagander wrote:\n>\n> >\n> > Is there a reason this file is a list of hashes each hash with a\n> > single value in it? Would it make more sense if it was:\n> > {\n> > \"REL_11_STABLE\": \"140c803723\",\n> > \"REL_12_STABLE\": \"4cbcb7ed85\",\n> > \"REL_13_STABLE\": \"c13667b518\",\n> > \"REL_14_STABLE\": \"5cda142bb9\",\n> > \"REL_15_STABLE\": \"ff9d27ee2b\",\n> > \"HEAD\": \"51b5834cd5\"\n> > }\n> >\n>\n>\n> No. It's the way it is because the client relies on their being in the\n> right order. JSON hashes are conceptually unordered.\n>\n\nAh yeah, if they need to be ordered that certainly makes more sense.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>", "msg_date": "Mon, 21 Nov 2022 23:34:23 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: More efficient build farm animal wakeup?" }, { "msg_contents": "\nOn 2022-11-21 Mo 15:58, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> The buildfarm server now creates a companion to branches_of_interest.txt\n>> called branches_of_interest.json which looks like this:\n> ... okay ...\n>\n>> It updates this every time it does a git fetch, currently every 5 minutes.\n> That up-to-five-minute delay, on top of whatever cronjob delay one has\n> on one's animals, seems kind of sad. I've gotten kind of spoiled maybe\n> by seeing first buildfarm results typically within 15 minutes of a push.\n> But if we're trying to improve matters in this area, this doesn't seem\n> like quite the way to go.\n\n\nWell, 5 minutes was originally chosen because it was sufficient for the\npurpose for which up to now the server used its mirror. Now we have\nadded a new purpose we can certainly revisit that. Shall I try 2 minutes\nor go down to 1?\n\n\n>\n> But it does seem like this eliminates one expense. 
Now that you have\n> that bit, maybe we could arrange a webhook or something that allows\n> branches_of_interest.json to get updated immediately after a push?\n>\n> \t\t\t\n\n\nSure, if you think an extra few seconds is worth saving.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 21 Nov 2022 17:35:49 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: More efficient build farm animal wakeup?" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2022-11-21 Mo 15:58, Tom Lane wrote:\n>> But if we're trying to improve matters in this area, this doesn't seem\n>> like quite the way to go.\n\n> Well, 5 minutes was originally chosen because it was sufficient for the\n> purpose for which up to now the server used its mirror. Now we have\n> added a new purpose we can certainly revisit that. Shall I try 2 minutes\n> or go down to 1?\n\nActually, if we implement a webhook to update this, the server could\nstop doing speculative git pulls too, no?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 21 Nov 2022 17:40:50 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: More efficient build farm animal wakeup?" }, { "msg_contents": "\nOn 2022-11-21 Mo 16:20, Magnus Hagander wrote:\n> On Mon, Nov 21, 2022 at 9:58 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Andrew Dunstan <andrew@dunslane.net> writes:\n> > The buildfarm server now creates a companion to\n> branches_of_interest.txt\n> > called branches_of_interest.json which looks like this:\n>\n> ... 
okay ...\n>\n>\n> Yeah, it's not as efficient as something like long polling or web\n> sockets, but it is most definitely a lot simpler!\n>\n> If we're going to have a lot of animals do pulls of this file every\n> minute or more, it's certainly a lot better to pull this small file\n> than to make multiple git calls.\n>\n> It could trivially be made even more efficient by making the request\n> with either a If-None-Match or If-Modified-Since. While it's still\n> small, that cuts the size approximately in half, and would allow you\n> to skip even more processing if nothing has changed.\n\n\nI'll look at that.\n\n\n>\n>\n> > It updates this every time it does a git fetch, currently every\n> 5 minutes.\n>\n> That up-to-five-minute delay, on top of whatever cronjob delay one has\n> on one's animals, seems kind of sad.  I've gotten kind of spoiled\n> maybe\n> by seeing first buildfarm results typically within 15 minutes of a\n> push.\n> But if we're trying to improve matters in this area, this doesn't seem\n> like quite the way to go.\n>\n> But it does seem like this eliminates one expense.  Now that you have\n> that bit, maybe we could arrange a webhook or something that allows\n> branches_of_interest.json to get updated immediately after a push?\n>\n>\n> Webhooks are definitely a lot easier to implement in between our\n> servers yeah, so that shouldn't be too hard. We could use the same\n> hooks that we use for borka to build the docs, but have it just run\n> whatever script it is the buildfarm needs. I assume it's just\n> something trivial to run there, Andrew?\n\n\nYes, I think much better between servers. 
Currently the cron job looks\nsomething like this:\n\n\n*/5 * * * * cd $HOME/postgresql.git && git fetch -q &&\n$HOME/website/bin/branches_of_interest.pl\n\n\nThat script is what sets up the json files.\n\n\nI know nothing about git webhooks though, someone will have to point me\nin the right direction.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 21 Nov 2022 17:42:41 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: More efficient build farm animal wakeup?" }, { "msg_contents": "On Mon, Nov 21, 2022 at 11:41 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Andrew Dunstan <andrew@dunslane.net> writes:\n> > On 2022-11-21 Mo 15:58, Tom Lane wrote:\n> >> But if we're trying to improve matters in this area, this doesn't seem\n> >> like quite the way to go.\n>\n> > Well, 5 minutes was originally chosen because it was sufficient for the\n> > purpose for which up to now the server used its mirror. Now we have\n> > added a new purpose we can certainly revisit that. Shall I try 2 minutes\n> > or go down to 1?\n>\n> Actually, if we implement a webhook to update this, the server could\n> stop doing speculative git pulls too, no?\n>\n\nThat would be the main point, yes. Saves a few hundred (or thousand)\nwasteful git pulls *and* reacts quicker to actual pushes. As long as you\nhave a clear line of communications between the machines, it's basically\nwin/win I think. 
That's probably why, as Thomas noticed earlier, that's\nwhat \"everybody\" does.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>", "msg_date": "Mon, 21 Nov 2022 23:44:08 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: More efficient build farm animal wakeup?" }, { "msg_contents": "On Mon, Nov 21, 2022 at 11:42 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n\n>\n> On 2022-11-21 Mo 16:20, Magnus Hagander wrote:\n> > On Mon, Nov 21, 2022 at 9:58 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Andrew Dunstan <andrew@dunslane.net> writes:\n> > > The buildfarm server now creates a companion to\n> > branches_of_interest.txt\n> > > called branches_of_interest.json which looks like this:\n> >\n> > ... 
okay ...\n> >\n> >\n> > Yeah, it's not as efficient as something like long polling or web\n> > sockets, but it is most definitely a lot simpler!\n> >\n> > If we're going to have a lot of animals do pulls of this file every\n> > minute or more, it's certainly a lot better to pull this small file\n> > than to make multiple git calls.\n> >\n> > It could trivially be made even more efficient by making the request\n> > with either a If-None-Match or If-Modified-Since. While it's still\n> > small, that cuts the size approximately in half, and would allow you\n> > to skip even more processing if nothing has changed.\n>\n>\n> I'll look at that.\n>\n>\n> >\n> >\n> > > It updates this every time it does a git fetch, currently every\n> > 5 minutes.\n> >\n> > That up-to-five-minute delay, on top of whatever cronjob delay one\n> has\n> > on one's animals, seems kind of sad. I've gotten kind of spoiled\n> > maybe\n> > by seeing first buildfarm results typically within 15 minutes of a\n> > push.\n> > But if we're trying to improve matters in this area, this doesn't\n> seem\n> > like quite the way to go.\n> >\n> > But it does seem like this eliminates one expense. Now that you have\n> > that bit, maybe we could arrange a webhook or something that allows\n> > branches_of_interest.json to get updated immediately after a push?\n> >\n> >\n> > Webhooks are definitely a lot easier to implement in between our\n> > servers yeah, so that shouldn't be too hard. We could use the same\n> > hooks that we use for borka to build the docs, but have it just run\n> > whatever script it is the buildfarm needs. I assume it's just\n> > something trivial to run there, Andrew?\n>\n>\n> Yes, I think much better between servers. 
Currently the cron job looks\n> something like this:\n>\n>\n> */5 * * * * cd $HOME/postgresql.git && git fetch -q &&\n> $HOME/website/bin/branches_of_interest.pl\n>\n>\n> That script is what sets up the json files.\n>\n>\n> I know nothing about git webhooks though, someone will have to point me\n> in the right direction.\n>\n\nI can set that up for you -- we have ready-made packages for 95% of what's\nneeded for that one as we use it elsewhere in the infra. So I'll just set\nsomething up that will run that exact script (as the correct user of\ncourse) and comment out the cronjob,and then send you the details of what\nis set up where (I don't recall it offhand, but as it's the same we have\nelsewhere I'll find it quickly once I look into it).\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n
", "msg_date": "Tue, 22 Nov 2022 00:10:29 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: More efficient build farm animal wakeup?" }, { "msg_contents": "On Tue, Nov 22, 2022 at 12:10 AM Magnus Hagander <magnus@hagander.net>\nwrote:\n\n>\n>\n> On Mon, Nov 21, 2022 at 11:42 PM Andrew Dunstan <andrew@dunslane.net>\n> wrote:\n>\n>>\n>> On 2022-11-21 Mo 16:20, Magnus Hagander wrote:\n>> > n Mon, Nov 21, 2022 at 9:58 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> >\n>> > Andrew Dunstan <andrew@dunslane.net> writes:\n>> b> > The buildfarm server now creates a companion to\n>> > branches_of_interest.txt\n>> > > called branches_of_interest.json which looks like this:\n>> >\n>> > ... okay ...\n>> >\n>> >\n>> > Yeah, it's not as efficient as something like long polling or web\n>> > sockets, but it is most definitely a lot simpler!\n>> >\n>> > If we're going to have a lot of animals do pulls of this file every\n>> > minute or more, it's certainly a lot better to pull this small file\n>> > than to make multiple git calls.\n>> >\n>> > It could trivially be made even more efficient by making the request\n>> > with either a If-None-Match or If-Modified-Since. While it's still\n>> > small, that cuts the size approximately in half, and would allow you\n>> > to skip even more processing if nothing has changed.\n>>\n>>\n>> I'll look at that.\n>>\n>>\n>> >\n>> >\n>> > > It updates this every time it does a git fetch, currently every\n>> > 5 minutes.\n>> >\n>> > That up-to-five-minute delay, on top of whatever cronjob delay one\n>> has\n>> > on one's animals, seems kind of sad. 
I've gotten kind of spoiled\n>> > maybe\n>> > by seeing first buildfarm results typically within 15 minutes of a\n>> > push.\n>> > But if we're trying to improve matters in this area, this doesn't\n>> seem\n>> > like quite the way to go.\n>> >\n>> > But it does seem like this eliminates one expense. Now that you\n>> have\n>> > that bit, maybe we could arrange a webhook or something that allows\n>> > branches_of_interest.json to get updated immediately after a push?\n>> >\n>> >\n>> > Webhooks are definitely a lot easier to implement in between our\n>> > servers yeah, so that shouldn't be too hard. We could use the same\n>> > hooks that we use for borka to build the docs, but have it just run\n>> > whatever script it is the buildfarm needs. I assume it's just\n>> > something trivial to run there, Andrew?\n>>\n>>\n>> Yes, I think much better between servers. Currently the cron job looks\n>> something like this:\n>>\n>>\n>> */5 * * * * cd $HOME/postgresql.git && git fetch -q &&\n>> $HOME/website/bin/branches_of_interest.pl\n>>\n>>\n>> That script is what sets up the json files.\n>>\n>>\n>> I know nothing about git webhooks though, someone will have to point me\n>> in the right direction.\n>>\n>\n> I can set that up for you -- we have ready-made packages for 95% of what's\n> needed for that one as we use it elsewhere in the infra. So I'll just set\n> something up that will run that exact script (as the correct user of\n> course) and comment out the cronjob,and then send you the details of what\n> is set up where (I don't recall it offhand, but as it's the same we have\n> elsewhere I'll find it quickly once I look into it).\n>\n>\n\nHi!\n\nThis should now be set up, and Andrew has been sent the instructions for\nhow to access that setup on the buildfarm server. So hopefully it will now\nbe updating the buildfarm server side of things within a couple of seconds\nfrom a commit, and not do any speculative pulls. 
But we'll keep an extra\neye on it for a bit of course, as it's entirely possible I got something\nworng :)\n\n(This is only the part git -> bf server, of course, as that step doesn't\nneed any client changes it was easier to do quickly)\n\n//Magnus\n
", "msg_date": "Tue, 22 Nov 2022 19:04:43 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: More efficient build farm animal wakeup?" 
}, { "msg_contents": "\nOn 2022-11-22 Tu 13:04, Magnus Hagander wrote:\n>\n>\n> On Tue, Nov 22, 2022 at 12:10 AM Magnus Hagander <magnus@hagander.net>\n> wrote:\n>\n>\n>\n> On Mon, Nov 21, 2022 at 11:42 PM Andrew Dunstan\n> <andrew@dunslane.net> wrote:\n>\n>\n> On 2022-11-21 Mo 16:20, Magnus Hagander wrote:\n> > n Mon, Nov 21, 2022 at 9:58 PM Tom Lane <tgl@sss.pgh.pa.us>\n> wrote:\n> >\n> >     Andrew Dunstan <andrew@dunslane.net> writes:\n> b>     > The buildfarm server now creates a companion to\n> >     branches_of_interest.txt\n> >     > called branches_of_interest.json which looks like this:\n> >\n> >     ... okay ...\n> >\n> >\n> > Yeah, it's not as efficient as something like long polling\n> or web\n> > sockets, but it is most definitely a lot simpler!\n> >\n> > If we're going to have a lot of animals do pulls of this\n> file every\n> > minute or more, it's certainly a lot better to pull this\n> small file\n> > than to make multiple git calls.\n> >\n> > It could trivially be made even more efficient by making the\n> request\n> > with either a If-None-Match or If-Modified-Since. While it's\n> still\n> > small, that cuts the size approximately in half, and would\n> allow you\n> > to skip even more processing if nothing has changed.\n>\n>\n> I'll look at that.\n>\n>\n> >\n> >\n> >     > It updates this every time it does a git fetch,\n> currently every\n> >     5 minutes.\n> >\n> >     That up-to-five-minute delay, on top of whatever cronjob\n> delay one has\n> >     on one's animals, seems kind of sad.  I've gotten kind\n> of spoiled\n> >     maybe\n> >     by seeing first buildfarm results typically within 15\n> minutes of a\n> >     push.\n> >     But if we're trying to improve matters in this area,\n> this doesn't seem\n> >     like quite the way to go.\n> >\n> >     But it does seem like this eliminates one expense.  
Now\n> that you have\n> >     that bit, maybe we could arrange a webhook or something\n> that allows\n> >     branches_of_interest.json to get updated immediately\n> after a push?\n> >\n> >\n> > Webhooks are definitely a lot easier to implement in between our\n> > servers yeah, so that shouldn't be too hard. We could use\n> the same\n> > hooks that we use for borka to build the docs, but have it\n> just run\n> > whatever script it is the buildfarm needs. I assume it's just\n> > something trivial to run there, Andrew?\n>\n>\n> Yes, I think much better between servers. Currently the cron\n> job looks\n> something like this:\n>\n>\n> */5 * * * * cd $HOME/postgresql.git && git fetch -q &&\n> $HOME/website/bin/branches_of_interest.pl\n> <http://branches_of_interest.pl>\n>\n>\n> That script is what sets up the json files.\n>\n>\n> I know nothing about git webhooks though, someone will have to\n> point me\n> in the right direction.\n>\n>\n> I can set that up for you -- we have ready-made packages for 95%\n> of what's needed for that one as we use it elsewhere in the infra.\n> So I'll just set something up that will run that exact script (as\n> the correct user of course) and comment out the cronjob,and then\n> send you the details of what is set up where (I don't recall it\n> offhand, but as it's the same we have elsewhere I'll find it\n> quickly once I look into it).\n>  \n>\n>\n> Hi!\n>\n> This should now be set up, and Andrew has been sent the instructions\n> for how to access that setup on the buildfarm server. So hopefully it\n> will now be updating the buildfarm server side of things within a\n> couple of seconds from a commit, and not do any speculative pulls. 
But\n> we'll keep an extra eye on it for a bit of course, as it's entirely\n> possible I got something worng :)\n>\n> (This is only the part git -> bf server, of course, as that step\n> doesn't need any client changes it was easier to do quickly)\n>\n>\n\nThe server side appears to be working well.\n\nThe new client side code is being tested on crake and working fine - the\nall-up-to-date case takes just a second or two, almost all of which is\ntaken with getting the json file from the server. No git calls at all\nare done on the client in this case.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 22 Nov 2022 17:35:12 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: More efficient build farm animal wakeup?" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> The new client side code is being tested on crake and working fine - the\n> all-up-to-date case takes just a second or two, almost all of which is\n> taken with getting the json file from the server. No git calls at all\n> are done on the client in this case.\n\nNice! I installed the new run_branches.pl file on sifaka, and it seems to\nbe doing the right things. With the much lower overhead, I've reduced\nthat cronjob's cycle time from five minutes to one, so that machine's\nresponse time should be even better.\n\nIt'll probably help my slower animals even more, so I'm off to\nupdate them as well.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 22 Nov 2022 18:55:57 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: More efficient build farm animal wakeup?" 
}, { "msg_contents": "Hi,\n\nOn 2022-11-22 17:35:12 -0500, Andrew Dunstan wrote:\n> The server side appears to be working well.\n> \n> The new client side code is being tested on crake and working fine - the\n> all-up-to-date case takes just a second or two, almost all of which is\n> taken with getting the json file from the server. No git calls at all\n> are done on the client in this case.\n\nIt's a huge improvement here. I start one set of animals via systemd timers\nand updated that buildfarm client to 24a6bb0 (because it shows the cpu/network\nresources used):\n\nBefore:\nNov 21 20:36:01 bf-valgrind-v4 systemd[1]: Starting PG buildfarm spinlock...\n...\nNov 21 20:36:14 bf-valgrind-v4 systemd[1]: bf@spinlock.service: Consumed 2.346s CPU time, received 578.3K IP traffic, sent 32.3K IP traffic.\n\nNow:\n\nNov 23 00:59:25 bf-valgrind-v4 systemd[1]: Starting PG buildfarm spinlock...\n...\nNov 23 00:59:26 bf-valgrind-v4 systemd[1]: bf@spinlock.service: Consumed 173ms CPU time, received 5.2K IP traffic, sent 1.8K IP traffic.\n\nBoth of these are for builds that didn't do anything.\n\n\nLeaving wall clock time and resource usage aside, the output is also just a\nlot more readable:\n\nNov 21 20:36:02 bf-valgrind-v4 run_branches.pl[1188989]: Mon Nov 21 20:36:02 2022: buildfarm run for francolin:REL_11_STABLE starting\nNov 21 20:36:02 bf-valgrind-v4 run_branches.pl[1188989]: francolin:REL_11_STABLE [20:36:02] checking out source ...\nNov 21 20:36:04 bf-valgrind-v4 run_branches.pl[1188989]: francolin:REL_11_STABLE [20:36:04] checking if build run needed ...\nNov 21 20:36:04 bf-valgrind-v4 run_branches.pl[1188989]: francolin:REL_11_STABLE [20:36:04] No build required: last status = Sat Nov 19 21:56:55 2022 GMT, current snapshot = Sat Nov 19 20:36:52 2022 GMT, changed files = 0\nNov 21 20:36:04 bf-valgrind-v4 run_branches.pl[1189119]: Mon Nov 21 20:36:04 2022: buildfarm run for francolin:REL_12_STABLE starting\nNov 21 20:36:04 bf-valgrind-v4 run_branches.pl[1189119]: 
francolin:REL_12_STABLE [20:36:04] checking out source ...\nNov 21 20:36:06 bf-valgrind-v4 run_branches.pl[1189119]: francolin:REL_12_STABLE [20:36:06] checking if build run needed ...\nNov 21 20:36:06 bf-valgrind-v4 run_branches.pl[1189119]: francolin:REL_12_STABLE [20:36:06] No build required: last status = Sat Nov 19 22:52:54 2022 GMT, current snapshot = Sat Nov 19 20:36:48 2022 GMT, changed files = 0\nNov 21 20:36:06 bf-valgrind-v4 run_branches.pl[1189233]: Mon Nov 21 20:36:06 2022: buildfarm run for francolin:REL_13_STABLE starting\nNov 21 20:36:06 bf-valgrind-v4 run_branches.pl[1189233]: francolin:REL_13_STABLE [20:36:06] checking out source ...\nNov 21 20:36:08 bf-valgrind-v4 run_branches.pl[1189233]: francolin:REL_13_STABLE [20:36:08] checking if build run needed ...\nNov 21 20:36:08 bf-valgrind-v4 run_branches.pl[1189233]: francolin:REL_13_STABLE [20:36:08] No build required: last status = Sat Nov 19 23:12:55 2022 GMT, current snapshot = Sat Nov 19 20:36:33 2022 GMT, changed files = 0\nNov 21 20:36:08 bf-valgrind-v4 run_branches.pl[1189298]: Mon Nov 21 20:36:08 2022: buildfarm run for francolin:REL_14_STABLE starting\nNov 21 20:36:08 bf-valgrind-v4 run_branches.pl[1189298]: francolin:REL_14_STABLE [20:36:08] checking out source ...\nNov 21 20:36:10 bf-valgrind-v4 run_branches.pl[1189298]: francolin:REL_14_STABLE [20:36:10] checking if build run needed ...\nNov 21 20:36:10 bf-valgrind-v4 run_branches.pl[1189298]: francolin:REL_14_STABLE [20:36:10] No build required: last status = Mon Nov 21 15:51:38 2022 GMT, current snapshot = Mon Nov 21 15:50:50 2022 GMT, changed files = 0\nNov 21 20:36:10 bf-valgrind-v4 run_branches.pl[1189364]: Mon Nov 21 20:36:10 2022: buildfarm run for francolin:REL_15_STABLE starting\nNov 21 20:36:10 bf-valgrind-v4 run_branches.pl[1189364]: francolin:REL_15_STABLE [20:36:10] checking out source ...\nNov 21 20:36:12 bf-valgrind-v4 run_branches.pl[1189364]: francolin:REL_15_STABLE [20:36:12] checking if build run needed ...\nNov 21 
20:36:12 bf-valgrind-v4 run_branches.pl[1189364]: francolin:REL_15_STABLE [20:36:12] No build required: last status = Mon Nov 21 16:26:28 2022 GMT, current snapshot = Mon Nov 21 15:50:50 2022 GMT, changed files = 0\nNov 21 20:36:12 bf-valgrind-v4 run_branches.pl[1189432]: Mon Nov 21 20:36:12 2022: buildfarm run for francolin:HEAD starting\nNov 21 20:36:12 bf-valgrind-v4 run_branches.pl[1189432]: francolin:HEAD [20:36:12] checking out source ...\nNov 21 20:36:14 bf-valgrind-v4 run_branches.pl[1189432]: francolin:HEAD [20:36:14] checking if build run needed ...\nNov 21 20:36:14 bf-valgrind-v4 run_branches.pl[1189432]: francolin:HEAD [20:36:14] No build required: last status = Mon Nov 21 17:31:31 2022 GMT, current snapshot = Mon Nov 21 16:59:29 2022 GMT, changed files = 0\n\nvs\n\nNov 23 00:59:26 bf-valgrind-v4 run_branches.pl[4125973]: Wed Nov 23 00:59:26 2022: francolin:REL_11_STABLE is up to date.\nNov 23 00:59:26 bf-valgrind-v4 run_branches.pl[4125973]: Wed Nov 23 00:59:26 2022: francolin:REL_12_STABLE is up to date.\nNov 23 00:59:26 bf-valgrind-v4 run_branches.pl[4125973]: Wed Nov 23 00:59:26 2022: francolin:REL_13_STABLE is up to date.\nNov 23 00:59:26 bf-valgrind-v4 run_branches.pl[4125973]: Wed Nov 23 00:59:26 2022: francolin:REL_14_STABLE is up to date.\nNov 23 00:59:26 bf-valgrind-v4 run_branches.pl[4125973]: Wed Nov 23 00:59:26 2022: francolin:REL_15_STABLE is up to date.\nNov 23 00:59:26 bf-valgrind-v4 run_branches.pl[4125973]: Wed Nov 23 00:59:26 2022: francolin:HEAD is up to date.\n\n\nThanks a lot!\n\nAndres Freund\n\n\n", "msg_date": "Tue, 22 Nov 2022 17:09:32 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: More efficient build farm animal wakeup?" }, { "msg_contents": "On Wed, Nov 23, 2022 at 2:09 PM Andres Freund <andres@anarazel.de> wrote:\n> It's a huge improvement here.\n\nSame here. eelpout + elver looking good, just a fraction of a second\nhitting that web server each minute. 
Long polling will be better and\nshave off 30 seconds (+/- 30) on start time, but this avoids a lot of\nuseless churn without even needing a local mirror. Thanks Andrew!\n\n\n", "msg_date": "Wed, 23 Nov 2022 21:14:56 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: More efficient build farm animal wakeup?" }, { "msg_contents": "On Wed, Nov 23, 2022 at 9:15 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> On Wed, Nov 23, 2022 at 2:09 PM Andres Freund <andres@anarazel.de> wrote:\n> > It's a huge improvement here.\n>\n> Same here. eelpout + elver looking good, just a fraction of a second\n> hitting that web server each minute. Long polling will be better and\n> shave off 30 seconds (+/- 30) on start time, but this avoids a lot of\n> useless churn without even needing a local mirror. Thanks Andrew!\n>\n\nAre you saying you still think it's worth pursuing longpoll or similar\nmethods for it, or that this is good enough?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n
", "msg_date": "Wed, 23 Nov 2022 21:59:55 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: More efficient build farm animal wakeup?" }, { "msg_contents": "On Thu, Nov 24, 2022 at 10:00 AM Magnus Hagander <magnus@hagander.net> wrote:\n> On Wed, Nov 23, 2022 at 9:15 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Are you saying you still think it's worth pursuing longpoll or similar methods for it, or that this is good enough?\n\nI personally think it'd be pretty neat, to squeeze out that last bit\nof latency.  Maybe it's overkill...\n\nThe best idea I have so far for how to make it work more like a\nservice but still require nothing more than cron (so it's not hard for\npeople on systems where they don't even have root) is to have it start\nif not already running (current lock file scheme already does that)\nAND if some file buildfarm_enabled exists, or buildfarm_disabled\ndoesn't exist or something like that, and then keep running while\nthat's true.  So if you need to turn it off for a while you can just\ntouch/rm that, but normally it'll keep running its wait loop forever,\nand start up soon after a reboot; maybe it also exits if you touch the\nconfig file so it can restart next time and reread it, or something\nlike that.  Then it can spend all day in a loop that does 120s long\npolls, and start builds within seconds of a new commit landing.\n\nCurious to know how you'd build the server side.  You mentioned a\ncommit hook notifying some kind of long poll distributor.  Would you\nuse a Twisted/async/whatever-based server that knows how to handle\nlots of sockets efficiently, or just use old school web server tech\nthat would block waiting for NOTIFY or something like that? 
You'd\nprobably get away with that for the small numbers of animals involved\n(I mean, a couple of hundred web server threads/processes just sitting\nthere waiting would be borderline acceptable I guess). But it'd be\nmore fun to do it with async magic.\n\n\n", "msg_date": "Thu, 24 Nov 2022 10:43:44 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: More efficient build farm animal wakeup?" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Thu, Nov 24, 2022 at 10:00 AM Magnus Hagander <magnus@hagander.net> wrote:\n>> On Wed, Nov 23, 2022 at 9:15 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> Are you saying you still think it's worth pursuing longpoll or similar methods for it, or that this is good enough?\n\n> I personally think it'd be pretty neat, to squeeze out that last bit\n> of latency. Maybe it's overkill...\n\nI can't get excited about pursuing the last ~30 seconds of delay\nfor launching tasks that are going to run 10 or 20 or more minutes\n(where the future trend of those numbers is surely up not down).\n\nThe thing that was really significantly relevant here IMO was to\nreduce the load on the central server, and I think we've done that.\nWould adding longpoll reduce that further? In principle maybe,\nbut I'm not sure we have enough animals to make it worthwhile.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 23 Nov 2022 16:59:48 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: More efficient build farm animal wakeup?" 
}, { "msg_contents": "\nOn 2022-11-23 We 16:59, Tom Lane wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n>> On Thu, Nov 24, 2022 at 10:00 AM Magnus Hagander <magnus@hagander.net> wrote:\n>>> On Wed, Nov 23, 2022 at 9:15 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>>> Are you saying you still think it's worth pursuing longpoll or similar methods for it, or that this is good enough?\n>> I personally think it'd be pretty neat, to squeeze out that last bit\n>> of latency. Maybe it's overkill...\n> I can't get excited about pursuing the last ~30 seconds of delay\n> for launching tasks that are going to run 10 or 20 or more minutes\n> (where the future trend of those numbers is surely up not down).\n>\n> The thing that was really significantly relevant here IMO was to\n> reduce the load on the central server, and I think we've done that.\n> Would adding longpoll reduce that further? In principle maybe,\n> but I'm not sure we have enough animals to make it worthwhile.\n>\n> \t\t\t\n\n\nYeah, that's my feeling. We have managed to get a large improvement with\na fairly small effort, I'm much less excited about getting another small\nimprovement from a large effort.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 23 Nov 2022 20:35:31 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: More efficient build farm animal wakeup?" } ]
[ { "msg_contents": "\nHello devs,\n\nI want to abort a psql script. How can I do that? The answer seems to be \n\\quit, but it is not so simple:\n\n - when the current script is from a terminal, you exit psql, OK\n\n - when the current script is from a file (-f, <), you exit psql, OK\n\n - when the current script is included from something,\n you quit the current script and proceed after the \\i of next -f, BAD\n\nQuestion: is there any way to really abort a psql script from an included \nfile?\n\nI've found \"\\! kill $PPID\" which works with bash, but I'm not sure of the \nportability and I was hoping for something straightforward and cleaner.\n\nIf there is really no simple way, would it be okay to add a \\exit which \ndoes that?\n\n-- \nFabien.\n\n\n", "msg_date": "Sat, 19 Nov 2022 19:55:36 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": true, "msg_subject": "How to *really* quit psql?" }, { "msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n> - when the current script is included from something,\n> you quit the current script and proceed after the \\i of next -f, BAD\n\n> Question: is there any way to really abort a psql script from an included \n> file?\n\nUnder what circumstances would it be appropriate for a script to take\nit on itself to decide that? It has no way of knowing what the next -f\noption is or what the user intended.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 19 Nov 2022 14:10:31 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: How to *really* quit psql?" 
}, { "msg_contents": "On Sat, Nov 19, 2022 at 12:10 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Fabien COELHO <coelho@cri.ensmp.fr> writes:\n> > - when the current script is included from something,\n> > you quit the current script and proceed after the \\i of next -f, BAD\n>\n> > Question: is there any way to really abort a psql script from an\n> included\n> > file?\n>\n> Under what circumstances would it be appropriate for a script to take\n> it on itself to decide that? It has no way of knowing what the next -f\n> option is or what the user intended.\n>\n>\nCan we add an exit code argument to the \\quit meta-command that could be\nset to non-zero and, combined with ON_ERROR_STOP, produces the desired\neffect of aborting everything just like an error under ON_ERROR_STOP does\n(which is the workaround here I suppose, but an ugly one that involves the\nserver).\n\nDavid J.\n\nOn Sat, Nov 19, 2022 at 12:10 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:Fabien COELHO <coelho@cri.ensmp.fr> writes:\n>   - when the current script is included from something,\n>     you quit the current script and proceed after the \\i of next -f, BAD\n\n> Question: is there any way to really abort a psql script from an included \n> file?\n\nUnder what circumstances would it be appropriate for a script to take\nit on itself to decide that?  It has no way of knowing what the next -f\noption is or what the user intended.Can we add an exit code argument to the \\quit meta-command that could be set to non-zero and, combined with ON_ERROR_STOP, produces the desired effect of aborting everything just like an error under ON_ERROR_STOP does (which is the workaround here I suppose, but an ugly one that involves the server).David J.", "msg_date": "Sat, 19 Nov 2022 12:25:34 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: How to *really* quit psql?" 
}, { "msg_contents": "On Sat, 19 Nov 2022 at 14:10, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Under what circumstances would it be appropriate for a script to take\n> it on itself to decide that? It has no way of knowing what the next -f\n> option is or what the user intended.\n\nPresumably when they're written by the same person so the script does\neffectively know what the \"user\" intended because it's written by the\nsame user.\n\nOff the top of my head I could imagine someone writing something like\nreport-error-and-exit.sql and wanting to be able to use \\i\nreport-error-and-exit.sql to ensure all scripts report their errors\nusing some common log file or something.\n\nNot saying that's the only or best way to do that though. And there is\nthe risk that scripts would start using this functionality\ninappropriately which would mean, for example, getting an install\nscript for something and then not being able to use it within another\nscript safely :(\n\n--\ngreg\n\n\n", "msg_date": "Sat, 19 Nov 2022 14:39:04 -0500", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: How to *really* quit psql?" }, { "msg_contents": "Greg Stark <stark@mit.edu> writes:\n> On Sat, 19 Nov 2022 at 14:10, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Under what circumstances would it be appropriate for a script to take\n>> it on itself to decide that? It has no way of knowing what the next -f\n>> option is or what the user intended.\n\n> Presumably when they're written by the same person so the script does\n> effectively know what the \"user\" intended because it's written by the\n> same user.\n\nEven so, embedding that knowledge in the first script doesn't seem\nlike the sort of design we ought to encourage. It'd be better if\n\"don't run the next script if the first one fails\" were directed\nby a command-line switch or the like. 
I also wonder exactly how\nthis interacts with existing features like ON_ERROR_STOP.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 19 Nov 2022 14:49:37 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: How to *really* quit psql?" }, { "msg_contents": "On Sat, Nov 19, 2022 at 12:49 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Greg Stark <stark@mit.edu> writes:\n> > On Sat, 19 Nov 2022 at 14:10, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Under what circumstances would it be appropriate for a script to take\n> >> it on itself to decide that? It has no way of knowing what the next -f\n> >> option is or what the user intended.\n>\n> > Presumably when they're written by the same person so the script does\n> > effectively know what the \"user\" intended because it's written by the\n> > same user.\n>\n> Even so, embedding that knowledge in the first script doesn't seem\n> like the sort of design we ought to encourage. It'd be better if\n> \"don't run the next script if the first one fails\" were directed\n> by a command-line switch or the like. I also wonder exactly how\n> this interacts with existing features like ON_ERROR_STOP.\n>\n\nvagrant@vagrant:~$ /usr/local/pgsql/bin/psql -v ON_ERROR_STOP=1 -f two.psql\n-f three.psql postgres\npsql:two.psql:1: ERROR: division by zero\nvagrant@vagrant:~$ /usr/local/pgsql/bin/psql -f two.psql -f three.psql\npostgres\npsql:two.psql:1: ERROR: division by zero\n ?column?\n----------\n 2\n(1 row)\n\n ?column?\n----------\n 3\n(1 row)\n\nDavid J.\n\n", "msg_date": "Sat, 19 Nov 2022 12:59:01 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: How to *really* quit psql?" 
}, { "msg_contents": "On Sat, Nov 19, 2022 at 12:59 PM David G. Johnston <\ndavid.g.johnston@gmail.com> wrote:\n\n> On Sat, Nov 19, 2022 at 12:49 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n>> Greg Stark <stark@mit.edu> writes:\n>> > On Sat, 19 Nov 2022 at 14:10, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> >> Under what circumstances would it be appropriate for a script to take\n>> >> it on itself to decide that? It has no way of knowing what the next -f\n>> >> option is or what the user intended.\n>>\n>> > Presumably when they're written by the same person so the script does\n>> > effectively know what the \"user\" intended because it's written by the\n>> > same user.\n>>\n>> Even so, embedding that knowledge in the first script doesn't seem\n>> like the sort of design we ought to encourage. 
It'd be better if\n>> \"don't run the next script if the first one fails\" were directed\n>> by a command-line switch or the like. I also wonder exactly how\n>> this interacts with existing features like ON_ERROR_STOP.\n>>\n>\n> vagrant@vagrant:~$ /usr/local/pgsql/bin/psql -v ON_ERROR_STOP=1 -f\n> two.psql -f three.psql postgres\n> psql:two.psql:1: ERROR: division by zero\n> vagrant@vagrant:~$ /usr/local/pgsql/bin/psql -f two.psql -f three.psql\n> postgres\n> psql:two.psql:1: ERROR: division by zero\n> ?column?\n> ----------\n> 2\n> (1 row)\n>\n> ?column?\n> ----------\n> 3\n> (1 row)\n>\n>\nSorry, forgot the \\quit test:\n\nvagrant@vagrant:~$ /usr/local/pgsql/bin/psql -v ON_ERROR_STOP=1 -f two.psql\n-f three.psql postgres\n ?column?\n----------\n 2\n(1 row)\n\n ?column?\n----------\n 3\n(1 row)\n\n(there is a \\quit at the end of two.psql)\n\nDavid J.\n\n", "msg_date": "Sat, 19 Nov 2022 13:00:33 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: How to *really* quit psql?" 
}, { "msg_contents": "Hello Tom,\n\n>> - when the current script is included from something,\n>> you quit the current script and proceed after the \\i of next -f, BAD\n>\n>> Question: is there any way to really abort a psql script from an included\n>> file?\n>\n> Under what circumstances would it be appropriate for a script to take\n> it on itself to decide that?\n\nThe use case is psql scripts which update or cleanup an application \nschema. For security, some of these scripts check for conditions (eg, we \nare not in production, the application schema is in the expected version, \nwhatever…) and should abort if the conditions are not okay. 
As checking \nfor the conditions requires a few lines of code and is always the same, a \nsimple approach is to include another script which does the check and \naborts the run if necessary, eg:\n\n```sql\n-- this script should not run in \"prod\"!\n\\ir not_in_prod.sql\n-- should have aborted if it is a \"prod\" version.\nDROP TABLE AllMyUsers CASCADE;\nDROP TABLE QuiteImportantData CASCADE;\n```\n\n> It has no way of knowing what the next -f option is or what the user \n> intended.\n\nThe intention of the user who wrote the script is to abort in some cases, \nto avoid damaging the database contents.\n\n-- \nFabien.", "msg_date": "Sun, 20 Nov 2022 08:42:35 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": true, "msg_subject": "Re: How to *really* quit psql?" }, { "msg_contents": "Hello David,\n\n> vagrant@vagrant:~$ /usr/local/pgsql/bin/psql -v ON_ERROR_STOP=1 -f two.psql\n> -f three.psql postgres\n> ?column?\n> ----------\n> 2\n> (1 row)\n>\n> ?column?\n> ----------\n> 3\n> (1 row)\n>\n> (there is a \\quit at the end of two.psql)\n\nYep, that summarizes my issues!\n\nON_ERROR_STOP is only of SQL errors, so a script can really stop by having \nan intentional SQL error.\n\n-- \nFabien.", "msg_date": "Sun, 20 Nov 2022 08:54:53 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": true, "msg_subject": "Re: How to *really* quit psql?" }, { "msg_contents": "\nHello David,\n\n>>> Question: is there any way to really abort a psql script from an \n>>> included file?\n>>\n>> Under what circumstances would it be appropriate for a script to take\n>> it on itself to decide that? 
It has no way of knowing what the next -f\n>> option is or what the user intended.\n>\n> Can we add an exit code argument to the \\quit meta-command that could be\n> set to non-zero and, combined with ON_ERROR_STOP, produces the desired\n> effect of aborting everything just like an error under ON_ERROR_STOP does\n> (which is the workaround here I suppose, but an ugly one that involves the\n> server).\n\nI like the simple idea of adding an optional exit status argument to \n\\quit. I'm unsure whether \"ON_ERROR_STOP\" should or should not change the \nbehavior, or whether it should just exit(n) with \\quit n.\n\nNote that using quit to abort a psql script is already used when loading \nextensions to prevent them to be run directly by psql:\n\n -- from some sql files in \"contrib/pg_stat_statements/\":\n \\echo Use \"ALTER EXTENSION pg_stat_statements UPDATE TO '1.10'\" to load this file. \\quit\n\nBut the same trick would fail if the guard is reach with an include.\n\n-- \nFabien.\n\n\n", "msg_date": "Sun, 20 Nov 2022 09:01:59 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": true, "msg_subject": "Re: How to *really* quit psql?" } ]
[ { "msg_contents": "Attached patch has vacuumlazy.c pass its VacuumParams state directly\nto vacuum_set_xid_limits(), the vacuum.c function that figures out\nwhich actual cutoffs for freezing should be used. The patch also makes\nthe function use output parameter symbol names that match those used\nby its vacuumlazy.c caller.\n\nThe signature of vacuum_set_xid_limits() is probably going to gain\nadditional parameters before too long. In any case this seems like a\nclear improvement.\n\n-- \nPeter Geoghegan", "msg_date": "Sat, 19 Nov 2022 11:42:55 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Simplify vacuum_set_xid_limits()'s signature (minor refactoring)" } ]
[ { "msg_contents": "Hi,\n\nIn [1] Robert justifiably complained about the use of PROC_QUEUE. I've\npreviously been bothered by this in [2], but didn't get around to finishing\nthe patches.\n\nOne thing that made me hesitate was the naming of the function for a) checking\nwhether a dlist_node is a list, b) initializing and deleting nodes in a way to\nallow for a). My patch adds dlist_node_init(), dlist_delete_thoroughly() /\ndlist_delete_from_thoroughly() / dclist_delete_from_thoroughly() and\ndlist_node_is_detached(). Thomas proposed dlist_delete_and_reinit() and\ndlist_node_is_linked() instead.\n\n\nAttached is a revised version of the patches from [2].\n\nI left the naming of the detached / thoroughly as it was before, for\nnow. Another alternative could be to try to just get rid of needing the\ndetached state at all, although that likely would make the code changes\nbigger.\n\nI've switched the PROC_QUEUE uses to dclist, which we only got recently. The\nprior version of the patchset contained a patch to remove the use of the size\nfield of PROC_QUEUE, as it's only needed in a few places. But it seems easier\nto just replace it with dclist for now.\n\nRobert had previously complained about the ilist.h patch constifying some\nfunctions. I don't really understand the complaint in this case - none of the\ncases should require constifying outside code. It just allows to replace\nthings like SHMQueueEmpty() which were const, because there's a few places\nthat get passed a const PGPROC. There's more that could be constified\n(removing the need for one unconstify() in predicate.c - but that seems like a\nlot more work with less clear benefit. 
Either way, I split the constification\ninto a separate patch.\n\nComments?\n\nGreetings,\n\nAndres Freund\n\n[1] https://www.postgresql.org/message-id/20221117201304.vemjfsxaizmt3zbb%40awork3.anarazel.de\n[2] https://www.postgresql.org/message-id/20200211042229.msv23badgqljrdg2%40alap3.anarazel.de", "msg_date": "Sat, 19 Nov 2022 21:59:30 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Replace PROC_QUEUE / SHM_QUEUE with ilist.h" }, { "msg_contents": "Hi,\n\nOn 2022-11-19 21:59:30 -0800, Andres Freund wrote:\n> In [1] Robert justifiably complained about the use of PROC_QUEUE. I've\n> previously been bothered by this in [2], but didn't get around to finishing\n> the patches.\n> \n> One thing that made me hesitate was the naming of the function for a) checking\n> whether a dlist_node is a list, b) initializing and deleting nodes in a way to\n> allow for a). My patch adds dlist_node_init(), dlist_delete_thoroughly() /\n> dlist_delete_from_thoroughly() / dclist_delete_from_thoroughly() and\n> dlist_node_is_detached(). Thomas proposed dlist_delete_and_reinit() and\n> dlist_node_is_linked() instead.\n\nAny comments on these names? Otherwise I'm planning to push ahead with the\nnames as is, lest I forget this patchset for another ~2 years.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 3 Dec 2022 10:17:22 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Replace PROC_QUEUE / SHM_QUEUE with ilist.h" }, { "msg_contents": "Hi,\n\nOn 2022-12-03 10:17:22 -0800, Andres Freund wrote:\n> On 2022-11-19 21:59:30 -0800, Andres Freund wrote:\n> > In [1] Robert justifiably complained about the use of PROC_QUEUE. 
I've\n> > previously been bothered by this in [2], but didn't get around to finishing\n> > the patches.\n> > \n> > One thing that made me hesitate was the naming of the function for a) checking\n> > whether a dlist_node is a list, b) initializing and deleting nodes in a way to\n> > allow for a). My patch adds dlist_node_init(), dlist_delete_thoroughly() /\n> > dlist_delete_from_thoroughly() / dclist_delete_from_thoroughly() and\n> > dlist_node_is_detached(). Thomas proposed dlist_delete_and_reinit() and\n> > dlist_node_is_linked() instead.\n> \n> Any comments on these names? Otherwise I'm planning to push ahead with the\n> names as is, lest I forget this patchset for another ~2 years.\n\nFinally pushed, with some fairly minor additional cleanup. No more SHM_QUEUE,\nyay!\n\nAlso, predicate.c really needs some love.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 19 Jan 2023 18:58:45 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Replace PROC_QUEUE / SHM_QUEUE with ilist.h" }, { "msg_contents": "On Thu, Jan 19, 2023 at 9:58 PM Andres Freund <andres@anarazel.de> wrote:\n> Finally pushed, with some fairly minor additional cleanup. No more SHM_QUEUE,\n> yay!\n\nNice. I won't miss it one bit.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 20 Jan 2023 13:34:34 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replace PROC_QUEUE / SHM_QUEUE with ilist.h" } ]
[ { "msg_contents": "Greetings PGSQL hackers,\n\nI am working on a backport of CVE-2022-2625 to PostgreSQL 9.6 and 9.4.\nI am starting from commit 5919bb5a5989cda232ac3d1f8b9d90f337be2077.\n\nThe backport to 9.6 was relatively straightforward, the principal change\nbeing to omit some of the hunks related to commands in 9.6 that did not\nhave support for 'IF NOT EXISTS'. When it came to 9.4, things got a\nlittle more interesting. There were additional instances of commands\nthat did not have support for 'IF NOT EXISTS' and some of the\ncontructions were slightly different as well, but nothing insurmountable\nthere.\n\nI did have to hack at the 9.4 test harness a bit since the\ntest_extensions sub-directory seems to have been introduced post-9.4 and\nit seemed like a good idea to have the actual tests from the\naforementioned commit to help guard against some sort of unintended\nchange on my part. However, after I got through the CINE changes and\nstarted dealing with the COR changes I ran into something fairly\npeculiar. 
The test output included this:\n\n DROP VIEW ext_cor_view; \n CREATE TYPE test_ext_type;\n CREATE EXTENSION test_ext_cor; -- fail\n ERROR: type test_ext_type is not a member of extension \"test_ext_cor\"\n DETAIL: An extension is not allowed to replace an object that it does not own.\n DROP TYPE test_ext_type;\n -- this makes a shell \"point <<@@ polygon\" operator too\n CREATE OPERATOR @@>> ( PROCEDURE = poly_contain_pt,\n LEFTARG = polygon, RIGHTARG = point,\n COMMUTATOR = <<@@ );\n CREATE EXTENSION test_ext_cor; -- fail\n ERROR: operator <<@@(point,polygon) is not a member of extension \"test_ext_cor\"\n DETAIL: An extension is not allowed to replace an object that it does not own.\n DROP OPERATOR <<@@ (point, polygon);\n CREATE EXTENSION test_ext_cor; -- now it should work\n+ERROR: operator 16427 is not a member of extension \"test_ext_cor\"\n+DETAIL: An extension is not allowed to replace an object that it does not own.\n SELECT ext_cor_func();\n\nThis made me suspect that there was an issue with 'DROP OPERATOR'.\nAfter a little scavenger hunt, I located a commit which appears to be\nrelated, c94959d4110a1965472956cfd631082a96f64a84, and which was made\npost-9.4. So then, my question: is the existing behavior that produces\n\"ERROR: operator ... is not a member of extension ...\" a sufficient\nguard against the CVE-2022-2625 vulnerability when it comes to\noperators? (My thought is that it might be sufficient, and if it is I\nwould need to add something like 'DROP OPERATOR @@>> (point, polygon);'\nto allow the extension creation to work and the test to complete.)\n\nIf the apparently buggy behavior is not a sufficient guard, then is a\nbackport of c94959d4110a1965472956cfd631082a96f64a84 in conjunction with\nthe CVE-2022-2625 fix the correct solution?\n\nRegards,\n\n-Roberto\n\n-- \nRoberto C. 
Sánchez\n\n", "msg_date": "Sun, 20 Nov 2022 09:29:19 -0500", "msg_from": "Roberto =?iso-8859-1?Q?C=2E_S=E1nchez?= <roberto@debian.org>", "msg_from_op": true, "msg_subject": "Question concerning backport of CVE-2022-2625" }, { "msg_contents": "Roberto =?iso-8859-1?Q?C=2E_S=E1nchez?= <roberto@debian.org> writes:\n> -- this makes a shell \"point <<@@ polygon\" operator too\n> CREATE OPERATOR @@>> ( PROCEDURE = poly_contain_pt,\n> LEFTARG = polygon, RIGHTARG = point,\n> COMMUTATOR = <<@@ );\n> CREATE EXTENSION test_ext_cor; -- fail\n> ERROR: operator <<@@(point,polygon) is not a member of extension \"test_ext_cor\"\n> DETAIL: An extension is not allowed to replace an object that it does not own.\n> DROP OPERATOR <<@@ (point, polygon);\n> CREATE EXTENSION test_ext_cor; -- now it should work\n> +ERROR: operator 16427 is not a member of extension \"test_ext_cor\"\n> +DETAIL: An extension is not allowed to replace an object that it does not own.\n\nThat is ... odd. Since 9.4 is long out of support I'm unenthused\nabout investigating it myself. (Why is it that people will move heaven\nand earth to fix \"security\" bugs in dead branches, but ignore even\ncatastrophic data-loss bugs?) But if you're stuck with pursuing\nthis exercise, I think you'd better figure out exactly what's\nhappening. I agree that it smells like c94959d41 could be related,\nbut I don't see just how that'd produce this symptom. 
Before that\ncommit, the DROP OPERATOR <<@@ would have left a dangling link\nbehind in @@>> 's oprcom field, but there doesn't seem to be a\nreason why that'd affect the test_ext_cor extension: it will not\nbe re-using the same operator OID, nor would it have any reason to\ntouch @@>>, since there's no COMMUTATOR clause in the extension.\n\nIt'd likely be a good idea to reproduce this with a gdb breakpoint\nset at errfinish, and see exactly what's leading up to the error.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 20 Nov 2022 11:43:41 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Question concerning backport of CVE-2022-2625" }, { "msg_contents": "Hi Tom,\n\nOn Sun, Nov 20, 2022 at 11:43:41AM -0500, Tom Lane wrote:\n> Roberto =?iso-8859-1?Q?C=2E_S=E1nchez?= <roberto@debian.org> writes:\n> > -- this makes a shell \"point <<@@ polygon\" operator too\n> > CREATE OPERATOR @@>> ( PROCEDURE = poly_contain_pt,\n> > LEFTARG = polygon, RIGHTARG = point,\n> > COMMUTATOR = <<@@ );\n> > CREATE EXTENSION test_ext_cor; -- fail\n> > ERROR: operator <<@@(point,polygon) is not a member of extension \"test_ext_cor\"\n> > DETAIL: An extension is not allowed to replace an object that it does not own.\n> > DROP OPERATOR <<@@ (point, polygon);\n> > CREATE EXTENSION test_ext_cor; -- now it should work\n> > +ERROR: operator 16427 is not a member of extension \"test_ext_cor\"\n> > +DETAIL: An extension is not allowed to replace an object that it does not own.\n> \n> That is ... odd. Since 9.4 is long out of support I'm unenthused\n> about investigating it myself. (Why is it that people will move heaven\n> and earth to fix \"security\" bugs in dead branches, but ignore even\n> catastrophic data-loss bugs?) But if you're stuck with pursuing\n> this exercise, I think you'd better figure out exactly what's\n> happening. I agree that it smells like c94959d41 could be related,\n> but I don't see just how that'd produce this symptom. 
Before that\n> commit, the DROP OPERATOR <<@@ would have left a dangling link\n> behind in @@>> 's oprcom field, but there doesn't seem to be a\n> reason why that'd affect the test_ext_cor extension: it will not\n> be re-using the same operator OID, nor would it have any reason to\n> touch @@>>, since there's no COMMUTATOR clause in the extension.\n> \nI understand your reticence to dive into a branch that is long dead from\nyour perspective. That said, I am grateful for the insights you\nprovided here.\n\n> It'd likely be a good idea to reproduce this with a gdb breakpoint\n> set at errfinish, and see exactly what's leading up to the error.\n> \nThanks for this suggestion. I will see if I am able to isolate the\nprecise cause of the failure with this.\n\nRegards,\n\n-Roberto\n\n-- \nRoberto C. Sánchez\n\n", "msg_date": "Sun, 20 Nov 2022 17:00:45 -0500", "msg_from": "Roberto =?iso-8859-1?Q?C=2E_S=E1nchez?= <roberto@debian.org>", "msg_from_op": true, "msg_subject": "Re: Question concerning backport of CVE-2022-2625" }, { "msg_contents": "Hi Tom,\n\nOn Sun, Nov 20, 2022 at 11:43:41AM -0500, Tom Lane wrote:\n> \n> It'd likely be a good idea to reproduce this with a gdb breakpoint\n> set at errfinish, and see exactly what's leading up to the error.\n> \nSo, I did as you suggested. The top few frames of the backtrace were:\n\n#0 errfinish (dummy=0)\n at /build/postgresql-9.4-9.4.26/build/../src/backend/utils/error/elog.c:419\n#1 0x00005563cc733f25 in recordDependencyOnCurrentExtension (\n object=object@entry=0x7ffcfc649310, isReplace=isReplace@entry=1 '\\001')\n at /build/postgresql-9.4-9.4.26/build/../src/backend/catalog/pg_depend.c:184\n#2 0x00005563cc735b72 in makeOperatorDependencies (tuple=0x5563cd10aaa8)\n at /build/postgresql-9.4-9.4.26/build/../src/backend/catalog/pg_operator.c:862\n\nThe code at pg_depend.c:184 came directly from the CVE-2022-2625 commit,\n5919bb5a5989cda232ac3d1f8b9d90f337be2077. 
However, when I looked at\npg_operator.c:862 I saw that I had had to omit the following change in\nbackporting to 9.4:\n\n /* Dependency on extension */\n- recordDependencyOnCurrentExtension(&myself, true);\n+ recordDependencyOnCurrentExtension(&myself, isUpdate);\n\nThe reason is that the function makeOperatorDependencies() does not have\nthe parameter isUpdate in 9.4. I found that the parameter was\nintroduced in 0dab5ef39b3d9d86e45bbbb2f6ea60b4f5517d9a, which fixed a\nproblem with the ALTER OPERATOR command, but which also seems to bring\nsome structural changes as well and it wasn't clear they would be\nparticularly beneficial in resolving the issue.\n\nIn the end, what I settled on was a minor change to pg_operator.c to add\nthe isUpdate parameter to the signature of makeOperatorDependencies(),\nalong with updates to the invocations of makeOperatorDependencies() so\nthat when it is invoked in OperatorCreate() the parameter is passed in.\nAfter that I was able to include the change I had originally omitted and\nall the tests passed as written (with appropriate adjustments for\ncommands that did not support CINE in 9.4).\n\nThanks again for the suggestion of where to look for the failure!\n\nRegards,\n\n-Roberto\n\n-- \nRoberto C. Sánchez\n\n\n", "msg_date": "Wed, 23 Nov 2022 13:35:27 -0500", "msg_from": "Roberto =?iso-8859-1?Q?C=2E_S=E1nchez?= <roberto@debian.org>", "msg_from_op": true, "msg_subject": "Re: Question concerning backport of CVE-2022-2625" } ]
[ { "msg_contents": "My very slow buildfarm animal mamba has failed pageinspect\nseveral times [1][2][3][4] with this symptom:\n\ndiff -U3 /home/buildfarm/bf-data/HEAD/pgsql.build/contrib/pageinspect/expected/page.out /home/buildfarm/bf-data/HEAD/pgsql.build/contrib/pageinspect/results/page.out\n--- /home/buildfarm/bf-data/HEAD/pgsql.build/contrib/pageinspect/expected/page.out\t2022-11-20 10:12:51.780935488 -0500\n+++ /home/buildfarm/bf-data/HEAD/pgsql.build/contrib/pageinspect/results/page.out\t2022-11-20 14:00:25.818743985 -0500\n@@ -92,9 +92,9 @@\n SELECT t_infomask, t_infomask2, raw_flags, combined_flags\n FROM heap_page_items(get_raw_page('test1', 0)),\n LATERAL heap_tuple_infomask_flags(t_infomask, t_infomask2);\n- t_infomask | t_infomask2 | raw_flags | combined_flags \n-------------+-------------+-----------------------------------------------------------+--------------------\n- 2816 | 2 | {HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID} | {HEAP_XMIN_FROZEN}\n+ t_infomask | t_infomask2 | raw_flags | combined_flags \n+------------+-------------+-----------------------------------------+----------------\n+ 2304 | 2 | {HEAP_XMIN_COMMITTED,HEAP_XMAX_INVALID} | {}\n (1 row)\n \n -- tests for decoding of combined flags\n\nIt's not hard to guess what the problem is here: the immediately preceding\nbit is hopelessly optimistic.\n\n-- If we freeze the only tuple on test1, the infomask should\n-- always be the same in all test runs.\nVACUUM (FREEZE, DISABLE_PAGE_SKIPPING) test1;\n\nThe fact that you asked for a freeze doesn't mean you'll get a freeze.\nI suppose that a background auto-analyze is holding back global xmin\nso that the tuple doesn't actually get frozen.\n\nThe core reloptions.sql and vacuum.sql tests are two places that are\nalso using this option, but they are applying it to temp tables,\nwhich I think makes it safe (and the lack of failures, seeing that\nthey run within parallel test groups, reinforces that). 
Can we apply\nthat idea in pageinspect?\n\ncontrib/amcheck and contrib/pg_visibility are also using\nDISABLE_PAGE_SKIPPING, so I wonder if they have similar hazards.\nI haven't seen them fall over, though.\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mamba&dt=2022-11-20%2015%3A13%3A19\n[2] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mamba&dt=2022-10-31%2013%3A33%3A35\n[3] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mamba&dt=2022-10-19%2016%3A34%3A07\n[4] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mamba&dt=2022-08-29%2017%3A49%3A02\n\n\n", "msg_date": "Sun, 20 Nov 2022 15:37:26 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Unstable regression test for contrib/pageinspect" }, { "msg_contents": "On Sun, Nov 20, 2022 at 12:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The core reloptions.sql and vacuum.sql tests are two places that are\n> also using this option, but they are applying it to temp tables,\n> which I think makes it safe (and the lack of failures, seeing that\n> they run within parallel test groups, reinforces that). Can we apply\n> that idea in pageinspect?\n\nI believe so. The temp table horizons guarantee isn't all that old, so\nthe tests may well have been written before it was possible.\n\n> contrib/amcheck and contrib/pg_visibility are also using\n> DISABLE_PAGE_SKIPPING, so I wonder if they have similar hazards.\n> I haven't seen them fall over, though.\n\nDISABLE_PAGE_SKIPPING forces aggressive mode (which is also possible\nwith FREEZE), but unlike FREEZE it also forces VACUUM to scan even\nall-frozen pages. The other difference is that DISABLE_PAGE_SKIPPING\ndoesn't affect FreezeLimit/freeze_min_age, whereas FREEZE sets it to\n0.\n\nI think that most use of DISABLE_PAGE_SKIPPING by the regression tests\njust isn't necessary. Especially where it's combined with FREEZE like\nthis, as it often seems to be. 
Why should the behavior around skipping\nall-frozen pages (the only thing changed by using\nDISABLE_PAGE_SKIPPING on top of FREEZE) actually matter to these\ntests?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sun, 20 Nov 2022 13:09:03 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Unstable regression test for contrib/pageinspect" }, { "msg_contents": "\n\n> On Nov 20, 2022, at 12:37 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> contrib/amcheck and contrib/pg_visibility are also using\n> DISABLE_PAGE_SKIPPING, so I wonder if they have similar hazards.\n> I haven't seen them fall over, though.\n\nIn the amcheck regression test case, it's because the test isn't sensitive to whether the freeze actually happens. You can comment out that line, and the only test difference is the comment:\n\n@@ -108,8 +108,8 @@\n ERROR: ending block number must be between 0 and 0\n SELECT * FROM verify_heapam(relation := 'heaptest', startblock := 10000, endblock := 11000);\n ERROR: starting block number must be between 0 and 0\n--- Vacuum freeze to change the xids encountered in subsequent tests\n-VACUUM (FREEZE, DISABLE_PAGE_SKIPPING) heaptest;\n+-- -- Vacuum freeze to change the xids encountered in subsequent tests\n+-- VACUUM (FREEZE, DISABLE_PAGE_SKIPPING) heaptest;\n -- Check that valid options are not rejected nor corruption reported\n -- for a non-empty frozen table\n SELECT * FROM verify_heapam(relation := 'heaptest', skip := 'none');\n\n\nThe amcheck TAP test is sensitive to commenting out the freeze, though:\n\nt/001_verify_heapam.pl .. 42/? 
\n# Failed test 'all-frozen corrupted table skipping all-frozen'\n# at t/001_verify_heapam.pl line 58.\n# got: '0|3||line pointer redirection to item at offset 21840 exceeds maximum offset 38\n# 0|4||line pointer to page offset 21840 with length 21840 ends beyond maximum page offset 8192\n# 0|5||line pointer redirection to item at offset 0 precedes minimum offset 1\n# 0|6||line pointer length 0 is less than the minimum tuple header size 24\n# 0|7||line pointer to page offset 15 is not maximally aligned\n# 0|8||line pointer length 15 is less than the minimum tuple header size 24'\n# expected: ''\nt/001_verify_heapam.pl .. 211/? # Looks like you failed 1 test of 272.\nt/001_verify_heapam.pl .. Dubious, test returned 1 (wstat 256, 0x100)\nFailed 1/272 subtests \nt/002_cic.pl ............ ok \nt/003_cic_2pc.pl ........ ok \n\nTest Summary Report\n-------------------\nt/001_verify_heapam.pl (Wstat: 256 (exited 1) Tests: 272 Failed: 1)\n Failed test: 80\n Non-zero exit status: 1\nFiles=3, Tests=280, 10 wallclock secs ( 0.05 usr 0.02 sys + 3.84 cusr 3.10 csys = 7.01 CPU)\nResult: FAIL\nmake: *** [check] Error 1\n\n\nBut the TAP test also disables autovacuum, so a background auto-analyze shouldn't be running. 
Maybe that's why you haven't seen amcheck fall over?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Sun, 20 Nov 2022 19:21:20 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Unstable regression test for contrib/pageinspect" }, { "msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> On Nov 20, 2022, at 12:37 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> contrib/amcheck and contrib/pg_visibility are also using\n>> DISABLE_PAGE_SKIPPING, so I wonder if they have similar hazards.\n>> I haven't seen them fall over, though.\n\n> In the amcheck regression test case, it's because the test isn't\n> sensitive to whether the freeze actually happens. You can comment\n> out that line, and the only test difference is the comment:\n\nInteresting. I tried that with pg_visibility, with the same result:\nremoving its VACUUM commands altogether changes nothing else in the\ntest output. I'm not sure this is a good thing. It makes one wonder\nwhether these tests really test what they claim to. But it certainly\nexplains the lack of failures.\n\n> The amcheck TAP test is sensitive to commenting out the freeze, though:\n> ...\n> But the TAP test also disables autovacuum, so a background\n> auto-analyze shouldn't be running. Maybe that's why you haven't\n> seen amcheck fall over?\n\nAh, right, I see\n\n\t$node->append_conf('postgresql.conf', 'autovacuum=off');\n\nin 001_verify_heapam.pl. So that one's okay too.\n\nBottom line seems to be that converting pageinspect's test table\nto a temp table should fix this. 
If no objections, I'll do that\ntomorrow.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 20 Nov 2022 23:32:15 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Unstable regression test for contrib/pageinspect" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Sun, Nov 20, 2022 at 12:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> The core reloptions.sql and vacuum.sql tests are two places that are\n>> also using this option, but they are applying it to temp tables,\n>> which I think makes it safe (and the lack of failures, seeing that\n>> they run within parallel test groups, reinforces that). Can we apply\n>> that idea in pageinspect?\n\n> I believe so. The temp table horizons guarantee isn't all that old, so\n> the tests may well have been written before it was possible.\n\nAh, right, I see that that only dates back to v14 (cf a7212be8b).\nSo we can fix pageinspect's issue by making that table be temp,\nbut only as far back as v14.\n\nThat's probably good enough in terms of reducing the buildfarm\nnoise level, seeing that mamba has only reported this failure\non HEAD so far. I'd be tempted to propose back-patching\na7212be8b, but there would be ABI-stability issues, and it's\nprobably not worth dealing with that.\n\n>> contrib/amcheck and contrib/pg_visibility are also using\n>> DISABLE_PAGE_SKIPPING, so I wonder if they have similar hazards.\n\n> I think that most use of DISABLE_PAGE_SKIPPING by the regression tests\n> just isn't necessary.\n\nApparently not -- see followup discussion with Mark Dilger.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 20 Nov 2022 23:46:13 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Unstable regression test for contrib/pageinspect" } ]
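The infomask arithmetic behind the regression diff at the top of this thread is easy to check by hand. Below is a small sketch using the three flag bits involved, with values copied from PostgreSQL's src/include/access/htup_details.h; the decoder itself is a hypothetical helper for illustration, not heap_tuple_infomask_flags():

```python
# Flag bits as defined in src/include/access/htup_details.h.
HEAP_XMIN_COMMITTED = 0x0100
HEAP_XMIN_INVALID = 0x0200
HEAP_XMIN_FROZEN = HEAP_XMIN_COMMITTED | HEAP_XMIN_INVALID
HEAP_XMAX_INVALID = 0x0800

def decode_infomask(mask):
    """Hypothetical helper: list which of the three flags relevant to the
    failing test are set, plus the HEAP_XMIN_FROZEN combined flag."""
    raw = [name for name, bit in (
        ("HEAP_XMIN_COMMITTED", HEAP_XMIN_COMMITTED),
        ("HEAP_XMIN_INVALID", HEAP_XMIN_INVALID),
        ("HEAP_XMAX_INVALID", HEAP_XMAX_INVALID),
    ) if mask & bit]
    combined = []
    # both xmin bits set together is the "frozen" encoding
    if (mask & HEAP_XMIN_FROZEN) == HEAP_XMIN_FROZEN:
        combined.append("HEAP_XMIN_FROZEN")
    return raw, combined
```

2816 (0x0B00) is what the test expects after a successful freeze — both xmin bits set, which is the HEAP_XMIN_FROZEN encoding — while 2304 (0x0900) is what mamba saw when the tuple was never actually frozen.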
[ { "msg_contents": "Hi,\n\nThe lwlock wait queue scalability issue [1] was quite hard to find because of\nthe exponential backoff and because we adjust spins_per_delay over time within\na backend.\n\nI think the least we could do to make this easier would be to signal spin\ndelays as wait events. We surely don't want to do so for non-contended spins\nbecause of the overhead, but once we get to the point of calling pg_usleep()\nthat's not an issue.\n\nI don't think it's worth trying to hand down more detailed information about\nthe specific spinlock we're waiting on, at least for now. We'd have to invent\na whole lot of new wait events because most spinlocks don't have ones yet.\n\nI couldn't quite decide what wait_event_type to best group this under? In the\nattached patch I put it under timeouts, which doesn't seem awful.\n\nI reverted a4adc31f690 and indeed it shows SpinDelay wait events before the\nfix and not after.\n\nGreetings,\n\nAndres Freund\n\n[1] https://postgr.es/m/20221027165914.2hofzp4cvutj6gin%40awork3.anarazel.de", "msg_date": "Sun, 20 Nov 2022 12:43:10 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "perform_spin_delay() vs wait events" }, { "msg_contents": "On Sun, Nov 20, 2022 at 3:43 PM Andres Freund <andres@anarazel.de> wrote:\n> The lwlock wait queue scalability issue [1] was quite hard to find because of\n> the exponential backoff and because we adjust spins_per_delay over time within\n> a backend.\n>\n> I think the least we could do to make this easier would be to signal spin\n> delays as wait events. We surely don't want to do so for non-contended spins\n> because of the overhead, but once we get to the point of calling pg_usleep()\n> that's not an issue.\n>\n> I don't think it's worth trying to hand down more detailed information about\n> the specific spinlock we're waiting on, at least for now. 
We'd have to invent\n> a whole lot of new wait events because most spinlocks don't have ones yet.\n>\n> I couldn't quite decide what wait_event_type to best group this under? In the\n> attached patch I put it under timeouts, which doesn't seem awful.\n\nI think it would be best to make it its own category, like we do with\nbuffer pins.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 20 Nov 2022 17:26:11 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: perform_spin_delay() vs wait events" }, { "msg_contents": "Hi,\n\nOn 2022-11-20 17:26:11 -0500, Robert Haas wrote:\n> On Sun, Nov 20, 2022 at 3:43 PM Andres Freund <andres@anarazel.de> wrote:\n> > I couldn't quite decide what wait_event_type to best group this under? In the\n> > attached patch I put it under timeouts, which doesn't seem awful.\n> \n> I think it would be best to make it its own category, like we do with\n> buffer pins.\n\nI was wondering about that too - but decided against it because it would only\nshow a single wait event. And wouldn't really describe spinlocks as a whole,\njust the \"extreme\" delays. If we wanted to report the spin waits more\ngranular, we'd presumably have to fit the wait events into the lwlock, buffers\nand some new category where we name individual spinlocks.\n\nBut I guess a single spinlock wait event type with an ExponentialBackoff wait\nevent or such wouldn't be too bad.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 20 Nov 2022 15:10:43 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: perform_spin_delay() vs wait events" }, { "msg_contents": "On Sun, Nov 20, 2022 at 6:10 PM Andres Freund <andres@anarazel.de> wrote:\n> I was wondering about that too - but decided against it because it would only\n> show a single wait event. And wouldn't really describe spinlocks as a whole,\n> just the \"extreme\" delays. 
If we wanted to report the spin waits more\n> granular, we'd presumably have to fit the wait events into the lwlock, buffers\n> and some new category where we name individual spinlocks.\n>\n> But I guess a single spinlock wait event type with an ExponentialBackoff wait\n> event or such wouldn't be too bad.\n\nOh, hmm. I guess it is actually bracketing a timed wait, now that I\nlook closer at what you did. So perhaps your first idea was best after\nall.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 21 Nov 2022 10:35:16 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: perform_spin_delay() vs wait events" }, { "msg_contents": "On Mon, Nov 21, 2022 at 2:10 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-11-20 17:26:11 -0500, Robert Haas wrote:\n> > On Sun, Nov 20, 2022 at 3:43 PM Andres Freund <andres@anarazel.de> wrote:\n> > > I couldn't quite decide what wait_event_type to best group this under? In the\n> > > attached patch I put it under timeouts, which doesn't seem awful.\n> >\n> > I think it would be best to make it its own category, like we do with\n> > buffer pins.\n>\n> I was wondering about that too - but decided against it because it would only\n> show a single wait event. And wouldn't really describe spinlocks as a whole,\n> just the \"extreme\" delays. 
If we wanted to report the spin waits more\n> granular, we'd presumably have to fit the wait events into the lwlock, buffers\n> and some new category where we name individual spinlocks.\n\n+1 for making a group of individual names spin delays.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Mon, 21 Nov 2022 23:58:16 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: perform_spin_delay() vs wait events" }, { "msg_contents": "Hi, \n\nOn November 21, 2022 12:58:16 PM PST, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n>On Mon, Nov 21, 2022 at 2:10 AM Andres Freund <andres@anarazel.de> wrote:\n>> On 2022-11-20 17:26:11 -0500, Robert Haas wrote:\n>> > On Sun, Nov 20, 2022 at 3:43 PM Andres Freund <andres@anarazel.de> wrote:\n>> > > I couldn't quite decide what wait_event_type to best group this under? In the\n>> > > attached patch I put it under timeouts, which doesn't seem awful.\n>> >\n>> > I think it would be best to make it its own category, like we do with\n>> > buffer pins.\n>>\n>> I was wondering about that too - but decided against it because it would only\n>> show a single wait event. And wouldn't really describe spinlocks as a whole,\n>> just the \"extreme\" delays. If we wanted to report the spin waits more\n>> granular, we'd presumably have to fit the wait events into the lwlock, buffers\n>> and some new category where we name individual spinlocks.\n>\n>+1 for making a group of individual names spin delays.\n\nPersonally I'm not interested in doing that work, tbh.\n\nAndres\n-- \nSent from my Android device with K-9 Mail. 
Please excuse my brevity.\n\n\n", "msg_date": "Mon, 21 Nov 2022 13:01:00 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: perform_spin_delay() vs wait events" }, { "msg_contents": "On Tue, Nov 22, 2022 at 12:01 AM Andres Freund <andres@anarazel.de> wrote:\n> On November 21, 2022 12:58:16 PM PST, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> >On Mon, Nov 21, 2022 at 2:10 AM Andres Freund <andres@anarazel.de> wrote:\n> >> On 2022-11-20 17:26:11 -0500, Robert Haas wrote:\n> >> > On Sun, Nov 20, 2022 at 3:43 PM Andres Freund <andres@anarazel.de> wrote:\n> >> > > I couldn't quite decide what wait_event_type to best group this under? In the\n> >> > > attached patch I put it under timeouts, which doesn't seem awful.\n> >> >\n> >> > I think it would be best to make it its own category, like we do with\n> >> > buffer pins.\n> >>\n> >> I was wondering about that too - but decided against it because it would only\n> >> show a single wait event. And wouldn't really describe spinlocks as a whole,\n> >> just the \"extreme\" delays. If we wanted to report the spin waits more\n> >> granular, we'd presumably have to fit the wait events into the lwlock, buffers\n> >> and some new category where we name individual spinlocks.\n> >\n> >+1 for making a group of individual names spin delays.\n>\n> Personally I'm not interested in doing that work, tbh.\n\nOh, then I have no objection to the \"as is\" state, because it doesn't\nexclude the future improvements. 
But this is still my 2 cents though.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Tue, 22 Nov 2022 00:03:23 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: perform_spin_delay() vs wait events" }, { "msg_contents": "Hi,\n\nOn 2022-11-22 00:03:23 +0300, Alexander Korotkov wrote:\n> On Tue, Nov 22, 2022 at 12:01 AM Andres Freund <andres@anarazel.de> wrote:\n> > On November 21, 2022 12:58:16 PM PST, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> > >On Mon, Nov 21, 2022 at 2:10 AM Andres Freund <andres@anarazel.de> wrote:\n> > >+1 for making a group of individual names spin delays.\n> >\n> > Personally I'm not interested in doing that work, tbh.\n> \n> Oh, then I have no objection to the \"as is\" state, because it doesn't\n> exclude the future improvements. But this is still my 2 cents though.\n\nI added a note about possibly extending this in the future to both code and\ncommit message. Attached.\n\nI plan to push this soon unless somebody has further comments.\n\nGreetings,\n\nAndres Freund", "msg_date": "Mon, 21 Nov 2022 19:01:18 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: perform_spin_delay() vs wait events" }, { "msg_contents": "On Mon, Nov 21, 2022 at 07:01:18PM -0800, Andres Freund wrote:\n> I plan to push this soon unless somebody has further comments.\n\n> @@ -146,7 +146,8 @@ typedef enum\n> \tWAIT_EVENT_RECOVERY_RETRIEVE_RETRY_INTERVAL,\n> \tWAIT_EVENT_REGISTER_SYNC_REQUEST,\n> \tWAIT_EVENT_VACUUM_DELAY,\n> -\tWAIT_EVENT_VACUUM_TRUNCATE\n> +\tWAIT_EVENT_VACUUM_TRUNCATE,\n> +\tWAIT_EVENT_SPIN_DELAY\n> } WaitEventTimeout;\n\nThat would be fine for stable branches, but could you keep that in an\nalphabetical order on HEAD?\n--\nMichael", "msg_date": "Tue, 22 Nov 2022 12:51:25 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: perform_spin_delay() vs wait events" }, { 
"msg_contents": "Hi,\n\nOn 2022-11-22 12:51:25 +0900, Michael Paquier wrote:\n> On Mon, Nov 21, 2022 at 07:01:18PM -0800, Andres Freund wrote:\n> > I plan to push this soon unless somebody has further comments.\n> \n> > @@ -146,7 +146,8 @@ typedef enum\n> > \tWAIT_EVENT_RECOVERY_RETRIEVE_RETRY_INTERVAL,\n> > \tWAIT_EVENT_REGISTER_SYNC_REQUEST,\n> > \tWAIT_EVENT_VACUUM_DELAY,\n> > -\tWAIT_EVENT_VACUUM_TRUNCATE\n> > +\tWAIT_EVENT_VACUUM_TRUNCATE,\n> > +\tWAIT_EVENT_SPIN_DELAY\n> > } WaitEventTimeout;\n> \n> That would be fine for stable branches, but could you keep that in an\n> alphabetical order on HEAD?\n\nFair point. I wasn't planning to backpatch.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 21 Nov 2022 20:17:08 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: perform_spin_delay() vs wait events" } ]
[ { "msg_contents": "Hi! Do I get it right, that bitwise operations have the same precedence?\n\nQuery *SELECT 1 & 2 | 3, 3 | 1 & 2*\nreturns 3 and 2 respectively. See also\nhttps://www.db-fiddle.com/f/iZHd8zG7A1HjbB6J2y8R7k/1. It looks like the\nresult is calculated from left to right and operators have\nthe same precedence.\n\nI checked relevant documentation pages (\nhttps://www.postgresql.org/docs/current/functions-bitstring.html and\nhttps://www.postgresql.org/docs/current/sql-syntax-lexical.html) and\ncouldn't find any information about bitwise operations precedence, only\ninformation about logical operations precedence.\n\nI'm not saying it's a bug, rather trying to clarify as precedence of\nbitwise operators is different in programming languages, say c++ (\nhttps://en.cppreference.com/w/c/language/operator_precedence) or java (\nhttps://docs.oracle.com/javase/tutorial/java/nutsandbolts/operators.html)\n\nHi! Do I get it right, that bitwise operations have the same precedence?Query SELECT 1 & 2 | 3, 3 | 1 & 2returns 3 and 2 respectively. See also https://www.db-fiddle.com/f/iZHd8zG7A1HjbB6J2y8R7k/1. 
It looks like the result is calculated from left to right and operators have the same precedence.I checked relevant documentation pages (https://www.postgresql.org/docs/current/functions-bitstring.html and https://www.postgresql.org/docs/current/sql-syntax-lexical.html) and couldn't find any information about bitwise operations precedence, only information about logical operations precedence.I'm not saying it's a bug, rather trying to clarify as precedence of bitwise operators is different in programming languages, say c++ (https://en.cppreference.com/w/c/language/operator_precedence) or java (https://docs.oracle.com/javase/tutorial/java/nutsandbolts/operators.html)", "msg_date": "Sun, 20 Nov 2022 22:27:19 +0100", "msg_from": "Bauyrzhan Sakhariyev <baurzhansahariev@gmail.com>", "msg_from_op": true, "msg_subject": "Precedence of bitwise operators" }, { "msg_contents": "Bauyrzhan Sakhariyev <baurzhansahariev@gmail.com> writes:\n> Hi! Do I get it right, that bitwise operations have the same precedence?\n\nYes, that is what the documentation says:\n\nhttps://www.postgresql.org/docs/current/sql-syntax-lexical.html#SQL-PRECEDENCE\n\nOperator precedence is hard-wired into our parser, so we don't get\nto have a lot of flexibility in assigning precedences for any except\na very small set of operator names.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 20 Nov 2022 16:42:20 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Precedence of bitwise operators" } ]
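Tom's answer can be checked mechanically: if & and | share one precedence level and associate left-to-right, the two SELECTs in the question come out as ((1 & 2) | 3) and ((3 | 1) & 2). A tiny sketch of that evaluation rule (a hypothetical evaluator, not PostgreSQL's parser):

```python
import operator

# All operators sit on one precedence level and the expression is folded
# strictly left to right, as PostgreSQL's grammar does for & and |.
OPS = {"&": operator.and_, "|": operator.or_}

def eval_left_to_right(tokens):
    """tokens is a flat [value, op, value, op, value, ...] list."""
    result = tokens[0]
    for i in range(1, len(tokens), 2):
        result = OPS[tokens[i]](result, tokens[i + 1])
    return result
```

This reproduces the db-fiddle results from the question — 1 & 2 | 3 folds as (1 & 2) | 3 = 3, and 3 | 1 & 2 folds as (3 | 1) & 2 = 2 — unlike C or Java, where & binds tighter than |.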
[ { "msg_contents": "I'm encountering some surprising (to me) behaviour related to WAL, and I'm\nwondering if anybody can point me at an article that might help me\nunderstand what is happening, or give a brief explanation.\n\nI'm trying to make a slimmed down version of my database for testing\npurposes. As part of this, I'm running a query something like this:\n\nUPDATE table1\n SET pdfcolumn = 'redacted'\n WHERE pdfcolumn IS NOT NULL;\n\n(literally 'redacted', not redacted here for your benefit)\n\nThe idea is to replace the actual contents of the column, which are PDF\ndocuments totalling 70GB, with just a short placeholder value, without\naffecting the other columns, which are a more ordinary collection - a few\nintegers and short strings.\n\nThe end result will be a database which is way easier to copy around but\nwhich still has all the records of the original; the only change is that an\nattempt to access one of the PDFs will not return the actual PDF but rather\na garbage value. For most testing this will make little to no difference.\n\nWhat I'm finding is that the UPDATE is taking over an hour for 5000\nrecords, and tons of WAL is being generated, several files per minute.\nSelecting the non-PDF columns from the entire table takes a few\nmilliseconds, and the only thing I'm doing with the records is updating\nthem to much smaller values. Why so much activity just to remove data? The\nnew rows are tiny.\n\nI'm encountering some surprising (to me) behaviour related to WAL, and I'm wondering if anybody can point me at an article that might help me understand what is happening, or give a brief explanation.I'm trying to make a slimmed down version of my database for testing purposes. 
As part of this, I'm running a query something like this:UPDATE table1    SET pdfcolumn = 'redacted'    WHERE pdfcolumn IS NOT NULL;(literally 'redacted', not redacted here for your benefit)The idea is to replace the actual contents of the column, which are PDF documents totalling 70GB, with just a short placeholder value, without affecting the other columns, which are a more ordinary collection - a few integers and short strings.The end result will be a database which is way easier to copy around but which still has all the records of the original; the only change is that an attempt to access one of the PDFs will not return the actual PDF but rather a garbage value. For most testing this will make little to no difference.What I'm finding is that the UPDATE is taking over an hour for 5000 records, and tons of WAL is being generated, several files per minute. Selecting the non-PDF columns from the entire table takes a few milliseconds, and the only thing I'm doing with the records is updating them to much smaller values. Why so much activity just to remove data? The new rows are tiny.", "msg_date": "Sun, 20 Nov 2022 20:24:11 -0500", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": true, "msg_subject": "Understanding WAL - large amount of activity from removing data" }, { "msg_contents": "On Sun, Nov 20, 2022 at 6:24 PM Isaac Morland <isaac.morland@gmail.com>\nwrote:\n\n> What I'm finding is that the UPDATE is taking over an hour for 5000\n> records, and tons of WAL is being generated, several files per minute.\n> Selecting the non-PDF columns from the entire table takes a few\n> milliseconds, and the only thing I'm doing with the records is updating\n> them to much smaller values. Why so much activity just to remove data? 
The\n> new rows are tiny.\n>\n\nSimplistic answer (partly because the second part of this isn't spelled out\nexplicitly in the docs that I could find) when you UPDATE two things\nhappen, the old record is modified to indicate it has been deleted and a\nnew record is inserted. Both of these are written to the WAL, and a record\nis always written to the WAL as a self-contained unit, so the old record is\nfull sized in the newly written WAL. TOAST apparently has an optimization\nif you don't change the TOASTed value, but here you are so that\noptimization doesn't apply.\n\nDavid J.\n\n", "msg_date": "Sun, 20 Nov 2022 19:02:12 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Understanding WAL - large amount of activity from removing data" }, { "msg_contents": "Hi,\n\nOn 2022-11-20 19:02:12 -0700, David G. 
Johnston wrote:\n> Both of these are written to the WAL, and a record is always written\n> to the WAL as a self-contained unit, so the old record is full sized\n> in the newly written WAL.\n\nThat's not really true. Normally the update record just logs the xmax,\noffset, infomask for the old tuple. However, full_page_writes can lead\nto the old tuple's whole page to be logged.\n\nWe do log the old tuple contents if the replica identity of the table is\nset to 'FULL' - if you're using that, we'll indeed log the whole old\nversion of the tuple to the WAL.\n\nI think the more likely explanation in this case is that deleting the\ntoast values with the PDF - which is what you're doing by updating the\nvalue to = 'redacted' - will have to actually mark all those toast\ntuples as deleted. Which then likely is causing a lot of full page\nwrites.\n\nIn a case like this you might have better luck forcing the table to be\nrewritten with something like\n\nALTER TABLE tbl ALTER COLUMN data TYPE text USING ('redacted');\n\nwhich should just drop the old toast table, without going through it\none-by-one.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 21 Nov 2022 12:40:49 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Understanding WAL - large amount of activity from removing data" } ]
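A back-of-the-envelope calculation supports Andres's explanation. Marking every TOAST chunk of 70 GB of PDFs as deleted touches millions of toast-table pages, and with full_page_writes on, the first modification of each page after a checkpoint logs a full page image. The chunk and page sizes below are the usual defaults for an 8 KB block size and are assumptions, not measured values:

```python
# Rough model of the thread's numbers: 70 GB of PDFs stored out of line in TOAST.
TOTAL_PDF_BYTES = 70 * 1024**3
PAGE_BYTES = 8192        # default block size
CHUNK_BYTES = 2000       # approx. TOAST chunk payload with 8 KB pages (assumption)

toast_chunks = TOTAL_PDF_BYTES // CHUNK_BYTES
chunks_per_page = PAGE_BYTES // CHUNK_BYTES
toast_pages = toast_chunks // chunks_per_page

# First post-checkpoint touch of each of those pages emits a full page image:
fpi_wal_bytes = toast_pages * PAGE_BYTES
```

So on the order of the original 70 GB can end up in WAL even though the new rows are tiny — which is why the suggested ALTER TABLE ... USING rewrite, which drops the old toast table wholesale instead of deleting its tuples one by one, is so much cheaper.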
[ { "msg_contents": "Hi Hackers,\n\nforking this thread from the discussion [1] as suggested by Amit.\n\nCatalog_xmin is not advanced when a logical slot is invalidated (lost)\nuntil the invalidated slot is dropped. This patch ignores invalidated slots\nwhile computing the oldest xmin. Attached a small patch to address this and\nthe output after the patch is as shown below.\n\npostgres=# select * from pg_replication_slots;\nslot_name | plugin | slot_type | datoid | database | temporary |\nactive | active_pid | xmin | catalog_xmin | restart_lsn |\nconfirmed_flush_lsn | wal_status | safe_wal_size | two_phase\n-----------+---------------+-----------+--------+----------+-----------+--------+------------+------+--------------+-------------+---------------------+------------+---------------+-----------\ns2 | test_decoding | logical | 5 | postgres | f | f\n| | | 771 | 0/30466368 | 0/304663A0\n| reserved | 28903824 | f\n(1 row)\n\npostgres=# create table t2(c int, c1 char(100));\nCREATE TABLE\npostgres=# drop table t2;\nDROP TABLE\npostgres=# vacuum pg_class;\nVACUUM\npostgres=# select n_dead_tup from pg_stat_all_tables where relname =\n'pg_class';\nn_dead_tup\n------------\n2\n(1 row)\n\npostgres=# select * from pg_stat_replication;\npid | usesysid | usename | application_name | client_addr |\nclient_hostname | client_port | backend_start | backend_xmin | state |\nsent_lsn | write_lsn | flush_lsn | replay_lsn | write_lag | flush_lag |\nreplay_lag | sync_pri\nority | sync_state | reply_time\n-----+----------+---------+------------------+-------------+-----------------+-------------+---------------+--------------+-------+----------+-----------+-----------+------------+-----------+-----------+------------+---------\n------+------------+------------\n(0 rows)\n\npostgres=# insert into t1 select * from t1;\nINSERT 0 2097152\npostgres=# checkpoint;\nCHECKPOINT\npostgres=# select * from pg_replication_slots;\nslot_name | plugin | slot_type | datoid | database | temporary 
|\nactive | active_pid | xmin | catalog_xmin | restart_lsn |\nconfirmed_flush_lsn | wal_status | safe_wal_size | two_phase\n-----------+---------------+-----------+--------+----------+-----------+--------+------------+------+--------------+-------------+---------------------+------------+---------------+-----------\ns2 | test_decoding | logical | 5 | postgres | f | f\n| | | 771 | | 0/304663A0\n| lost | | f\n(1 row)\n\npostgres=# vacuum pg_class;\nVACUUM\npostgres=# select n_dead_tup from pg_stat_all_tables where relname =\n'pg_class';\nn_dead_tup\n------------\n0\n(1 row)\n\n\n[1]\nhttps://www.postgresql.org/message-id/flat/CAKrAKeW-sGqvkw-2zKuVYiVv%3DEOG4LEqJn01RJPsHfS2rQGYng%40mail.gmail.com\n\nThanks,\nSirisha", "msg_date": "Sun, 20 Nov 2022 22:57:32 -0800", "msg_from": "sirisha chamarthi <sirichamarthi22@gmail.com>", "msg_from_op": true, "msg_subject": "Catalog_xmin is not advanced when a logical slot is lost" }, { "msg_contents": "On 2022-Nov-20, sirisha chamarthi wrote:\n\n> Hi Hackers,\n> \n> forking this thread from the discussion [1] as suggested by Amit.\n> \n> Catalog_xmin is not advanced when a logical slot is invalidated (lost)\n> until the invalidated slot is dropped. This patch ignores invalidated slots\n> while computing the oldest xmin. 
Attached a small patch to address this and\n> the output after the patch is as shown below.\n\nOh wow, that's bad :-( I'll get it patched immediately.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Here's a general engineering tip: if the non-fun part is too complex for you\nto figure out, that might indicate the fun part is too ambitious.\" (John Naylor)\nhttps://postgr.es/m/CAFBsxsG4OWHBbSDM%3DsSeXrQGOtkPiOEOuME4yD7Ce41NtaAD9g%40mail.gmail.com\n\n\n", "msg_date": "Mon, 21 Nov 2022 10:18:58 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Catalog_xmin is not advanced when a logical slot is lost" }, { "msg_contents": "Hi Sirisha,\n\nThanks for identifying the bug and the solution. Some review comments inlined.\n\nOn Mon, Nov 21, 2022 at 2:49 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2022-Nov-20, sirisha chamarthi wrote:\n>\n> > Hi Hackers,\n> >\n> > forking this thread from the discussion [1] as suggested by Amit.\n> >\n> > Catalog_xmin is not advanced when a logical slot is invalidated (lost)\n> > until the invalidated slot is dropped. This patch ignores invalidated slots\n> > while computing the oldest xmin. 
Attached a small patch to address this and\n> > the output after the patch is as shown below.\n>\n> Oh wow, that's bad :-( I'll get it patched immediately.\n\n+ /* ignore invalid slots while computing the oldest xmin */\n+ if (TransactionIdIsValid(invalidated_at_lsn))\n+ continue;\n\nI think the condition should be\n\nif (!XLogRecPtrIsInvalid(invalidated_at_lsn)) LSN and XID are\ndifferent data types.\n\nand to be inline with pg_get_replication_slots()\n361 if (XLogRecPtrIsInvalid(slot_contents.data.restart_lsn) &&\n362 !XLogRecPtrIsInvalid(slot_contents.data.invalidated_at))\n363 walstate = WALAVAIL_REMOVED;\n\nwe should also check restart_lsn.\n\nI would write this as\n\nbool invalidated_slot = false;\n\nthen under spinlock\ninvalidated_slot = XLogRecPtrIsInvalid(s->data.restart_lsn) &&\n!XLogRecPtrIsInvalid(s->data.invalidated_at);\n\nif (invalidated_slot)\ncontinue.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Mon, 21 Nov 2022 16:57:03 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Catalog_xmin is not advanced when a logical slot is lost" }, { "msg_contents": "On 2022-Nov-21, Ashutosh Bapat wrote:\n\n> I think the condition should be\n> \n> if (!XLogRecPtrIsInvalid(invalidated_at_lsn)) LSN and XID are\n> different data types.\n\nYeah, this bit is wrong. I agree with your suggestion to just keep a\nboolean flag, as we don't need more than that.\n\n> and to be inline with pg_get_replication_slots()\n> 361 if (XLogRecPtrIsInvalid(slot_contents.data.restart_lsn) &&\n> 362 !XLogRecPtrIsInvalid(slot_contents.data.invalidated_at))\n> 363 walstate = WALAVAIL_REMOVED;\n> \n> we should also check restart_lsn.\n\nHmm, I'm not sure about this one. I'm not sure why we check both in\npg_get_replication_slots. 
I suppose we didn't want to ignore a slot\nonly if it had a non-zero invalidated_at in case it was accidental (say,\nwe initialize a slot as valid, but forget to zero-out the invalidated_at\nvalue); but I think that's pretty much useless. This is only changed\nwith the spinlock held, so it's not like you can see partially-set\nstate.\n\nIn fact, as I recall we could replace invalidated_at in\nReplicationSlotPersistentData with a boolean \"invalidated\" flag, and\nleave restart_lsn alone when invalidated. IIRC the only reason we\ndidn't do it that way was that we feared some code might observe some\nvalid value in restart_lsn without noticing that it belonged to an\ninvalidate slot. (Which is exactly what happened now, except with a\ndifferent field.)\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Mon, 21 Nov 2022 13:09:08 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Catalog_xmin is not advanced when a logical slot is lost" }, { "msg_contents": "On Mon, Nov 21, 2022 at 5:39 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2022-Nov-21, Ashutosh Bapat wrote:\n>\n> > I think the condition should be\n> >\n> > if (!XLogRecPtrIsInvalid(invalidated_at_lsn)) LSN and XID are\n> > different data types.\n>\n> Yeah, this bit is wrong. I agree with your suggestion to just keep a\n> boolean flag, as we don't need more than that.\n>\n> > and to be inline with pg_get_replication_slots()\n> > 361 if (XLogRecPtrIsInvalid(slot_contents.data.restart_lsn) &&\n> > 362 !XLogRecPtrIsInvalid(slot_contents.data.invalidated_at))\n> > 363 walstate = WALAVAIL_REMOVED;\n> >\n> > we should also check restart_lsn.\n>\n> Hmm, I'm not sure about this one. I'm not sure why we check both in\n> pg_get_replication_slots. 
I suppose we didn't want to ignore a slot\n> only if it had a non-zero invalidated_at in case it was accidental (say,\n> we initialize a slot as valid, but forget to zero-out the invalidated_at\n> value); but I think that's pretty much useless. This is only changed\n> with the spinlock held, so it's not like you can see partially-set\n> state.\n>\n> In fact, as I recall we could replace invalidated_at in\n> ReplicationSlotPersistentData with a boolean \"invalidated\" flag, and\n> leave restart_lsn alone when invalidated. IIRC the only reason we\n> didn't do it that way was that we feared some code might observe some\n> valid value in restart_lsn without noticing that it belonged to an\n> invalidate slot. (Which is exactly what happened now, except with a\n> different field.)\n>\n\nMaybe. In that case pg_get_replication_slots() should be changed. We\nshould use the same criteria to decide whether a slot is invalidated\nor not at all the places.\nI am a fan of stricter, all-assumption-covering conditions. In case we\ndon't want to check restart_lsn, an Assert might be useful to validate\nour assumption.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Mon, 21 Nov 2022 18:00:47 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Catalog_xmin is not advanced when a logical slot is lost" }, { "msg_contents": "On 2022-Nov-21, Ashutosh Bapat wrote:\n\n> Maybe. In that case pg_get_replication_slots() should be changed. We\n> should use the same criteria to decide whether a slot is invalidated\n> or not at all the places.\n\nRight.\n\n> I am a fan of stricter, all-assumption-covering conditions. In case we\n> don't want to check restart_lsn, an Assert might be useful to validate\n> our assumption.\n\nAgreed. 
I'll throw in an assert.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Mon, 21 Nov 2022 15:19:56 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Catalog_xmin is not advanced when a logical slot is lost" }, { "msg_contents": "Thanks Alvaro, Ashutosh for your comments.\n\nOn Mon, Nov 21, 2022 at 6:20 AM Alvaro Herrera <alvherre@alvh.no-ip.org>\nwrote:\n\n> On 2022-Nov-21, Ashutosh Bapat wrote:\n>\n> > Maybe. In that case pg_get_replication_slots() should be changed. We\n> > should use the same criteria to decide whether a slot is invalidated\n> > or not at all the places.\n>\n> Right.\n>\n\nAgreed.\n\n\n>\n> > I am a fan of stricter, all-assumption-covering conditions. In case we\n> > don't want to check restart_lsn, an Assert might be useful to validate\n> > our assumption.\n>\n> Agreed. I'll throw in an assert.\n>\n\nChanged this in the patch to throw an assert.\n\n\n> --\n> Álvaro Herrera PostgreSQL Developer —\n> https://www.EnterpriseDB.com/\n>", "msg_date": "Mon, 21 Nov 2022 07:18:09 -0800", "msg_from": "sirisha chamarthi <sirichamarthi22@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Catalog_xmin is not advanced when a logical slot is lost" }, { "msg_contents": "On 2022-Nov-21, sirisha chamarthi wrote:\n\n> > > I am a fan of stricter, all-assumption-covering conditions. In case we\n> > > don't want to check restart_lsn, an Assert might be useful to validate\n> > > our assumption.\n> >\n> > Agreed. I'll throw in an assert.\n> \n> Changed this in the patch to throw an assert.\n\nThank you. I had pushed mine for CirrusCI to test, and it failed the\nassert I added in slot.c:\nhttps://cirrus-ci.com/build/4786354503548928\nNot yet sure why, looking into it.\n\nYou didn't add any asserts to the slot.c code.\n\nIn slotfuncs.c, I'm not sure I want to assert anything about restart_lsn\nin any cases other than when invalidated_at is set. 
In other words, I\nprefer this coding in pg_get_replication_slots:\n\n\t\tif (!XLogRecPtrIsInvalid(slot_contents.data.invalidated_at))\n\t\t{\n\t\t\tAssert(XLogRecPtrIsInvalid(slot_contents.data.restart_lsn));\n\t\t\twalstate = WALAVAIL_REMOVED;\n\t\t}\n\t\telse\n\t\t\twalstate = GetWALAvailability(slot_contents.data.restart_lsn);\n\n\nYour proposal is doing this:\n\n\t\tswitch (walstate)\n\t\t{\n\t\t\t[...]\n\t\t\tcase WALAVAIL_REMOVED:\n\n\t\t\t\tif (!XLogRecPtrIsInvalid(slot_contents.data.restart_lsn))\n\t\t\t\t{\n\t\t\t\t\t[...]\n\t\t\t\t\tif (pid != 0)\n\t\t\t\t\t\t[...] break;\n\t\t\t\t}\n\t\t\t\tAssert(XLogRecPtrIsInvalid(slot_contents.data.restart_lsn));\n\nwhich sounds like it could be hit if the replica is connected to the\nslot.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Mon, 21 Nov 2022 17:05:10 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Catalog_xmin is not advanced when a logical slot is lost" }, { "msg_contents": "On Mon, Nov 21, 2022 at 8:05 AM Alvaro Herrera <alvherre@alvh.no-ip.org>\nwrote:\n\n> On 2022-Nov-21, sirisha chamarthi wrote:\n>\n> > > > I am a fan of stricter, all-assumption-covering conditions. In case\n> we\n> > > > don't want to check restart_lsn, an Assert might be useful to\n> validate\n> > > > our assumption.\n> > >\n> > > Agreed. I'll throw in an assert.\n> >\n> > Changed this in the patch to throw an assert.\n>\n> Thank you. I had pushed mine for CirrusCI to test, and it failed the\n> assert I added in slot.c:\n> https://cirrus-ci.com/build/4786354503548928\n> Not yet sure why, looking into it.\n>\n\nCan this be because restart_lsn is not set to InvalidXLogRecPtr for the\nphysical slots? 
My repro is as follows:\n\nselect pg_create_physical_replication_slot('s5');\n// Load some data to invalidate slot\npostgres@pgvm:~$ /usr/local/pgsql/bin/pg_receivewal -S s5 -D .\npg_receivewal: error: unexpected termination of replication stream: ERROR:\n requested WAL segment 0000000100000000000000EB has already been removed\npg_receivewal: disconnected; waiting 5 seconds to try again\npg_receivewal: error: unexpected termination of replication stream: ERROR:\n requested WAL segment 0000000100000000000000EB has already been removed\npg_receivewal: disconnected; waiting 5 seconds to try again\n^Cpostgres@pgvm:~$ /usr/local/pgsql/bin/psql\npsql (16devel)\nType \"help\" for help.\n\npostgres=# select * from pg_replication_slots;\n slot_name | plugin | slot_type | datoid | database | temporary |\nactive | active_pid | xmin | catalog_xmin | restart_lsn |\nconfirmed_flush_lsn | wal_status | safe_wal_size | two_phase\n-----------+---------------+-----------+--------+----------+-----------+--------+------------+------+--------------+-------------+---------------------+------------+---------------+-----------\n s3 | test_decoding | logical | 5 | postgres | f | f\n | | | 769 | | 0/A992E7D0\n | lost | | f\n s5 | | physical | | | f | f\n | | | | 0/EB000000 |\n| lost | | f\n\n\n\n>\n>\n> --\n> Álvaro Herrera 48°01'N 7°57'E —\n> https://www.EnterpriseDB.com/\n> \"No es bueno caminar con un hombre muerto\"\n>\n", "msg_date": "Mon, 21 Nov 2022 08:49:08 -0800", "msg_from": "sirisha chamarthi <sirichamarthi22@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Catalog_xmin is not advanced when a logical 
slot is lost" }, { "msg_contents": "On 2022-Nov-21, sirisha chamarthi wrote:\n\n> On Mon, Nov 21, 2022 at 8:05 AM Alvaro Herrera <alvherre@alvh.no-ip.org>\n> wrote:\n\n> > Thank you. I had pushed mine for CirrusCI to test, and it failed the\n> > assert I added in slot.c:\n> > https://cirrus-ci.com/build/4786354503548928\n> > Not yet sure why, looking into it.\n> \n> Can this be because restart_lsn is not set to InvalidXLogRecPtr for the\n> physical slots?\n\nHmm, that makes no sense. Is that yet another bug? Looking.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"No es bueno caminar con un hombre muerto\"\n\n\n", "msg_date": "Mon, 21 Nov 2022 18:12:54 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Catalog_xmin is not advanced when a logical slot is lost" }, { "msg_contents": "On Mon, Nov 21, 2022 at 9:12 AM Alvaro Herrera <alvherre@alvh.no-ip.org>\nwrote:\n\n> On 2022-Nov-21, sirisha chamarthi wrote:\n>\n> > On Mon, Nov 21, 2022 at 8:05 AM Alvaro Herrera <alvherre@alvh.no-ip.org>\n> > wrote:\n>\n> > > Thank you. I had pushed mine for CirrusCI to test, and it failed the\n> > > assert I added in slot.c:\n> > > https://cirrus-ci.com/build/4786354503548928\n> > > Not yet sure why, looking into it.\n> >\n> > Can this be because restart_lsn is not set to InvalidXLogRecPtr for the\n> > physical slots?\n>\n> Hmm, that makes no sense. Is that yet another bug? Looking.\n>\n\nIt appears to be. 
wal_sender is setting restart_lsn to a valid LSN even\nwhen the slot is invalidated.\n\npostgres=# select pg_Create_physical_replication_slot('s1');\n pg_create_physical_replication_slot\n-------------------------------------\n (s1,)\n(1 row)\n\npostgres=# select * from pg_replication_slots;\n slot_name | plugin | slot_type | datoid | database | temporary | active |\nactive_pid | xmin | catalog_xmin | restart_lsn | confirmed_flush_lsn |\nwal_status | safe_wal_size | two_phase\n-----------+--------+-----------+--------+----------+-----------+--------+------------+------+--------------+-------------+---------------------+------------+---------------+-----------\n s1 | | physical | | | f | f |\n | | | | |\n | -8254390272 | f\n(1 row)\n\npostgres=# checkpoint;\nCHECKPOINT\npostgres=# select * from pg_replication_slots;\n slot_name | plugin | slot_type | datoid | database | temporary | active |\nactive_pid | xmin | catalog_xmin | restart_lsn | confirmed_flush_lsn |\nwal_status | safe_wal_size | two_phase\n-----------+--------+-----------+--------+----------+-----------+--------+------------+------+--------------+-------------+---------------------+------------+---------------+-----------\n s1 | | physical | | | f | f |\n | | | | |\n | -8374095064 | f\n(1 row)\n\npostgres=# \\q\npostgres@pgvm:~$ /usr/local/pgsql/bin/pg_receivewal -S s1 -D .\npg_receivewal: error: unexpected termination of replication stream: ERROR:\n requested WAL segment 0000000100000000000000EB has already been removed\npg_receivewal: disconnected; waiting 5 seconds to try again\n^Cpostgres@pgvm:~$ /usr/local/pgsql/bin/psql\npsql (16devel)\nType \"help\" for help.\n\npostgres=# select * from pg_replication_slots;\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\nThe connection to the server was lost. 
Attempting reset: Failed.\n!?> ^C\n!?>\n\n\nIn the log:\n2022-11-21 17:31:48.159 UTC [3953664] STATEMENT: START_REPLICATION SLOT\n\"s1\" 0/EB000000 TIMELINE 1\nTRAP: failed Assert(\"XLogRecPtrIsInvalid(slot_contents.data.restart_lsn)\"),\nFile: \"slotfuncs.c\", Line: 371, PID: 3953707\n\n\n>\n>\n> --\n> Álvaro Herrera 48°01'N 7°57'E —\n> https://www.EnterpriseDB.com/\n> \"No es bueno caminar con un hombre muerto\"\n>\n", "msg_date": "Mon, 21 Nov 2022 09:35:53 -0800", "msg_from": "sirisha chamarthi <sirichamarthi22@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Catalog_xmin is not advanced when a logical slot is lost" }, { "msg_contents": "On 2022-Nov-21, sirisha chamarthi wrote:\n\n> It appears to be. wal_sender is setting restart_lsn to a valid LSN even\n> when the slot is invalidated.\n\n> postgres@pgvm:~$ /usr/local/pgsql/bin/pg_receivewal -S s1 -D .\n> pg_receivewal: error: unexpected termination of replication stream: ERROR:\n> requested WAL segment 0000000100000000000000EB has already been removed\n> pg_receivewal: disconnected; waiting 5 seconds to try again\n> ^Cpostgres@pgvm:~$ /usr/local/pgsql/bin/psql\n> psql (16devel)\n> Type \"help\" for help.\n> \n> postgres=# select * from pg_replication_slots;\n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n\nWhoa, I cannot reproduce this :-(\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Java is clearly an example of money oriented programming\" (A. 
Stepanov)\n\n\n", "msg_date": "Mon, 21 Nov 2022 19:11:13 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Catalog_xmin is not advanced when a logical slot is lost" }, { "msg_contents": "On Mon, Nov 21, 2022 at 10:11 AM Alvaro Herrera <alvherre@alvh.no-ip.org>\nwrote:\n\n> On 2022-Nov-21, sirisha chamarthi wrote:\n>\n> > It appears to be. wal_sender is setting restart_lsn to a valid LSN even\n> > when the slot is invalidated.\n>\n> > postgres@pgvm:~$ /usr/local/pgsql/bin/pg_receivewal -S s1 -D .\n> > pg_receivewal: error: unexpected termination of replication stream:\n> ERROR:\n> > requested WAL segment 0000000100000000000000EB has already been removed\n> > pg_receivewal: disconnected; waiting 5 seconds to try again\n> > ^Cpostgres@pgvm:~$ /usr/local/pgsql/bin/psql\n> > psql (16devel)\n> > Type \"help\" for help.\n> >\n> > postgres=# select * from pg_replication_slots;\n> > server closed the connection unexpectedly\n> > This probably means the server terminated abnormally\n> > before or while processing the request.\n>\n> Whoa, I cannot reproduce this :-(\n>\n\nI have a old .partial file in the data directory to reproduce this.\n\npostgres=# select * from pg_replication_slots;\n slot_name | plugin | slot_type | datoid | database | temporary | active |\nactive_pid | xmin | catalog_xmin | restart_lsn | confirmed_flush_lsn |\nwal_status | safe_wal_size | two_phase\n-----------+--------+-----------+--------+----------+-----------+--------+------------+------+--------------+-------------+---------------------+------------+---------------+-----------\n s2 | | physical | | | f | f |\n | | | 2/DC000000 | | lost\n | | f\n(1 row)\n\npostgres=# \\q\npostgres@pgvm:~$ ls\n0000000100000002000000D8 0000000100000002000000D9\n 0000000100000002000000DA 0000000100000002000000DB\n 0000000100000002000000DC.partial\n\n\n>\n> --\n> Álvaro Herrera Breisgau, Deutschland —\n> https://www.EnterpriseDB.com/\n> \"Java is clearly an 
example of money oriented programming\" (A. Stepanov)\n>\n", "msg_date": "Mon, 21 Nov 2022 10:40:08 -0800", "msg_from": "sirisha chamarthi <sirichamarthi22@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Catalog_xmin is not advanced when a logical slot is lost" }, { "msg_contents": "On Mon, Nov 21, 2022 at 10:40 AM sirisha chamarthi <\nsirichamarthi22@gmail.com> wrote:\n\n>\n>\n> On Mon, Nov 21, 2022 at 10:11 AM Alvaro Herrera <alvherre@alvh.no-ip.org>\n> wrote:\n>\n>> On 2022-Nov-21, sirisha chamarthi wrote:\n>>\n>> > It appears to be. wal_sender is setting restart_lsn to a valid LSN even\n>> > when the slot is invalidated.\n>>\n>> > postgres@pgvm:~$ /usr/local/pgsql/bin/pg_receivewal -S s1 -D .\n>> > pg_receivewal: error: unexpected termination of replication stream:\n>> ERROR:\n>> > requested WAL segment 0000000100000000000000EB has already been removed\n>> > pg_receivewal: disconnected; waiting 5 seconds to try again\n>> > ^Cpostgres@pgvm:~$ /usr/local/pgsql/bin/psql\n>> > psql (16devel)\n>> > Type \"help\" for help.\n>> >\n>> > postgres=# select * from pg_replication_slots;\n>> > server closed the connection unexpectedly\n>> > This probably means the server terminated abnormally\n>> > before or while processing the request.\n>>\n>> Whoa, I cannot reproduce this :-(\n>>\n>\n> I have a old .partial file in the data directory to reproduce this.\n>\n> postgres=# select * from pg_replication_slots;\n> slot_name | plugin | slot_type | datoid | database | temporary | active |\n> active_pid | xmin | catalog_xmin | restart_lsn | confirmed_flush_lsn |\n> wal_status | safe_wal_size | two_phase\n>\n> -----------+--------+-----------+--------+----------+-----------+--------+------------+------+--------------+-------------+---------------------+------------+---------------+-----------\n> s2 | | physical | | | f | f |\n> | | | 2/DC000000 | | lost\n> | | f\n> (1 row)\n>\n> postgres=# \\q\n> postgres@pgvm:~$ ls\n> 0000000100000002000000D8 0000000100000002000000D9\n> 0000000100000002000000DA 0000000100000002000000DB\n> 
0000000100000002000000DC.partial\n>\n\nJust to be clear, it was hitting the assert I added in the slotfuncs.c but\nnot in the code you mentioned. Apologies for the confusion. Also it appears\nin the above case I mentioned, the slot is not invalidated yet as the\ncheckpointer did not run though the state says it is lost.\n\n\n\n>\n>\n>>\n>> --\n>> Álvaro Herrera Breisgau, Deutschland —\n>> https://www.EnterpriseDB.com/\n>> \"Java is clearly an example of money oriented programming\" (A. Stepanov)\n>>\n>\n", "msg_date": "Mon, 21 Nov 2022 10:48:55 -0800", "msg_from": "sirisha chamarthi <sirichamarthi22@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Catalog_xmin is not advanced when a logical slot is lost" }, { "msg_contents": "On 2022-Nov-21, sirisha chamarthi wrote:\n\n> I have a old .partial file in the data directory to reproduce this.\n\nI don't think the .partial file is in itself important. But I think\nthis whole thing is a distraction. I managed to reproduce it\neventually, by messing with the slot and WAL at random, and my\nconclusion is that we shouldn't mess with this at all for this bugfix.\nInstead I'm going to do what Ashutosh mentioned at the start, which is\nto verify both the restart_lsn and the invalidated_at, when deciding\nwhether to ignore the slot.\n\nIt seems to me that there is a bigger mess here, considering that we use\nthe effective_xmin in some places and the other xmin (the one that's\nsaved to disk) in others. 
I have no patience for trying to disentangle\nthat at this point, though.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Having your biases confirmed independently is how scientific progress is\nmade, and hence made our great society what it is today\" (Mary Gardiner)\n\n\n", "msg_date": "Mon, 21 Nov 2022 19:56:39 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Catalog_xmin is not advanced when a logical slot is lost" }, { "msg_contents": "On Mon, Nov 21, 2022 at 10:56 AM Alvaro Herrera <alvherre@alvh.no-ip.org>\nwrote:\n\n> On 2022-Nov-21, sirisha chamarthi wrote:\n>\n> > I have a old .partial file in the data directory to reproduce this.\n>\n> I don't think the .partial file is in itself important. But I think\n> this whole thing is a distraction.\n\nYes, sorry for the confusion.\n\n\n> I managed to reproduce it\n> eventually, by messing with the slot and WAL at random, and my\n> conclusion is that we shouldn't mess with this at all for this bugfix.\n>\n\nAgreed.\n\n\n> Instead I'm going to do what Ashutosh mentioned at the start, which is\n> to verify both the restart_lsn and the invalidated_at, when deciding\n> whether to ignore the slot.\n>\n\nSounds good to me. Thanks!\n\n\n>\n> It seems to me that there is a bigger mess here, considering that we use\n> the effective_xmin in some places and the other xmin (the one that's\n> saved to disk) in others. 
I have no patience for trying to disentangle\n> that at this point, though.\n\n\n> --\n> Álvaro Herrera PostgreSQL Developer —\n> https://www.EnterpriseDB.com/\n> \"Having your biases confirmed independently is how scientific progress is\n> made, and hence made our great society what it is today\" (Mary Gardiner)\n>\n", "msg_date": "Mon, 21 Nov 2022 11:02:45 -0800", "msg_from": "sirisha chamarthi <sirichamarthi22@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Catalog_xmin is not advanced when a logical slot is lost" }, { "msg_contents": "On 2022-Nov-21, sirisha chamarthi wrote:\n\n> On Mon, Nov 21, 2022 at 10:56 AM Alvaro Herrera <alvherre@alvh.no-ip.org>\n> wrote:\n\n> > Instead I'm going to do what Ashutosh mentioned at the start, which is\n> > to verify both the restart_lsn and the invalidated_at, when deciding\n> > whether to ignore the slot.\n> \n> Sounds good to me. Thanks!\n\nDone now. I also added a new elog(DEBUG1), which I think makes the issue a\nbit easier to notice.\n\nI think it would be even better if we reset the underlying data from\neffective_catalog_xmin ... even with this patch, we show a non-zero\nvalue for a slot in status \"lost\" (and we ignore it when computing the\noverall xmin), which I think is quite confusing. But we can do that in\nmaster only.\n\nThanks for reporting this issue.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/", "msg_date": "Tue, 22 Nov 2022 11:01:08 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Catalog_xmin is not advanced when a logical slot is lost" } ]
[ { "msg_contents": "Hi Hackers,\n\nThe comments atop seem to indicate that it is only showing active\nreplication slots. The comment is ambiguous as it also shows all the slots\nincluding lost and inactive slots. Attached a small patch to fix it.\n\nThanks,\nSirisha", "msg_date": "Sun, 20 Nov 2022 23:15:57 -0800", "msg_from": "sirisha chamarthi <sirichamarthi22@gmail.com>", "msg_from_op": true, "msg_subject": "Fix comments atop pg_get_replication_slots" }, { "msg_contents": "On Mon, Nov 21, 2022 at 12:45 PM sirisha chamarthi\n<sirichamarthi22@gmail.com> wrote:\n>\n> Hi Hackers,\n>\n> The comments atop seem to indicate that it is only showing active replication slots. The comment is ambiguous as it also shows all the slots including lost and inactive slots. Attached a small patch to fix it.\n>\n\nI agree that it is a bit confusing. How about \"SQL SRF showing all\nreplication slots that currently exist on the database cluster\"?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 21 Nov 2022 13:08:12 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix comments atop pg_get_replication_slots" }, { "msg_contents": "Amit, thanks for looking into this!\n\nOn Sun, Nov 20, 2022 at 11:38 PM Amit Kapila <amit.kapila16@gmail.com>\nwrote:\n\n> On Mon, Nov 21, 2022 at 12:45 PM sirisha chamarthi\n> <sirichamarthi22@gmail.com> wrote:\n> >\n> > Hi Hackers,\n> >\n> > The comments atop seem to indicate that it is only showing active\n> replication slots. The comment is ambiguous as it also shows all the slots\n> including lost and inactive slots. Attached a small patch to fix it.\n> >\n>\n> I agree that it is a bit confusing. How about \"SQL SRF showing all\n> replication slots that currently exist on the database cluster\"?\n>\n\nLooks good to me. 
Attached a patch for the same.\n\n\n>\n> --\n> With Regards,\n> Amit Kapila.\n>", "msg_date": "Sun, 20 Nov 2022 23:52:59 -0800", "msg_from": "sirisha chamarthi <sirichamarthi22@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix comments atop pg_get_replication_slots" }, { "msg_contents": "On Mon, Nov 21, 2022 at 1:22 PM sirisha chamarthi\n<sirichamarthi22@gmail.com> wrote:\n>\n> Looks good to me. Attached a patch for the same.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 22 Nov 2022 17:02:35 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix comments atop pg_get_replication_slots" } ]
[ { "msg_contents": "Hi Hackers,\n\nAt present, calling pg_stat_reset* functions requires super user access\nunless explicitly grant execute permission on those. In this thread, I am\nproposing to grant execute on them to users with pg_monitor role\npermissions. This comes handy to the monitoring users (part of pg_monitor\nrole) to capture the stats fresh and analyze. Do you see any concerns with\nthis approach?\n\nThanks,\nSirisha\n", "msg_date": "Mon, 21 Nov 2022 00:16:20 -0800", "msg_from": "sirisha chamarthi <sirichamarthi22@gmail.com>", "msg_from_op": true, "msg_subject": "Proposal: Allow user with pg_monitor role to call pg_stat_reset*\n functions" }, { "msg_contents": "Hi,\n\nOn 2022-11-21 00:16:20 -0800, sirisha chamarthi wrote:\n> At present, calling pg_stat_reset* functions requires super user access\n> unless explicitly grant execute permission on those. In this thread, I am\n> proposing to grant execute on them to users with pg_monitor role\n> permissions. This comes handy to the monitoring users (part of pg_monitor\n> role) to capture the stats fresh and analyze. Do you see any concerns with\n> this approach?\n\nI think the common assumption is that a monitoring role cannot modify\nthe system, but this would change that. 
Normally a monitoring tool\nshould be able to figure out what changed in stats by comparing values\nacross time, rather than resetting stats.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 21 Nov 2022 12:45:39 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Proposal: Allow user with pg_monitor role to call pg_stat_reset*\n functions" }, { "msg_contents": "On Mon, Nov 21, 2022 at 3:45 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-11-21 00:16:20 -0800, sirisha chamarthi wrote:\n> > At present, calling pg_stat_reset* functions requires super user access\n> > unless explicitly grant execute permission on those. In this thread, I am\n> > proposing to grant execute on them to users with pg_monitor role\n> > permissions. This comes handy to the monitoring users (part of pg_monitor\n> > role) to capture the stats fresh and analyze. Do you see any concerns with\n> > this approach?\n>\n> I think the common assumption is that a monitoring role cannot modify\n> the system, but this would change that. Normally a monitoring tool\n> should be able to figure out what changed in stats by comparing values\n> across time, rather than resetting stats.\n\n+1.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 21 Nov 2022 15:47:38 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: Allow user with pg_monitor role to call pg_stat_reset*\n functions" }, { "msg_contents": "On Mon, Nov 21, 2022 at 03:47:38PM -0500, Robert Haas wrote:\n> On Mon, Nov 21, 2022 at 3:45 PM Andres Freund <andres@anarazel.de> wrote:\n> > On 2022-11-21 00:16:20 -0800, sirisha chamarthi wrote:\n> > > At present, calling pg_stat_reset* functions requires super user access\n> > > unless explicitly grant execute permission on those. In this thread, I am\n> > > proposing to grant execute on them to users with pg_monitor role\n> > > permissions. 
This comes handy to the monitoring users (part of pg_monitor\n> > > role) to capture the stats fresh and analyze. Do you see any concerns with\n> > > this approach?\n> >\n> > I think the common assumption is that a monitoring role cannot modify\n> > the system, but this would change that. Normally a monitoring tool\n> > should be able to figure out what changed in stats by comparing values\n> > across time, rather than resetting stats.\n> \n> +1.\n\nThis can have a negative impact on autovacuum, so +1 too.\n\n\n", "msg_date": "Tue, 22 Nov 2022 11:06:52 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: Allow user with pg_monitor role to call pg_stat_reset*\n functions" }, { "msg_contents": "On Tue, Nov 22, 2022 at 2:15 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-11-21 00:16:20 -0800, sirisha chamarthi wrote:\n> > At present, calling pg_stat_reset* functions requires super user access\n> > unless explicitly grant execute permission on those. In this thread, I am\n> > proposing to grant execute on them to users with pg_monitor role\n> > permissions. This comes handy to the monitoring users (part of pg_monitor\n> > role) to capture the stats fresh and analyze. Do you see any concerns with\n> > this approach?\n>\n> I think the common assumption is that a monitoring role cannot modify\n> the system, but this would change that. Normally a monitoring tool\n> should be able to figure out what changed in stats by comparing values\n> across time, rather than resetting stats.\n\n+1.\n\nA bit more info: AFAICS, there are no explicit if (!superuser()) {\nerror;} checks in any of pg_stat_reset* functions, which means, one\ncan still grant execute permissions on them to anyone (any predefined\nroles, non-superusers or other superusers) via a superuser outside of\npostgres, if that's the use case [1] [2]. That's the flexibility\npostgres provides for some of the system functions but not all. 
Most\nof the extension functions and some core functions pg_nextoid,\npg_stop_making_pinned_objects, pg_rotate_logfile,\npg_import_system_collations, pg_cancel_backend, pg_terminate_backend,\npg_read_file still have such explicit if (!superuser()) { error;}\nchecks. I'm not sure if it's the right time to remove such explicit\nchecks and move to explicit GRANT-REVOKE system.\n\nFWIW, here's a recent commit f0b051e322d530a340e62f2ae16d99acdbcb3d05.\n\n[1]\n--\n-- The default permissions for functions mean that anyone can execute them.\n-- A number of functions shouldn't be executable by just anyone, but rather\n-- than use explicit 'superuser()' checks in those functions, we use the GRANT\n-- system to REVOKE access to those functions at initdb time. Administrators\n-- can later change who can access these functions, or leave them as only\n-- available to superuser / cluster owner, if they choose.\n--\n\n[2]\npostgres=# create role foo with nosuperuser;\nCREATE ROLE\npostgres=# set role foo;\nSET\npostgres=> select pg_stat_reset();\nERROR: permission denied for function pg_stat_reset\npostgres=> reset role;\nRESET\npostgres=# grant execute on function pg_stat_reset() to foo;\nGRANT\npostgres=# set role foo;\nSET\npostgres=> select pg_stat_reset();\n pg_stat_reset\n---------------\n\n(1 row)\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 22 Nov 2022 11:13:22 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: Allow user with pg_monitor role to call pg_stat_reset*\n functions" } ]
[ { "msg_contents": "Hi, hackers!\n\nIt is important for customer support to know what system operations \n(pg_resetwal, pg_rewind, pg_upgrade, ...) have been executed on the \ndatabase. A variant of implementation of the log for system operations \n(operation log) is attached to this email.\n\nIntroduction.\n-------------\nOperation log is designed to store information about critical system \nevents (like pg_upgrade, pg_resetwal, pg_resetwal, etc.).\nThis information is not interesting to the ordinary user, but it is very \nimportant for the vendor's technical support.\nAn example: client complains about DB damage to technical support \n(damage was caused by pg_resetwal which was \"silent\" executed by one of \nadministrators).\n\nConcepts.\n---------\n* operation log is placed in the file 'global/pg_control', starting from \nposition 4097 (log size is 4kB);\n* user can not modify the operation log; log can be changed by \nfunction call only (from postgres or from postgres utilities);\n* operation log is a ring buffer (with CRC-32 protection), deleting \nentries from the operation log is possible only when the buffer is \noverflowed;\n* SQL-function is used to read data of operation log.\n\nExample of operation log data.\n------------------------------\n\n >select * from pg_operation_log();\n event |edition|version| lsn | last |count\n------------+-------+-------+---------+----------------------+------\n startup |vanilla|10.22.0|0/8000028|2022-11-18 23:06:27+03| 1\n pg_upgrade |vanilla|15.0.0 |0/8000028|2022-11-18 23:06:27+03| 1\n startup |vanilla|15.0.0 |0/80001F8|2022-11-18 23:11:53+03| 3\n pg_resetwal|vanilla|15.0.0 |0/80001F8|2022-11-18 23:09:53+03| 2\n(4 rows)\n\nSome details about inserting data to operation log.\n---------------------------------------------------\nThere are two modes of inserting information about events in the \noperation log.\n\n* MERGE mode (events \"startup\", \"pg_resetwal\", \"pg_rewind\").\nWe searches in ring buffer of operation 
log an event with the same type \n(\"startup\" for example) with the same version number.\nIf event was found, we will increment event counter by 1 and update the \ndate/time of event (\"last\" field) with the current value.\nIf event was not found, we will add this event to the ring buffer (see \nINSERT mode).\n* INSERT mode (events \"bootstrap\", \"pg_upgrade\", \"promoted\").\nWe will add an event to the ring buffer (without searching).\n\n\nP.S. File 'global/pg_control' was chosen as operation log storage \nbecause data of this file cannot be removed or modified in a simple way \nand no need to change any extensions and utilities to support this file.\n\nI attached the patch (v1-0001-Operation-log.patch) and extended \ndescription of operation log (Operation-log.txt).\n\n\nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com", "msg_date": "Mon, 21 Nov 2022 11:41:22 +0300", "msg_from": "Dmitry Koval <d.koval@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Operation log for major operations" }, { "msg_contents": "See also prior discussions:\nhttps://www.postgresql.org/message-id/flat/62750df5b126e1d8ee039a79ef3cc64ac3d47cd5.camel%40j-davis.com\nhttps://www.postgresql.org/message-id/flat/20180228214311.jclah37cwh572t2c%40alap3.anarazel.de\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 21 Nov 2022 05:03:15 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Operation log for major operations" }, { "msg_contents": "Thanks for references, Justin!\n\nCouple comments about these references.\n\n1) \"Make unlogged table resets detectable\".\nhttps://www.postgresql.org/message-id/flat/62750df5b126e1d8ee039a79ef3cc64ac3d47cd5.camel%40j-davis.com\n\nThis conversation is about specific problem (unlogged table repairing). 
\nOperation log has another use - it is primary a helper for diagnostics.\n\n2) \"RFC: Add 'taint' field to pg_control.\"\nhttps://www.postgresql.org/message-id/flat/20180228214311.jclah37cwh572t2c%40alap3.anarazel.de\n\nThis is almost the same problem that we want to solve with operation \nlog. Differences between the operation log and what is discussed in the \nthread:\n* there suggested to store operation log in pg_control file - but \nseparate from pg_control main data (and write data separately too);\n* operation log data can be represented in relational form (not flags), \nthis is more usable for RDBMS;\n* number of registered event types can be increased easy (without \nchanges of storage).\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com\n\n\n", "msg_date": "Mon, 21 Nov 2022 16:55:36 +0300", "msg_from": "Dmitry Koval <d.koval@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Operation log for major operations" }, { "msg_contents": "On 2022-Nov-21, Dmitry Koval wrote:\n\n> Concepts.\n> ---------\n> * operation log is placed in the file 'global/pg_control', starting from\n> position 4097 (log size is 4kB);\n\nI think storing this in pg_control is a bad idea. That file is\nextremely critical and if you break it, you're pretty much SOL on\nrecovering your data. I suggest that this should use a separate file.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"All rings of power are equal,\nBut some rings of power are more equal than others.\"\n (George Orwell's The Lord of the Rings)\n\n\n", "msg_date": "Wed, 23 Nov 2022 10:00:21 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Operation log for major operations" }, { "msg_contents": "Hi!\n\n >I think storing this in pg_control is a bad idea. That file is\n >extremely critical and if you break it, you're pretty much SOL on\n >recovering your data. 
I suggest that this should use a separate file.\n\nThanks. Operation log location changed to 'global/pg_control_log' (if \nthe name is not pretty, it can be changed).\n\nI attached the patch (v2-0001-Operation-log.patch) and description of \noperation log (Operation-log.txt).\n\n\nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com", "msg_date": "Mon, 5 Dec 2022 11:11:51 +0300", "msg_from": "Dmitry Koval <d.koval@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Operation log for major operations" }, { "msg_contents": "On Mon, 5 Dec 2022 at 13:42, Dmitry Koval <d.koval@postgrespro.ru> wrote:\n>\n> Hi!\n>\n> >I think storing this in pg_control is a bad idea. That file is\n> >extremely critical and if you break it, you're pretty much SOL on\n> >recovering your data. I suggest that this should use a separate file.\n>\n> Thanks. Operation log location changed to 'global/pg_control_log' (if\n> the name is not pretty, it can be changed).\n>\n> I attached the patch (v2-0001-Operation-log.patch) and description of\n> operation log (Operation-log.txt).\n\nThe patch does not apply on top of HEAD as in [1], please post a rebased patch:\n=== Applying patches on top of PostgreSQL commit ID\nff23b592ad6621563d3128b26860bcb41daf9542 ===\n=== applying patch ./v2-0001-Operation-log.patch\n....\npatching file src/tools/msvc/Mkvcbuild.pm\nHunk #1 FAILED at 134.\n1 out of 1 hunk FAILED -- saving rejects to file src/tools/msvc/Mkvcbuild.pm.rej\n\n[1] - http://cfbot.cputube.org/patch_41_4018.log\n\nRegards,\nVignesh\n\n\n", "msg_date": "Sat, 14 Jan 2023 12:23:55 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Operation log for major operations" }, { "msg_contents": "Hi!\n\n >The patch does not apply on top of HEAD ...\n\nHere is a fixed version.\nSmall additional fixes:\n1) added CRC calculation for empty 'pg_control_log' file;\n2) added saving 'errno' before calling LWLockRelease and restoring after 
\nthat;\n3) corrected pg_upgrade for case old cluster does not have \n'pg_control_log' file.\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com", "msg_date": "Sat, 14 Jan 2023 13:17:07 +0300", "msg_from": "Dmitry Koval <d.koval@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Operation log for major operations" }, { "msg_contents": "On Sat, 14 Jan 2023 at 15:47, Dmitry Koval <d.koval@postgrespro.ru> wrote:\n>\n> Hi!\n>\n> >The patch does not apply on top of HEAD ...\n>\n> Here is a fixed version.\n> Small additional fixes:\n> 1) added CRC calculation for empty 'pg_control_log' file;\n> 2) added saving 'errno' before calling LWLockRelease and restoring after\n> that;\n> 3) corrected pg_upgrade for case old cluster does not have\n> 'pg_control_log' file.\n\nThe patch does not apply on top of HEAD as in [1], please post a rebased patch:\n=== Applying patches on top of PostgreSQL commit ID\n14bdb3f13de16523609d838b725540af5e23ddd3 ===\n=== applying patch ./v3-0001-Operation-log.patch\n...\npatching file src/tools/msvc/Mkvcbuild.pm\nHunk #1 FAILED at 134.\n1 out of 1 hunk FAILED -- saving rejects to file src/tools/msvc/Mkvcbuild.pm.rej\n\n[1] - http://cfbot.cputube.org/patch_41_4018.log\n\nRegards,\nVignesh\n\n\n", "msg_date": "Thu, 19 Jan 2023 16:52:58 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Operation log for major operations" }, { "msg_contents": ">The patch does not apply on top of HEAD ...\n\nThanks!\nHere is a fixed version.\n\nAdditional changes:\n1) get_operation_log() function doesn't create empty operation log file;\n2) removed extra unlink() call.\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com", "msg_date": "Fri, 20 Jan 2023 00:11:58 +0300", "msg_from": "Dmitry Koval <d.koval@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Operation log for major operations" }, { "msg_contents": "On Thu, Jan 19, 2023 at 
1:12 PM Dmitry Koval <d.koval@postgrespro.ru> wrote:\n\n> >The patch does not apply on top of HEAD ...\n>\n> Thanks!\n> Here is a fixed version.\n>\n> Additional changes:\n> 1) get_operation_log() function doesn't create empty operation log file;\n> 2) removed extra unlink() call.\n>\n> --\n> With best regards,\n> Dmitry Koval\n>\n> Postgres Professional: http://postgrespro.com\n\nHi,\n\nCopyright (c) 1996-2022\n\nPlease update year for the license in pg_controllog.c\n\n+check_file_exists(const char *datadir, const char *path)\n\nThere is existing helper function such as:\n\nsrc/backend/utils/fmgr/dfmgr.c:static bool file_exists(const char *name);\n\nIs it possible to reuse that code ?\n\nCheers\n\nOn Thu, Jan 19, 2023 at 1:12 PM Dmitry Koval <d.koval@postgrespro.ru> wrote: >The patch does not apply on top of HEAD ...\n\nThanks!\nHere is a fixed version.\n\nAdditional changes:\n1) get_operation_log() function doesn't create empty operation log file;\n2) removed extra unlink() call.\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.comHi,Copyright (c) 1996-2022Please update year for the license in pg_controllog.c+check_file_exists(const char *datadir, const char *path)There is existing helper function such as:src/backend/utils/fmgr/dfmgr.c:static bool file_exists(const char *name);Is it possible to reuse that code ?Cheers", "msg_date": "Thu, 19 Jan 2023 13:36:24 -0800", "msg_from": "Ted Yu <yuzhihong@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Operation log for major operations" }, { "msg_contents": "Thanks, Ted Yu!\n\n > Please update year for the license in pg_controllog.c\n\nLicense updated for files pg_controllog.c, controllog_utils.c, \npg_controllog.h, controllog_utils.h.\n\n > +check_file_exists(const char *datadir, const char *path)\n > There is existing helper function such as:\n > src/backend/utils/fmgr/dfmgr.c:static bool file_exists(const char *name);\n > Is it possible to reuse that code ?\n\nThere are a 
lot of functions that check the file existence:\n\n1) src/backend/utils/fmgr/dfmgr.c:static bool file_exists(const char *name);\n2) src/backend/jit/jit.c:static bool file_exists(const char *name);\n3) src/test/regress/pg_regress.c:bool file_exists(const char *file);\n4) src/bin/pg_upgrade/exec.c:bool pid_lock_file_exists(const char *datadir);\n5) src/backend/commands/extension.c:bool extension_file_exists(const \nchar *extensionName);\n\nBut there is no unified function: different components use their own \nfunction with their own specific.\nProbably we can not reuse code of dfmgr.c:file_exists() because this \nfunction skip \"errno == EACCES\" (this error is critical for us).\nI copied for src/bin/pg_rewind/file_ops.c:check_file_exists() code of \nfunction jit.c:file_exists() (with adaptation for the utility).\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com", "msg_date": "Fri, 20 Jan 2023 12:19:23 +0300", "msg_from": "Dmitry Koval <d.koval@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Operation log for major operations" }, { "msg_contents": "On Fri, Jan 20, 2023 at 1:19 AM Dmitry Koval <d.koval@postgrespro.ru> wrote:\n\n> Thanks, Ted Yu!\n>\n> > Please update year for the license in pg_controllog.c\n>\n> License updated for files pg_controllog.c, controllog_utils.c,\n> pg_controllog.h, controllog_utils.h.\n>\n> > +check_file_exists(const char *datadir, const char *path)\n> > There is existing helper function such as:\n> > src/backend/utils/fmgr/dfmgr.c:static bool file_exists(const char\n> *name);\n> > Is it possible to reuse that code ?\n>\n> There are a lot of functions that check the file existence:\n>\n> 1) src/backend/utils/fmgr/dfmgr.c:static bool file_exists(const char\n> *name);\n> 2) src/backend/jit/jit.c:static bool file_exists(const char *name);\n> 3) src/test/regress/pg_regress.c:bool file_exists(const char *file);\n> 4) src/bin/pg_upgrade/exec.c:bool pid_lock_file_exists(const char\n> 
*datadir);\n> 5) src/backend/commands/extension.c:bool extension_file_exists(const\n> char *extensionName);\n>\n> But there is no unified function: different components use their own\n> function with their own specific.\n> Probably we can not reuse code of dfmgr.c:file_exists() because this\n> function skip \"errno == EACCES\" (this error is critical for us).\n> I copied for src/bin/pg_rewind/file_ops.c:check_file_exists() code of\n> function jit.c:file_exists() (with adaptation for the utility).\n>\n> --\n> With best regards,\n> Dmitry Koval\n>\n> Postgres Professional: http://postgrespro.com\n\nHi,\nMaybe another discussion thread can be created for the consolidation of\nXX_file_exists functions.\n\nCheers\n\nOn Fri, Jan 20, 2023 at 1:19 AM Dmitry Koval <d.koval@postgrespro.ru> wrote:Thanks, Ted Yu!\n\n > Please update year for the license in pg_controllog.c\n\nLicense updated for files pg_controllog.c, controllog_utils.c, \npg_controllog.h, controllog_utils.h.\n\n > +check_file_exists(const char *datadir, const char *path)\n > There is existing helper function such as:\n > src/backend/utils/fmgr/dfmgr.c:static bool file_exists(const char *name);\n > Is it possible to reuse that code ?\n\nThere are a lot of functions that check the file existence:\n\n1) src/backend/utils/fmgr/dfmgr.c:static bool file_exists(const char *name);\n2) src/backend/jit/jit.c:static bool file_exists(const char *name);\n3) src/test/regress/pg_regress.c:bool file_exists(const char *file);\n4) src/bin/pg_upgrade/exec.c:bool pid_lock_file_exists(const char *datadir);\n5) src/backend/commands/extension.c:bool extension_file_exists(const \nchar *extensionName);\n\nBut there is no unified function: different components use their own \nfunction with their own specific.\nProbably we can not reuse code of dfmgr.c:file_exists() because this \nfunction skip \"errno == EACCES\" (this error is critical for us).\nI copied for src/bin/pg_rewind/file_ops.c:check_file_exists() code of \nfunction 
jit.c:file_exists() (with adaptation for the utility).\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.comHi,Maybe another discussion thread can be created for the consolidation of XX_file_exists functions.Cheers", "msg_date": "Fri, 20 Jan 2023 05:23:16 -0800", "msg_from": "Ted Yu <yuzhihong@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Operation log for major operations" }, { "msg_contents": "Hi!\n\n> Maybe another discussion thread can be created for the consolidation of \n> XX_file_exists functions.\n\nUsually XX_file_exists functions are simple. They contain single call \nstat() or open() and specific error processing after this call.\n\nLikely the unified function will be too complex to cover all the uses of \nXX_file_exists functions.\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com\n\n\n", "msg_date": "Wed, 25 Jan 2023 11:35:24 +0300", "msg_from": "Dmitry Koval <d.koval@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Operation log for major operations" }, { "msg_contents": "On Thu, 19 Jan 2023 at 16:12, Dmitry Koval <d.koval@postgrespro.ru> wrote:\n>\n> >The patch does not apply on top of HEAD ...\n>\n> Thanks!\n> Here is a fixed version.\n\nSorry to say, but this needs a rebase again... Setting to Waiting on Author...\n\nAre there specific feedback needed to make progress? Once it's rebased\nif you think it's ready set it to Ready for Committer or if you still\nneed feedback then Needs Review -- but it's usually more helpful to do\nthat with an email expressing what questions you're blocked on.\n\n-- \nGregory Stark\nAs Commitfest Manager\n\n\n", "msg_date": "Wed, 1 Mar 2023 15:59:23 -0500", "msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Operation log for major operations" }, { "msg_contents": "Hi!\n\nThese changes did not interest the community. 
It was expected (topic is \nvery specifiс: vendor's technical support). So no sense to distract \ndevelopers ...\n\nI'll move patch to Withdrawn.\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com\n\n\n", "msg_date": "Thu, 2 Mar 2023 20:57:43 +0300", "msg_from": "Dmitry Koval <d.koval@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Operation log for major operations" }, { "msg_contents": "On Thu, Mar 02, 2023 at 08:57:43PM +0300, Dmitry Koval wrote:\n> These changes did not interest the community. It was expected (topic is very\n> specifiс: vendor's technical support). So no sense to distract developers\n\nActually, I think there is interest, but it has to be phrased in a\nlimited sense to go into the control file.\n\nIn November, I referenced 2 threads, but I think you misunderstood one\nof them. If you skim the first couple mails, you'll find a discussion\nabout recording crash information in the control file. \n\nhttps://www.postgresql.org/message-id/666c2599a07addea00ae2d0af96192def8441974.camel%40j-davis.com\n\nIt's come up several times now, and there seems to be ample support for\nadding some limited information.\n\nBut a \"log\" which might exceed a few dozen bytes (now or later), that's\ninconsistent with the pre-existing purpose served by pg_control.\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 2 Mar 2023 12:37:03 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Operation log for major operations" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Thu, Mar 02, 2023 at 08:57:43PM +0300, Dmitry Koval wrote:\n>> These changes did not interest the community. It was expected (topic is very\n>> specifiс: vendor's technical support). 
So no sense to distract developers\n\n> Actually, I think there is interest, but it has to be phrased in a\n> limited sense to go into the control file.\n\n> In November, I referenced 2 threads, but I think you misunderstood one\n> of them. If you skim the first couple mails, you'll find a discussion\n> about recording crash information in the control file. \n\n> https://www.postgresql.org/message-id/666c2599a07addea00ae2d0af96192def8441974.camel%40j-davis.com\n\n> It's come up several times now, and there seems to be ample support for\n> adding some limited information.\n\n> But a \"log\" which might exceed a few dozen bytes (now or later), that's\n> inconsistent with the pre-existing purpose served by pg_control.\n\nI'm pretty dubious about recording *anything* in the control file.\nEvery time we write to that, we risk the entire database on completing\nthe write successfully. I don't want to do that more often than once\nper checkpoint. If you want to put crash info in some less-critical\nplace, maybe we could talk.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 02 Mar 2023 14:36:43 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Operation log for major operations" }, { "msg_contents": "I'll try to expand my explanation.\nI fully understand and accept the arguments about \"limited sense to go \ninto the control file\" and \"about recording *anything* in the control \nfile\". This is totally correct for vanilla.\nBut vendors have forks of PostgreSQL with custom features and extensions.\nSometimes (especially at the first releases) these custom components \nhave bugs which can causes rare problems in data.\nThese problems can migrate with using pg_upgrade and \"lazy\" upgrade of \npages to higher versions of PostgreSQL fork.\n\nSo in error cases \"recording crash information\" etc. 
is not the only \nimportant information.\nVery important is history of this database (pg_upgrades, promotions, \npg_resets, pg_rewinds etc.).\nOften these \"history\" allows you to determine from which version of the \nPostgreSQL fork the error came from and what causes of errors we can \ndiscard immediately.\n\nThis \"history\" is the information that our technical support wants (and \nreason of this patch), but this information is not needed for vanilla...\n\nAnother important condition is that the user should not have easy ways \nto delete information about \"history\" (about reason to use pg_control \nfile as \"history\" storage, but write into it from position 4kB, 8kB,...).\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com\n\n\n", "msg_date": "Fri, 3 Mar 2023 00:09:38 +0300", "msg_from": "Dmitry Koval <d.koval@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Operation log for major operations" }, { "msg_contents": "On Thu, Mar 2, 2023 at 4:09 PM Dmitry Koval <d.koval@postgrespro.ru> wrote:\n\n> I'll try to expand my explanation.\n> I fully understand and accept the arguments about \"limited sense to go\n> into the control file\" and \"about recording *anything* in the control\n> file\". This is totally correct for vanilla.\n> But vendors have forks of PostgreSQL with custom features and extensions.\n> Sometimes (especially at the first releases) these custom components\n> have bugs which can causes rare problems in data.\n> These problems can migrate with using pg_upgrade and \"lazy\" upgrade of\n> pages to higher versions of PostgreSQL fork.\n>\n> So in error cases \"recording crash information\" etc. 
is not the only\n> important information.\n> Very important is history of this database (pg_upgrades, promotions,\n> pg_resets, pg_rewinds etc.).\n> Often these \"history\" allows you to determine from which version of the\n> PostgreSQL fork the error came from and what causes of errors we can\n> discard immediately.\n>\n> This \"history\" is the information that our technical support wants (and\n> reason of this patch), but this information is not needed for vanilla...\n>\n> Another important condition is that the user should not have easy ways\n> to delete information about \"history\" (about reason to use pg_control\n> file as \"history\" storage, but write into it from position 4kB, 8kB,...).\n>\n> --\n> With best regards,\n> Dmitry Koval\n>\n> Postgres Professional: http://postgrespro.com\n\nDmitry, this is a great explanation.  Thinking outside the box, it feels\nlike:\nWe need some kind of semaphore flag that tells us something awkward\nhappened.\nWhen it happened, and a little bit of extra information.\n\nYou also make the point that if such things have happened, it would\nprobably be a good idea to NOT\nallow pg_upgrade to run.  It might even be a reason to constantly bother\nsomeone until the issue is repaired.\n\nTo that point, this feels like a \"postgresql_panic.log\" file (within the\npostgresql files?)...  Something that would prevent pg_upgrade,\netc.  That everyone recognizes is serious.  Especially 3rd party vendors.\n\nI see the need for such a thing.  I have to agree with others about\nquestioning the proper place to write this.\n\nAre there other places that make sense, that you could use, especially if\nknowing it exists means there was a serious issue?\n\nKirk\n", "msg_date": "Thu, 2 Mar 2023 16:49:43 -0500", "msg_from": "Kirk Wolak <wolakk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Operation log for major operations" }, { "msg_contents": "Kirk, I'm sorry about the long pause in my reply.\n\n >We need some kind of semaphore flag that tells us something awkward\n >happened. When it happened, and a little bit of extra information.\n\nI agree that we do not have this kind of information.\nAdditionally, legal events like start of pg_rewind, pg_reset, ... are \ninteresting.\n\n\n >You also make the point that if such things have happened, it would\n >probably be a good idea to NOT allow pg_upgrade to run.\n >It might even be a reason to constantly bother someone until\n >the issue is repaired.\n\nI think no reason to forbid the run of pg_upgrade for the user \n(especially in automatic mode).\nIf we automatically do NOT allow pg_upgrade, what should the user do for \nallow pg_upgrade?\nUnfortunately, PostgreSQL does not have the utilities to correct errors \nin the database (in case of errors users uses copies of the DB or \ncorrects errors manually).\nAn ordinary user cannot correct errors on his own ...\nSo we cannot REQUIRE the user to correct database errors, we can only \nINFORM about them.\n\n\n >To that point, this feels like a \"postgresql_panic.log\" file (within\n >the postgresql files?)... Something that would prevent pg_upgrade,\n >etc. That everyone recognizes is serious. Especially 3rd party vendors.\n >I see the need for such a thing. 
I have to agree with others about\n >questioning the proper place to write this.\n >Are there other places that make sense, that you could use, especially\n >if knowing it exists means there was a serious issue?\n\nThe location of the operation log (like a \"postgresql_panic.log\") is not \nan easy question.\nOur technical support is sure that a large number of failures are \ncaused by the \"human factor\" (actions of database administrators).\nIt is not difficult for database administrators to delete the \n\"postgresql_panic.log\" file or edit it (for example, replacing it with \nan old version; a CRC will not save you from such an action).\n\nTherefore, our technical support decided to place the operation log at \nthe end of the pg_control file, at an offset of 8192 bytes (and protect \nthis log with a CRC).\nRegarding Tom Lane's concern about writing to the pg_control file: the \ninformation in pg_control is written once at system startup (twice in \nthe case of \"promote\").\nAlso, some utilities write information to the operation log too - \npg_resetwal, pg_rewind, pg_upgrade (these utilities also modify the \npg_control file without the operation log).\n\nIf you are interested, I can attach the current patch (for info - I \nthink it makes no sense to offer this patch at commitfest).\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com\n\n\n", "msg_date": "Tue, 14 Mar 2023 00:36:14 +0300", "msg_from": "Dmitry Koval <d.koval@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Operation log for major operations" } ]
[ { "msg_contents": "Hi,\n\nI noticed that the comment on/beneath rs_numblocks in HeapScanDescData\nis duplicated above rs_strategy. I don't know if there should have\nbeen a different comment above rs_strategy, but the current one is\ndefinitely out of place, so I propose to remove it as per attached.\n\nThe comment was duplicated in c2fe139c20 with the update to the table\nscan APIs, which was first seen in PostgreSQL 11.\n\nKind regards,\n\nMatthias van de Meent", "msg_date": "Mon, 21 Nov 2022 12:12:14 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "Cleanup: Duplicated, misplaced comment in HeapScanDescData" }, { "msg_contents": "On Mon, 21 Nov 2022 at 12:12, Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n>\n> Hi,\n>\n> I noticed that the comment on/beneath rs_numblocks in HeapScanDescData\n> is duplicated above rs_strategy. I don't know if there should have\n> been a different comment above rs_strategy, but the current one is\n> definitely out of place, so I propose to remove it as per attached.\n>\n> The comment was duplicated in c2fe139c20 with the update to the table\n> scan APIs, which was first seen in PostgreSQL 11.\n\nI made a mistake in determining this version number; it was PostgreSQL\n12 where this commit was first included. Attached is the same patch\nwith the description updated accordingly.\n\nKind regards,\n\nMatthias van de Meent", "msg_date": "Mon, 21 Nov 2022 12:34:12 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Cleanup: Duplicated, misplaced comment in HeapScanDescData" }, { "msg_contents": "Hi,\n\nOn 2022-11-21 12:34:12 +0100, Matthias van de Meent wrote:\n> On Mon, 21 Nov 2022 at 12:12, Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > I noticed that the comment on/beneath rs_numblocks in HeapScanDescData\n> > is duplicated above rs_strategy. 
I don't know if there should have\n> > been a different comment above rs_strategy, but the current one is\n> > definitely out of place, so I propose to remove it as per attached.\n> >\n> > The comment was duplicated in c2fe139c20 with the update to the table\n> > scan APIs, which was first seen in PostgreSQL 11.\n> \n> I made a mistake in determining this version number; it was PostgreSQL\n> 12 where this commit was first included. Attached is the same patch\n> with the description updated accordingly.\n\nI guess that happened because of the odd placement of the comment from\nbefore the change:\n\n bool rs_temp_snap; /* unregister snapshot at scan end? */\n-\n- /* state set up at initscan time */\n- BlockNumber rs_nblocks; /* total number of blocks in rel */\n- BlockNumber rs_startblock; /* block # to start at */\n- BlockNumber rs_numblocks; /* max number of blocks to scan */\n- /* rs_numblocks is usually InvalidBlockNumber, meaning \"scan whole rel\" */\n- BufferAccessStrategy rs_strategy; /* access strategy for reads */\n bool rs_syncscan; /* report location to syncscan logic? */\n\nWe rarely put comments document a struct member after it.\n\nI'm inclined to additionally move the \"legitimate\" copy of the comment\nto before rs_numblocks, rather than after it.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 21 Nov 2022 12:12:04 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Cleanup: Duplicated, misplaced comment in HeapScanDescData" } ]
[ { "msg_contents": "Hi hackers,\n\nthe case of planner's\nsrc/backend/utils/adt/selfuncs.c:get_actual_variable_endpoint()\nspending literally seconds seems to be a well-known fact across hackers\n(in the extreme wild case it can be over 1+ hour on VLDBs). For those\nunfamiliar, it is the planner estimation that tries to read the real\ntable index (including dead rows) until min/max. It is a blackbox\nmechanism that works without any warning, is often hugely affected by\nthe number of dead tuples in indexes, and there's no on/off switch or\nbuilt-in limitation of how far it can go. It was discussed on pgsql\nmailing lists several times [1]-[5]. It almost seems like it works\nfine in 99.9% of cases, until it doesn't and blows up big time on larger\nsystems, and from there the operator doesn't have a lot of choices [a\nlot of time being already wasted on identifying the root cause being the\nplanner]:\n1) one can properly VACUUM (which everybody seems to agree is the\nproper way to go, but it is often problematic due to various other\ncircumstances, especially on big tables without serious partitioning\nstrategies) - again this might be very time consuming\n2) one cannot trade a lot of CPU/IO burned on planning (actually\nfetching indexes on multi-TB tables) for less accurate plans, and\nrealistically speaking rewriting queries is often impossible\n3) the application might not support prepared statements, and even\nif it does, simple queries/reports are also affected\n4) there is no visibility into how much activity is spent on btree\nindex get_actual_variable_endpoint() alone, so one cannot estimate the\nsystem-wide impact\n\nI would like to trigger the discussion on how to give at least partial\ncontrol to the end-user of what the optimizer performs. 
I was thinking\nabout several things and each of those has pros and cons:\n\na) the attached quick patch (by Simon Riggs) that put maximum allowed\ncost constraints on the index-lookup machinery as a safeguard (that\n#define is debatable; in my testing it reduced the hit from ~850ms to\n0.6ms +/- 0.3ms at the current value of 20).\nb) I was wondering about creating a new wait class \"Planner\" with the\nevent \"ReadingMinMaxIndex\" (or similar). The obvious drawback is the\nnaming/categorization as wait events are ... well.. as the name \"WAIT\"\nimplies, while those btree lookups could easily be ON-CPU activity.\nc) Any other idea, e.g. see [3] or [5] (cache was being proposed).\nd) For completeness : a new GUC/knob to completely disable the\nfunctionality (debug_optimizer_minmax_est), but that's actually\ntrimmed functionality of the patch.\ne) I was also wondering about some DEBUG/NOTICE elog() when taking\nmore than let's say arbitrary 10s, but that could easily spam the log\nfile\n\nReproducer on a very small dataset follows. 
Please note the reproducer\nhere shows the effect on the 1st-run EXPLAIN; however, in the real\nobserved situation (multi-TB unpartitioned table) each consecutive\nplanner operation (just EXPLAIN) on that index was affected (I don't\nknow why LP_DEAD/hints cleaning was not kicking in, but maybe it was -\ngiven the scale of the problem it was not helping much).\n\n-Jakub Wartak.\n\n[1] - https://www.postgresql.org/message-id/flat/54446AE2.6080909%40BlueTreble.com#f436bb41cf044b30eeec29472a13631e\n[2] - https://www.postgresql.org/message-id/flat/db7111f2-05ef-0ceb-c013-c34adf4f4121%40gmail.com\n[3] - https://www.postgresql.org/message-id/flat/05C72CF7-B5F6-4DB9-8A09-5AC897653113%40yandex.ru\n(SnapshotAny vs SnapshotDirty discussions between Tom and Robert)\n[4] - https://www.postgresql.org/message-id/flat/CAECtzeVPM4Oi6dTdqVQmjoLkDBVChNj7ed3hNs1RGrBbwCJ7Cw%40mail.gmail.com\n[5] - https://postgrespro.com/list/thread-id/2436130 (cache)\n\ns1:\n=# drop table test;\n=# create table test (id bigint primary key) with (autovacuum_enabled = 'off');\n=# insert into test select generate_series(1,10000000); -- ~310MB\ntable, ~210MB index\n\ns2/start the long running transaction:\n=# begin;\n=# select min(id) from test;\n\ns1:\n=# delete from test where id>1000000;\n=# analyze test;\n=# set enable_indexonlyscan = 'off'; -- just in case to force\nBitmapHeapScans, which according to backend/access/nbtree/README\nwon't set LP_DEAD, but my bt_page_items() tests indicate that it does\n(??)\n=# set enable_indexscan = 'off';\n=# explain (buffers, verbose) select * from 
test where id > 11000000;\n=> Planning: Buffers: shared hit=14 read=3 / Time: 0.550 ms\n\nwith patch, the results are:\np=# explain (buffers, verbose) select * from test where id > 11000000;\n=> Planning: / Buffers: shared hit=17 read=6 dirtied=3 written=5 =>\nTime: 0.253 ms\np=# explain (buffers, verbose) select * from test where id > 11000000;\n=> Planning: / Buffers: shared hit=11 read=2 dirtied=2 => Time: 0.576\nms\nso there's no dramatic hit.", "msg_date": "Mon, 21 Nov 2022 13:00:34 +0100", "msg_from": "Jakub Wartak <jakub.wartak@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Damage control for planner's get_actual_variable_endpoint() runaway" }, { "msg_contents": "On 2022-Nov-21, Jakub Wartak wrote:\n\n> b) I was wondering about creating a new wait class \"Planner\" with the\n> event \"ReadingMinMaxIndex\" (or similar). The obvious drawback is the\n> naming/categorization as wait events are ... well.. as the name \"WAIT\"\n> implies, while those btree lookups could easily be ON-CPU activity.\n\nI think we should definitely do this, regardless of any other fixes we\nadd, so that this condition can be identified more easily. I wonder if\nwe can backpatch it safely.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Always assume the user will do much worse than the stupidest thing\nyou can imagine.\" (Julien PUYDT)\n\n\n", "msg_date": "Mon, 21 Nov 2022 13:22:13 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Damage control for planner's get_actual_variable_endpoint()\n runaway" }, { "msg_contents": "On Mon, Nov 21, 2022 at 7:22 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2022-Nov-21, Jakub Wartak wrote:\n> > b) I was wondering about creating a new wait class \"Planner\" with the\n> > event \"ReadingMinMaxIndex\" (or similar). The obvious drawback is the\n> > naming/categorization as wait events are ... well.. 
as the name \"WAIT\"\n> > implies, while those btree lookups could easily be ON-CPU activity.\n>\n> I think we should definitely do this, regardless of any other fixes we\n> add, so that this condition can be identified more easily. I wonder if\n> we can backpatch it safely.\n\nI don't think this is safe at all. Wait events can only bracket\nindividual operations, like the reads of the individual index blocks,\nnot report on which phase of a larger operation is in progress. If we\ntry to make them do the latter, we will have a hot mess on our hands.\nIt might not be a bad capability to have, but it's a different system.\n\nBut that doesn't mean we can't do anything about this problem, and I\nthink we should do something about this problem. It's completely crazy\nthat after this many years of people getting hosed by this, we haven't\ntaken more than half measures to fix the problem. I think commit\n3ca930fc39ccf987c1c22fd04a1e7463b5dd0dfd was the last time we poked at\nthis, and before that there was commit\nfccebe421d0c410e6378fb281419442c84759213, but neither of those\nprevented us from scanning an unbounded number of index tuples before\nfinding one that we're willing to use, as the commit messages pretty\nmuch admit.\n\nWhat we need is a solution that avoids reading an unbounded number of\ntuples under any circumstances. I previously suggested using\nSnapshotAny here, but Tom didn't like that. I'm not sure if there are\nsafety issues there or if Tom was just concerned about the results\nbeing misleading. Either way, maybe there's some variant on that theme\nthat could work. For instance, could we teach the index scan to stop\nif the first 100 tuples that it finds are all invisible? Or to reach\nat most 1 page, or at most 10 pages, or something? If we don't find a\nmatch, we could either try to use a dead tuple, or we could just\nreturn false which, I think, would end up using the value from\npg_statistic rather than any updated value. 
That is of course not a\ngreat outcome, but it is WAY WAY BETTER than spending OVER AN HOUR\nlooking for a more suitable tuple, as Jakub describes having seen on a\nproduction system.\n\nI really can't understand why this is even mildly controversial. What\nexactly to do here may be debatable, but the idea that it's OK to\nspend an unbounded amount of resources during any phase of planning is\nclearly wrong. We can say that at the time we wrote the code we didn't\nknow it was going to actually ever happen, and that is fine and true.\nBut there have been multiple reports of this over the years and we\nknow *for sure* that spending totally unreasonable amounts of time\ninside this function is a real-world problem that actually brings down\nproduction systems. Unless and until we find a way of putting a tight\nbound on the amount of effort that can be expended here, that's going\nto keep happening.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 21 Nov 2022 09:48:27 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Damage control for planner's get_actual_variable_endpoint()\n runaway" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I don't think this is safe at all. Wait events can only bracket\n> individual operations, like the reads of the individual index blocks,\n> not report on which phase of a larger operation is in progress. If we\n> try to make them do the latter, we will have a hot mess on our hands.\n\nAgreed.\n\n> What we need is a solution that avoids reading an unbounded number of\n> tuples under any circumstances. I previously suggested using\n> SnapshotAny here, but Tom didn't like that. I'm not sure if there are\n> safety issues there or if Tom was just concerned about the results\n> being misleading. Either way, maybe there's some variant on that theme\n> that could work. 
For instance, could we teach the index scan to stop\n> if the first 100 tuples that it finds are all invisible? Or to reach\n> at most 1 page, or at most 10 pages, or something?\n\nA hard limit on the number of index pages examined seems like it\nmight be a good idea.\n\n> If we don't find a\n> match, we could either try to use a dead tuple, or we could just\n> return false which, I think, would end up using the value from\n> pg_statistic rather than any updated value.\n\nYeah, the latter seems like the best bet. Returning a definitely-dead\nvalue could be highly misleading. In the end this is meant to be\nan incremental improvement on what we could get from pg_statistic,\nso it's reasonable to limit how hard we'll work on it.\n\nIf we do install such a thing, should we undo any of the previous\nchanges that backed off the reliability of the result?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 21 Nov 2022 10:01:15 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Damage control for planner's get_actual_variable_endpoint()\n runaway" }, { "msg_contents": "On Mon, 21 Nov 2022 at 15:01, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> > What we need is a solution that avoids reading an unbounded number of\n> > tuples under any circumstances. I previously suggested using\n> > SnapshotAny here, but Tom didn't like that. I'm not sure if there are\n> > safety issues there or if Tom was just concerned about the results\n> > being misleading. Either way, maybe there's some variant on that theme\n> > that could work. For instance, could we teach the index scan to stop\n> > if the first 100 tuples that it finds are all invisible? 
Or to reach\n> > at most 1 page, or at most 10 pages, or something?\n>\n> A hard limit on the number of index pages examined seems like it\n> might be a good idea.\n\nGood, that is what the patch does.\n\n> > If we don't find a\n> > match, we could either try to use a dead tuple, or we could just\n> > return false which, I think, would end up using the value from\n> > pg_statistic rather than any updated value.\n>\n> Yeah, the latter seems like the best bet.\n\nYes, just breaking out of the loop is enough to use the default value.\n\n> If we do install such a thing, should we undo any of the previous\n> changes that backed off the reliability of the result?\n\nNot sure.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Mon, 21 Nov 2022 15:14:07 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Damage control for planner's get_actual_variable_endpoint()\n runaway" }, { "msg_contents": "On Mon, Nov 21, 2022 at 10:14 AM Simon Riggs\n<simon.riggs@enterprisedb.com> wrote:\n> > > What we need is a solution that avoids reading an unbounded number of\n> > > tuples under any circumstances. I previously suggested using\n> > > SnapshotAny here, but Tom didn't like that. I'm not sure if there are\n> > > safety issues there or if Tom was just concerned about the results\n> > > being misleading. Either way, maybe there's some variant on that theme\n> > > that could work. For instance, could we teach the index scan to stop\n> > > if the first 100 tuples that it finds are all invisible? Or to reach\n> > > at most 1 page, or at most 10 pages, or something?\n> >\n> > A hard limit on the number of index pages examined seems like it\n> > might be a good idea.\n>\n> Good, that is what the patch does.\n\n<looks at patch>\n\nOh, that's surprisingly simple. Nice!\n\nIs there any reason to tie this into page costs? I'd be more inclined\nto just make it a hard limit on the number of pages. 
I think that\nwould be more predictable and less prone to surprising (bad) behavior.\nAnd to be honest I would be inclined to make it quite a small number.\nPerhaps 5 or 10. Is there a good argument for going any higher?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 21 Nov 2022 10:22:55 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Damage control for planner's get_actual_variable_endpoint()\n runaway" }, { "msg_contents": "On Mon, 21 Nov 2022 at 15:23, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Nov 21, 2022 at 10:14 AM Simon Riggs\n> <simon.riggs@enterprisedb.com> wrote:\n> > > > What we need is a solution that avoids reading an unbounded number of\n> > > > tuples under any circumstances. I previously suggested using\n> > > > SnapshotAny here, but Tom didn't like that. I'm not sure if there are\n> > > > safety issues there or if Tom was just concerned about the results\n> > > > being misleading. Either way, maybe there's some variant on that theme\n> > > > that could work. For instance, could we teach the index scan to stop\n> > > > if the first 100 tuples that it finds are all invisible? Or to reach\n> > > > at most 1 page, or at most 10 pages, or something?\n> > >\n> > > A hard limit on the number of index pages examined seems like it\n> > > might be a good idea.\n> >\n> > Good, that is what the patch does.\n>\n> <looks at patch>\n>\n> Oh, that's surprisingly simple. Nice!\n>\n> Is there any reason to tie this into page costs? I'd be more inclined\n> to just make it a hard limit on the number of pages. I think that\n> would be more predictable and less prone to surprising (bad) behavior.\n> And to be honest I would be inclined to make it quite a small number.\n> Perhaps 5 or 10. 
Is there a good argument for going any higher?\n\n+1, that makes the patch smaller and the behavior more predictable.\n\n(Just didn't want to do anything too simple, in case it looked like a kluge.)\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Mon, 21 Nov 2022 15:30:31 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Damage control for planner's get_actual_variable_endpoint()\n runaway" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Is there any reason to tie this into page costs? I'd be more inclined\n> to just make it a hard limit on the number of pages. I think that\n> would be more predictable and less prone to surprising (bad) behavior.\n\nAgreed, a simple limit of N pages fetched seems appropriate.\n\n> And to be honest I would be inclined to make it quite a small number.\n> Perhaps 5 or 10. Is there a good argument for going any higher?\n\nSure: people are not complaining until it gets into the thousands.\nAnd you have to remember that the entire mechanism exists only\nbecause of user complaints about inaccurate estimates. We shouldn't\nbe too eager to resurrect that problem.\n\nI'd be happy with a limit of 100 pages.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 21 Nov 2022 10:32:28 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Damage control for planner's get_actual_variable_endpoint()\n runaway" }, { "msg_contents": "On Mon, Nov 21, 2022 at 10:32 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > Is there any reason to tie this into page costs? I'd be more inclined\n> > to just make it a hard limit on the number of pages. 
I think that\n> > would be more predictable and less prone to surprising (bad) behavior.\n>\n> Agreed, a simple limit of N pages fetched seems appropriate.\n>\n> > And to be honest I would be inclined to make it quite a small number.\n> > Perhaps 5 or 10. Is there a good argument for going any higher?\n>\n> Sure: people are not complaining until it gets into the thousands.\n> And you have to remember that the entire mechanism exists only\n> because of user complaints about inaccurate estimates. We shouldn't\n> be too eager to resurrect that problem.\n>\n> I'd be happy with a limit of 100 pages.\n\nOK.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 21 Nov 2022 10:35:46 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Damage control for planner's get_actual_variable_endpoint()\n runaway" }, { "msg_contents": "Hi,\n\nDraft version of the patch attached (it is based on Simon's)\nI would be happier if we could make that #define into GUC (just in\ncase), although I do understand the effort to reduce the number of\nvarious knobs (as their high count causes their own complexity).\n\n-Jakub Wartak.\n\nOn Mon, Nov 21, 2022 at 4:35 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Nov 21, 2022 at 10:32 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Robert Haas <robertmhaas@gmail.com> writes:\n> > > Is there any reason to tie this into page costs? I'd be more inclined\n> > > to just make it a hard limit on the number of pages. I think that\n> > > would be more predictable and less prone to surprising (bad) behavior.\n> >\n> > Agreed, a simple limit of N pages fetched seems appropriate.\n> >\n> > > And to be honest I would be inclined to make it quite a small number.\n> > > Perhaps 5 or 10. 
Is there a good argument for going any higher?\n> >\n> > Sure: people are not complaining until it gets into the thousands.\n> > And you have to remember that the entire mechanism exists only\n> > because of user complaints about inaccurate estimates. We shouldn't\n> > be too eager to resurrect that problem.\n> >\n> > I'd be happy with a limit of 100 pages.\n>\n> OK.\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com", "msg_date": "Mon, 21 Nov 2022 17:06:16 +0100", "msg_from": "Jakub Wartak <jakub.wartak@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Damage control for planner's get_actual_variable_endpoint()\n runaway" }, { "msg_contents": "Hi,\n\nOn 2022-11-21 17:06:16 +0100, Jakub Wartak wrote:\n> @@ -6213,14 +6216,26 @@ get_actual_variable_endpoint(Relation heapRel,\n> \t/* Fetch first/next tuple in specified direction */\n> \twhile ((tid = index_getnext_tid(index_scan, indexscandir)) != NULL)\n> \t{\n> +\t\tBlockNumber block = ItemPointerGetBlockNumber(tid);\n> \t\tif (!VM_ALL_VISIBLE(heapRel,\n> -\t\t\t\t\t\t\tItemPointerGetBlockNumber(tid),\n> +\t\t\t\t\t\t\tblock,\n> \t\t\t\t\t\t\t&vmbuffer))\n> \t\t{\n> \t\t\t/* Rats, we have to visit the heap to check visibility */\n> \t\t\tif (!index_fetch_heap(index_scan, tableslot))\n> \t\t\t\tcontinue;\t\t/* no visible tuple, try next index entry */\n> \n> +\t\t\t{\n> +\t\t\t\tCHECK_FOR_INTERRUPTS();\n> +\t\t\t\tif (block != last_block)\n> +\t\t\t\t\tvisited_pages++;\n> +#define VISITED_PAGES_LIMIT 100\n> +\t\t\t\tif (visited_pages > VISITED_PAGES_LIMIT)\n> +\t\t\t\t\tbreak;\n> +\t\t\t\telse\n> +\t\t\t\t\tcontinue; /* no visible tuple, try next index entry */\n> +\t\t\t}\n> +\n> \t\t\t/* We don't actually need the heap tuple for anything */\n> \t\t\tExecClearTuple(tableslot);\n> \n> -- \n> 2.30.2\n\nThis can't quite be right - isn't this only applying the limit if we found a\nvisible tuple?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 21 Nov 2022 09:30:48 -0800", "msg_from": "Andres 
Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Damage control for planner's get_actual_variable_endpoint()\n runaway" }, { "msg_contents": "This patch version runs \"continue\" unconditionally (rather than\nconditionally, like the previous version).\n\n if (!index_fetch_heap(index_scan, tableslot))\n continue; /* no visible tuple, try next index entry */\n\n\n\n", "msg_date": "Mon, 21 Nov 2022 11:34:31 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Damage control for planner's get_actual_variable_endpoint()\n runaway" }, { "msg_contents": "On Mon, Nov 21, 2022 at 12:30 PM Andres Freund <andres@anarazel.de> wrote:\n> This can't quite be right - isn't this only applying the limit if we found a\n> visible tuple?\n\nIt doesn't look that way to me, but perhaps I'm just too dense to see\nthe problem?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 21 Nov 2022 12:37:34 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Damage control for planner's get_actual_variable_endpoint()\n runaway" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> This can't quite be right - isn't this only applying the limit if we found a\n> visible tuple?\n\nWhat it's restricting is the number of heap page fetches, which\nmight be good enough. 
We don't have a lot of visibility here\ninto how many index pages were scanned before returning the next\nnot-dead index entry, so I'm not sure how hard it'd be to do better.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 21 Nov 2022 12:37:59 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Damage control for planner's get_actual_variable_endpoint()\n runaway" }, { "msg_contents": "On Mon, Nov 21, 2022 at 12:38 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > This can't quite be right - isn't this only applying the limit if we found a\n> > visible tuple?\n>\n> What it's restricting is the number of heap page fetches, which\n> might be good enough. We don't have a lot of visibility here\n> into how many index pages were scanned before returning the next\n> not-dead index entry, so I'm not sure how hard it'd be to do better.\n\nOh. That's really sad. Because I think the whole problem here is that\nthe number of dead index entries can be huge.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 21 Nov 2022 12:39:18 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Damage control for planner's get_actual_variable_endpoint()\n runaway" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Nov 21, 2022 at 12:38 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> What it's restricting is the number of heap page fetches, which\n>> might be good enough. We don't have a lot of visibility here\n>> into how many index pages were scanned before returning the next\n>> not-dead index entry, so I'm not sure how hard it'd be to do better.\n\n> Oh. That's really sad. 
Because I think the whole problem here is that\n> the number of dead index entries can be huge.\n\nIf they're *actually* dead, we have enough mitigations in place I think,\nas explained by the long comment in get_actual_variable_endpoint.\nThe problem here is with still-visible-to-somebody tuples. At least,\nJakub's test case sets it up that way.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 21 Nov 2022 12:45:55 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Damage control for planner's get_actual_variable_endpoint()\n runaway" }, { "msg_contents": "Hi,\n\nOn November 21, 2022 9:37:34 AM PST, Robert Haas <robertmhaas@gmail.com> wrote:\n>On Mon, Nov 21, 2022 at 12:30 PM Andres Freund <andres@anarazel.de> wrote:\n>> This can't quite be right - isn't this only applying the limit if we found a\n>> visible tuple?\n>\n>It doesn't look that way to me, but perhaps I'm just too dense to see\n>the problem?\n\nThe earlier version didn't have the issue, but the latest seems to only limit after a visible tuple has been found. Note the continue; when fetching a heap tuple fails.\n\nAndres\n\n-- \nSent from my Android device with K-9 Mail. 
Please excuse my brevity.\n\n\n", "msg_date": "Mon, 21 Nov 2022 10:17:05 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Damage control for planner's get_actual_variable_endpoint()\n runaway" }, { "msg_contents": "On Mon, 21 Nov 2022 at 18:17, Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On November 21, 2022 9:37:34 AM PST, Robert Haas <robertmhaas@gmail.com> wrote:\n> >On Mon, Nov 21, 2022 at 12:30 PM Andres Freund <andres@anarazel.de> wrote:\n> >> This can't quite be right - isn't this only applying the limit if we found a\n> >> visible tuple?\n> >\n> >It doesn't look that way to me, but perhaps I'm just too dense to see\n> >the problem?\n>\n> The earlier version didn't have the issue, but the latest seems to only limit after a visible tuple has been found. Note the continue; when fetching a heap tuple fails.\n\nAgreed, resolved in this version.\n\n\nRobert, something like this perhaps? limit on both the index and the heap.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/", "msg_date": "Mon, 21 Nov 2022 18:44:17 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Damage control for planner's get_actual_variable_endpoint()\n runaway" }, { "msg_contents": "Hi, \n\nOn November 21, 2022 10:44:17 AM PST, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>Robert, something like this perhaps? limit on both the index and the heap.\n\nI don't think we should add additional code / struct members into very common good paths for these limits. \n\nI don't really understand the point of limiting in the index - where would the large number of pages accessed come from?\n\nAndres\n-- \nSent from my Android device with K-9 Mail. 
Please excuse my brevity.\n\n\n", "msg_date": "Mon, 21 Nov 2022 11:09:52 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Damage control for planner's get_actual_variable_endpoint()\n runaway" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On November 21, 2022 10:44:17 AM PST, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>> Robert, something like this perhaps? limit on both the index and the heap.\n\n> I don't think we should add additional code / struct members into very common good paths for these limits. \n\nYeah, I don't like that either: for one thing, it seems completely\nunsafe to back-patch.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 21 Nov 2022 14:15:18 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: Damage control for planner's get_actual_variable_endpoint()\n runaway" }, { "msg_contents": "On Mon, Nov 21, 2022 at 1:17 PM Andres Freund <andres@anarazel.de> wrote:\n> On November 21, 2022 9:37:34 AM PST, Robert Haas <robertmhaas@gmail.com> wrote:\n> >On Mon, Nov 21, 2022 at 12:30 PM Andres Freund <andres@anarazel.de> wrote:\n> >> This can't quite be right - isn't this only applying the limit if we found a\n> >> visible tuple?\n> >\n> >It doesn't look that way to me, but perhaps I'm just too dense to see\n> >the problem?\n>\n> The earlier version didn't have the issue, but the latest seems to only limit after a visible tuple has been found. Note the continue; when fetching a heap tuple fails.\n\nOh, that line was removed in Simon's patch but not in Jakub's version,\nI guess. 
Jakub's version also leaves out the last_block = block line\nwhich seems pretty critical.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 21 Nov 2022 15:55:20 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Damage control for planner's get_actual_variable_endpoint()\n runaway" }, { "msg_contents": "On Mon, Nov 21, 2022 at 2:15 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On November 21, 2022 10:44:17 AM PST, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n> >> Robert, something like this perhaps? limit on both the index and the heap.\n>\n> > I don't think we should add additional code / struct members into very common good paths for these limits.\n>\n> Yeah, I don't like that either: for one thing, it seems completely\n> unsafe to back-patch.\n\nI have mixed feelings about back-patching this. On the one hand, the\nlack of any limit here can have *really* bad consequences. On the\nother hand, it's also possible to imagine someone getting hosed by the\nfix. No matter what limit we enforce, someone could in theory get a\nmuch better estimate by searching just a little further.\n\nI agree that adding members to IndexScanDescData doesn't seem very\nappealing, but I'm still trying to wrap my head around what exposure\nthat creates. In the patches thus far, we're basically counting the\nnumber of times that we get an index tuple that's not on the same page\nas the previous index tuple. That's not quite the same thing as the\nnumber of heap pages, because we might be just ping-ponging back and\nforth between 2 pages and it looks the same. I'm not sure that's a\nproblem, but it's something to note. If the limit is 100 pages, then\nwe'll try to fetch at least 100 index tuples, if they're all on\ndifferent pages, and perhaps as much as two orders of magnitude more,\nif they're all on the same page. 
That doesn't seem too bad, because we\nwon't really be doing 100 times as much work. Following 10,000 index\ntuples that all point at the same 100 heap pages is probably more work\nthan following 100 index tuples that each point to a separate heap\npage, but it's not anywhere near 100x as much work, especially if real\nI/O is involved. All in all, my first reaction is to think that it\nsounds fairly OK.\n\nThe real sticky wicket is that we don't know how dense the index is.\nIn a non-pathological case, we expect to find quite a few index tuples\nin each index page, so if we're fetching 100 heap pages the number of\nindex pages fetched is probably much less, like 1 or 2. And even if\nwe're fetching 10000 index tuples to visit 100 heap pages, they should\nstill be concentrated in a relatively reasonable number of index\npages. But ... what if they're not? Could the index contain a large\nnumber of pages containing just 1 tuple each, or no tuples at all? If\nso, maybe we can read ten bazillion index pages trying to find each\nheap tuple and still end up in trouble.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 21 Nov 2022 16:17:56 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Re: Damage control for planner's get_actual_variable_endpoint()\n runaway" }, { "msg_contents": "Hi,\n\nOn 2022-11-21 16:17:56 -0500, Robert Haas wrote:\n> But ... what if they're not? Could the index contain a large number of\n> pages containing just 1 tuple each, or no tuples at all? 
If so, maybe\n> we can read ten bazillion index pages trying to find each heap tuple\n> and still end up in trouble.\n\nISTM that if you have an index in such a poor condition that a single\nvalue lookup reads thousands of pages inside the index, planner\nestimates taking long is going to be the smallest of your worries...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 21 Nov 2022 13:53:52 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Damage control for planner's get_actual_variable_endpoint()\n runaway" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-11-21 16:17:56 -0500, Robert Haas wrote:\n>> But ... what if they're not? Could the index contain a large number of\n>> pages containing just 1 tuple each, or no tuples at all? If so, maybe\n>> we can read ten bazillion index pages trying to find each heap tuple\n>> and still end up in trouble.\n\n> ISTM that if you have an index in such a poor condition that a single\n> value lookup reads thousands of pages inside the index, planner\n> estimates taking long is going to be the smallest of your worries...\n\nYeah, that sort of situation is going to make any operation on the\nindex slow, not only get_actual_variable_endpoint().\n\nI think we should content ourselves with improving the demonstrated\ncase, which is where we're forced to do a lot of heap fetches due\nto lots of not-all-visible tuples. Whether we can spend a lot of\ntime scanning the index without ever finding a tuple at all seems\nhypothetical. 
Without more evidence of a real problem, I do not\nwish to inject warts as horrid as this one into the index AM API.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 21 Nov 2022 17:15:53 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Damage control for planner's get_actual_variable_endpoint()\n runaway" }, { "msg_contents": "On Mon, Nov 21, 2022 at 5:15 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I think we should content ourselves with improving the demonstrated\n> case, which is where we're forced to do a lot of heap fetches due\n> to lots of not-all-visible tuples. Whether we can spend a lot of\n> time scanning the index without ever finding a tuple at all seems\n> hypothetical. Without more evidence of a real problem, I do not\n> wish to inject warts as horrid as this one into the index AM API.\n\nAll right. I've been bitten by this problem enough that I'm a little\ngun-shy about accepting anything that doesn't feel like a 100%\nsolution, but I admit that the scenario I described does seem a little\nbit far-fetched.\n\nI won't be completely shocked if somebody finds a way to hit it, though.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 21 Nov 2022 17:48:22 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Damage control for planner's get_actual_variable_endpoint()\n runaway" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Nov 21, 2022 at 5:15 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I think we should content ourselves with improving the demonstrated\n>> case, which is where we're forced to do a lot of heap fetches due\n>> to lots of not-all-visible tuples.\n\n> All right. 
I've been bitten by this problem enough that I'm a little\n> gun-shy about accepting anything that doesn't feel like a 100%\n> solution, but I admit that the scenario I described does seem a little\n> bit far-fetched.\n> I won't be completely shocked if somebody finds a way to hit it, though.\n\nWell, if we see a case where the time is indeed spent completely\nwithin the index AM, then we'll have to do something more or less\nlike what Simon sketched. But I don't want to go there without\nevidence that it's a live problem. API warts are really hard to\nget rid of once instituted.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 21 Nov 2022 18:17:52 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Damage control for planner's get_actual_variable_endpoint()\n runaway" }, { "msg_contents": "On Mon, Nov 21, 2022 at 6:17 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> evidence that it's a live problem. API warts are really hard to\n> get rid of once instituted.\n\nYeah, agreed.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 21 Nov 2022 18:34:24 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Damage control for planner's get_actual_variable_endpoint()\n runaway" }, { "msg_contents": "Hi all,\n\napologies the patch was rushed too quickly - my bad. 
I'm attaching a\nfixed one as v0004 (as it is the 4th patch floating around here).\n\n-Jakub Wartak\n\nOn Mon, Nov 21, 2022 at 9:55 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Nov 21, 2022 at 1:17 PM Andres Freund <andres@anarazel.de> wrote:\n> > On November 21, 2022 9:37:34 AM PST, Robert Haas <robertmhaas@gmail.com> wrote:\n> > >On Mon, Nov 21, 2022 at 12:30 PM Andres Freund <andres@anarazel.de> wrote:\n> > >> This can't quite be right - isn't this only applying the limit if we found a\n> > >> visible tuple?\n> > >\n> > >It doesn't look that way to me, but perhaps I'm just too dense to see\n> > >the problem?\n> >\n> > The earlier version didn't have the issue, but the latest seems to only limit after a visible tuple has been found. Note the continue; when fetching a heap tuple fails.\n>\n> Oh, that line was removed in Simon's patch but not in Jakub's version,\n> I guess. Jakub's version also leaves out the last_block = block line\n> which seems pretty critical.\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com", "msg_date": "Tue, 22 Nov 2022 09:03:54 +0100", "msg_from": "Jakub Wartak <jakub.wartak@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Damage control for planner's get_actual_variable_endpoint()\n runaway" }, { "msg_contents": "On Mon, 21 Nov 2022 at 23:34, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Nov 21, 2022 at 6:17 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > evidence that it's a live problem. 
API warts are really hard to\n> > get rid of once instituted.\n>\n> Yeah, agreed.\n\nAgreed, happy not to; that version was just a WIP to see how it might work.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Tue, 22 Nov 2022 11:16:28 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Damage control for planner's get_actual_variable_endpoint()\n runaway" }, { "msg_contents": "On Mon, 21 Nov 2022 at 22:15, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-11-21 16:17:56 -0500, Robert Haas wrote:\n> >> But ... what if they're not? Could the index contain a large number of\n> >> pages containing just 1 tuple each, or no tuples at all? If so, maybe\n> >> we can read ten bazillion index pages trying to find each heap tuple\n> >> and still end up in trouble.\n>\n> > ISTM that if you have an index in such a poor condition that a single\n> > value lookup reads thousands of pages inside the index, planner\n> > estimates taking long is going to be the smallest of your worries...\n>\n> Yeah, that sort of situation is going to make any operation on the\n> index slow, not only get_actual_variable_endpoint().\n\nThat was also my conclusion: this is actually a common antipattern for\nour indexes, not anything specific to the planner.\n\nIn another recent situation, I saw a very bad case of performance for\na \"queue table\". In that use case the rows are inserted at head and\nremoved from tail. Searching for the next item to be removed from the\nqueue involves an increasingly long tail search, in the case that a\nlong running transaction prevents us from marking the index entries\nkilled. Many tables exhibit such usage, for example, the neworder\ntable in TPC-C.\n\nWe optimized the case of frequent insertions into the rightmost index\npage; now we also need to optimize the case of a long series of\ndeletions from the leftmost index pages. 
Not sure how, just framing\nthe problem.\n\n\n--\nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Tue, 22 Nov 2022 14:16:21 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Damage control for planner's get_actual_variable_endpoint()\n runaway" }, { "msg_contents": "On Mon, 21 Nov 2022 at 20:55, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Nov 21, 2022 at 1:17 PM Andres Freund <andres@anarazel.de> wrote:\n> > On November 21, 2022 9:37:34 AM PST, Robert Haas <robertmhaas@gmail.com> wrote:\n> > >On Mon, Nov 21, 2022 at 12:30 PM Andres Freund <andres@anarazel.de> wrote:\n> > >> This can't quite be right - isn't this only applying the limit if we found a\n> > >> visible tuple?\n> > >\n> > >It doesn't look that way to me, but perhaps I'm just too dense to see\n> > >the problem?\n> >\n> > The earlier version didn't have the issue, but the latest seems to only limit after a visible tuple has been found. Note the continue; when fetching a heap tuple fails.\n>\n> Oh, that line was removed in Simon's patch but not in Jakub's version,\n> I guess. Jakub's version also leaves out the last_block = block line\n> which seems pretty critical.\n\nNew patch version reporting for duty, sir. Please take it from here!\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/", "msg_date": "Tue, 22 Nov 2022 16:17:34 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Damage control for planner's get_actual_variable_endpoint()\n runaway" }, { "msg_contents": "Simon Riggs <simon.riggs@enterprisedb.com> writes:\n> New patch version reporting for duty, sir. Please take it from here!\n\nWhy the CHECK_FOR_INTERRUPTS? I'd supposed that there's going to be\none somewhere down inside the index or heap access --- do you have\nreason to think there isn't?\n\nIs it appropriate to count distinct pages, rather than just the\nnumber of times we have to visit a heap tuple? 
That seems to\ncomplicate the logic a good deal, and I'm not sure it's buying\nmuch, especially since (as you noted) it's imprecise anyway.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 22 Nov 2022 11:35:00 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Damage control for planner's get_actual_variable_endpoint()\n runaway" }, { "msg_contents": "On Tue, Nov 22, 2022 at 11:35 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Simon Riggs <simon.riggs@enterprisedb.com> writes:\n> > New patch version reporting for duty, sir. Please take it from here!\n>\n> Why the CHECK_FOR_INTERRUPTS? I'd supposed that there's going to be\n> one somewhere down inside the index or heap access --- do you have\n> reason to think there isn't?\n>\n> Is it appropriate to count distinct pages, rather than just the\n> number of times we have to visit a heap tuple? That seems to\n> complicate the logic a good deal, and I'm not sure it's buying\n> much, especially since (as you noted) it's imprecise anyway.\n\nFWW, the same question also occurred to me. But after mulling it over,\nwhat Simon did seems kinda reasonable to me. Although it's imprecise,\nit will generally cause us to stop sooner if we're bouncing all over\nthe heap and be willing to explore further if we're just hitting the\nsame heap page. 
I feel like that's pretty reasonable behavior.\nStopping early could hurt, so if we know that continuing isn't costing\nmuch, why not?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 22 Nov 2022 13:22:31 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Damage control for planner's get_actual_variable_endpoint()\n runaway" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, Nov 22, 2022 at 11:35 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Is it appropriate to count distinct pages, rather than just the\n>> number of times we have to visit a heap tuple? That seems to\n>> complicate the logic a good deal, and I'm not sure it's buying\n>> much, especially since (as you noted) it's imprecise anyway.\n\n> FWW, the same question also occurred to me. But after mulling it over,\n> what Simon did seems kinda reasonable to me. Although it's imprecise,\n> it will generally cause us to stop sooner if we're bouncing all over\n> the heap and be willing to explore further if we're just hitting the\n> same heap page. I feel like that's pretty reasonable behavior.\n> Stopping early could hurt, so if we know that continuing isn't costing\n> much, why not?\n\nFair I guess --- and I did say that I wanted it to be based on number\nof pages visited not number of tuples. 
So objection withdrawn to that\naspect.\n\nStill wondering if there's really no CHECK_FOR_INTERRUPT anywhere\nelse in this loop.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 22 Nov 2022 13:27:54 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Damage control for planner's get_actual_variable_endpoint()\n runaway" }, { "msg_contents": "I wrote:\n> Still wondering if there's really no CHECK_FOR_INTERRUPT anywhere\n> else in this loop.\n\nI did some experimentation using the test case Jakub presented\nto start with, and verified that that loop does respond promptly\nto control-C even in HEAD. So there are CFI(s) in the loop as\nI thought, and we don't need another.\n\nWhat we do need is some more work on nearby comments. I'll\nsee about that and push it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 22 Nov 2022 13:44:25 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Damage control for planner's get_actual_variable_endpoint()\n runaway" }, { "msg_contents": "On Tue, Nov 22, 2022 at 1:44 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I wrote:\n> > Still wondering if there's really no CHECK_FOR_INTERRUPT anywhere\n> > else in this loop.\n>\n> I did some experimentation using the test case Jakub presented\n> to start with, and verified that that loop does respond promptly\n> to control-C even in HEAD. So there are CFI(s) in the loop as\n> I thought, and we don't need another.\n\nOK. Although an extra CFI isn't such a bad thing, either.\n\n> What we do need is some more work on nearby comments. 
I'll\n> see about that and push it.\n\nGreat!\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 22 Nov 2022 14:02:16 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Damage control for planner's get_actual_variable_endpoint()\n runaway" }, { "msg_contents": "On Tue, 22 Nov 2022 at 18:44, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I wrote:\n> > Still wondering if there's really no CHECK_FOR_INTERRUPT anywhere\n> > else in this loop.\n>\n> I did some experimentation using the test case Jakub presented\n> to start with, and verified that that loop does respond promptly\n> to control-C even in HEAD. So there are CFI(s) in the loop as\n> I thought, and we don't need another.\n\nThanks for checking. Sorry for not responding earlier.\n\n> What we do need is some more work on nearby comments. I'll\n> see about that and push it.\n\nThanks; nicely streamlined.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 24 Nov 2022 16:49:55 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Damage control for planner's get_actual_variable_endpoint()\n runaway" } ]
[ { "msg_contents": "Hi!\n\n\nPREAMBLE\n\nFor a last couple of months, I stumbled into a problem while running tests\non ARM (Debain, aarch64) and some more wired platforms\nfor my 64–bit XIDs patch set. Test contrib/test_decoding\n(catalog_change_snapshot) rarely failed with the next message:\n\nTRAP: FailedAssertion(\"TransactionIdIsNormal(InitialRunningXacts[0]) &&\nTransactionIdIsNormal(builder->xmin)\", File: \"snapbuild.c\"\n\nI have plenty of failing on ARM, couple on x86 and none (if memory serves)\non x86–64.\n\nAt first, my thought was to blame my 64–bit XID patch for what, but this is\nnot the case. This error persist from PG15 to PG10\nwithout any patch applied. Though hard to reproduce.\n\n\nPROBLEM\n\nAfter some investigation, I think, the problem is in the snapbuild.c\n(commit 272248a0c1b1, see [0]). We do allocate InitialRunningXacts\narray in the context of builder->context, but for the time when we call\nSnapBuildPurgeOlderTxn this context may be already free'd. Thus,\nInitialRunningXacts array become array of 2139062143 (i.e. 0x7F7F7F7F)\nvalues. This is not caused buildfarm to fail due to that code:\n\nif (!NormalTransactionIdPrecedes(InitialRunningXacts[0],\n builder->xmin))\n return;\n\nSince the cluster is initialised with XID way less than 0x7F7F7F7F, we get\nto return here, but the problem is still existing.\nI've attached the patch based on branch REL_15_STABLE to reproduce the\nproblem on x86-64.\n\nOn my patch set of 64–bit XID's this problem manifested since I do init\ncluster with XID far beyond 32–bit bound.\n\nAlternatively, I did try to use my patch [1] to init cluster with first\ntransaction 2139062143 (i.e. 
0x7F7F7F7F).\nThen put pg_sleep call just like in the attached patch:\n--- a/src/backend/replication/logical/snapbuild.c\n+++ b/src/backend/replication/logical/snapbuild.c\n@@ -968,6 +968,8 @@ SnapBuildPurgeOlderTxn(SnapBuild *builder)\n if (NInitialRunningXacts == 0)\n return;\n\n+ pg_usleep(1000000L * 2L);\n+\n /* bound check if there is at least one transaction to remove */\n if (!NormalTransactionIdPrecedes(InitialRunningXacts[0],\n\n builder->xmin))\n\nRun installcheck-force for many times for a test_decoding/\ncatalog_change_snapshot's and got a segfault.\n\n\nCONCLUSION\n\nIn snapbuild.c, context allocated array InitialRunningXacts may be free'd,\nthis caused assertion failed (at best) or\nmay lead to the more serious problems.\n\n\nP.S.\n\nSimple fix like:\n@@ -1377,7 +1379,7 @@ SnapBuildFindSnapshot(SnapBuild *builder, XLogRecPtr\nlsn, xl_running_xacts *runn\n * changes. See SnapBuildXidSetCatalogChanges.\n */\n NInitialRunningXacts = nxacts;\n- InitialRunningXacts = MemoryContextAlloc(builder->context,\nsz);\n+ InitialRunningXacts = MemoryContextAlloc(TopMemoryContext,\nsz);\n memcpy(InitialRunningXacts, running->xids, sz);\n qsort(InitialRunningXacts, nxacts, sizeof(TransactionId),\nxidComparator);\n\nseems to solve the described problem, but I'm not in the context of [0] and\nwhy array is allocated in builder->context.\n\n\n[0] https://postgr.es/m/81D0D8B0-E7C4-4999-B616-1E5004DBDCD2%40amazon.com\n[1]\nhttps://www.postgresql.org/message-id/flat/CACG=ezaa4vqYjJ16yoxgrpa-=gXnf0Vv3Ey9bjGrRRFN2YyWFQ@mail.gmail.com\n\n-- \nBest regards,\nMaxim Orlov.", "msg_date": "Mon, 21 Nov 2022 15:47:12 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": true, "msg_subject": "[BUG] FailedAssertion in SnapBuildPurgeOlderTxn" }, { "msg_contents": "Hi,\n\nOn 2022-11-21 15:47:12 +0300, Maxim Orlov wrote:\n> After some investigation, I think, the problem is in the snapbuild.c\n> (commit 272248a0c1b1, see [0]). 
We do allocate InitialRunningXacts\n> array in the context of builder->context, but for the time when we call\n> SnapBuildPurgeOlderTxn this context may be already free'd. Thus,\n> InitialRunningXacts array become array of 2139062143 (i.e. 0x7F7F7F7F)\n> values. This is not caused buildfarm to fail due to that code:\n\nAmit, that does indeed seem to be a problem...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 21 Nov 2022 12:52:02 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [BUG] FailedAssertion in SnapBuildPurgeOlderTxn" }, { "msg_contents": "On Tue, Nov 22, 2022 at 2:22 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2022-11-21 15:47:12 +0300, Maxim Orlov wrote:\n> > After some investigation, I think, the problem is in the snapbuild.c\n> > (commit 272248a0c1b1, see [0]). We do allocate InitialRunningXacts\n> > array in the context of builder->context, but for the time when we call\n> > SnapBuildPurgeOlderTxn this context may be already free'd. Thus,\n> > InitialRunningXacts array become array of 2139062143 (i.e. 0x7F7F7F7F)\n> > values. This is not caused buildfarm to fail due to that code:\n>\n> Amit, that does indeed seem to be a problem...\n>\n\nI'll look into it.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 22 Nov 2022 08:34:23 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] FailedAssertion in SnapBuildPurgeOlderTxn" }, { "msg_contents": "On Mon, Nov 21, 2022 at 6:17 PM Maxim Orlov <orlovmg@gmail.com> wrote:\n>\n> PROBLEM\n>\n> After some investigation, I think, the problem is in the snapbuild.c (commit 272248a0c1b1, see [0]). 
We do allocate InitialRunningXacts\n> array in the context of builder->context, but for the time when we call SnapBuildPurgeOlderTxn this context may be already free'd.\n>\n\nI think you are seeing it freed in SnapBuildPurgeOlderTxn when we\nfinish and restart decoding in the same session. After finishing the\nfirst decoding, it frees the decoding context but we forgot to reset\nNInitialRunningXacts and InitialRunningXacts array. So, next time when\nwe start decoding in the same session where we don't restore any\nserialized snapshot, it can lead to the problem you are seeing because\nNInitialRunningXacts (and InitialRunningXacts array) won't have sane\nvalues.\n\nThis can happen in the catalog_change_snapshot test as we have\nmultiple permutations and those use the same session across a restart\nof decoding.\n\n>\n> Simple fix like:\n> @@ -1377,7 +1379,7 @@ SnapBuildFindSnapshot(SnapBuild *builder, XLogRecPtr lsn, xl_running_xacts *runn\n> * changes. See SnapBuildXidSetCatalogChanges.\n> */\n> NInitialRunningXacts = nxacts;\n> - InitialRunningXacts = MemoryContextAlloc(builder->context, sz);\n> + InitialRunningXacts = MemoryContextAlloc(TopMemoryContext, sz);\n> memcpy(InitialRunningXacts, running->xids, sz);\n> qsort(InitialRunningXacts, nxacts, sizeof(TransactionId), xidComparator);\n>\n> seems to solve the described problem, but I'm not in the context of [0] and why array is allocated in builder->context.\n>\n\nIt will leak the memory for InitialRunningXacts. We need to reset\nNInitialRunningXacts (and InitialRunningXacts) as mentioned above.\n\nThank you for the report and initial analysis. 
I have added Sawada-San\nto know his views as he was the primary author of this work.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 22 Nov 2022 15:06:51 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] FailedAssertion in SnapBuildPurgeOlderTxn" }, { "msg_contents": ">\n> Thank you for the report and initial analysis. I have added Sawada-San\n> to know his views as he was the primary author of this work.\n>\n> --\n> With Regards,\n> Amit Kapila.\n>\n\nOK, thanks a lot. I hope, we'll fix this soon.\n\n-- \nBest regards,\nMaxim Orlov.", "msg_date": "Tue, 22 Nov 2022 12:53:37 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [BUG] FailedAssertion in SnapBuildPurgeOlderTxn" }, { "msg_contents": "Hi,\n\nOn Tue, Nov 22, 2022 at 6:37 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Nov 21, 2022 at 6:17 PM Maxim Orlov <orlovmg@gmail.com> wrote:\n> >\n> > PROBLEM\n> >\n> > After some investigation, I think, the problem is in the snapbuild.c (commit 272248a0c1b1, see [0]). We do allocate InitialRunningXacts\n> > array in the context of builder->context, but for the time when we call SnapBuildPurgeOlderTxn this context may be already free'd.\n> >\n>\n> I think you are seeing it freed in SnapBuildPurgeOlderTxn when we\n> finish and restart decoding in the same session. After finishing the\n> first decoding, it frees the decoding context but we forgot to reset\n> NInitialRunningXacts and InitialRunningXacts array.
So, next time when\n> we start decoding in the same session where we don't restore any\n> serialized snapshot, it can lead to the problem you are seeing because\n> NInitialRunningXacts (and InitialRunningXacts array) won't have sane\n> values.\n>\n> This can happen in the catalog_change_snapshot test as we have\n> multiple permutations and those use the same session across a restart\n> of decoding.\n\nI have the same analysis. In order to restart the decoding from the\nLSN where we don't restore any serialized snapshot, we somehow need to\nadvance the slot's restart_lsn. In this case, I think it happened\nsince the same session drops at the end of the first scenario and\ncreates the replication slot with the same name at the beginning of\nthe second scenario in catalog_change_snapshot.spec.\n\n>\n> >\n> > Simple fix like:\n> > @@ -1377,7 +1379,7 @@ SnapBuildFindSnapshot(SnapBuild *builder, XLogRecPtr lsn, xl_running_xacts *runn\n> > * changes. See SnapBuildXidSetCatalogChanges.\n> > */\n> > NInitialRunningXacts = nxacts;\n> > - InitialRunningXacts = MemoryContextAlloc(builder->context, sz);\n> > + InitialRunningXacts = MemoryContextAlloc(TopMemoryContext, sz);\n> > memcpy(InitialRunningXacts, running->xids, sz);\n> > qsort(InitialRunningXacts, nxacts, sizeof(TransactionId), xidComparator);\n> >\n> > seems to solve the described problem, but I'm not in the context of [0] and why array is allocated in builder->context.\n> >\n>\n> It will leak the memory for InitialRunningXacts. We need to reset\n> NInitialRunningXacts (and InitialRunningXacts) as mentioned above.\n>\n> Thank you for the report and initial analysis. I have added Sawada-San\n> to know his views as he was the primary author of this work.\n\nThanks!\n\nI've attached a draft patch. To fix it, I think we can reset\nInitialRunningXacts and NInitialRunningXacts at FreeSnapshotBuilder()\nand add an assertion in AllocateSnapshotBuilder() to make sure both\nare reset. 
Regarding the tests, the patch includes a new scenario to\nreproduce this issue. However, since the issue can be reproduced also\nby the existing scenario (with low probability, though), I'm not sure\nit's worth adding the new scenario.\n\nI've not checked if the patch works for version 14 or older yet.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 22 Nov 2022 21:34:07 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] FailedAssertion in SnapBuildPurgeOlderTxn" }, { "msg_contents": "> I've attached a draft patch. To fix it, I think we can reset\n> InitialRunningXacts and NInitialRunningXacts at FreeSnapshotBuilder()\n> and add an assertion in AllocateSnapshotBuilder() to make sure both\n> are reset.\n>\nThanks for the patch. It works fine. I've tested this patch for 15 and 11\nversions on x86_64 and ARM\nand see no fails. But the function pg_current_xact_id added by 4c04be9b05ad\ndoesn't exist in PG11.\n\n\n> Regarding the tests, the patch includes a new scenario to\n> reproduce this issue. However, since the issue can be reproduced also\n> by the existing scenario (with low probability, though), I'm not sure\n> it's worth adding the new scenario.\n>\nAFAICS, the test added doesn't 100% reproduce this issue, so, maybe, it\ndoes not worth it. But, I do not have a strong opinion here.\nLet's add tests in a separate commit and let the actual committer to decide\nwhat to do, should we?\n\n-- \nBest regards,\nMaxim Orlov.", "msg_date": "Tue, 22 Nov 2022 20:02:52 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [BUG] FailedAssertion in SnapBuildPurgeOlderTxn" }, { "msg_contents": "On Tue, Nov 22, 2022 at 10:33 PM Maxim Orlov <orlovmg@gmail.com> wrote:\n>>\n>>\n>> Regarding the tests, the patch includes a new scenario to\n>> reproduce this issue. However, since the issue can be reproduced also\n>> by the existing scenario (with low probability, though), I'm not sure\n>> it's worth adding the new scenario.\n>\n> AFAICS, the test added doesn't 100% reproduce this issue, so, maybe, it does not worth it. But, I do not have a strong opinion here.\n> Let's add tests in a separate commit and let the actual committer to decide what to do, should we?\n>\n\n+1 to not have a test for this as the scenario can already be tested\nby the existing set of tests.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 23 Nov 2022 08:29:52 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] FailedAssertion in SnapBuildPurgeOlderTxn" }, { "msg_contents": "On Wed, Nov 23, 2022 at 12:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Nov 22, 2022 at 10:33 PM Maxim Orlov <orlovmg@gmail.com> wrote:\n> >>\n> >>\n> >> Regarding the tests, the patch includes a new scenario to\n> >> reproduce this issue.
However, since the issue can be reproduced also\n> >> by the existing scenario (with low probability, though), I'm not sure\n> >> it's worth adding the new scenario.\n> >\n> > AFAICS, the test added doesn't 100% reproduce this issue, so, maybe, it does not worth it. But, I do not have a strong opinion here.\n> > Let's add tests in a separate commit and let the actual committer to decide what to do, should we?\n> >\n>\n> +1 to not have a test for this as the scenario can already be tested\n> by the existing set of tests.\n\nAgreed not to have a test case for this.\n\nI've attached an updated patch. I've confirmed this patch works for\nall supported branches.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 24 Nov 2022 17:17:41 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] FailedAssertion in SnapBuildPurgeOlderTxn" }, { "msg_contents": ">\n> Agreed not to have a test case for this.\n>\n> I've attached an updated patch. I've confirmed this patch works for\n> all supported branches.\n>\n> Regards,\n>\n> --\n> Masahiko Sawada\n> Amazon Web Services: https://aws.amazon.com\n>\nIt works for me as well. Thanks!\n\nI've created a commitfest entry for this patch, see\nhttps://commitfest.postgresql.org/41/4024/\nHope, it will be committed soon.\n\n-- \nBest regards,\nMaxim Orlov.", "msg_date": "Thu, 24 Nov 2022 13:27:16 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [BUG] FailedAssertion in SnapBuildPurgeOlderTxn" }, { "msg_contents": "On Thu, Nov 24, 2022 at 1:48 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Nov 23, 2022 at 12:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> Agreed not to have a test case for this.\n>\n> I've attached an updated patch. I've confirmed this patch works for\n> all supported branches.\n>\n\nI have slightly changed the checks used in the patch, otherwise looks\ngood to me.
I am planning to push (v11-v15) the attached tomorrow\n> unless there are more comments.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 25 Nov 2022 12:10:20 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] FailedAssertion in SnapBuildPurgeOlderTxn" }, { "msg_contents": "On Fri, 25 Nov 2022 at 09:40, Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Thu, Nov 24, 2022 at 4:43 PM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> >\n> > On Thu, Nov 24, 2022 at 1:48 PM Masahiko Sawada <sawada.mshk@gmail.com>\n> wrote:\n> > >\n> > > On Wed, Nov 23, 2022 at 12:00 PM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> > >\n> > > Agreed not to have a test case for this.\n> > >\n> > > I've attached an updated patch. I've confirmed this patch works for\n> > > all supported branches.\n> > >\n> >\n> > I have slightly changed the checks used in the patch, otherwise looks\n> > good to me. I am planning to push (v11-v15) the attached tomorrow\n> > unless there are more comments.\n> >\n>\n> Pushed.\n>\nA big thanks to you! Could you also, close the commitfest entry\nhttps://commitfest.postgresql.org/41/4024/, please?\n\n-- \nBest regards,\nMaxim Orlov.", "msg_date": "Fri, 25 Nov 2022 11:58:34 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [BUG] FailedAssertion in SnapBuildPurgeOlderTxn" }, { "msg_contents": "On Fri, Nov 25, 2022 at 5:58 PM Maxim Orlov <orlovmg@gmail.com> wrote:\n>\n>\n>\n> On Fri, 25 Nov 2022 at 09:40, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> On Thu, Nov 24, 2022 at 4:43 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> >\n>> > On Thu, Nov 24, 2022 at 1:48 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>> > >\n>> > > On Wed, Nov 23, 2022 at 12:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> > >\n>> > > Agreed not to have a test case for this.\n>> > >\n>> > > I've attached an updated patch. I've confirmed this patch works for\n>> > > all supported branches.\n>> > >\n>> >\n>> > I have slightly changed the checks used in the patch, otherwise looks\n>> > good to me. I am planning to push (v11-v15) the attached tomorrow\n>> > unless there are more comments.\n>> >\n>>\n>> Pushed.\n>\n> A big thanks to you! Could you also, close the commitfest entry https://commitfest.postgresql.org/41/4024/, please?\n\nClosed.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 28 Nov 2022 10:13:28 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] FailedAssertion in SnapBuildPurgeOlderTxn" } ]
[ { "msg_contents": "Hi everyone,\n\nI have made a patch that introduces support for libpq binary protocol\nin postgres_fdw. The idea is simple, when a user knows that the foreign\nserver is binary compatible with the local and his workload could\nsomehow benefit from using binary protocol, it can be switched on for a\nparticular server or even a particular table. \n\nThe patch adds a new foreign server and table option 'binary_format'\n(by default off) and implements serialization/deserialization of query\nresults and parameters for binary protocol. I have tested the patch by\nswitching foreign servers in postgres_fdw.sql tests to binary_mode, the\nonly diff was in the text of the error for parsing an invalid integer\nvalue, so it worked as expected for the test. There are a few minor\nissues I don't like in the code and I am yet to write the tests and\ndocs for it. It would be great to get some feedback and understand,\nwhether this is a welcome feature, before proceeding with all of the\nabovementioned.\n\nThanks,\nIlya Gladyshev", "msg_date": "Mon, 21 Nov 2022 19:20:06 +0400", "msg_from": "Ilya Gladyshev <ilya.v.gladyshev@gmail.com>", "msg_from_op": true, "msg_subject": "postgres_fdw binary protocol support" }, { "msg_contents": "Hi Illya,\n\nOn Mon, Nov 21, 2022 at 8:50 PM Ilya Gladyshev\n<ilya.v.gladyshev@gmail.com> wrote:\n>\n> Hi everyone,\n>\n> I have made a patch that introduces support for libpq binary protocol\n> in postgres_fdw. The idea is simple, when a user knows that the foreign\n> server is binary compatible with the local and his workload could\n> somehow benefit from using binary protocol, it can be switched on for a\n> particular server or even a particular table.\n>\n\nWhy do we need this feature? If it's for performance then do we have\nperformance numbers?\n\nAFAIU, binary compatibility of two postgresql servers depends upon the\nbinary compatibility of the platforms on which they run. 
So probably\npostgres_fdw can not infer the binary compatibility by itself. Is that\ntrue? We have many postgres_fdw options that user needs to set\nmanually to benefit from them. It will be good to infer those\nautomatically as much as possible. Hence this question.\n\n> The patch adds a new foreign server and table option 'binary_format'\n> (by default off) and implements serialization/deserialization of query\n> results and parameters for binary protocol. I have tested the patch by\n> switching foreign servers in postgres_fdw.sql tests to binary_mode, the\n> only diff was in the text of the error for parsing an invalid integer\n> value, so it worked as expected for the test. There are a few minor\n> issues I don't like in the code and I am yet to write the tests and\n> docs for it. It would be great to get some feedback and understand,\n> whether this is a welcome feature, before proceeding with all of the\n> abovementioned.\n>\n\nAbout the patch itself, I see a lot of if (binary) {} else {} block\nwhich are repeated. It will be good if we can add functions/macros to\navoid duplication.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Tue, 22 Nov 2022 18:40:01 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw binary protocol support" }, { "msg_contents": "On Tue, 22 Nov 2022 at 08:17, Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> AFAIU, binary compatibility of two postgresql servers depends upon the\n> binary compatibility of the platforms on which they run.\n\nNo, libpq binary mode is not architecture-specific. I think you're\nthinking of on-disk binary compatibility. But libpq binary mode is\njust a binary network representation of the data instead of an ascii\nrepresentation. 
It should be faster and more efficient but it still\ngoes through binary input/output functions (which aren't named\ninput/output)\n\nI actually wonder if having this would be a good way to get some code\ncoverage of the binary input/output functions which I suspect is sadly\nlacking now. It wouldn't necessarily test that they're doing what\nthey're supposed to... but at least they would be getting run which I\ndon't think they are currently?\n\n-- \ngreg\n\n\n", "msg_date": "Wed, 23 Nov 2022 14:23:45 -0500", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw binary protocol support" }, { "msg_contents": "\n\n> On 22 Nov 2022, at 17:10, Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> wrote:\n> \n> Hi Illya,\n> \n> On Mon, Nov 21, 2022 at 8:50 PM Ilya Gladyshev\n> <ilya.v.gladyshev@gmail.com> wrote:\n>> \n>> Hi everyone,\n>> \n>> I have made a patch that introduces support for libpq binary protocol\n>> in postgres_fdw. The idea is simple, when a user knows that the foreign\n>> server is binary compatible with the local and his workload could\n>> somehow benefit from using binary protocol, it can be switched on for a\n>> particular server or even a particular table.\n>> \n> \n> Why do we need this feature? If it's for performance then do we have\n> performance numbers?\nYes, it is for performance, but I am yet to do the benchmarks. My initial idea was that binary protocol must be more efficient than text, because as I understand that’s the whole point of it. However, the minor tests that I have done do not prove this and I couldn’t find any benchmarks for it online, so I will do further tests to find a use case for it.\n> About the patch itself, I see a lot of if (binary) {} else {} block\n> which are repeated.
It will be good if we can add functions/macros to\n> avoid duplication.\nYea, that’s true, I have some ideas about improving it\n\n", "msg_date": "Thu, 24 Nov 2022 16:15:56 +0400", "msg_from": "Ilya Gladyshev <ilya.v.gladyshev@gmail.com>", "msg_from_op": true, "msg_subject": "Re: postgres_fdw binary protocol support" } ]
[ { "msg_contents": "Prevent instability in contrib/pageinspect's regression test.\n\npageinspect has occasionally failed on slow buildfarm members,\nwith symptoms indicating that the expected effects of VACUUM\nFREEZE didn't happen. This is presumably because a background\ntransaction such as auto-analyze was holding back global xmin.\n\nWe can work around that by using a temp table in the test.\nSince commit a7212be8b, that will use an up-to-date cutoff xmin\nregardless of other processes. And pageinspect itself shouldn't\nreally care whether the table is temp.\n\nBack-patch to v14. There would be no point in older branches\nwithout back-patching a7212be8b, which seems like more trouble\nthan the problem is worth.\n\nDiscussion: https://postgr.es/m/2892135.1668976646@sss.pgh.pa.us\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/e2933a6e11791191050cd925d52d34e785eece77\n\nModified Files\n--------------\ncontrib/pageinspect/expected/page.out | 3 ++-\ncontrib/pageinspect/sql/page.sql | 3 ++-\n2 files changed, 4 insertions(+), 2 deletions(-)", "msg_date": "Mon, 21 Nov 2022 15:51:05 +0000", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "pgsql: Prevent instability in contrib/pageinspect's regression test." }, { "msg_contents": "Hi,\n\nOn 2022-11-21 15:51:05 +0000, Tom Lane wrote:\n> Prevent instability in contrib/pageinspect's regression test.\n> \n> pageinspect has occasionally failed on slow buildfarm members,\n> with symptoms indicating that the expected effects of VACUUM\n> FREEZE didn't happen. This is presumably because a background\n> transaction such as auto-analyze was holding back global xmin.\n> \n> We can work around that by using a temp table in the test.\n> Since commit a7212be8b, that will use an up-to-date cutoff xmin\n> regardless of other processes. And pageinspect itself shouldn't\n> really care whether the table is temp.\n> \n> Back-patch to v14. 
There would be no point in older branches\n> without back-patching a7212be8b, which seems like more trouble\n> than the problem is worth.\n\nLooks like a chunk of the buildfarm doesn't like this - presumably because\nthey use force_parallel_mode = regress. Seems ok to just force that to off in\nthis test?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 21 Nov 2022 09:22:06 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pgsql: Prevent instability in contrib/pageinspect's regression\n test." }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Looks like a chunk of the buildfarm doesn't like this - presumably because\n> they use force_parallel_mode = regress. Seems ok to just force that to off in\n> this test?\n\nUgh ... didn't occur to me to try that. I'll take a look.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 21 Nov 2022 12:24:17 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pgsql: Prevent instability in contrib/pageinspect's regression\n test." }, { "msg_contents": "I wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> Looks like a chunk of the buildfarm doesn't like this - presumably because\n>> they use force_parallel_mode = regress. Seems ok to just force that to off in\n>> this test?\n\n> Ugh ... didn't occur to me to try that. I'll take a look.\n\nHmm, so the problem is:\n\nSELECT octet_length(get_raw_page('test1', 'main', 0)) AS main_0;\nERROR: cannot access temporary tables during a parallel operation\n\nWhy in the world is get_raw_page() marked as parallel safe?\nIt clearly isn't, given this restriction.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 21 Nov 2022 12:35:15 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pgsql: Prevent instability in contrib/pageinspect's regression\n test." 
}, { "msg_contents": "On Mon, Nov 21, 2022 at 12:35 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> >> Looks like a chunk of the buildfarm doesn't like this - presumably because\n> >> they use force_parallel_mode = regress. Seems ok to just force that to off in\n> >> this test?\n>\n> > Ugh ... didn't occur to me to try that. I'll take a look.\n>\n> Hmm, so the problem is:\n>\n> SELECT octet_length(get_raw_page('test1', 'main', 0)) AS main_0;\n> ERROR: cannot access temporary tables during a parallel operation\n>\n> Why in the world is get_raw_page() marked as parallel safe?\n> It clearly isn't, given this restriction.\n\nI suspect that restriction was overlooked when evaluating the marking\nof this function.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 21 Nov 2022 12:52:01 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Prevent instability in contrib/pageinspect's regression\n test." }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Nov 21, 2022 at 12:35 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Hmm, so the problem is:\n>> \n>> SELECT octet_length(get_raw_page('test1', 'main', 0)) AS main_0;\n>> ERROR: cannot access temporary tables during a parallel operation\n>> \n>> Why in the world is get_raw_page() marked as parallel safe?\n>> It clearly isn't, given this restriction.\n\n> I suspect that restriction was overlooked when evaluating the marking\n> of this function.\n\nSo it would seem. PARALLEL RESTRICTED should work, though.\n\nI'll check to see if any sibling functions have the same issue,\nand push a patch to adjust them.\n\nPresumably the parallel labeling has to be fixed as far back as\nit's marked that way (didn't look). 
Maybe we should push the\ntest change further back too, just to exercise this.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 21 Nov 2022 13:08:58 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pgsql: Prevent instability in contrib/pageinspect's regression\n test." }, { "msg_contents": "I wrote:\n> I'll check to see if any sibling functions have the same issue,\n> and push a patch to adjust them.\n> Presumably the parallel labeling has to be fixed as far back as\n> it's marked that way (didn't look). Maybe we should push the\n> test change further back too, just to exercise this.\n\nHmm, so this is easy enough to fix in HEAD and v15, as attached.\nHowever, there's a problem in older branches: their newest\nversions of pageinspect are older than 1.10, so this doesn't\nwork as-is.\n\nWe could imagine inventing versions like 1.9.1, and providing\na script pageinspect--1.9--1.9.1.sql to do what's done here\nas well as (in later branches) pageinspect--1.9.1--1.10.sql that\nduplicates pageinspect--1.9--1.10.sql, and then the same again for\n1.8 and 1.7 for the older in-support branches. That seems like\nan awful lot of trouble for something that there have been no\nfield complaints about.\n\nI'm inclined to apply this in HEAD and v15, and revert the\ntest change in v14, and call it good.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 21 Nov 2022 14:12:49 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pgsql: Prevent instability in contrib/pageinspect's regression\n test." }, { "msg_contents": "Hi,\n\nOn 2022-11-21 12:52:01 -0500, Robert Haas wrote:\n> On Mon, Nov 21, 2022 at 12:35 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I wrote:\n> > > Andres Freund <andres@anarazel.de> writes:\n> > >> Looks like a chunk of the buildfarm doesn't like this - presumably because\n> > >> they use force_parallel_mode = regress. 
Seems ok to just force that to off in\n> > >> this test?\n> >\n> > > Ugh ... didn't occur to me to try that. I'll take a look.\n> >\n> > Hmm, so the problem is:\n> >\n> > SELECT octet_length(get_raw_page('test1', 'main', 0)) AS main_0;\n> > ERROR: cannot access temporary tables during a parallel operation\n> >\n> > Why in the world is get_raw_page() marked as parallel safe?\n> > It clearly isn't, given this restriction.\n> \n> I suspect that restriction was overlooked when evaluating the marking\n> of this function.\n\nIt's somewhat sad to add this restriction - I've used get_raw_page() (+\nother functions) to scan a whole database for a bug. IIRC that actually\ndid end up using parallelism, albeit likely not very efficiently.\n\nDon't really have a better idea though.\n\nIt may be worth inventing a framework where a function could analyze its\narguments (presumably via prosupport) to determine the degree of\nparallel safety, but this doesn't seem sufficient reason...\n\nGreetings\n\nAndres Freund\n\n\n", "msg_date": "Mon, 21 Nov 2022 11:56:28 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pgsql: Prevent instability in contrib/pageinspect's regression\n test." }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-11-21 12:52:01 -0500, Robert Haas wrote:\n>> On Mon, Nov 21, 2022 at 12:35 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> Why in the world is get_raw_page() marked as parallel safe?\n>>> It clearly isn't, given this restriction.\n\n> It's somewhat sad to add this restriction - I've used get_raw_page() (+\n> other functions) to scan a whole database for a bug. 
IIRC that actually\n> did end up using parallelism, albeit likely not very efficiently.\n> Don't really have a better idea though.\n\nMe either.\n\n> It may be worth inventing a framework where a function could analyze its\n> arguments (presumably via prosupport) to determine the degree of\n> parallel safety, but this doesn't seem sufficient reason...\n\nMaybe, but in this example you could only decide you were parallel\nsafe if the argument is an OID constant, which'd be pretty limiting.\n\nIf I were trying to find a better fix I'd be looking for ways for\nparallel workers to be able to read the parent's temp tables.\n(Perhaps that could tie in with the blue-sky discussion we had\nthe other day about allowing autovacuum on temp tables??)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 21 Nov 2022 15:12:15 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pgsql: Prevent instability in contrib/pageinspect's regression\n test." }, { "msg_contents": "On Mon, Nov 21, 2022 at 1:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-11-21 12:52:01 -0500, Robert Haas wrote:\n> >> On Mon, Nov 21, 2022 at 12:35 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >>> Why in the world is get_raw_page() marked as parallel safe?\n> >>> It clearly isn't, given this restriction.\n>\n> > It's somewhat sad to add this restriction - I've used get_raw_page() (+\n> > other functions) to scan a whole database for a bug. 
IIRC that actually\n> > did end up using parallelism, albeit likely not very efficiently.\n> > Don't really have a better idea though.\n>\n> Me either.\n>\n> > It may be worth inventing a framework where a function could analyze its\n> > arguments (presumably via prosupport) to determine the degree of\n> > parallel safety, but this doesn't seem sufficient reason...\n>\n> Maybe, but in this example you could only decide you were parallel\n> safe if the argument is an OID constant, which'd be pretty limiting.\n>\n> If I were trying to find a better fix I'd be looking for ways for\n> parallel workers to be able to read the parent's temp tables.\n> (Perhaps that could tie in with the blue-sky discussion we had\n> the other day about allowing autovacuum on temp tables??)\n>\n>\nI don't suppose we want to just document the fact that these power-user\nnon-core functions are unable to process temporary tables safely without\nfirst disabling parallelism for the session.\n\nDavid J.\n", "msg_date": "Mon, 21 Nov 2022 13:16:19 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Prevent instability in contrib/pageinspect's regression\n test." },
{ "msg_contents": "Hi,\n\nOn 2022-11-21 15:12:15 -0500, Tom Lane wrote:\n> If I were trying to find a better fix I'd be looking for ways for\n> parallel workers to be able to read the parent's temp tables.\n> (Perhaps that could tie in with the blue-sky discussion we had\n> the other day about allowing autovacuum on temp tables??)\n\nThat'd be a nontrivial change, because we explicitly don't use any\nlocking for anything relating to localbuf.c. 
One possible benefit could\nbe that we could substantially reduce the code duplication between\n\"normal\" bufmgr.c and localbuf.c.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 21 Nov 2022 12:18:18 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pgsql: Prevent instability in contrib/pageinspect's regression\n test." },
{ "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-11-21 15:12:15 -0500, Tom Lane wrote:\n>> If I were trying to find a better fix I'd be looking for ways for\n>> parallel workers to be able to read the parent's temp tables.\n>> (Perhaps that could tie in with the blue-sky discussion we had\n>> the other day about allowing autovacuum on temp tables??)\n\n> That'd be a nontrivial change, because we explicitly don't use any\n> locking for anything relating to localbuf.c. One possible benefit could\n> be that we could substantially reduce the code duplication between\n> \"normal\" bufmgr.c and localbuf.c.\n\nI didn't say this was easy ;-). Aside from locking, the local buffers\nare inside the process's address space and not accessible from outside.\nMaybe they could be mapped into a shared memory region instead?\nAnd there are optimizations like commit a7212be8b that depend on the\nassumption that nothing else is accessing our process's temp tables.\nThat'd need a lot of thought, if we don't want to give up all the\nperformance benefits of temp tables.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 21 Nov 2022 15:33:51 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pgsql: Prevent instability in contrib/pageinspect's regression\n test." },
{ "msg_contents": "On Mon, 21 Nov 2022 at 15:01, Andres Freund <andres@anarazel.de> wrote:\n>\n> It's somewhat sad to add this restriction - I've used get_raw_page() (+\n> other functions) to scan a whole database for a bug. 
IIRC that actually\n> did end up using parallelism, albeit likely not very efficiently.\n>\n> Don't really have a better idea though.\n\nGiven how specific the use case is here a simple solution would be to\njust have a dedicated get_raw_temp_page() and restrict get_raw_page()\nto persistent tables.\n\nI suppose slightly gilding it would be to make a get_raw_page_temp()\nand get_raw_page_persistent() and then you could have get_raw_page()\ncall the appropriate one. They would be parallel restricted except\nfor get_raw_page_persistent() and if you explicitly called it you\ncould get parallel scans otherwise you wouldn't.\n\n-- \ngreg\n\n\n", "msg_date": "Wed, 23 Nov 2022 11:58:41 -0500", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: pgsql: Prevent instability in contrib/pageinspect's regression\n test." } ]
[ { "msg_contents": "Hello,\n\nThe operator `IS NULL` doesn't work if the argument has unknown type.\nIn psycopg 3:\n\n >>> conn.execute(\"select %s is null\", ['foo']).fetchone()\n IndeterminateDatatype: could not determine data type of parameter $1\n\nThis can get in the way of using the unknown type for strings (but\nspecifying the text oid for strings is worse, because there is no\nimplicit cast from string to most types).\n\nIt doesn't seem necessary to specify a type for an argument if it only\nhas to be compared with null: nullness is independent from the type\nand is even specified, in the query parameters, in a separate array\nfrom the parameter values.\n\nMaybe this behaviour can be relaxed?\n\nCheers\n\n-- Daniele\n\n\n", "msg_date": "Mon, 21 Nov 2022 20:47:54 +0100", "msg_from": "Daniele Varrazzo <daniele.varrazzo@gmail.com>", "msg_from_op": true, "msg_subject": "$1 IS NULL with unknown type" },
{ "msg_contents": "Daniele Varrazzo <daniele.varrazzo@gmail.com> writes:\n> The operator `IS NULL` doesn't work if the argument has unknown type.\n> conn.execute(\"select %s is null\", ['foo']).fetchone()\n> IndeterminateDatatype: could not determine data type of parameter $1\n\nYeah.\n\n> It doesn't seem necessary to specify a type for an argument if it only\n> has to be compared with null: nullness is independent from the type\n> and is even specified, in the query parameters, in a separate array\n> from the parameter values.\n\nTrue, the IS NULL operator itself doesn't care about the data type,\nbut that doesn't mean that noplace else in the system does.\n\nAs an example, if we silently resolved the type as \"text\" as you seem\nto wish, and then the client sends a non-null integer in binary format,\nwe'll likely end up throwing a bad-encoding error.\n\nMaybe that's not a big problem, I dunno. 
But we can't just not\nresolve the type, we have to pick something.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 21 Nov 2022 15:02:45 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: $1 IS NULL with unknown type" } ]
[ { "msg_contents": "The CREATEROLE permission is in a very bad spot right now. The biggest\nproblem that I know about is that it allows you to trivially access\nthe OS user account under which PostgreSQL is running, which is\nexpected behavior for a superuser but simply wrong behavior for any\nother user. This is because CREATEROLE conveys powerful capabilities\nnot only to create roles but also to manipulate them in various ways,\nincluding granting any non-superuser role in the system to any new or\nexisting user, including themselves. Since v11, the roles that can be\ngranted include pg_execute_server_program and pg_write_server_files\nwhich are trivially exploitable. Perhaps this should have been treated\nas an urgent security issue and a fix back-patched, although it is not\nclear to me exactly what such a fix would look like. Since we haven't\ndone that, I went looking for a way to improve things in a principled\nway going forward, taking advantage also of recent master-only work to\nimprove various aspects of the role grant system.\n\nHere, I feel it important to point out that I think the current system\nwould be broken even if we didn't have predefined roles that are\ntrivially exploitable to obtain OS user access. We would still lack\nany way to restrict the scope of the CREATEROLE privilege. Sure, the\nprivilege doesn't extend to superusers, but that's not really good\nenough. Consider:\n\nrhaas=# create role alice createrole;\nCREATE ROLE\nrhaas=# create role bob password 'known_only_to_bob';\nCREATE ROLE\nrhaas=# set session authorization alice;\nSET\nrhaas=> alter role bob password 'known_to_alice';\nALTER ROLE\n\nAssuming that some form of password authentication is supported, alice\nis basically empowered to break into any non-superuser account on the\nsystem and assume all of its privileges. 
That's really not cool: it's\nOK, I think, to give a non-superuser the right to change somebody\nelse's passwords, but it should be possible to limit it in some way,\ne.g. to the users that alice creates. Also, while the ability to make\nthis sort of change seems to be the clear intention of the code, it's\nnot documented on the CREATE ROLE page. The problems with\npg_execute_server_program et. al. are not documented either; all it\nsays is that you should \"regard roles that have the CREATEROLE\nprivilege as almost-superuser-roles,\" which seems to me to be\nunderstating the extent of the problem.\n\nI have drafted a few patches to try to improve the situation. It seems\nto me that the root of any fix in this area must be to change the rule\nthat CREATEROLE can administer any role whatsoever. Instead, I propose\nto change things so that you can only administer roles for which you\nhave ADMIN OPTION. Administering a role here includes changing the\npassword for a role, renaming a role, dropping a role, setting the\ncomment or security label on a role, or granting membership in that\nrole to another role, whether at role creation time or later. All of\nthese options are treated in essentially two places in the code, so it\nmakes sense to handle them all in a symmetric way. One problem with\nthis proposal is that, if we did exactly this much, then a CREATEROLE\nuser wouldn't be able to administer the roles which they themselves\nhad just created. That seems like it would be restricting the\nprivileges of CREATEROLE users too much.\n\nTo fix that, I propose when a non-superuser creates a role, the role\nbe implicitly granted back to the creator WITH ADMIN OPTION. This\narguably doesn't add any fundamentally new capability because the\nCREATEROLE user could do something like \"CREATE ROLE some_new_role\nADMIN myself\" anyway, but that's awkward to remember and doing it\nautomatically seems a lot more convenient. However, there's a little\nbit of trickiness here, too. 
Granting the new role back to the creator\nmight, depending on whether the INHERIT or SET flags are true or false\nfor the new grant, allow the CREATEROLE user to inherit the privileges\nof, or set role to, the target role, which under current rules would\nnot be allowed. We can minimize behavior changes from the status quo\nby setting up the new, implicit grant with SET FALSE, INHERIT FALSE.\n\nHowever, that might not be what everyone wants. It's definitely not\nwhat *I* want. I want a way for non-superusers to create new roles and\nautomatically inherit the privileges of those roles just as a\nsuperuser automatically inherits everyone's privileges. I just don't\nwant the users who can do this to also be able to break out to the OS\nas if they were superusers when they're not actually supposed to be.\nHowever, it's clear from previous discussion that other people do NOT\nwant that, so I propose to make it configurable. Thus, this patch\nseries also proposes to add INHERITCREATEDROLES and SETCREATEDROLES\nproperties to roles. These have no meaning if the role is not marked\nCREATEROLE. If it is, then they control the properties of the implicit\ngrant that happens when a CREATEROLE user who is not a superuser\ncreates a role. If INHERITCREATEDROLES is set, then the implicit grant\nback to the creator is WITH INHERIT TRUE, else it's WITH INHERIT\nFALSE; likewise, SETCREATEDROLES affects whether the implicit grant is\nWITH SET TRUE or WITH SET FALSE.\n\nI'm curious to hear what other people think of these proposals, but\nlet me first say what I think about them. First, I think it's clear\nthat we need to do something, because things right now are pretty\nbadly broken and in a way that affects security. Although these\npatches are not back-patchable, they at least promise to improve\nthings as older versions go out of use. 
Second, it's possible that we\nshould look for back-patchable fixes here, but I can't really see that\nwe're going to come up with anything much better than just telling\npeople not to use this feature against older releases, because\nback-patching catalog changes or dramatic behavior changes seems like\na non-starter. In other words, I think this is going to be a\nmaster-only fix. Third, someone could well have a better or just\ndifferent idea how to fix the problems in this area than what I'm\nproposing here. This is the best that I've been able to come up with\nso far, but that's not to say it's free of problems or that no\nimprovements are possible.\n\nFinally, I think that whatever we do about the code, the documentation\nneeds quite a bit of work, because the code is doing a lot of stuff\nthat is security-critical and entirely non-obvious from the\ndocumentation. I have not in this version of these patches included\nany documentation changes and the regression test changes that I have\nincluded are quite minimal. That all needs to be fixed up before there\ncould be any thought of moving forward with these patches. However, I\nthought it best to get rough patches and an outline of the proposed\ndirection on the table first, before doing a lot of work refining\nthings.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 21 Nov 2022 15:39:46 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "fixing CREATEROLE" },
{ "msg_contents": "Robert Haas:\n> It seems\n> to me that the root of any fix in this area must be to change the rule\n> that CREATEROLE can administer any role whatsoever.\n\nAgreed.\n\n> Instead, I propose\n> to change things so that you can only administer roles for which you\n> have ADMIN OPTION. [...] 
> I'm curious to hear what other people think of these proposals, [...]\n> Third, someone could well have a better or just\n> different idea how to fix the problems in this area than what I'm\n> proposing here.\n\nOnce you can restrict CREATEROLE to only manage \"your own\" (no matter \nhow that is defined, e.g. via ADMIN or through some \"ownership\" concept) \nroles, the possibility to \"namespace\" those roles somehow will become a \nlot more important. For example in a multi-tenant setup in the same \ncluster, where each tenant has their own database and admin user with a \nrestricted CREATEROLE privilege, it will very quickly be at least quite \nannoying to have conflicts with other tenants' role names. I'm not sure \nwhether it could even be a serious problem, because I should still be \nable to GRANT my own roles to other roles from other tenants - and that \ncould affect matching of +group records in pg_hba.conf?\n\nMy suggestion to $subject and the namespace problem would be to \nintroduce database-specific roles, i.e. add a database column to \npg_authid. Having this column set to 0 will make the role a cluster-wide \nrole (\"cluster role\") just as currently the case. But having a database \noid set will make the role exist in the context of that database only \n(\"database role\"). 
Then, the following principles should be enforced:\n\n- database roles can not share the same name with a cluster role.\n- database roles can have the same name as database roles in other \ndatabases.\n- database roles can not be members of database roles in **other** \ndatabases.\n- database roles with CREATEROLE can only create or alter database roles \nin their own database, but not roles in other databases and also not \ncluster roles.\n- database roles with CREATEROLE can GRANT all database roles in the \nsame database, but only those cluster roles they have ADMIN privilege on.\n- database roles with CREATEROLE can not set SUPERUSER.\n\nTo be able to create database roles with a cluster role, there needs to \nbe some syntax, e.g. something like\n\nCREATE ROLE name IN DATABASE dbname ...\n\nA database role with CREATEROLE should not need to use that syntax, \nthough - every CREATE ROLE should be IN DATABASE anyway.\n\nWith database roles, it would be possible to hand out CREATEROLE without \nthe ability to grant SUPERUSER or any of the built-in roles. It would be \nmuch more useful on top of that, too. Not only is the namespace problem \nmentioned above solved, but it would also be possible to let pg_dump \ndump a whole database, including the database roles and their \nmemberships. This would allow dumping (and restoring) a single \ntenant/application including the relevant roles and privileges - without \ndumping all roles in the cluster. Plus, it's backwards compatible \nbecause without creating database roles, everything stays exactly the same.\n\nBest,\n\nWolfgang\n\n\n", "msg_date": "Tue, 22 Nov 2022 09:02:15 +0100", "msg_from": "walther@technowledgy.de", "msg_from_op": false, "msg_subject": "Re: fixing CREATEROLE" },
{ "msg_contents": "On Tue, Nov 22, 2022 at 3:02 AM <walther@technowledgy.de> wrote:\n> My suggestion to $subject and the namespace problem would be to\n> introduce database-specific roles, i.e. add a database column to\n> pg_authid. 
Having this column set to 0 will make the role a cluster-wide\n> role (\"cluster role\") just as currently the case. But having a database\n> oid set will make the role exist in the context of that database only\n> (\"database role\"). Then, the following principles should be enforced:\n>\n> - database roles can not share the same name with a cluster role.\n> - database roles can have the same name as database roles in other\n> databases.\n> - database roles can not be members of database roles in **other**\n> databases.\n> - database roles with CREATEROLE can only create or alter database roles\n> in their own database, but not roles in other databases and also not\n> cluster roles.\n> - database roles with CREATEROLE can GRANT all database roles in the\n> same database, but only those cluster roles they have ADMIN privilege on.\n> - database roles with CREATEROLE can not set SUPERUSER.\n>\n> To be able to create database roles with a cluster role, there needs to\n> be some syntax, e.g. something like\n>\n> CREATE ROLE name IN DATABASE dbname ...\n\nI have three comments on this:\n\n1. It's a good idea and might make for some interesting followup work.\n\n2. There are some serious implementation challenges because the\nconstraints on duplicate object names must be something which can be\nenforced by unique constraints on the relevant catalogs. Off-hand, I\ndon't see how to do that. It would be easy to make the cluster roles\nall have unique names, and it would be easy to make the database roles\nhave unique names within each database, but I have no idea how you\nwould keep a database role from having the same name as a cluster\nrole. For anyone to try to implement this, we'd need to have a\nsolution to that problem.\n\n3. I don't want to sidetrack this thread into talking about possible\nfuture features or followup work. 
There is enough to do just getting\nconsensus on the design ideas that I proposed without addressing the\nquestion of what else we might do later. I do not think there is any\nreasonable argument that we can't clean up the CREATEROLE mess without\nalso implementing database-specific roles, and I have no intention of\nincluding that in this patch series. Whether I or someone else might\nwork on it in the future is a question we can leave for another day.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 22 Nov 2022 08:45:17 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: fixing CREATEROLE" },
{ "msg_contents": "Robert Haas:\n> 2. There are some serious implementation challenges because the\n> constraints on duplicate object names must be something which can be\n> enforced by unique constraints on the relevant catalogs. Off-hand, I\n> don't see how to do that. It would be easy to make the cluster roles\n> all have unique names, and it would be easy to make the database roles\n> have unique names within each database, but I have no idea how you\n> would keep a database role from having the same name as a cluster\n> role. For anyone to try to implement this, we'd need to have a\n> solution to that problem.\n\nFor each database created, create a partial unique index:\n\nCREATE UNIQUE INDEX ... ON pg_authid (rolname) WHERE roldatabase IN (0, \n<database_oid>);\n\nIs that possible on catalogs?\n\nBest,\n\nWolfgang\n\n\n", "msg_date": "Tue, 22 Nov 2022 15:27:09 +0100", "msg_from": "walther@technowledgy.de", "msg_from_op": false, "msg_subject": "Re: fixing CREATEROLE" },
{ "msg_contents": "walther@technowledgy.de writes:\n> Robert Haas:\n>> 2. There are some serious implementation challenges because the\n>> constraints on duplicate object names must be something which can be\n>> enforced by unique constraints on the relevant catalogs. 
Off-hand, I\n>> don't see how to do that.\n\n> For each database created, create a partial unique index:\n> CREATE UNIQUE INDEX ... ON pg_authid (rolname) WHERE roldatabase IN (0, \n> <database_oid>);\n> Is that possible on catalogs?\n\nNo, we don't support partial indexes on catalogs, and I don't think\nwe want to change that. Partial indexes would require expression\nevaluations occurring at very inopportune times.\n\nAlso, we don't support creating shared indexes post-initdb.\nThe code has hard-wired lists of which relations are shared,\nbesides which there's no way to update other databases' pg_class.\n\nEven without that, the idea of a shared catalog ending up with 10000\nindexes after you create 10000 databases (requiring 10^8 pg_class\nentries across the whole cluster) seems ... unattractive.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 22 Nov 2022 09:50:15 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: fixing CREATEROLE" },
{ "msg_contents": "Tom Lane:\n> No, we don't support partial indexes on catalogs, and I don't think\n> we want to change that. Partial indexes would require expression\n> evaluations occurring at very inopportune times.\n\nI see. Is that the same for indexes *on* an expression? Or would those \nbe ok?\n\nWith a custom operator, an EXCLUDE constraint on the ROW(reldatabase, \nrelname) expression could work. The operator would compare:\n- (0, name1) and (0, name2) as name1 == name2\n- (db1, name1) and (0, name2) as name1 == name2\n- (0, name1) and (db2, name2) as name1 == name2\n- (db1, name1) and (db2, name2) as db1 == db2 && name1 == name2\n\nor just (db1 == 0 || db2 == 0 || db1 == db2) && name1 == name2.\n\nNow, you are going to tell me that EXCLUDE constraints are not supported \non catalogs either, right? 
;)\n\nBest,\n\nWolfgang\n\n\n", "msg_date": "Tue, 22 Nov 2022 17:04:58 +0100", "msg_from": "walther@technowledgy.de", "msg_from_op": false, "msg_subject": "Re: fixing CREATEROLE" },
{ "msg_contents": "Wolfgang Walther:\n> Tom Lane:\n>> No, we don't support partial indexes on catalogs, and I don't think\n>> we want to change that. Partial indexes would require expression\n>> evaluations occurring at very inopportune times.\n> \n> I see. Is that the same for indexes *on* an expression? Or would those \n> be ok?\n> \n> With a custom operator, an EXCLUDE constraint on the ROW(reldatabase, \n> relname) expression could work. The operator would compare:\n> - (0, name1) and (0, name2) as name1 == name2\n> - (db1, name1) and (0, name2) as name1 == name2\n> - (0, name1) and (db2, name2) as name1 == name2\n> - (db1, name1) and (db2, name2) as db1 == db2 && name1 == name2\n> \n> or just (db1 == 0 || db2 == 0 || db1 == db2) && name1 == name2.\n\nDoes it even need to be on the expression? I don't think so. It would be \nenough to just make it compare relname WITH = and reldatabase WITH the \ncustom operator (db1 == 0 || db2 == 0 || db1 == db2), right?\n\nBest,\n\nWolfgang\n\n\n", "msg_date": "Tue, 22 Nov 2022 17:11:11 +0100", "msg_from": "walther@technowledgy.de", "msg_from_op": false, "msg_subject": "Re: fixing CREATEROLE" },
{ "msg_contents": "walther@technowledgy.de writes:\n>> No, we don't support partial indexes on catalogs, and I don't think\n>> we want to change that. Partial indexes would require expression\n>> evaluations occurring at very inopportune times.\n\n> I see. Is that the same for indexes *on* an expression? Or would those \n> be ok?\n\nRight, we don't support those on catalogs either.\n\n> Now, you are going to tell me that EXCLUDE constraints are not supported \n> on catalogs either, right? 
;)\n\nNor those.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 22 Nov 2022 11:29:40 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: fixing CREATEROLE" },
{ "msg_contents": "On 11/21/22 15:39, Robert Haas wrote:\n> I'm curious to hear what other people think of these proposals, but\n> let me first say what I think about them. First, I think it's clear\n> that we need to do something, because things right now are pretty\n> badly broken and in a way that affects security. Although these\n> patches are not back-patchable, they at least promise to improve\n> things as older versions go out of use.\n\n+1\n\n> Second, it's possible that we should look for back-patchable fixes\n> here, but I can't really see that we're going to come up with\n> anything much better than just telling people not to use this feature\n> against older releases, because back-patching catalog changes or\n> dramatic behavior changes seems like a non-starter. In other words, I\n> think this is going to be a master-only fix.\n\nYep, seems highly likely\n\n> Third, someone could well have a better or just different idea how to\n> fix the problems in this area than what I'm proposing here. This is\n> the best that I've been able to come up with so far, but that's not\n> to say it's free of problems or that no improvements are possible.\n\nOn quick inspection I like what you have proposed and no significantly \n\"better\" ideas jump to mind. I will try to think on it though.\n\n> Finally, I think that whatever we do about the code, the documentation\n> needs quite a bit of work, because the code is doing a lot of stuff\n> that is security-critical and entirely non-obvious from the\n> documentation. I have not in this version of these patches included\n> any documentation changes and the regression test changes that I have\n> included are quite minimal. 
That all needs to be fixed up before there\n> could be any thought of moving forward with these patches. However, I\n> thought it best to get rough patches and an outline of the proposed\n> direction on the table first, before doing a lot of work refining\n> things.\n\nI have looked at, and even done some doc improvements in this area in \nthe past, and concluded that it is simply hard to describe it in a \nclear, straightforward way.\n\nThere are multiple competing concepts (privs on objects, attributes of \nroles, membership, when things are inherited versus not, settings bound \nto roles, etc). I don't know what to do about it, but yeah, fixing the \ndocumentation would be a noble goal.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Tue, 22 Nov 2022 11:40:17 -0500", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: fixing CREATEROLE" },
{ "msg_contents": "\n\n> On Nov 21, 2022, at 12:39 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> I have drafted a few patches to try to improve the situation.\n\nThe 0001 and 0002 patches appear to be uncontroversial refactorings. Patch 0003 looks on-point and a move in the right direction. The commit message in that patch is well written. Patch 0004 feels like something that won't get committed. 
The INHERITCREATEDROLES and SETCREATEDROLES in 0004 seems clunky.\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Tue, 22 Nov 2022 12:01:37 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: fixing CREATEROLE" },
{ "msg_contents": "On Tue, Nov 22, 2022 at 3:01 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> > On Nov 21, 2022, at 12:39 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> > I have drafted a few patches to try to improve the situation.\n>\n> The 0001 and 0002 patches appear to be uncontroversial refactorings. Patch 0003 looks on-point and a move in the right direction. The commit message in that patch is well written.\n\nThanks.\n\n> Patch 0004 feels like something that won't get committed. The INHERITCREATEDROLES and SETCREATEDROLES in 0004 seems clunky.\n\nI think role properties are kind of clunky in general, the way we've\nimplemented them in PostgreSQL, but I don't really see why these are\nworse than anything else. 
I think we need some way to control the\nbehavior, and I don't really see a reasonable place to put it other\nthan a per-role property. And if we're going to do that then they\nmight as well look like the other properties that we've already got.\n\nDo you have a better idea?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 22 Nov 2022 17:02:10 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: fixing CREATEROLE" },
{ "msg_contents": "\n\n> On Nov 22, 2022, at 2:02 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n>> Patch 0004 feels like something that won't get committed. The INHERITCREATEDROLES and SETCREATEDROLES in 0004 seems clunky.\n> \n> I think role properties are kind of clunky in general, the way we've\n> implemented them in PostgreSQL, but I don't really see why these are\n> worse than anything else. I think we need some way to control the\n> behavior, and I don't really see a reasonable place to put it other\n> than a per-role property. And if we're going to do that then they\n> might as well look like the other properties that we've already got.\n> \n> Do you have a better idea?\n\nWhatever behavior is to happen in the CREATE ROLE statement should be spelled out in that statement. \"CREATE ROLE bob WITH INHERIT false WITH SET false\" doesn't seem too unwieldy, and has the merit that it can be read and understood without reference to hidden parameters. Forcing this to be explicit should be safer if these statements ultimately make their way into dump/restore scripts, or into logical replication.\n\nThat's not to say that I wouldn't rather that it always work one way or always the other. 
It's just to say that I don't want it to work differently based on some poorly advertised property of the role executing the command.\n\nThat seems rather pejorative. If we stipulate from the outset that the\nproperty is poorly advertised, then obviously anything at all\ndepending on it is not going to seem like a very good idea. But why\nshould we assume that it will be poorly-advertised? I clearly said\nthat the documentation needs a bunch of work, and that I planned to\nwork on it. As an initial matter, the documentation is where we\nadvertise new features, so I think you ought to take it on faith that\nthis will be well-advertised, unless you think that I'm completely\nhopeless at writing documentation or something.\n\nOn the actual issue, I think that one key question is who should\ncontrol what happens when a role is created. Is that the superuser's\ndecision, or the CREATEROLE user's decision? I think it's better for\nit to be the superuser's decision. Consider first the use case where\nyou want to set up a user who \"feels like a superuser\" i.e. inherits\nthe privileges of users they create. You don't want them to have to\nspecify anything when they create a role for that to happen. You just\nwant it to happen. So you want to set up their account so that it will\nhappen automatically, not throw the complexity back on them. In the\nreverse scenario where you don't want the privileges inherited, I\nthink it's a little less clear, possibly just because I haven't\nthought about that scenario as much, but I think it's very reasonable\nhere too to want the superuser to set up a configuration for the\nCREATEROLE user that does what the superuser wants, rather than what\nthe CREATEROLE user wants.\n\nEven aside from the question of who controls what, I think it is far\nbetter from a usability perspective to have ways of setting up good\ndefaults. 
That is why we have the default_tablespace GUC, for example.\nWe could have made the CREATE TABLE command always use the database's\ndefault tablespace, or we could have made it always use the main\ntablespace. Then it would not be dependent on (poorly advertised?)\nsettings elsewhere. But it would also be really inconvenient, because\nif someone is creating a lot of tables and wants them all to end up in\nthe same place, they don't want to have to specify the name of that\ntablespace each time. They want to set a default and have that get\nused by each command.\n\nThere's another, subtler consideration here, too. Since\nce6b672e4455820a0348214be0da1a024c3f619f, there are constraints on who\ncan validly be recorded as the grantor of a particular role grant,\njust as we have always done for other types of grants. The grants have\nto form a tree, with each grant having a grantor that was themselves\ngranted ADMIN OPTION by someone else, until eventually you get back to\nthe bootstrap superuser who is the source of all privileges. Thus,\ntoday, when a CREATEROLE user grants membership in a role, the grantor\nis recorded as the bootstrap superuser, because they might not\nactually possess ADMIN OPTION on the role at all, and so we can't\nnecessarily record them as the grantor. But this patch changes that,\nwhich I think is a significant improvement. The implicit grant that is\ncreated when CREATE ROLE is issued has the bootstrap superuser as\ngrantor, because there is no other legal option, but then any further\ngrants performed by the CREATE ROLE user rely on that user having that\ngrant, and thus record the OID of the CREATEROLE user as the grantor,\nnot the bootstrap superuser.\n\nThat, in turn, has a number of rather nice consequences. It means in\nparticular that the CREATEROLE user can't remove the implicit grant,\nnor can they alter it. 
They are, after all, not the grantor, who is\nthe bootstrap superuser, nor do they any longer have the authority to\nact as the bootstrap superuser. Thus, if you have two or more\nCREATEROLE users running around doing stuff, you can tell who did\nwhat. Every role that those users created is linked back to the\ncreating role in a way that the creator can't alter. A CREATEROLE user\nis unable to contrive a situation where they no longer control a role\nthat they created. That also means that the superuser, if desired, can\nrevoke all membership grants performed by any particular CREATEROLE\nuser by revoking the implicit grants with CASCADE.\n\nBut since this implicit grant has, and must have, the bootstrap\nsuperuser as grantor, it is also only reasonable that superusers get\nto determine what options are used when creating that grant, rather\nthan leaving that up to the CREATEROLE user.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 23 Nov 2022 12:01:10 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: fixing CREATEROLE" }, { "msg_contents": "\n\n> On Nov 23, 2022, at 9:01 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n>> That's not to say that I wouldn't rather that it always work one way or always the other. It's just to say that I don't want it to work differently based on some poorly advertised property of the role executing the command.\n> \n> That seems rather pejorative. If we stipulate from the outset that the\n> property is poorly advertised, then obviously anything at all\n> depending on it is not going to seem like a very good idea. But why\n> should we assume that it will be poorly-advertised? I clearly said\n> that the documentation needs a bunch of work, and that I planned to\n> work on it. 
As an initial matter, the documentation is where we\n> advertise new features, so I think you ought to take it on faith that\n> this will be well-advertised, unless you think that I'm completely\n> hopeless at writing documentation or something.\n\nOh, I don't mean that it will be poorly documented. I mean that the way the command is written won't advertise what it is going to do. That's concerning if you fat-finger a CREATE ROLE command, then realize you need to drop and recreate the role, only to discover that a property you weren't thinking about, and which you are accustomed to being set the opposite way, is set such that you can't drop the role you just created. I think if you're going to create-and-disown something, you should have to say so, to make sure you mean it.\n\nThis consideration differs from the default schema or default tablespace settings. If I fat-finger the creation of a table, regardless of where it gets placed, I'm still the owner of the table, and I can still drop and recreate the table to fix my mistake.\n\nWhy not make this be a permissions issue, rather than a default behavior issue? Instead of a single CREATEROLE privilege, grant roles privileges to CREATE-WITH-INHERIT, CREATE-WITH-ADMIN, CREATE-SANS-INHERIT, CREATE-SANS-ADMIN, and thereby limit which forms of the command they may execute. That way, the semantics of the command do not depend on some property external to the command. Users (and older scripts) will expect the traditional syntax to behave consistent with how CREATE ROLE has worked in the past. The behaviors can be specified explicitly.\n\n> On the actual issue, I think that one key question is who should\n> control what happens when a role is created. Is that the superuser's\n> decision, or the CREATEROLE user's decision? I think it's better for\n> it to be the superuser's decision. Consider first the use case where\n> you want to set up a user who \"feels like a superuser\" i.e. 
inherits\n> the privileges of users they create. You don't want them to have to\n> specify anything when they create a role for that to happen. You just\n> want it to happen. So you want to set up their account so that it will\n> happen automatically, not throw the complexity back on them. In the\n> reverse scenario where you don't want the privileges inherited, I\n> think it's a little less clear, possibly just because I haven't\n> thought about that scenario as much, but I think it's very reasonable\n> here too to want the superuser to set up a configuration for the\n> CREATEROLE user that does what the superuser wants, rather than what\n> the CREATEROLE user wants.\n> \n> Even aside from the question of who controls what, I think it is far\n> better from a usability perspective to have ways of setting up good\n> defaults. That is why we have the default_tablespace GUC, for example.\n> We could have made the CREATE TABLE command always use the database's\n> default tablespace, or we could have made it always use the main\n> tablespace. Then it would not be dependent on (poorly advertised?)\n> settings elsewhere. But it would also be really inconvenient, because\n> if someone is creating a lot of tables and wants them all to end up in\n> the same place, they don't want to have to specify the name of that\n> tablespace each time. They want to set a default and have that get\n> used by each command.\n> \n> There's another, subtler consideration here, too. Since\n> ce6b672e4455820a0348214be0da1a024c3f619f, there are constraints on who\n> can validly be recorded as the grantor of a particular role grant,\n> just as we have always done for other types of grants. The grants have\n> to form a tree, with each grant having a grantor that was themselves\n> granted ADMIN OPTION by someone else, until eventually you get back to\n> the bootstrap superuser who is the source of all privileges. 
Thus,\n> today, when a CREATEROLE user grants membership in a role, the grantor\n> is recorded as the bootstrap superuser, because they might not\n> actually possess ADMIN OPTION on the role at all, and so we can't\n> necessarily record them as the grantor. But this patch changes that,\n> which I think is a significant improvement. The implicit grant that is\n> created when CREATE ROLE is issued has the bootstrap superuser as\n> grantor, because there is no other legal option, but then any further\n> grants performed by the CREATE ROLE user rely on that user having that\n> grant, and thus record the OID of the CREATEROLE user as the grantor,\n> not the bootstrap superuser.\n> \n> That, in turn, has a number of rather nice consequences. It means in\n> particular that the CREATEROLE user can't remove the implicit grant,\n> nor can they alter it. They are, after all, not the grantor, who is\n> the bootstrap superuser, nor do they any longer have the authority to\n> act as the bootstrap superuser. Thus, if you have two or more\n> CREATEROLE users running around doing stuff, you can tell who did\n> what. Every role that those users created is linked back to the\n> creating role in a way that the creator can't alter. A CREATEROLE user\n> is unable to contrive a situation where they no longer control a role\n> that they created. 
That also means that the superuser, if desired, can\n> revoke all membership grants performed by any particular CREATEROLE\n> user by revoking the implicit grants with CASCADE.\n> \n> But since this implicit grant has, and must have, the bootstrap\n> superuser as grantor, it is also only reasonable that superusers get\n> to determine what options are used when creating that grant, rather\n> than leaving that up to the CREATEROLE user.\n\nYes, this all makes sense, but does it entail that the CREATE ROLE command must behave differently on the basis of a setting?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 23 Nov 2022 09:36:52 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: fixing CREATEROLE" }, { "msg_contents": "On Wed, Nov 23, 2022 at 12:36 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> Oh, I don't mean that it will be poorly documented. I mean that the way the command is written won't advertise what it is going to do. That's concerning if you fat-finger a CREATE ROLE command, then realize you need to drop and recreate the role, only to discover that a property you weren't thinking about, and which you are accustomed to being set the opposite way, is set such that you can't drop the role you just created.\n\nThat doesn't ever happen. No matter how the properties are set, you\nend up with ADMIN OPTION on the newly-created role and can drop it.\nThe flags control things like whether you can select from the newly\ncreated role's tables even if you otherwise lack permissions on them\n(INHERIT), and whether you can SET ROLE to it (SET). You can always\nadminister it, i.e. 
grant rights on it to others, change its password,\ndrop it.\n\n> I think if you're going to create-and-disown something, you should have to say so, to make sure you mean it.\n\nReasonable, but not relevant, since that isn't what's happening.\n\n> Why not make this be a permissions issue, rather than a default behavior issue? Instead of a single CREATEROLE privilege, grant roles privileges to CREATE-WITH-INHERIT, CREATE-WITH-ADMIN, CREATE-SANS-INHERIT, CREATE-SANS-ADMIN, and thereby limit which forms of the command they may execute. That way, the semantics of the command do not depend on some property external to the command. Users (and older scripts) will expect the traditional syntax to behave consistent with how CREATE ROLE has worked in the past. The behaviors can be specified explicitly.\n\nPerhaps if we get the confusion above cleared up you won't be as\nconcerned about this, but let me just say that this patch is\nabsolutely breaking backward compatibility. I don't feel bad about\nthat, either. I think it's a good thing in this case, because the\ncurrent behavior is abjectly broken and horrible. What we've been\ndoing for the last several years is shipping a product that has a\nbuilt-in exploit that a clever 10-year-old could use to escalate\nprivileges from CREATEROLE to SUPERUSER. We should not be OK with\nthat, and we should be OK with changing the behavior however much is\nrequired to fix it. I'm personally of the opinion that this patch set\ndoes a rather clever job minimizing that blast radius, but that might\nbe my own bias as the patch author. Regardless, I don't think there's\nany reasonable argument for maintaining the current behavior. 
I don't\nentirely follow exactly what you have in mind in the sentence above,\nbut if it involves keeping the current CREATEROLE behavior around in\nany form, -1 from me.\n\n> > But since this implicit grant has, and must have, the bootstrap\n> > superuser as grantor, it is also only reasonable that superusers get\n> > to determine what options are used when creating that grant, rather\n> > than leaving that up to the CREATEROLE user.\n>\n> Yes, this all makes sense, but does it entail that the CREATE ROLE command must behave differently on the basis of a setting?\n\nWell, we certainly don't HAVE to add those new role-level properties;\nthat's why they are in a separate patch. But I think they add a lot of\nuseful functionality for a pretty minimal amount of extra code\ncomplexity.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 23 Nov 2022 12:58:53 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: fixing CREATEROLE" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Nov 23, 2022 at 12:36 PM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> Yes, this all makes sense, but does it entail that the CREATE ROLE command must behave differently on the basis of a setting?\n\n> Well, we certainly don't HAVE to add those new role-level properties;\n> that's why they are in a separate patch. But I think they add a lot of\n> useful functionality for a pretty minimal amount of extra code\n> complexity.\n\nI haven't thought about these issues hard enough to form an overall\nopinion (though I agree that making CREATEROLE less tantamount\nto superuser would be an improvement). However, I share Mark's\ndiscomfort about making these commands act differently depending on\ncontext. We learned long ago that allowing GUCs to affect query\nsemantics was a bad idea. 
Basing query semantics on properties\nof the issuing role (beyond success-or-permissions-failure) doesn't\nseem a whole lot different from that. It still means that\napplications can't just issue command X and expect Y to happen;\nthey have to inquire about context in order to find out that Z might\nhappen instead. That's bad in any case, but it seems especially bad\nfor security-critical behaviors.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 23 Nov 2022 13:11:01 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: fixing CREATEROLE" }, { "msg_contents": "On Wed, Nov 23, 2022 at 1:11 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I haven't thought about these issues hard enough to form an overall\n> opinion (though I agree that making CREATEROLE less tantamount\n> to superuser would be an improvement). However, I share Mark's\n> discomfort about making these commands act differently depending on\n> context. We learned long ago that allowing GUCs to affect query\n> semantics was a bad idea. Basing query semantics on properties\n> of the issuing role (beyond success-or-permissions-failure) doesn't\n> seem a whole lot different from that. It still means that\n> applications can't just issue command X and expect Y to happen;\n> they have to inquire about context in order to find out that Z might\n> happen instead. That's bad in any case, but it seems especially bad\n> for security-critical behaviors.\n\nI'm not sure that this behavior qualifies as security-critical. If\nINHERITCREATEDROLES and SETCREATEDROLES are both true, then the grant\nhas INHERIT TRUE and SET TRUE and there are no more rights to be\ngained. If not, the createrole user can do something like GRANT\nnew_role TO my_own_account WITH INHERIT TRUE, SET TRUE. 
Even if we\nsomehow disallowed that, they could gain access to the privilege of\nthe created role in a bunch of other ways, such as granting the rights\nto someone else, or just changing the password and using the new\npassword to log into the account.\n\nWhen I started working in this area, I thought non-inherited grants\nwere pretty useless, because you can so easily work around it.\nHowever, other people did not agree. From what I can gather, I think\nthe reason why people like non-inherited grants is that they prevent\nmistakes. A user who has ADMIN OPTION on another role but does not\ninherit its privileges can break into that account and do whatever\nthey want, but they cannot ACCIDENTALLY perform an operation that\nmakes use of that user's privileges. They will have to SET ROLE, or\nGRANT themselves something, and those actions can be logged and\naudited if desired. Because of the potential for that sort of logging\nand auditing, you can certainly make an argument that this is a\nsecurity-critical behavior, but it's not that clear cut, because it's\nmaking assumptions about the behavior of other software, and of human\nbeings. Strictly speaking, looking just at PostgreSQL, these options\ndon't affect security.\n\nOn the more general question of configurability, I tend to agree that\nit's not great to have the behavior of commands depend too much on\ncontext, especially for security-critical things. A particularly toxic\nexample IMHO is search_path, which I think is an absolute disaster in\nterms of security that I believe we will never be able to fully fix.\nYet there are plenty of examples of configurability that no one finds\nproblematic. No one agitates against the idea that a database can have\na default tablespace, or that you can ALTER USER or ALTER DATABASE to\nconfigure an setting on a user-specific or database-specific setting,\neven a security-critical one like search_path, or one that affects\nquery behavior like work_mem. 
No one is outraged that a data type has\na default btree operator class that is used for indexes unless you\nspecify another one explicitly. What people mostly complain about IME\nis stuff like standard_conforming_strings, or bytea_output, or\ndatestyle. Often, when proposal come up on pgsql-hackers and get shot\ndown on these grounds, the issue is that they would essentially make\nit impossible to write SQL that will run portably on PostgreSQL.\nInstead, you'd need to write your application to issue different SQL\ndepending on the value of settings on the local system. That's un-fun\nat best, and impossible at worst, as in the case of extension scripts,\nwhose content has to be static when you ship the thing.\n\nBut it's not exactly clear to me what the broader organizing principle\nis here, or ought to be. I think it would be ridiculous to propose --\nand I assume that you are not proposing -- that no command should have\nbehavior that in any way depends on what SQL commands have been\nexecuted previously. Taken to a silly extreme, that would imply that\nCREATE TABLE ought to be removed, because the behavior of SELECT *\nFROM something will otherwise depend on whether someone has previously\nissued CREATE TABLE something. Obviously that is a stupid argument.\nBut on the other hand, it would also be ridiculous to propose the\nreverse, that it's fine to add arbitrary settings that affect the\nbehavior of any command whatsoever in arbitrary ways. Simon's proposal\nto add a GUC that would make vacuum request a background vacuum rather\nthan performing one in the foreground is a good example of a proposal\nthat did not sit well with either of us.\n\nBut I don't know on what basis exactly we put a proposal like this in\none category rather than the other. I'm not sure I can really\narticulate the general principle in a sensible way. 
For me, this\nclearly falls into the \"good\" category: it's configuration that you\nput into the database that makes things happen the way you want, not a\nbehavior-changing setting that comes along and ruins somebody's day.\nBut if someone else feels otherwise, I'm not sure I can defend that\nview in a really rigorous way, because I'm not really sure what the\nlitmus test is, or should be. I think the best that I can do is to say\nthat if we *don't* add those options but *do* adopt the rest of the\npatch set, we will have to make a decision about what behavior\neveryone is going to get, and no matter what we decide, some people\nare not going to be really happy with the result. I would like to\nfind a way to avoid that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 23 Nov 2022 14:02:54 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: fixing CREATEROLE" }, { "msg_contents": "\n\n> On Nov 23, 2022, at 11:02 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> For me, this\n> clearly falls into the \"good\" category: it's configuration that you\n> put into the database that makes things happen the way you want, not a\n> behavior-changing setting that comes along and ruins somebody's day.\n\nI had incorrectly imagined that if the bootstrap superuser granted CREATEROLE to Alice with particular settings, those settings would limit the things that Alice could do when creating role Bob, specifically limiting how much she could administer/inherit/set role Bob thereafter. Apparently, your proposal only configures what happens by default, and Alice can work around that if she wants to. But if that's the case, did I misunderstand upthread that these are properties the superuser specifies about Alice? Can Alice just set these properties about herself, so she gets the behavior she wants? 
I'm confused now about who controls these settings.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 23 Nov 2022 11:28:14 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: fixing CREATEROLE" }, { "msg_contents": "On Wed, Nov 23, 2022 at 2:28 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> > On Nov 23, 2022, at 11:02 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> > For me, this\n> > clearly falls into the \"good\" category: it's configuration that you\n> > put into the database that makes things happen the way you want, not a\n> > behavior-changing setting that comes along and ruins somebody's day.\n>\n> I had incorrectly imagined that if the bootstrap superuser granted CREATEROLE to Alice with particular settings, those settings would limit the things that Alice could do when creating role Bob, specifically limiting how much she could administer/inherit/set role Bob thereafter. Apparently, your proposal only configures what happens by default, and Alice can work around that if she wants to.\n\nRight.\n\n> But if that's the case, did I misunderstand upthread that these are properties the superuser specifies about Alice? Can Alice just set these properties about herself, so she gets the behavior she wants? I'm confused now about who controls these settings.\n\nBecause they are role-level properties, they can be set by whoever has\nADMIN OPTION on the role. That always includes every superuser, and it\nnever includes Alice herself (except if she's a superuser). It could\ninclude other users depending on the system configuration. For\nexample, under this proposal, the superuser might create alice and set\nher account to CREATEROLE, configuring the INHERITCREATEDROLES and\nSETCREATEDROLES properties on Alice's account according to preference.\nThen, alice might create another user, say bob, and make him\nCREATEROLE as well. 
In such a case, either the superuser or alice\ncould set these properties for role bob, because alice enjoys ADMIN\nOPTION on role bob.\n\nSomewhat to one side of the question you were asking, but related to\nthe above, I believe there is an opportunity, and perhaps a need, to\nmodify the scope of CREATEROLE in terms of what role-level options a\nCREATEROLE user can set. For instance, if a CREATEROLE user doesn't\nhave CREATEDB, they can still create users and give them that\nprivilege, even with these patches, and likewise these two new\nproperties. This patch is only concerned about which roles you can\nmanipulate, not what role-level properties you can set. Somebody might\nfeel that's a serious problem, and they might even feel that this\npatch set ought to something about it. In my view, the issues are\nsomewhat severable. I don't think you can do anything as evil by\nsetting role-level properties (except for SUPERUSER, of course) as\nwhat you can do by granting predefined roles, so I don't find\nrestricting those capabilities to be as urgent as doing something to\nrestrict role grants.\n\nAlso, and this definitely plays into it too, I think there's some\ndebate about what the model ought to look like there. For instance,\nyou could simply stipulate that you can't give what you don't have,\nbut that would mean that every CREATEROLE user can create additional\nCREATEROLE users, and I suspect some people might like to restrict\nthat. We could add a new CREATECREATEROLE property to decide whether a\nuser can make CREATEROLE users, but by that argument we'd also need a\nnew CREATECREATECREATEROLE property to decide whether a role can make\nCREATECREATEROLE users, and then it just recurses indefinitely from\nthere. Similarly for CREATEDB. Also, what if anything should you\nrestrict about how the new INHERITCREATEDROLES and SETCREATEDROLES\nproperties should be set? 
You could argue that they ought to be\nsuperuser-only (because the implicit grant is performed by the\nbootstrap superuser) or that it's fine for them to be set by a\nCREATEROLE user with ADMIN OPTION (because it's not all that\nsecurity-critical how they get set) or maybe even that a user ought to\nbe able to set those properties on his or her own role.\n\nI'm not very certain about any of that stuff; I don't have a clear\nmental model of how it should work, or even what exact problem we're\ntrying to solve. To me, the patches that I posted make sense as far as\nthey go, but I'm not under the illusion that they solve all the\nproblems in this area, or even that I understand what all of the\nproblems are.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 23 Nov 2022 15:04:22 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: fixing CREATEROLE" }, { "msg_contents": "\n\n> On Nov 23, 2022, at 12:04 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n>> But if that's the case, did I misunderstand upthread that these are properties the superuser specifies about Alice? Can Alice just set these properties about herself, so she gets the behavior she wants? I'm confused now about who controls these settings.\n> \n> Because they are role-level properties, they can be set by whoever has\n> ADMIN OPTION on the role. That always includes every superuser, and it\n> never includes Alice herself (except if she's a superuser). It could\n> include other users depending on the system configuration. For\n> example, under this proposal, the superuser might create alice and set\n> her account to CREATEROLE, configuring the INHERITCREATEDROLES and\n> SETCREATEDROLES properties on Alice's account according to preference.\n> Then, alice might create another user, say bob, and make him\n> CREATEROLE as well. 
In such a case, either the superuser or alice\n> could set these properties for role bob, because alice enjoys ADMIN\n> OPTION on role bob.\n\nOk, so the critical part of this proposal is that auditing tools can tell when Alice circumvents these settings. Without that bit, the whole thing is inane. Why make Alice jump through hoops that you are explicitly allowing her to jump through? Apparently the answer is that you can point a high speed camera at the hoops.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 23 Nov 2022 12:11:24 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: fixing CREATEROLE" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Nov 23, 2022 at 2:28 PM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> I had incorrectly imagined that if the bootstrap superuser granted\n>> CREATEROLE to Alice with particular settings, those settings would\n>> limit the things that Alice could do when creating role Bob,\n>> specifically limiting how much she could administer/inherit/set role\n>> Bob thereafter. Apparently, your proposal only configures what happens\n>> by default, and Alice can work around that if she wants to.\n\n> Right.\n\nOkay ...\n\n>> But if that's the case, did I misunderstand upthread that these are\n>> properties the superuser specifies about Alice? Can Alice just set\n>> these properties about herself, so she gets the behavior she wants?\n>> I'm confused now about who controls these settings.\n\n> Because they are role-level properties, they can be set by whoever has\n> ADMIN OPTION on the role. That always includes every superuser, and it\n> never includes Alice herself (except if she's a superuser).\n\nThat is just bizarre. 
Alice can do X, and she can do Y, but she\ncan't control a flag that says which of those happens by default?\nHow is that sane (disregarding the question of whether the existence\nof the flag is a good idea, which I'm now even less sold on)?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 23 Nov 2022 15:32:59 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: fixing CREATEROLE" }, { "msg_contents": "On Wed, Nov 23, 2022 at 3:11 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> Ok, so the critical part of this proposal is that auditing tools can tell when Alice circumvents these settings. Without that bit, the whole thing is inane. Why make Alice jump through hoops that you are explicitly allowing her to jump through? Apparently the answer is that you can point a high speed camera at the hoops.\n\nWell put.\n\nAlso, it's a bit like 'su', right? Typically you don't just log in as\nroot and do everything a root, even if you have access to root\nprivileges. You log in as 'mdilger' or whatever and then when you want\nto exercise elevated privileges you use 'su' or 'sudo' or something.\nSimilarly here you can make an argument that it's a lot cleaner to\ngive Alice the potential to access all of these privileges than to\nmake her have them all the time.\n\nBut on the flip side, one big advantage of having 'alice' have the\nprivileges all the time is that, for example, she can probably restore\na database dump that might otherwise be restorable only with superuser\nprivileges. As long as she has been granted all the relevant roles\nwith INHERIT TRUE, SET TRUE, the kinds of locutions that pg_dump spits\nout should pretty much work fine, whereas if Alice is firewalled from\nthe privileges of the roles she manages, that is not going to work\nwell at all. 
To me, that is a pretty huge advantage, and it's a major\nreason why I initially thought that alice should just categorically,\nalways inherit the privileges of the roles she creates.\n\nBut having been burned^Wenlightened by previous community discussion,\nI can now see both sides of the argument, which is why I am now\nproposing to let people pick the behavior they happen to want.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 23 Nov 2022 15:34:03 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: fixing CREATEROLE" }, { "msg_contents": "On Wed, Nov 23, 2022 at 1:04 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n>\n> I'm not very certain about any of that stuff; I don't have a clear\n> mental model of how it should work, or even what exact problem we're\n> trying to solve. To me, the patches that I posted make sense as far as\n> they go, but I'm not under the illusion that they solve all the\n> problems in this area, or even that I understand what all of the\n> problems are.\n>\n>\nI haven't yet formed a complete thought here but is there any reason we\ncannot convert the permission-like attributes to predefined roles?\n\npg_login\npg_replication\npg_bypassrls\npg_createdb\npg_createrole\npg_haspassword (password and valid until)\npg_hasconnlimit\n\nPresently, attributes are never inherited, but having that be controlled\nvia the INHERIT property of the grant seems desirable.\n\nWITH ADMIN controls passing on of membership to other roles.\n\nExample:\nI have pg_createrole (set, noinherit, no with admin), pg_password (no set,\ninherit, no with admin), and pg_createdb (set, inherit, with admin),\npg_login (no set, inherit, with admin)\nRoles I create cannot be members of pg_createrole or pg_password but can be\ngiven pg_createdb and pg_login (this would be a way to enforce external\nauthentication for roles created by me)\nI can execute CREATE DATABASE due to inheriting 
pg_createdb\nI must set role to pg_createrole in order to execute CREATE ROLE\nSince I don't have admin on pg_createrole I cannot change my own\nset/inherit, but I could do that for pg_createdb\n\nDavid J.", "msg_date": "Wed, 23 Nov 2022 13:59:31 -0700", "msg_from": "\"David G. 
Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: fixing CREATEROLE" }, { "msg_contents": "On Wed, Nov 23, 2022 at 3:33 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Because they are role-level properties, they can be set by whoever has\n> > ADMIN OPTION on the role. That always includes every superuser, and it\n> > never includes Alice herself (except if she's a superuser).\n>\n> That is just bizarre. Alice can do X, and she can do Y, but she\n> can't control a flag that says which of those happens by default?\n> How is that sane (disregarding the question of whether the existence\n> of the flag is a good idea, which I'm now even less sold on)?\n\nLook, I admitted later in that same email that I don't really know\nwhat the rules for setting role-level properties ought to be. If you\nhave an idea, I'd love to hear it, but I'd rather if you didn't just\nlabel things into which I have put quite a bit of work as insane\nwithout giving any constructive feedback, especially if you haven't\nyet fully understood the proposal.\n\nYour description of the behavior here is not quite accurate.\nRegardless of how the flags are set, alice, as a CREATEROLE user, can\ngain access to all the privileges of the target role, and she can\narrange to have a grant of permissions on that role with INHERIT TRUE\nand SET TRUE. However, there's a difference between the case where (a)\nINHERITCREATEDROLE and SETCREATEDROLE are set, and alice gets the\npermissions of the role by default and the one where (b)\nNOINHERITCREATEDROLE and NOSETCREATEDROLE are set, and therefore alice\ngets the permissions only if she does GRANT created_role TO ALICE WITH\nINHERIT TRUE, SET TRUE. 
In the former case, there is only one grant,\nand it has grantor=bootstrap_superuser/admin_option=true/inherit_option=true/set_option=true.\nIn the latter case there are two, one with\ngrantor=bootstrap_superuser/admin_option=true/set_option=false/inherit_option=false\nand a second with\ngrantor=alice/admin_option=false/set_option=true/inherit_option=true.\nThat is pretty nearly equivalent, but it is not the same, and it will\nnot, for example, be dumped in the same way. Furthermore, it's not\nequivalent in the other direction at all. If the superuser gives alice\nINHERITCREATEDROLES and SETCREATEDROLES, she can't renounce those\npermissions in the patch as written. All of which is to say that I\ndon't think your characterization of this as "Alice can do X, and she\ncan do Y, but she can't control a flag that says which of those\nhappens by default?" is really correct. It's subtler than that.\n\nBut having said that, I could certainly change the patches so that any\nuser, or maybe just a createrole user since it's otherwise irrelevant,\ncan flip the INHERITCREATEDROLE and SETCREATEDROLE bits on their own\naccount. There would be no harm in that from a security or auditing\nperspective AFAICS. It would, however, make the handling of those\nflags different from the handling of most other role-level properties.\nThat is in fact why things ended up the way that they did: I just made\nthe new role-level properties which I added work like most of the\nexisting ones. I don't think that's insane at all. I even think it\nmight be the right decision. But it's certainly arguable. If you think\nit should work differently, make an argument for that. What I would\nparticularly like to hear in such an argument, though, is a theory\nthat goes beyond those two particular properties and addresses what\nought to be done with all the other ones, especially CREATEDB and\nCREATEROLE. 
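For concreteness, the one-grant and two-grant catalog states compared above could be reproduced by hand roughly as follows. This is only a sketch: the INHERITCREATEDROLE/SETCREATEDROLE flags and the implicit grant are behavior proposed by these patches, not anything in a released server, and in practice the implicit grant is made by CREATE ROLE itself rather than issued manually.

```sql
-- Case (a): INHERITCREATEDROLE and SETCREATEDROLE are set; CREATE ROLE bob
-- leaves one implicit grant, recorded with the bootstrap superuser as grantor:
GRANT bob TO alice WITH ADMIN TRUE, INHERIT TRUE, SET TRUE;

-- Case (b): the NO* variants; the implicit grant carries only ADMIN ...
GRANT bob TO alice WITH ADMIN TRUE, INHERIT FALSE, SET FALSE;
-- ... and alice must issue a second grant herself, recorded with her as grantor:
GRANT bob TO alice WITH INHERIT TRUE, SET TRUE;
```

The two end states confer nearly the same abilities, but they are distinct rows in pg_auth_members with different grantors, which is why they dump and revoke differently.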
If we can't come up with such a grand unifying theory but\nare confident we know what to do about this case, so be it, but we\nshouldn't make an idiosyncratic rule for this case without at least\nthinking about the overall picture.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 23 Nov 2022 16:01:03 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: fixing CREATEROLE" }, { "msg_contents": "On Wed, Nov 23, 2022 at 3:59 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n> I haven't yet formed a complete thought here but is there any reason we cannot convert the permission-like attributes to predefined roles?\n>\n> pg_login\n> pg_replication\n> pg_bypassrls\n> pg_createdb\n> pg_createrole\n> pg_haspassword (password and valid until)\n> pg_hasconnlimit\n>\n> Presently, attributes are never inherited, but having that be controlled via the INHERIT property of the grant seems desirable.\n\nI think that something like this might be possible, but I'm not\nconvinced that it's a good idea. I've always felt that the role-level\nproperties seemed kind of like warts, but in studying these issues\nrecently, I've come to the conclusion that in some ways that's just a\nvisual impression. The syntax LOOKS outdated and clunky, whereas\ngranting someone a predefined role feels clean and modern. But the\nreality is that the predefined roles system is full of really\nunpleasant warts. For example, in talking through the now-committed\npatch to allow control over SET ROLE, we had some fairly extensive\ndiscussion of the fact that there was previously no way to avoid\nhaving a user who has been granted the pg_read_all_stats predefined\nrole to create objects owned by pg_read_all_stats, or to alter\nexisting objects. That's really pretty grotty. We now have a way to\nprevent that, but perhaps we should have something even better. 
I'm\nalso not really sure that's the only problem here, but maybe it is.\n\nEither way, I'm not quite sure what the benefit of converting these\nthings to predefined roles is. I think actually the strongest argument\nwould be to do this for the superuser property! Make the bootstrap\nsuperuser the only real superuser, and anyone else who wants to be a\nsuperuser has to inherit that from that role. It's really unclear to\nme why inheriting a lot of permissions is allowable, but inheriting\nall of them is not allowable. Doing it for something like\npg_hasconnlimit seems pretty unappealing by contrast, because that's\nan integer-valued property, not a Boolean, and it's not at all clear\nto me why that should be inherited or what the semantics ought to be.\nReally, I'm not that tempted to try to rejigger this kind of stuff\naround because it seems like a lot of work for not a whole lot of\nbenefit. I think there's a perfectly reasonable case for setting some\nthings on a per-role basis that are actually per-role and not\ninherited. A password is a fine example of that. You should never\ninherit someone else's password. Whether we've chosen the right set of\nthings to treat as per-role properties rather than predefined roles is\nvery much debatable, though, as are a number of other aspects of the\nrole system.\n\nFor instance, I'm pretty well unconvinced that merging users and\ngroups into a unified thing called roles was a good idea. I think it\nmakes all of this machinery a LOT harder to understand, which may be\npart of the reason why this area doesn't seem to have had much TLC in\nquite a long time. But I think it's too late to revisit that decision,\nand I also think it's too late to revisit the question of having\npredefined roles at all. 
For better or for worse, that's what we did,\nand what remains now is to find a way to make the best of it in light\nof those decisions.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 23 Nov 2022 16:18:04 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: fixing CREATEROLE" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> But having said that, I could certainly change the patches so that any\n> user, or maybe just a createrole user since it's otherwise irrelevant,\n> can flip the INHERITCREATEDROLE and SETCREATEDROLE bits on their own\n> account. There would be no harm in that from a security or auditing\n> perspective AFAICS. It would, however, make the handling of those\n> flags different from the handling of most other role-level properties.\n> That is in fact why things ended up the way that they did: I just made\n> the new role-level properties which I added work like most of the\n> existing ones.\n\nTo be clear, I'm not saying that I know a better answer. But the fact\nthat these end up so different from other role-property bits seems to\nme to suggest that making them role-property bits is the wrong thing.\nThey aren't privileges in any usual sense of the word --- if they\nwere, allowing Alice to flip her own bits would obviously be silly.\nBut all the existing role-property bits, with the exception of\nrolinherit, certainly are in the nature of privileges.\n\n> What I would\n> particularly like to hear in such an argument, though, is a theory\n> that goes beyond those two particular properties and addresses what\n> ought to be done with all the other ones, especially CREATEDB and\n> CREATEROLE.\n\nCREATEDB and CREATEROLE don't particularly bother me. We've talked before\nabout replacing them with memberships in predefined roles, and that would\nbe fine. But the reason nobody's got around to that (IMNSHO) is that it\nwon't really add much. 
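Sketched out with the role names floated earlier in this thread (hypothetical: no such predefined roles actually exist today), that replacement would turn the attribute form into ordinary grants:

```sql
-- Today: role attributes, settable only by sufficiently privileged roles:
ALTER ROLE alice CREATEDB CREATEROLE;

-- As predefined roles: plain memberships, so the existing GRANT machinery
-- (ADMIN/INHERIT/SET options, recorded grantors) applies with no new code:
GRANT pg_createdb TO alice WITH INHERIT TRUE;
GRANT pg_createrole TO alice WITH INHERIT TRUE, ADMIN TRUE;
```

That is, who may pass the privilege on would be controlled by ADMIN OPTION on the predefined role rather than by bespoke logic.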
The thing that I think is a big wart is\nrolinherit. I don't know quite what to do about it. But these two new\nproposed bits seem to be much the same kind of wart, so I'd rather not\ninvent them, at least not in the form of role properties.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 23 Nov 2022 16:18:08 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: fixing CREATEROLE" }, { "msg_contents": "On Wed, Nov 23, 2022 at 2:01 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n\n> In the latter case there are two, one with\n>\n> grantor=bootstrap_supeuser/admin_option=true/set_option=false/inherit_option=false\n> and a second with\n> grantor=alice/admin_option=false/set_option=true/inherit_option=true.\n>\n\nThis, IMO, is preferable. And I'd probably typically want to hide the\nfirst grant from the user in typical cases - it is an implementation detail.\n\nWe have to grant the creating role membership in the created role, with\nadmin option, as a form of bookkeeping.\n\nIf the creating role really wants to be a member of the created role for\nother reasons that should be done explicitly and granted by the creating\nrole.\n\nThis patch series need not be concerned about how easy or difficult it is\nto get the additional grant entry into the database. The ability to refine\nthe permissions in the data model is there so there should be no complaints\nthat "it is impossible to set up this combination of permissions". We've\nprovided a detailed model and commands to alter it - the users can build\ntheir scripts to glue those things together.\n\nDavid J.", "msg_date": "Wed, 23 Nov 2022 14:19:20 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: fixing CREATEROLE" }, { "msg_contents": "On Wed, Nov 23, 2022 at 2:18 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Wed, Nov 23, 2022 at 3:59 PM David G. Johnston\n> <david.g.johnston@gmail.com> wrote:\n> > I haven't yet formed a complete thought here but is there any reason we\n> cannot convert the permission-like attributes to predefined roles?\n> >\n> > pg_login\n> > pg_replication\n> > pg_bypassrls\n> > pg_createdb\n> > pg_createrole\n> > pg_haspassword (password and valid until)\n> > pg_hasconnlimit\n> >\n> > Presently, attributes are never inherited, but having that be controlled\n> via the INHERIT property of the grant seems desirable.\n>\n> I think that something like this might be possible, but I'm not\n> convinced that it's a good idea.\n>\n\n\n> Either way, I'm not quite sure what the benefit of converting these\n> things to predefined roles is.\n\n\nSpecifically, you gain inheritance/set and \"admin option\" for free. 
So\nwhether I have an ability and whether I can grant it are separate concerns.\n\n\n\n> A password is a fine example of that. You should never\n> inherit someone else's password. Whether we've chosen the right set of\n> things to treat as per-role properties rather than predefined roles is\n> very much debatable, though, as are a number of other aspects of the\n> role system.\n>\n\nYou aren't inheriting a specific password, you are inheriting the right to\nhave a password stored in the database, with an optional expiration date.\n\n>\n> For instance, I'm pretty well unconvinced that merging users and\n> groups into a uniformed thing called roles was a good idea.\n\n\nI agree. No one was interested in the, admittedly complex, psql queries I\nwrote the other month but I decided to undo some of that decision there.\n\nDavid J.", "msg_date": "Wed, 23 Nov 2022 14:27:55 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: fixing CREATEROLE" }, { "msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> On Wed, Nov 23, 2022 at 2:18 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>> Either way, I'm not quite sure what the benefit of converting these\n>> things to predefined roles is.\n\n> Specifically, you gain inheritance/set and \"admin option\" for free.\n\nRight: the practical issue with CREATEROLE/CREATEDB is that you need\nsome mechanism for managing who can grant those privileges. The\ncurrent answer isn't very flexible, which has been complained of\nrepeatedly. If they become predefined roles then we get a lot of\nalready-built-out infrastructure to solve that, instead of having to\nwrite even more single-purpose logic. I think it's a sensible future\npath, but said lack of flexibility hasn't yet spurred anyone to do it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 23 Nov 2022 16:40:44 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: fixing CREATEROLE" }, { "msg_contents": "On Wed, Nov 23, 2022 at 4:18 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> To be clear, I'm not saying that I know a better answer. 
But the fact\n> that these end up so different from other role-property bits seems to\n> me to suggest that making them role-property bits is the wrong thing.\n> They aren't privileges in any usual sense of the word --- if they\n> were, allowing Alice to flip her own bits would obviously be silly.\n> But all the existing role-property bits, with the exception of\n> rolinherit, certainly are in the nature of privileges.\n\nI think that's somewhat true, but I don't completely agree. I don't\nthink that INHERIT, LOGIN, CONNECTION LIMIT, PASSWORD, or VALID UNTIL\nare privileges either. I think they're just properties. I would put\nthese in the same category: properties, not privileges. I think that\nSUPERUSER, CREATEDB, CREATEROLE, REPLICATION, and BYPASSRLS are\nprivileges.\n\n> CREATEDB and CREATEROLE don't particularly bother me. We've talked before\n> about replacing them with memberships in predefined roles, and that would\n> be fine. But the reason nobody's got around to that (IMNSHO) is that it\n> won't really add much.\n\nI agree, although I'm not sure that means that we don't need to do\nanything about them as we evolve the system.\n\n> The thing that I think is a big wart is\n> rolinherit. I don't know quite what to do about it.\n\nOne option is to nuke it from orbit. Now that you can set that\nproperty on a per-grant basis, the per-role basis serves only to set\nthe default. I think that's of dubious value, and arguably backwards,\nbecause ISTM that in a lot of cases whether you want a role grant to\nbe inherited will depend on the nature of the role being granted\nrather than the role to which it is being granted. The setting we have\nworks the other way around, and I can never keep in my head what the\nuse case for that is. I think there must be one, though, because Peter\nEisentraut seemed to like having it around. I don't understand why,\nbut I respect Peter. 
:-)\n\n> But these two new\n> proposed bits seem to be much the same kind of wart, so I'd rather not\n> invent them, at least not in the form of role properties.\n\nI have to admit that when I realized that was the natural place to put\nthem to make the patch work, my first reaction internally was \"well,\nthat can't possibly be right, role properties suck!\". But I didn't and\nstill don't see where else to put them that makes any sense at all, so\nI eventually decided that my initial reaction was misguided. So I\ncan't really blame you for not liking it either, and would be happy if\nwe could come up with something else that feels better. I just don't\nknow what it is: at least as of this moment in time, I believe these\nnaturally ARE properties of the role, and therefore I'm inclined to\nview my initial reluctance to implement it that way as a reflex rather\nthan a well-considered opinion. That is, the CREATE ROLE syntax is\nclunky, and it controls some things that are properties and others\nthat are permissions, but they're not inherited like regular\npermissions, so it stinks, and thus adding things to it also feels\nstinky. But if the existing command weren't such a mess I'm not sure\nadding this stuff to it would feel bad either.\n\nThat might be the wrong view. As I say, I'm open to other ideas, and\nit's possible there's some nicer way to do it that I just don't see\nright now.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 23 Nov 2022 16:41:35 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: fixing CREATEROLE" }, { "msg_contents": "Robert Haas:\n> I have to admit that when I realized that was the natural place to put\n> them to make the patch work, my first reaction internally was \"well,\n> that can't possibly be right, role properties suck!\". 
But I didn't and\n> still don't see where else to put them that makes any sense at all, so\n> I eventually decided that my initial reaction was misguided. So I\n> can't really blame you for not liking it either, and would be happy if\n> we could come up with something else that feels better. I just don't\n> know what it is: at least as of this moment in time, I believe these\n> naturally ARE properties of the role [...]\n> \n> That might be the wrong view. As I say, I'm open to other ideas, and\n> it's possible there's some nicer way to do it that I just don't see\n> right now.\n\nINHERITCREATEDROLES and SETCREATEDROLES behave much like DEFAULT \nPRIVILEGES. What about something like:\n\nALTER DEFAULT PRIVILEGES FOR alice\nGRANT TO alice WITH INHERIT FALSE, SET TRUE, ADMIN TRUE\n\nThe \"abbreviated grant\" is very much abbreviated, because the original \nsyntax GRANT a TO b is already quite short to begin with, i.e. there is \nno ON ROLE or something like that in it.\n\nThe initial DEFAULT privilege would be INHERIT FALSE, SET FALSE, ADMIN \nTRUE, I guess?\n\nBest,\n\nWolfgang\n\n\n", "msg_date": "Thu, 24 Nov 2022 08:41:46 +0100", "msg_from": "walther@technowledgy.de", "msg_from_op": false, "msg_subject": "Re: fixing CREATEROLE" }, { "msg_contents": "On Thu, Nov 24, 2022 at 2:41 AM <walther@technowledgy.de> wrote:\n> INHERITCREATEDROLES and SETCREATEDROLES behave much like DEFAULT\n> PRIVILEGES. What about something like:\n>\n> ALTER DEFAULT PRIVILEGES FOR alice\n> GRANT TO alice WITH INHERIT FALSE, SET TRUE, ADMIN TRUE\n>\n> The \"abbreviated grant\" is very much abbreviated, because the original\n> syntax GRANT a TO b is already quite short to begin with, i.e. there is\n> no ON ROLE or something like that in it.\n>\n> The initial DEFAULT privilege would be INHERIT FALSE, SET FALSE, ADMIN\n> TRUE, I guess?\n\nI don't know if changing the syntax from A to B is really getting us\nanywhere. 
I generally agree that the ALTER DEFAULT PRIVILEGES syntax\nlooks nicer than the CREATE/ALTER ROLE syntax, but I'm not sure that's\na sufficient reason to move the control over this behavior to ALTER\nDEFAULT PRIVILEGES. One thing to consider is that, as I've designed\nthis, whether or not ADMIN is included in the grant is non-negotiable.\nI am, at least at present, inclined to think that was the right call,\npartly because Mark Dilger expressed a lot of concern about the\nCREATEROLE user losing control over the role they'd just created, and\nallowing ADMIN to be turned off would have exactly that effect. Plus a\ngrant with INHERIT FALSE, SET FALSE, ADMIN FALSE would end up being\nalmost identical to no grant at all, which seems pointless. Basically,\nwithout ADMIN, the implicit GRANT fails to accomplish its intended\npurpose, so I don't like having that as a possibility.\n\nThe other thing that's a little weird about the syntax which you\npropose is that it's not obviously related to CREATE ROLE. The intent\nof the patch as implemented is to allow control over only the implicit\nGRANT that is created when a new role is created, not all grants that\nmight be created by or to a particular user. Setting defaults for all\ngrants doesn't seem like a particularly good idea to me, but it's\ndefinitely a different idea than what the patch proposes to do.\n\nI did spend some time thinking about trying to tie this to the\nCREATEROLE syntax itself. For example, instead of CREATE ROLE alice\nCREATEROLE INHERITCREATEDROLES SETCREATEDROLES you could write CREATE\nROLE alice CREATEROLE WITH (INHERIT TRUE, SET TRUE) or something like\nthis. That would avoid introducing new, lengthy keywords that are just\nconcatenations of other English words, a kind of syntax that doesn't\nlook particularly nice to me and probably is less friendly to\nnon-English speakers as well. I didn't do it that way because the\nparser support would be more complex, but I could. 
CREATEROLE would\nhave to become a keyword again, but that's not a catastrophe.\n\nAnother idea would be to break the CREATEROLE stuff off from CREATE\nROLE entirely and put it all into GRANT. You could imagine syntax like\nGRANT CREATEROLE (or CREATE ROLE?) TO role_specification WITH (INHERIT\nTRUE/FALSE, SET TRUE/FALSE). There are a few potential issues with\nthis direction. One, if we did this, then CREATEROLE probably ought to\nbecome inheritable, because that's the way grants work in general, and\nthis likely shouldn't be an exception, but this would be a behavior\nchange. However, if it is the consensus that such a behavior change\nwould be an improvement, that might be OK. Two, I wonder what we'd do\nabout the GRANTED BY role_specification clause. We could leave it out,\nbut that would be asymmetric with other GRANT commands. We could also\nsupport it and record that information and make this work more like\nother cases, including, I suppose, the possibility of dependent\ngrants. We'd have to think about what that means exactly. If you\nrevoke CREATEROLE from someone who has granted CREATEROLE to others, I\nsuppose that's a clear dependent grant and needs to be recursively\nrevoked. But what about the implicit grants that were created because\nthe person had CREATEROLE? Are those also dependent grants? And what\nabout the roles themselves? Should revoking CREATEROLE drop the roles\nthat the user in question created? That gets complicated, because\nthose roles might own objects. That's scary, because you might not\nexpect revoking a role permission to result in tables getting dropped.\nIt's also problematic, because those tables might be in some other\ndatabase where they are inaccessible to the current session. 
All in\nall I'm inclined to think that recursing to the roles themselves is a\nbad plan, but it's debatable.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 28 Nov 2022 11:07:05 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: fixing CREATEROLE" }, { "msg_contents": "Robert Haas:\n> I don't know if changing the syntax from A to B is really getting us\n> anywhere. I generally agree that the ALTER DEFAULT PRIVILEGES syntax\n> looks nicer than the CREATE/ALTER ROLE syntax, but I'm not sure that's\n> a sufficient reason to move the control over this behavior to ALTER\n> DEFAULT PRIVILEGES.\n\nThe list of role attributes can currently be roughly divided into the \nfollowing categories:\n- Settings with role-specific values: CONNECTION LIMIT, PASSWORD, VALID \nUNTIL. It's hard to imagine storing them anywhere else, because they \nneed to have a different value for each role. Those are not just \"flags\" \nlike all the other attributes.\n- Two special attributes in INHERIT and BYPASSRLS regarding \nsecurity/privileges. Those were invented because there was no other \nsyntax to do the same thing. Those could be interpreted as privileges to \ndo something, too - but lacking the ability to do that explicit. There \nis no SET BYPASSRLS ON/OFF or SET INHERIT ON/OFF. Of course the INHERIT \ncase is now a bit different, because there is the inherit grant option \nyou introduced.\n- Cluster-wide privileges: SUPERUSER, CREATEDB, CREATEROLE, LOGIN, \nREPLICATION. Those can't be granted on some kind of object, because \nthere is no such global object. You could imagine inventing some kind of \nglobal CLUSTER object and do something like GRANT SUPERUSER ON CLUSTER \nTO alice; instead. Turning those into role attributes was the choice \nmade instead. Most likely it would have been only a syntactic difference \nanyway: Even if there was something like GRANT .. 
ON CLUSTER, you'd most \nlikely implement that as... storing those grants as role attributes.\n\nYour patch is introducing a new category of role attributes - those that \nare affecting default behavior. But there is already a way to express \nthis right now, and that's ALTER DEFAULT PRIVILEGES in this case. Imho, \nthe question asked should not be \"why change from syntax A to B?\" but \nrather: Why introduce a new category of role attributes, when there is a \nway to express the same concept already? I can't see any compelling \nreason for that, yet.\n\nOr not \"yet\", but rather \"anymore\". If I understood and remember \ncorrectly, you implemented it in a way that a user could not change \nthose new attributes on their own role. This is in fact different to how \nALTER DEFAULT PRIVILEGES works, so you could have made an argument that \nthis was better expressed as role attributes. But I think this was asked \nand agreed on to act differently, so that the user can change this \ndefault behavior of what happens when they create a role for themselves. \nAnd now this reason is gone - there is no reason NOT to implement it as \nDEFAULT PRIVILEGES.\n\n> One thing to consider is that, as I've designed\n> this, whether or not ADMIN is included in the grant is non-negotiable.\n> I am, at least at present, inclined to think that was the right call,\n> partly because Mark Dilger expressed a lot of concern about the\n> CREATEROLE user losing control over the role they'd just created, and\n> allowing ADMIN to be turned off would have exactly that effect. Plus a\n> grant with INHERIT FALSE, SET FALSE, ADMIN FALSE would end up being\n> almost identical to no grant at all, which seems pointless. 
Basically,\n> without ADMIN, the implicit GRANT fails to accomplish its intended\n> purpose, so I don't like having that as a possibility.\n\nWith how you implemented it right now, is it possible to do the following?\n\nCREATE ROLE alice;\nREVOKE ADMIN OPTION FOR alice FROM CURRENT_USER;\n\nIf the answer is yes, then there is no reason to allow a user to set a \nshortcut for SET and INHERIT, but not for ADMIN.\n\nIf the answer is no, then you could just not allow specifying the ADMIN \noption in the ALTER DEFAULT PRIVILEGES statement and always force it to \nbe TRUE.\n\n\n> The other thing that's a little weird about the syntax which you\n> propose is that it's not obviously related to CREATE ROLE. The intent\n> of the patch as implemented is to allow control over only the implicit\n> GRANT that is created when a new role is created, not all grants that\n> might be created by or to a particular user. Setting defaults for all\n> grants doesn't seem like a particularly good idea to me, but it's\n> definitely a different idea than what the patch proposes to do.\n\nBefore I proposed that I was confused for a moment about this, too - but \nit turns out to be wrong. ALTER DEFAULT PRIVILEGES in general works as:\n\nWhen object A is created, issue a GRANT ON A automatically.\n\nIn my proposal, the \"object\" is not the GRANT of that role. It's the \nrole itself. So the default privileges express what should happen when \nthe role is created. The default privileges would NOT affect a regular \nGRANT role TO role_spec command. They only run that command when a role \nis created.\n\n> I did spend some time thinking about trying to tie this to the\n> CREATEROLE syntax itself. For example, instead of CREATE ROLE alice\n> CREATEROLE INHERITCREATEDROLES SETCREATEDROLES you could write CREATE\n> ROLE alice CREATEROLE WITH (INHERIT TRUE, SET TRUE) or something like\n> this. 
That would avoid introducing new, lengthy keywords that are just\n> concatenations of other English words, a kind of syntax that doesn't\n> look particularly nice to me and probably is less friendly to\n> non-English speakers as well. I didn't do it that way because the\n> parser support would be more complex, but I could. CREATEROLE would\n> have to become a keyword again, but that's not a catastrophe.\n\nI agree, this would not have been any better.\n\n> Another idea would be to break the CREATEROLE stuff off from CREATE\n> ROLE entirely and put it all into GRANT. You could imagine syntax like\n> GRANT CREATEROLE (or CREATE ROLE?) TO role_specification WITH (INHERIT\n> TRUE/FALSE, SET TRUE/FALSE). There are a few potential issues with\n> this direction. One, if we did this, then CREATEROLE probably ought to\n> become inheritable, because that's the way grants work in general, and\n> this likely shouldn't be an exception, but this would be a behavior\n> change. However, if it is the consensus that such a behavior change\n> would be an improvement, that might be OK. Two, I wonder what we'd do\n> about the GRANTED BY role_specification clause. We could leave it out,\n> but that would be asymmetric with other GRANT commands. We could also\n> support it and record that information and make this work more like\n> other cases, including, I suppose, the possibility of dependent\n> grants. We'd have to think about what that means exactly. If you\n> revoke CREATEROLE from someone who has granted CREATEROLE to others, I\n> suppose that's a clear dependent grant and needs to be recursively\n> revoked. But what about the implicit grants that were created because\n> the person had CREATEROLE? Are those also dependent grants? And what\n> about the roles themselves? Should revoking CREATEROLE drop the roles\n> that the user in question created? That gets complicated, because\n> those roles might own objects. 
That's scary, because you might not\n> expect revoking a role permission to result in tables getting dropped.\n> It's also problematic, because those tables might be in some other\n> database where they are inaccessible to the current session. All in\n> all I'm inclined to think that recursing to the roles themselves is a\n> bad plan, but it's debatable.\n\nI'm not sure how that relates to the role attributes vs. default \nprivileges discussion. Those seem to be orthogonal to the question of \nhow to treat the CREATEROLE privilege itself. Right now, it's a role \nattribute. I proposed \"database roles\" and making CREATEROLE a privilege \non the database level. David Johnston proposed to use a pg_createrole \nbuilt-in role instead. Your proposal here is to invent a CREATEROLE \nprivilege that can be granted, which is very similar to what I wrote \nabove about \"GRANT CREATEROLE ON CLUSTER\". Side note: Without the ON \nCLUSTER, there'd be no target object in your GRANT statement and as such \nCREATEROLE should be treated as a role name - so I'm not sure your \nproposal actually works. In any case: All those proposals change the \nsemantics of how this whole CREATEROLE \"privilege\" works in terms of \ninheritance etc. However, those proposals all don't really change the \nway you'll want to treat the ADMIN option on the role, I think, and can \nall be made to create that implicit GRANT WITH ADMIN, when you create \nthe role. And once you do that, the question of how that GRANT looks by \ndefault comes up - so in all those scenarios, we could talk about role \nattributes vs. default privileges. 
Or we could just decide not to, \nbecause is it really that hard to just issue a GRANT statement \nimmediately after CREATE ROLE, when you want to have SET or INHERIT \noptions on that role?\n\nThe answer to that question was \"yes it is too hard\" a while back and as \nsuch DEFAULT PRIVILEGES were introduced.\n\nBest,\n\nWolfgang\n\n\n", "msg_date": "Mon, 28 Nov 2022 19:56:56 +0100", "msg_from": "walther@technowledgy.de", "msg_from_op": false, "msg_subject": "Re: fixing CREATEROLE" }, { "msg_contents": "On Mon, Nov 28, 2022 at 11:57 AM <walther@technowledgy.de> wrote:\n\n> Robert Haas:\n> > I don't know if changing the syntax from A to B is really getting us\n> > anywhere. I generally agree that the ALTER DEFAULT PRIVILEGES syntax\n> > looks nicer than the CREATE/ALTER ROLE syntax, but I'm not sure that's\n> > a sufficient reason to move the control over this behavior to ALTER\n> > DEFAULT PRIVILEGES.\n>\n> Your patch is introducing a new category of role attributes - those that\n> are affecting default behavior. But there is already a way to express\n> this right now, and that's ALTER DEFAULT PRIVILEGES in this case.\n\n\nI do not like ALTER DEFAULT PRIVILEGES (ADP) for this. I don't really like\ndefaults, period, for this.\n\nThe role doing the creation and the role being created are both in scope\nwhen the command is executed and if anything it is the role doing the\ncreation that is receiving the privileges not the role being created. 
For\nADP, the role being created gets the privileges and it is objects not in\nthe scope of the executed command that are being affected.\n\n\n> > One thing to consider is that, as I've designed\n> > this, whether or not ADMIN is included in the grant is non-negotiable.\n> > I am, at least at present, inclined to think that was the right call,\n> > partly because Mark Dilger expressed a lot of concern about the\n> > CREATEROLE user losing control over the role they'd just created, and\n> > allowing ADMIN to be turned off would have exactly that effect. Plus a\n> > grant with INHERIT FALSE, SET FALSE, ADMIN FALSE would end up being\n> > almost identical to no grant at all, which seems pointless. Basically,\n> > without ADMIN, the implicit GRANT fails to accomplish its intended\n> > purpose, so I don't like having that as a possibility.\n>\n> With how you implemented it right now, is it possible to do the following?\n>\n> CREATE ROLE alice;\n> REVOKE ADMIN OPTION FOR alice FROM CURRENT_USER;\n>\n> If the answer is yes, then there is no reason to allow a user to set a\n> shortcut for SET and INHERIT, but not for ADMIN.\n>\n> If the answer is no, then you could just not allow specifying the ADMIN\n> option in the ALTER DEFAULT PRIVILEGES statement and always force it to\n> be TRUE.\n>\n\nA prior email described that the creation of a role by a CREATEROLE role\nresults in the necessary creation of an ADMIN grant from the creator to the\nnew role granted by the bootstrap superuser (or, possibly, whichever\nsuperuser granted CREATEROLE). 
That REVOKE will not work as there would be\nno existing \"grant by current_user over alice granted by current_user\"\nimmediately after current_user creates alice.\n\nOr we could just decide not to,\n> because is it really that hard to just issue a GRANT statement\n> immediately after CREATE ROLE, when you want to have SET or INHERIT\n> options on that role?\n>\n> The answer to that question was \"yes it is too hard\" a while back and as\n> such DEFAULT PRIVILEGES were introduced.\n>\n>\nA quick tally of the thread so far:\n\nNo Defaults needed: David J., Mark?, Tom?\nDefaults needed - attached to role directly: Robert\nDefaults needed - defined within Default Privileges: Walther?\nThe capability itself seems orthogonal to the rest of the patch to track\nthese details better. I think we can \"Fix CREATEROLE\" without any feature\nregarding optional default behaviors and would suggest this patch be so\nlimited and that another thread be started for discussion of (assuming a\ndefault specifying mechanism is wanted overall) how it should look. Let's\nnot let a usability debate distract us from fixing a real problem.\n\nDavid J.\n", "msg_date": "Mon, 28 Nov 2022 12:34:10 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: fixing CREATEROLE" }, { "msg_contents": "On Mon, Nov 28, 2022 at 1:56 PM <walther@technowledgy.de> wrote:\n> And now this reason is gone - there is no reason NOT to implement it as\n> DEFAULT PRIVILEGES.\n\nI think there is, and it's this, which you wrote further down:\n\n> In my proposal, the \"object\" is not the GRANT of that role. It's the\n> role itself. So the default privileges express what should happen when\n> the role is created. The default privileges would NOT affect a regular\n> GRANT role TO role_spec command. They only run that command when a role\n> is created.\n\nI agree that this is what you are proposing, but it is not what your\nproposed syntax says. 
Your proposed syntax simply says ALTER DEFAULT\nPRIVILEGES .. GRANT. Users who read that are going to think it\ncontrols the default behavior for all grants, because that's what the\nsyntax says. If the proposed syntax mentioned CREATE ROLE someplace,\nmaybe that would have some potential. A proposal to make a command\nthat controls CREATE ROLE and only CREATE ROLE and mentions neither\nCREATE nor ROLE anywhere in the syntax is never going to be\nacceptable.\n\n> With how you implemented it right now, is it possible to do the following?\n>\n> CREATE ROLE alice;\n> REVOKE ADMIN OPTION FOR alice FROM CURRENT_USER;\n>\n> If the answer is yes, then there is no reason to allow a user to set a\n> shortcut for SET and INHERIT, but not for ADMIN.\n>\n> If the answer is no, then you could just not allow specifying the ADMIN\n> option in the ALTER DEFAULT PRIVILEGES statement and always force it to\n> be TRUE.\n\nIt's no. Well, OK, you can do it, but it doesn't revoke anything,\nbecause you can only revoke your own grant, not the bootstrap\nsuperuser's grant.\n\n> attributes vs. default privileges. Or we could just decide not to,\n> because is it really that hard to just issue a GRANT statement\n> immediately after CREATE ROLE, when you want to have SET or INHERIT\n> options on that role?\n\nIt's not difficult in the sense that climbing Mount Everest is\ndifficult, but it makes the user experience as a CREATEROLE\nnon-superuser quite noticeably different from being a superuser.\nHaving a way to paper over such differences is, in my opinion, an\nimportant usability feature.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 28 Nov 2022 14:36:22 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: fixing CREATEROLE" }, { "msg_contents": "David G. 
Johnston:\n> A quick tally of the thread so far:\n> \n> No Defaults needed: David J., Mark?, Tom?\n> Defaults needed - attached to role directly: Robert\n> Defaults needed - defined within Default Privileges: Walther?\n\ns/Walther/Wolfgang\n\n> The capability itself seems orthogonal to the rest of the patch to track \n> these details better.  I think we can \"Fix CREATEROLE\" without any \n> feature regarding optional default behaviors and would suggest this \n> patch be so limited and that another thread be started for discussion of \n> (assuming a default specifying mechanism is wanted overall) how it \n> should look.  Let's not let a usability debate distract us from fixing a \n> real problem.\n\n+1\n\nI didn't argue for whether defaults are needed in this case or not. I \njust said that ADP is better for defaults than role attributes are. Or \nthe other way around: I think role attributes are not a good way to \nexpress those.\n\nPersonally, I'm in the No Defaults needed camp, too.\n\nBest,\n\nWolfgang\n\n\n", "msg_date": "Mon, 28 Nov 2022 20:42:15 +0100", "msg_from": "walther@technowledgy.de", "msg_from_op": false, "msg_subject": "Re: fixing CREATEROLE" }, { "msg_contents": "On Mon, Nov 28, 2022 at 12:42 PM <walther@technowledgy.de> wrote:\n\n> David G. Johnston:\n> > A quick tally of the thread so far:\n> >\n> > No Defaults needed: David J., Mark?, Tom?\n> > Defaults needed - attached to role directly: Robert\n> > Defaults needed - defined within Default Privileges: Walther?\n>\n> s/Walther/Wolfgang\n>\n\nSorry 'bout that, I was just reading the To: line in my email reply.\n\n>\n> Personally, I'm in the No Defaults needed camp, too.\n>\n\nI kinda thought so from your final comments, thanks for clarifying.\n\nDavid J.\n", "msg_date": "Mon, 28 Nov 2022 12:52:36 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: fixing CREATEROLE" }, { "msg_contents": "Robert Haas:\n>> In my proposal, the \"object\" is not the GRANT of that role. It's the\n>> role itself. So the default privileges express what should happen when\n>> the role is created. The default privileges would NOT affect a regular\n>> GRANT role TO role_spec command. They only run that command when a role\n>> is created.\n> \n> I agree that this is what you are proposing, but it is not what your\n> proposed syntax says. Your proposed syntax simply says ALTER DEFAULT\n> PRIVILEGES .. GRANT. Users who read that are going to think it\n> controls the default behavior for all grants, because that's what the\n> syntax says. If the proposed syntax mentioned CREATE ROLE someplace,\n> maybe that would have some potential. A proposal to make a command\n> that controls CREATE ROLE and only CREATE ROLE and mentions neither\n> CREATE nor ROLE anywhere in the syntax is never going to be\n> acceptable.\n\nYes, I agree - the abbreviated GRANT syntax is confusing/misleading in \nthat case. Consistent with the other syntaxes, but easily confused \nnonetheless.\n\n> It's no. Well, OK, you can do it, but it doesn't revoke anything,\n> because you can only revoke your own grant, not the bootstrap\n> superuser's grant.\n\nAh, I see. 
I didn't get that difference regarding the bootstrap \nsuperuser, so far.\n\nSo in that sense, the ADP GRANT would be an additional GRANT issued by \nthe user that created the role in addition to the bootstrap superuser's \ngrant. You can't revoke the bootstrap superuser's grant - but you can't \nmodify it either. And there is no need to add SET or INHERIT to the \nbootstrap superuser's grant, because you can grant the role yourself \nagain, with those options.\n\nI think it would be very strange to have a default for that bootstrap \nsuperuser's grant. Or rather: A different default than the minimum \nrequired - and that's just ADMIN, not SET, not INHERIT. When you have \nthe minimum, you can always choose to grant SET and INHERIT later on \nyourself - and revoke it, too! But when the SET and INHERIT are on the \nbootstrap superuser's grant - then there is no way for you to revoke SET \nor INHERIT on that grant anymore later.\n\nWhy should the superuser, who gave you CREATEROLE, insist on you having \nSET or INHERIT forever and disallow revoking it from yourself?\n\nBest,\n\nWolfgang\n\n\n", "msg_date": "Mon, 28 Nov 2022 20:53:08 +0100", "msg_from": "walther@technowledgy.de", "msg_from_op": false, "msg_subject": "Re: fixing CREATEROLE" }, { "msg_contents": "\n\n> On Nov 28, 2022, at 11:34 AM, David G. Johnston <david.g.johnston@gmail.com> wrote:\n> \n> No Defaults needed: David J., Mark?, Tom?\n\nAs Robert has the patch organized, I think defaults are needed, but I see that as a strike against the patch.\n\n> Defaults needed - attached to role directly: Robert\n> Defaults needed - defined within Default Privileges: Walther?\n> The capability itself seems orthogonal to the rest of the patch to track these details better. 
I think we can \"Fix CREATEROLE\" without any feature regarding optional default behaviors and would suggest this patch be so limited and that another thread be started for discussion of (assuming a default specifying mechanism is wanted overall) how it should look. Let's not let a usability debate distract us from fixing a real problem.\n\nIn Robert's initial email, he wrote, \"It seems to me that the root of any fix in this area must be to change the rule that CREATEROLE can administer any role whatsoever.\"\n\nThe obvious way to fix that is to revoke that rule and instead automatically grant ADMIN OPTION to a creator over any role they create. That's problematic, though, because as things stand, ADMIN OPTION is granted to somebody by granting them membership in the administered role WITH ADMIN OPTION, so membership in the role and administration of the role are conflated.\n\nRobert's patch tries to deal with the (possibly unwanted) role membership by setting up defaults to mitigate the effects, but that is more confusing to me than just de-conflating role membership from role administration, and giving role creators administration over roles they create, without in so doing giving them role membership. I don't recall enough details about how hard it is to de-conflate role membership from role administration, and maybe that's a non-starter for reasons I don't recall at the moment. I expect Robert has already contemplated that idea and instead proposed this patch for good reasons. 
Robert?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 28 Nov 2022 12:02:34 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: fixing CREATEROLE" }, { "msg_contents": "Mark Dilger:\n> Robert's patch tries to deal with the (possibly unwanted) role membership by setting up defaults to mitigate the effects, but that is more confusing to me than just de-conflating role membership from role administration, and giving role creators administration over roles they create, without in so doing giving them role membership. I don't recall enough details about how hard it is to de-conflate role membership from role administration, and maybe that's a non-starter for reasons I don't recall at the moment.\n\nIsn't this just GRANT .. WITH SET FALSE, INHERIT FALSE, ADMIN TRUE? That \nshould allow role administration, without actually granting membership \nin that role, yet, right?\n\nBest,\n\nWolfgang\n\n\n", "msg_date": "Mon, 28 Nov 2022 21:08:17 +0100", "msg_from": "walther@technowledgy.de", "msg_from_op": false, "msg_subject": "Re: fixing CREATEROLE" }, { "msg_contents": "On Mon, Nov 28, 2022 at 3:02 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> Robert's patch tries to deal with the (possibly unwanted) role membership by setting up defaults to mitigate the effects, but that is more confusing to me than just de-conflating role membership from role administration, and giving role creators administration over roles they create, without in so doing giving them role membership. I don't recall enough details about how hard it is to de-conflate role membership from role administration, and maybe that's a non-starter for reasons I don't recall at the moment. I expect Robert has already contemplated that idea and instead proposed this patch for good reasons. 
Robert?\n\n\"De-conflating role membership from role administration\" isn't really\na specific proposal that someone can go out and implement. You have to\nmake some decision about *how* you are going to separate those\nconcepts. And that's what I did: I made INHERIT and SET into\ngrant-level options. That allows you to give someone access to the\nprivileges of a role without the ability to administer it (at least\none of INHERIT and SET true, and ADMIN false) or the ability to\nadminister a role without having any direct access to its privileges\n(INHERIT FALSE, SET FALSE, ADMIN TRUE). I don't see that we can, or\nneed to, separate things any more than that.\n\nYou can argue that a grant with INHERIT FALSE, SET FALSE, ADMIN TRUE\nstill grants membership, and I think formally that's true, but I also\nthink it's just picking something to bicker about. The need isn't to\nseparate membership per se from administration. It's to separate\nprivilege inheritance and the ability to SET ROLE from role\nadministration. And I've done that.\n\nI strongly disagree with the idea that the ability for users to\ncontrol defaults here isn't needed. You can set a default tablespace\nfor your database, and a default tablespace for your session, and a\ndefault tablespace for new partitions of an existing partition table.\nYou can set default privileges for every type of object you can\ncreate, and a default search path to find objects in the database. You\ncan set defaults for all of your connection parameters to the database\nusing environment variables, and the default data directory for\ncommands that need one. You can set defaults for all of your psql\nsettings in ~/.psqlrc. You can set defaults for the character sets,\nlocales and collations of new databases. You can set the default\nversion of an extension in the control file, so that the user doesn't\nhave to specify a version. And so on and so on. 
There's absolutely\nscads of things for which it is useful to be able to set defaults and\nfor which we give people the ability to set defaults, and I don't\nthink anyone is making a real argument for why that isn't also true\nhere. The argument that has been made is essentially that you could\nget by without it, but that's true of *every* default. Yet we keep\nadding the ability to set defaults for new things, and to set the\ndefaults for existing things in new ways, and there's a very good\nreason for that: it's extremely convenient. And that's true here, too.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 28 Nov 2022 15:28:39 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: fixing CREATEROLE" }, { "msg_contents": "\n\n> On Nov 28, 2022, at 12:08 PM, walther@technowledgy.de wrote:\n> \n> Isn't this just GRANT .. WITH SET FALSE, INHERIT FALSE, ADMIN TRUE? That should allow role administration, without actually granting membership in that role, yet, right?\n\nCan you clarify what you mean here? Are you inventing a new syntax?\n\n+GRANT bob TO alice WITH SET FALSE, INHERIT FALSE, ADMIN TRUE;\n+ERROR: syntax error at or near \"SET\"\n+LINE 1: GRANT bob TO alice WITH SET FALSE, INHERIT FALSE, ADMIN TRUE...\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 28 Nov 2022 12:33:47 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: fixing CREATEROLE" }, { "msg_contents": "\n\n> On Nov 28, 2022, at 12:33 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n>> Isn't this just GRANT .. WITH SET FALSE, INHERIT FALSE, ADMIN TRUE? That should allow role administration, without actually granting membership in that role, yet, right?\n> \n> Can you clarify what you mean here? Are you inventing a new syntax?\n\nNevermind. 
After reading Robert's email, it's clear enough what you mean here.\n \n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 28 Nov 2022 12:46:28 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: fixing CREATEROLE" }, { "msg_contents": "On Mon, Nov 28, 2022 at 1:28 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Mon, Nov 28, 2022 at 3:02 PM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>\n\n> You can argue that a grant with INHERIT FALSE, SET FALSE, ADMIN TRUE\n> still grants membership, and I think formally that's true, but I also\n> think it's just picking something to bicker about. The need isn't to\n> separate membership per se from administration. It's to separate\n> privilege inheritance and the ability to SET ROLE from role\n> administration. And I've done that.\n>\n\nWe seem to now be in agreement on this design choice, and the related bit\nabout bootstrap superuser granting admin on newly created roles by the\ncreaterole user.\n\nThis seems like a patch in its own right.\n\nIt still leaves open the default membership behavior as well as whether we\nwant to rework the attributes into predefined roles.\n\n\n> I strongly disagree with the idea that the ability for users to\n> control defaults here isn't needed.\n\n\nThat's fine, but are you saying this patch is incapable (or simply\nundesirable) of having the parts about handling defaults separated out from\nthe parts that define how the system works with a given set of permissions;\nand the one implementation detail of having the bootstrap superuser\nautomatically grant admin to any roles a createuser role creates? If you\nand others feel strongly about defaults I'm sure that the suggested other\nthread focused on that will get attention and be committed in a timely\nmanner. 
But the system will work, and not be broken, if that got stalled,\nand it could be added in later.\n\nDavid J.\n", "msg_date": "Mon, 28 Nov 2022 14:19:35 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: fixing CREATEROLE" }, { "msg_contents": "On Mon, Nov 28, 2022 at 4:19 PM David G. 
Johnston\n<david.g.johnston@gmail.com> wrote:\n> That's fine, but are you saying this patch is incapable (or simply undesirable) of having the parts about handling defaults separated out from the parts that define how the system works with a given set of permissions; and the one implementation detail of having the bootstrap superuser automatically grant admin to any roles a createuser role creates? If you and others feel strongly about defaults I'm sure that the suggested other thread focused on that will get attention and be committed in a timely manner. But the system will work, and not be broken, if that got stalled, and it could be added in later.\n\nThe topics are so closely intertwined that I don't believe that trying\nto have separate discussions will be useful or productive. There's no\nhope of anybody understanding 0004 or having an educated opinion about\nit without first understanding the earlier patches, and there's no\nrequirement that someone has to review 0004, or like it, just because\nthey review or like 0001-0003.\n\nBut so far nobody has actually reviewed anything, and all that's\nhappened is people have complained about 0004 for reasons which in my\nopinion are pretty nebulous and largely ignore the factors that caused\nit to exist in the first place. We had about 400 emails during the\nlast release cycle arguing about a whole bunch of topics related to\nuser management, and it became absolutely crystal clear in that\ndiscussion that Stephen Frost and David Steele wanted to have roles\nthat could create other roles but not immediately be able to access\ntheir privileges. Mark and I, on the other hand, wanted to have roles\nthat could create other roles WITH immediate access to their\nprivileges. That argument was probably the main thing that derailed\nthat entire patch set, which represented months of work by Mark. 
Now,\nI have come up with a competing patch set that for the price of 100\nlines of code and a couple of slightly ugly option names can do either\nthing. So Stephen and David and any like-minded users can have what\nthey want, and Mark and I and any like-minded users can have what we\nwant. And the result is that I've got like five people, some of whom\nparticipated in those discussions, showing up to say \"hey, we don't\nneed the ability to set defaults.\" Well, if that's the case, then why\ndid we have hundreds and hundreds of emails within the last 12 months\narguing about which way it should work?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 28 Nov 2022 16:55:04 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: fixing CREATEROLE" }, { "msg_contents": "On Mon, Nov 28, 2022 at 2:55 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Mon, Nov 28, 2022 at 4:19 PM David G. Johnston\n> <david.g.johnston@gmail.com> wrote:\n> > That's fine, but are you saying this patch is incapable (or simply\n> undesirable) of having the parts about handling defaults separated out from\n> the parts that define how the system works with a given set of permissions;\n> and the one implementation detail of having the bootstrap superuser\n> automatically grant admin to any roles a createuser role creates? If you\n> and others feel strongly about defaults I'm sure that the suggested other\n> thread focused on that will get attention and be committed in a timely\n> manner. But the system will work, and not be broken, if that got stalled,\n> and it could be added in later.\n>\n> The topics are so closely intertwined that I don't believe that trying\n> to have separate discussions will be useful or productive. 
There's no\n> hope of anybody understanding 0004 or having an educated opinion about\n> it without first understanding the earlier patches, and there's no\n> requirement that someone has to review 0004, or like it, just because\n> they review or like 0001-0003.\n>\n> But so far nobody has actually reviewed anything\n\n\n\n> Well, if that's the case, then why\n> did we have hundreds and hundreds of emails within the last 12 months\n> arguing about which way it should work?\n>\n>\nWhen ya'll come to some final conclusion on how you want the defaults to\nlook, come tell the rest of us. You already have 4 people debating the\nmatter, I don't really see the point of adding more voices to that\ncacophony. As you noted - voicing an opinion about 0004 is optional.\n\nI'll reiterate my review from before, with a bit more confidence this time.\n\n0001-0003 implements a desirable behavior change. In order for someone to\nmake some other role a member in some third role that someone must have\nadmin privileges on both other roles. CREATEROLE is not exempt from this\nrule. A user with CREATEROLE will, upon creating a new role, be granted\nadmin privilege on that role by the bootstrap superuser.\n\nThe consequence of 0001-0003 in the current environment is that since the\nnewly created CREATEROLE user will not have admin rights on any existing\nroles in the cluster, while they can create new roles in the system they\nare unable to grant those new roles membership in any other roles not also\ncreated by them. The ability to assign attributes to newly created roles\nis unaffected.\n\nAs a unit of work, those are \"ready-to-commit\" for me. 
I'll leave it to\nyou and others to judge the technical quality of the patch and finishing up\nthe FIXMEs that have been noted.\n\nDesirable follow-on patches include:\n\n1) Automatically install an additional membership grant, with the\nCREATEROLE user as the grantor, specifying INHERIT OR SET as TRUE (I\npersonally favor attaching these to ALTER ROLE, modifiable only by oneself)\n\n2) Convert Attributes into default roles\n\nDavid J.\n", "msg_date": "Mon, 28 Nov 2022 16:31:49 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: fixing CREATEROLE" }, { "msg_contents": "On Mon, Nov 28, 2022 at 4:55 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> But so far nobody has actually reviewed anything, ...\n\nActually this isn't true. Mark did review. 
Thanks, Mark.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 28 Nov 2022 18:42:48 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: fixing CREATEROLE" }, { "msg_contents": "On Mon, Nov 28, 2022 at 6:32 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n> Desirable follow-on patches include:\n>\n> 1) Automatically install an additional membership grant, with the CREATEROLE user as the grantor, specifying INHERIT OR SET as TRUE (I personally favor attaching these to ALTER ROLE, modifiable only by oneself)\n\nHmm, that's an interesting alternative to what I actually implemented.\nSome people might like it better, because it puts the behavior fully\nunder the control of the CREATEROLE user, which a number of you seem\nto favor.\n\nI suppose if we did it that way, it could even be a GUC, like\ncreate_role_automatic_grant_options.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 28 Nov 2022 20:33:03 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: fixing CREATEROLE" }, { "msg_contents": "Mark Dilger:\n>> Isn't this just GRANT .. WITH SET FALSE, INHERIT FALSE, ADMIN TRUE? That should allow role administration, without actually granting membership in that role, yet, right?\n> \n> Can you clarify what you mean here? 
Are you inventing a new syntax?\n> \n> +GRANT bob TO alice WITH SET FALSE, INHERIT FALSE, ADMIN TRUE;\n> +ERROR: syntax error at or near \"SET\"\n> +LINE 1: GRANT bob TO alice WITH SET FALSE, INHERIT FALSE, ADMIN TRUE...\n\nThis is valid syntax on latest master.\n\nBest,\n\nWolfgang\n\n\n\n", "msg_date": "Tue, 29 Nov 2022 08:05:46 +0100", "msg_from": "walther@technowledgy.de", "msg_from_op": false, "msg_subject": "Re: fixing CREATEROLE" }, { "msg_contents": "Robert Haas:\n> And the result is that I've got like five people, some of whom\n> particulated in those discussions, showing up to say \"hey, we don't\n> need the ability to set defaults.\" Well, if that's the case, then why\n> did we have hundreds and hundreds of emails within the last 12 months\n> arguing about which way it should work?\n\nFor me: \"Needed\" as in \"required\". I don't think we *require* defaults \nto make this useful, just as David said as well. Personally, I don't \nneed defaults either, at least I didn't have a use-case for it, yet. I'm \nnot objecting to introduce defaults, but I do object to *how* they were \nintroduced in your patch set, so far. 
It just wasn't consistent with the \nother stuff that already exists.\n\nBest,\n\nWolfgang\n\n\n", "msg_date": "Tue, 29 Nov 2022 08:19:27 +0100", "msg_from": "walther@technowledgy.de", "msg_from_op": false, "msg_subject": "Re: fixing CREATEROLE" }, { "msg_contents": "Robert Haas:\n>> 1) Automatically install an additional membership grant, with the CREATEROLE user as the grantor, specifying INHERIT OR SET as TRUE (I personally favor attaching these to ALTER ROLE, modifiable only by oneself)\n> \n> Hmm, that's an interesting alternative to what I actually implemented.\n> Some people might like it better, because it puts the behavior fully\n> under the control of the CREATEROLE user, which a number of you seem\n> to favor.\n\n+1\n\n> I suppose if we did it that way, it could even be a GUC, like\n> create_role_automatic_grant_options.\n\nI don't think using GUCs for that is any better. ALTER DEFAULT \nPRIVILEGES is the correct way to do it. The only argument against it \nwas, so far, that it's easy to confuse with default options for newly \ncreated role grants, due to the abbreviated grant syntax.\n\nI propose a slightly different syntax instead:\n\nALTER DEFAULT PRIVILEGES GRANT CREATED ROLE TO role_specification WITH ...;\n\nThis, together with the proposal above regarding the grantor, should be \nconsistent.\n\nIs there any other argument to be made against ADP?\n\nNote, that ADP allows much more than just creating a grant for the \nCREATEROLE user, which would be the case if the default GRANT was made \nTO the_create_role_user. 
But it could be made towards *other* users as \nwell, so you could do something like this:\n\nCREATE ROLE alice CREATEROLE;\nCREATE ROLE bob;\n\nALTER DEFAULT PRIVILEGES FOR alice GRANT CREATED ROLE TO bob WITH SET \nTRUE, INHERIT FALSE;\n\nThis is much more flexible than role attributes or GUCs.\n\nBest,\n\nWolfgang\n\n\n", "msg_date": "Tue, 29 Nov 2022 08:32:19 +0100", "msg_from": "walther@technowledgy.de", "msg_from_op": false, "msg_subject": "Re: fixing CREATEROLE" }, { "msg_contents": "On Tue, Nov 29, 2022 at 12:32 AM <walther@technowledgy.de> wrote:\n\n>\n> Is there any other argument to be made against ADP?\n>\n\nThese aren't privileges, they are memberships. The pg_default_acl catalog\nis also per-data while these settings should be present in a catalog which,\nlike pg_authid, is catalog-wide. This latter point, for me, disqualifies\nthe command itself from being used for this purpose. If we'd like to\ncreate ALTER DEFAULT MEMBERSHIP (and a corresponding cluster-wide catalog)\nthen maybe the rest of the design would work within that.\n\n\n>\n> Note, that ADP allows much more than just creating a grant for the\n> CREATEROLE user, which would be the case if the default GRANT was made\n> TO the_create_role_user. But it could be made towards *other* users as\n> well, so you could do something like this:\n>\n> CREATE ROLE alice CREATEROLE;\n> CREATE ROLE bob;\n>\n> ALTER DEFAULT PRIVILEGES FOR alice GRANT CREATED ROLE TO bob WITH SET\n> TRUE, INHERIT FALSE;\n>\n\nWhat does that accomplish? bob cannot create roles to actually exercise\nhis privilege.\n\n\n> This is much more flexible than role attributes or GUCs.\n>\n>\nThe main advantage of GUC over a role attribute is that you can institute\nlayers of defaults according to a given cluster's specific needs. 
ALTER\nROLE SET (pg_db_role_setting - also cluster-wide) also comes into play;\nmaybe alice wants auto-inherit while in db-a but not db-b (this would/will\nbe more convincing if we end up having per-database roles).\n\nIf we accept that some external configuration knowledge is going to\ninfluence the result of executing this command (Tom?) then it seems that\nall the features a GUC provides are desirable in determining how the final\nexecution context is configured. Which makes sense as this kind of thing is\nprecisely what the GUC subsystem was designed to handle - session context\nenvironments related to the user and database presently connected.\n\nDavid J.\n", "msg_date": "Tue, 29 Nov 2022 08:12:01 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: fixing CREATEROLE" }, { "msg_contents": "On Tue, Nov 29, 2022 at 2:32 AM <walther@technowledgy.de> wrote:\n> I propose a slightly different syntax instead:\n>\n> ALTER DEFAULT PRIVILEGES GRANT CREATED ROLE TO role_specification WITH ...;\n>\n> This, together with the proposal above regarding the grantor, should be\n> consistent.\n\nI think that is more powerful than what I proposed but less fit for\npurpose. If alice is a CREATEROLE user and issues CREATE ROLE bob, my\nproposal allows alice to automatically obtain access to bob's\nprivileges. Your proposal would allow that, but it would also allow\nalice to automatically confer bob's privileges on some third user, say\ncharlie. Maybe that's useful to somebody, I don't know.\n\nBut one significant disadvantage of this is that every CREATEROLE user\nmust have their own configuration. If we have CREATE ROLE users alice,\ndave, and ellen, then alice needs to execute ALTER DEFAULT PRIVILEGES\nGRANT CREATED ROLE TO alice WITH ...; dave needs to do the same thing\nwith dave instead of alice; and ellen needs to do the same thing with\nellen instead of alice. 
There's no way to apply a system-wide\nconfiguration that applies nicely to all CREATEROLE users.\n\nA GUC would of course allow that, because it could be set in\npostgresql.conf and then overridden for particular databases, users,\nor sessions.\n\nDavid claims that \"these aren't privileges, they are memberships.\" I\ndon't entirely agree with that, because I think that we're basically\nusing memberships as a pseudonym for privileges where roles are\nconcerned. However, it is true that there's no precedent for referring\nto role grants using the keyword PRIVILEGES at the SQL level, and the\nfact that the underlying works in somewhat similar ways doesn't\nnecessarily mean that it's OK to conflate the two concepts at the SQL\nlevel.\n\nSo I'm still not very sold on this idea.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 29 Nov 2022 11:06:22 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: fixing CREATEROLE" }, { "msg_contents": "On Mon, Nov 28, 2022 at 8:33 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> Hmm, that's an interesting alternative to what I actually implemented.\n> Some people might like it better, because it puts the behavior fully\n> under the control of the CREATEROLE user, which a number of you seem\n> to favor.\n\nHere's an updated patch set.\n\n0001 adds more precise and extensive documentation for the current\n(broken) state of affairs. I propose to back-patch this to all\nsupported branches. It also removes a <tip> suggesting that you should\nuse a CREATEDB & CREATEROLE role instead of a superuser, because that\nis pretty pointless as things stand, and is too simplistic for the new\nsystem that I'm proposing to put in place, too.\n\n0002 and 0003 are refactoring, unchanged from v1.\n\n0004 is the core fix to CREATEROLE. 
It has been updated from the\nprevious version with documentation and some bug fixes.\n\n0005 adopts David's suggestion: instead of giving the superuser a way\nto control the options on the implicit grant, give CREATEROLE users a\nway to grant newly-created roles to themselves automatically. I made\nthis a GUC, which means that the person setting up the system could\nconfigure a default in postgresql.conf, but a user who doesn't prefer\nthat default can also override it using ALTER ROLE .. SET or ~/.psqlrc\nor whatever. This is simpler than what I had before, doesn't involve a\ncatalog change, makes it clear that the behavior is not\nsecurity-critical, and puts the decision fully in the hands of the\nCREATEROLE user rather than being partly controlled by that user and\npartly by the superuser. Hopefully that's an improvement.\n\nComments?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 2 Dec 2022 09:47:25 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: fixing CREATEROLE" }, { "msg_contents": "Reading 0001:\n\n+ However, <literal>CREATEROLE</literal> does not convey the ability to\n+ create <literal>SUPERUSER</literal> roles, nor does it convey any\n+ power over <literal>SUPERUSER</literal> roles that already exist.\n+ Furthermore, <literal>CREATEROLE</literal> does not convey the power\n+ to create <literal>REPLICATION</literal> users, nor the ability to\n+ grant or revoke the <literal>REPLICATION</literal> privilege, nor the\n+ ability to the role properties of such users.\n\n\"... nor the ability to the role properties ...\"\nI think a verb is missing here.\n\nThe contents looks good to me other than that problem, and I agree to\nbackpatch it.\n\n\nWhy did you choose to use two dots for ellipses in some command\n<literal>s rather than three? 
I know I've made that choice too on\noccasion, but there aren't many such cases and maybe we should put a\nstop to it (or a period) before it spreads too much.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 22 Dec 2022 15:13:49 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: fixing CREATEROLE" }, { "msg_contents": "On Thu, Dec 22, 2022 at 9:14 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> The contents looks good to me other than that problem, and I agree to\n> backpatch it.\n\nCool. Thanks for the review.\n\n> Why did you choose to use two dots for ellipses in some command\n> <literal>s rather than three? 
I know I've made that choice too on\n> occasion, but there aren't many such cases and maybe we should put a\n> stop to it (or a period) before it spreads too much.\n>\n> Honestly, I wasn't aware that we had some other convention for it.\n\nCommitted and back-patched 0001 with fixes for the issues that you pointed out.\n\nHere's a trivial rebase of the rest of the patch set.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 3 Jan 2023 15:11:47 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: fixing CREATEROLE" }, { "msg_contents": "On Tue, Jan 3, 2023 at 3:11 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> Committed and back-patched 0001 with fixes for the issues that you pointed out.\n>\n> Here's a trivial rebase of the rest of the patch set.\n\nI committed 0001 and 0002 after improving the commit messages a bit.\nHere's the remaining two patches back. I've done a bit more polishing\nof these as well, specifically in terms of fleshing out the regression\ntests. I'd like to move forward with these soon, if nobody's too\nvehemently opposed to that.\n\nPrevious feedback, especially from Tom but also others, was that the\nrole-level properties the final patch was creating were not good. Now\nit doesn't create any new role-level properties, and in fact it has\nnothing to say about role-level properties in any way. That might not\nbe the right thing. Right now, if you have CREATEROLE, you can create\nnew roles with any combination of attributes you like, except that you\ncannot set the SUPERUSER, REPLICATION, or BYPASSRLS properties. While\nI think it makes sense that a CREATEROLE user can't hand out SUPERUSER\nor REPLICATION privileges, it is really not obvious to me why a\nCREATEROLE user shouldn't be permitted to hand out BYPASSRLS, at least\nif they have it themselves, and right now there's no way to allow\nthat. 
On the other hand, I think that some superusers might want to\nrestrict a CREATEROLE user's ability to hand out CREATEROLE or\nCREATEDB to the users they create, and right now there's no way to\nprohibit that.\n\nI don't have a great idea about what a system for handling this\nproblem ought to look like. In a vacuum, I think it would be\nreasonable to change CREATEROLE to only allow CREATEDB, BYPASSRLS, and\nsimilar to be given to new users if the creating user possesses them,\nbut that approach does not work for CREATEROLE, because if you didn't\nhave that, you couldn't create any new users at all. It's also pretty\nweird for, say, CONNECTION LIMIT. I doubt that there's any connection\nbetween the CONNECTION LIMIT of the CREATEROLE user and the values\nthat they ought to be able to set for users that they create. Probably\nyou just want to allow setting CONNECTION LIMIT for downstream users,\nor not. Or maybe it's not even worth worrying about -- I think there\nmight be a decent argument that limiting the ability to set CONNECTION\nLIMIT just isn't interesting.\n\nIf someone else has a good idea what we ought to do about this part of\nthe problem, I'd be interested to hear it. 
Absent such a good idea --\nor if that good idea is more work to implement that can be done in the\nnear term -- I think it would be OK to ship as much as I've done here\nand revisit the topic at some later point when we've had a chance to\nabsorb user feedback.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 5 Jan 2023 14:53:20 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: fixing CREATEROLE" }, { "msg_contents": "On Thu, Jan 5, 2023 at 2:53 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Tue, Jan 3, 2023 at 3:11 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > Committed and back-patched 0001 with fixes for the issues that you pointed out.\n> >\n> > Here's a trivial rebase of the rest of the patch set.\n>\n> I committed 0001 and 0002 after improving the commit messages a bit.\n> Here's the remaining two patches back. I've done a bit more polishing\n> of these as well, specifically in terms of fleshing out the regression\n> tests. I'd like to move forward with these soon, if nobody's too\n> vehemently opposed to that.\n\nDone now.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 10 Jan 2023 12:46:17 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: fixing CREATEROLE" }, { "msg_contents": "On Tue, 10 Jan 2023 at 23:16, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Jan 5, 2023 at 2:53 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > On Tue, Jan 3, 2023 at 3:11 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > Committed and back-patched 0001 with fixes for the issues that you pointed out.\n> > >\n> > > Here's a trivial rebase of the rest of the patch set.\n> >\n> > I committed 0001 and 0002 after improving the commit messages a bit.\n> > Here's the remaining two patches back. 
I've done a bit more polishing\n> > of these as well, specifically in terms of fleshing out the regression\n> > tests. I'd like to move forward with these soon, if nobody's too\n> > vehemently opposed to that.\n>\n> Done now.\n\nI'm not sure if any work is left here, if there is nothing more to do,\ncan we close this?\n\nRegards,\nVignesh\n\n\n", "msg_date": "Sat, 14 Jan 2023 12:56:19 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: fixing CREATEROLE" }, { "msg_contents": "On Sat, Jan 14, 2023 at 2:26 AM vignesh C <vignesh21@gmail.com> wrote:\n> I'm not sure if any work is left here, if there is nothing more to do,\n> can we close this?\n\nThere's a discussion on another thread about some follow-up\ndocumentation adjustments, but feel free to close the CF entry for\nthis patch.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 14 Jan 2023 19:32:20 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: fixing CREATEROLE" }, { "msg_contents": "On Sun, 15 Jan 2023 at 06:02, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Sat, Jan 14, 2023 at 2:26 AM vignesh C <vignesh21@gmail.com> wrote:\n> > I'm not sure if any work is left here, if there is nothing more to do,\n> > can we close this?\n>\n> There's a discussion on another thread about some follow-up\n> documentation adjustments, but feel free to close the CF entry for\n> this patch.\n\nThanks, I have marked the CF entry as committed.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Sun, 15 Jan 2023 08:08:05 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: fixing CREATEROLE" } ]
[ { "msg_contents": "In SELinux file context files you can specify <<none>> for a file\nmeaning you don't want restorecon to relabel it. <<none>> is\nespecially useful in an SELinux MLS environment when objects are\ncreated at a specific security level and you don't want restorecon to\nrelabel them to the wrong security level.\n\nTed", "msg_date": "Mon, 21 Nov 2022 14:57:21 -0600", "msg_from": "Ted Toth <txtoth@gmail.com>", "msg_from_op": true, "msg_subject": "[PATCH] Add <<none>> support to sepgsql_restorecon" }, { "msg_contents": "On 11/21/22 15:57, Ted Toth wrote:\n> In SELinux file context files you can specify <<none>> for a file\n> meaning you don't want restorecon to relabel it. <<none>> is\n> especially useful in an SELinux MLS environment when objects are\n> created at a specific security level and you don't want restorecon to\n> relabel them to the wrong security level.\n\n+1\n\nPlease add to the next commitfest here: \nhttps://commitfest.postgresql.org/41/\n\nThanks,\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Mon, 21 Nov 2022 17:35:31 -0500", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add <<none>> support to sepgsql_restorecon" }, { "msg_contents": "On 11/21/22 17:35, Joe Conway wrote:\n> On 11/21/22 15:57, Ted Toth wrote:\n>> In SELinux file context files you can specify <<none>> for a file\n>> meaning you don't want restorecon to relabel it. <<none>> is\n>> especially useful in an SELinux MLS environment when objects are\n>> created at a specific security level and you don't want restorecon to\n>> relabel them to the wrong security level.\n> \n> +1\n> \n> Please add to the next commitfest here:\n> https://commitfest.postgresql.org/41/\n\n\nComments:\n\n1. It seems like the check for a \"<<none>>\" context should go into \nsepgsql_object_relabel() directly rather than exec_object_restorecon(). 
\nThe former gets registered as a hook in _PG_init(), so the with the \ncurrent location we would fail to skip the relabel when that gets called.\n\n2. Please provide one or more test case (likely in label.sql)\n\n3. An example, or at least a note, mentioning \"<<none>>\" context and the \nimplications would be appropriate.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Sun, 15 Jan 2023 14:11:51 -0500", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add <<none>> support to sepgsql_restorecon" }, { "msg_contents": "On Sun, Jan 15, 2023 at 1:11 PM Joe Conway <mail@joeconway.com> wrote:\n\n> On 11/21/22 17:35, Joe Conway wrote:\n> > On 11/21/22 15:57, Ted Toth wrote:\n> >> In SELinux file context files you can specify <<none>> for a file\n> >> meaning you don't want restorecon to relabel it. <<none>> is\n> >> especially useful in an SELinux MLS environment when objects are\n> >> created at a specific security level and you don't want restorecon to\n> >> relabel them to the wrong security level.\n> >\n> > +1\n> >\n> > Please add to the next commitfest here:\n> > https://commitfest.postgresql.org/41/\n>\n>\n> Comments:\n>\n> 1. It seems like the check for a \"<<none>>\" context should go into\n> sepgsql_object_relabel() directly rather than exec_object_restorecon().\n> The former gets registered as a hook in _PG_init(), so the with the\n> current location we would fail to skip the relabel when that gets called.\n>\n\nThe intent is not to stop all relabeling only to stop sepgsql_restorecon\nfrom doing a bulk relabel. I believe sepgsql_object_relabel is called by\nthe 'SECURITY LABEL' statement which I'm using to set the label of db\nobjects to a specific context which I would not want altered later by a\nrestorecon.\n\n\n> 2. Please provide one or more test case (likely in label.sql)\n>\n> 3. 
An example, or at least a note, mentioning \"<<none>>\" context and the\n> implications would be appropriate.\n>\n> --\n> Joe Conway\n> PostgreSQL Contributors Team\n> RDS Open Source Databases\n> Amazon Web Services: https://aws.amazon.com\n>\n>\n\nOn Sun, Jan 15, 2023 at 1:11 PM Joe Conway <mail@joeconway.com> wrote:On 11/21/22 17:35, Joe Conway wrote:\n> On 11/21/22 15:57, Ted Toth wrote:\n>> In SELinux file context files you can specify <<none>> for a file\n>> meaning you don't want restorecon to relabel it. <<none>> is\n>> especially useful in an SELinux MLS environment when objects are\n>> created at a specific security level and you don't want restorecon to\n>> relabel them to the wrong security level.\n> \n> +1\n> \n> Please add to the next commitfest here:\n> https://commitfest.postgresql.org/41/\n\n\nComments:\n\n1. It seems like the check for a \"<<none>>\" context should go into \nsepgsql_object_relabel() directly rather than exec_object_restorecon(). \nThe former gets registered as a hook in _PG_init(), so the with the \ncurrent location we would fail to skip the relabel when that gets called.The intent is not to stop all relabeling only to stop sepgsql_restorecon from doing a bulk relabel. I believe sepgsql_object_relabel is called by the 'SECURITY LABEL'  statement which I'm using to set the label of db objects to a specific context which I would not want altered later by a restorecon.\n\n2. Please provide one or more test case (likely in label.sql)\n\n3. 
An example, or at least a note, mentioning \"<<none>>\" context and the\n> implications would be appropriate.\n>\n> --\n> Joe Conway\n> PostgreSQL Contributors Team\n> RDS Open Source Databases\n> Amazon Web Services: https://aws.amazon.com\n>\n>", "msg_date": "Mon, 16 Jan 2023 08:55:07 -0600", "msg_from": "Ted Toth <txtoth@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add <<none>> support to sepgsql_restorecon" }, { "msg_contents": "On 1/16/23 09:55, Ted Toth wrote:\n> \n> \n> On Sun, Jan 15, 2023 at 1:11 PM Joe Conway <mail@joeconway.com \n> <mailto:mail@joeconway.com>> wrote:\n> \n> On 11/21/22 17:35, Joe Conway wrote:\n> > On 11/21/22 15:57, Ted Toth wrote:\n> >> In SELinux file context files you can specify <<none>> for a file\n> >> meaning you don't want restorecon to relabel it. <<none>> is\n> >> especially useful in an SELinux MLS environment when objects are\n> >> created at a specific security level and you don't want\n> restorecon to\n> >> relabel them to the wrong security level.\n> >\n> > +1\n> >\n> > Please add to the next commitfest here:\n> > https://commitfest.postgresql.org/41/\n> <https://commitfest.postgresql.org/41/>\n> \n> \n> Comments:\n> \n> 1. It seems like the check for a \"<<none>>\" context should go into\n> sepgsql_object_relabel() directly rather than exec_object_restorecon().\n> The former gets registered as a hook in _PG_init(), so the with the\n> current location we would fail to skip the relabel when that gets\n> called.\n> \n> \n> The intent is not to stop all relabeling only to stop sepgsql_restorecon \n> from doing a bulk relabel. I believe sepgsql_object_relabel is called by \n> the 'SECURITY LABEL'  statement which I'm using to set the label of db \n> objects to a specific context which I would not want altered later by a \n> restorecon.\n\n\nOk, sounds reasonable. 
tested\n> Documentation: not tested\n>\n> This needs regression test support for the feature and some minimal documentation that shows how to make use of it.\n>\n> The new status of this patch is: Waiting on Author\n\nBy mistake instead of setting the patch to \"Moved to Next CF\", I had\nselected \"Returned with Feedback\". Sorry about that.\nI have recreated the entry for this patch in the 03/23 commitfest:\nhttps://commitfest.postgresql.org/42/4158/\n\nRegards,\nVignesh\n\n\n", "msg_date": "Tue, 31 Jan 2023 23:11:32 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add <<none>> support to sepgsql_restorecon" }, { "msg_contents": "> Ok, sounds reasonable. Maybe just add a comment to that effect.\n\n> This needs regression test support for the feature and some minimal documentation that shows how to make use of it.\n\nHm. It sounds like this patch is uncontroversial but is missing\ndocumentation and tests? Has this been addressed? Do you think you'll\nget a chance to resolve those issues this month in time for this\nrelease?\n\n-- \nGregory Stark\nAs Commitfest Manager\n\n\n", "msg_date": "Mon, 20 Mar 2023 16:05:03 -0400", "msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add <<none>> support to sepgsql_restorecon" }, { "msg_contents": "Not this month unfortunately other work has taken precedence. I'll need to\nlook at what it's going to take to create a test. Hopefully I can piggyback\non an existing test.\n\nTed\n\nOn Mon, Mar 20, 2023 at 3:05 PM Gregory Stark (as CFM) <stark.cfm@gmail.com>\nwrote:\n\n> > Ok, sounds reasonable. Maybe just add a comment to that effect.\n>\n> > This needs regression test support for the feature and some minimal\n> documentation that shows how to make use of it.\n>\n> Hm. It sounds like this patch is uncontroversial but is missing\n> documentation and tests? Has this been addressed? 
Do you think you'll\n> get a chance to resolve those issues this month in time for this\n> release?\n>\n> --\n> Gregory Stark\n> As Commitfest Manager\n>\n\nNot this month unfortunately other work has taken precedence. I'll need to look at what it's going to take to create a test. Hopefully I can piggyback on an existing test.TedOn Mon, Mar 20, 2023 at 3:05 PM Gregory Stark (as CFM) <stark.cfm@gmail.com> wrote:> Ok, sounds reasonable. Maybe just add a comment to that effect.\n\n> This needs regression test support for the feature and some minimal documentation that shows how to make use of it.\n\nHm. It sounds like this patch is uncontroversial but is missing\ndocumentation and tests? Has this been addressed? Do you think you'll\nget a chance to resolve those issues this month in time for this\nrelease?\n\n-- \nGregory Stark\nAs Commitfest Manager", "msg_date": "Mon, 20 Mar 2023 15:17:09 -0500", "msg_from": "Ted Toth <txtoth@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add <<none>> support to sepgsql_restorecon" }, { "msg_contents": "> On 20 Mar 2023, at 21:17, Ted Toth <txtoth@gmail.com> wrote:\n> \n> Not this month unfortunately other work has taken precedence. I'll need to look at what it's going to take to create a test. Hopefully I can piggyback on an existing test.\n\nThis patch has been marked Waiting on Author since January, I'm marking it\nReturned with Feedback. Please feel free to resubmit when there is time and\ninterest to resume work on this.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 3 Jul 2023 18:42:28 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add <<none>> support to sepgsql_restorecon" } ]
[ { "msg_contents": "Hi hackers,\n\nWhile working on avoiding unnecessary wakeups in logical/worker.c (as was\ndone for walreceiver.c in 05a7be9), I noticed that the tests began taking\nmuch longer. This seems to be caused by the reduced frequency of calls to\nmaybe_reread_subscription() in LogicalRepApplyLoop(). Presently,\nLogicalRepApplyLoop() only waits for up to a second, so the subscription\ninfo is re-read by workers relatively frequently. If LogicalRepApplyLoop()\nsleeps for longer, the subscription info may not be read for much longer.\n\nI think the fix for this problem can be considered independently, as\nrelying on frequent wakeups seems less than ideal, and the patch seems to\nprovide a small improvement even before applying the\navoid-unnecessary-wakeups patch. On my machine, the attached patch\nimproved 'check-world -j8' run time by ~12 seconds (from 3min 8sec to 2min\n56 sec) and src/test/subscription test time by ~17 seconds (from 139\nseconds to 122 seconds).\n\nI put the new logic in launcher.c, but it might make more sense to put it\nin logical/worker.c. I think that might require some new #includes in a\ncouple of files, but otherwise, the patch would likely look about the same.\n\nThoughts?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 21 Nov 2022 16:41:19 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "On Tue, Nov 22, 2022 at 1:41 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> On my machine, the attached patch\n> improved 'check-world -j8' run time by ~12 seconds (from 3min 8sec to 2min\n> 56 sec) and src/test/subscription test time by ~17 seconds (from 139\n> seconds to 122 seconds).\n\nNice!\n\nMaybe a comment to explain why a single variable is enough? And an\nassertion that it wasn't already set? 
And a note to future self: this\nwould be a candidate user of the nearby SetLatches() patch (which is\nabout moving SetLatch() syscalls out from under LWLocks, though this\none may not be very hot).\n\n\n", "msg_date": "Tue, 22 Nov 2022 15:16:05 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "On Tue, Nov 22, 2022 at 03:16:05PM +1300, Thomas Munro wrote:\n> Maybe a comment to explain why a single variable is enough?\n\nThis crossed my mind shortly after sending my previous message. Looking\ncloser, I see that several types of ALTER SUBSCRIPTION do not call\nPreventInTransactionBlock(), so a single variable might not be enough.\nPerhaps we can put a list in TopTransactionContext. I'll do some more\ninvestigation and report back.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 21 Nov 2022 18:50:33 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "Hi Nathan,\n\nI have done almost same thing locally for [1], but I thought your code seemed better.\n\nJust One comment: IIUC the statement \"ALTER SUBSCRIPTION\" can be executed\ninside the transaction. So if two subscriptions are altered in the same\ntransaction, only one of them will awake. 
Is it expected behavior?\n\nI think we can hold a suboid list and record oids when the subscription are\naltered, and then the backend process can consume all of list cells at the end of\nthe transaction.\n\nHow do you think?\n\n[1]: https://commitfest.postgresql.org/40/3581/\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n", "msg_date": "Tue, 22 Nov 2022 03:03:52 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "On Tue, Nov 22, 2022 at 03:03:52AM +0000, Hayato Kuroda (Fujitsu) wrote:\n> Just One comment: IIUC the statement \"ALTER SUBSCRIPTION\" can be executed\n> inside the transaction. So if two subscriptions are altered in the same\n> transaction, only one of them will awake. Is it expected behavior?\n> \n> I think we can hold a suboid list and record oids when the subscription are\n> altered, and then the backend process can consume all of list cells at the end of\n> the transaction.\n\nI think you are correct. I did it this way in v2. I've also moved the\nbulk of the logic to logical/worker.c.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 21 Nov 2022 20:39:16 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "Dear Nathan,\n\n> I think you are correct. I did it this way in v2. I've also moved the\n> bulk of the logic to logical/worker.c.\n\nThanks for updating! It becomes better. Further comments:\n\n01. AlterSubscription()\n\n```\n+\tLogicalRepWorkersWakeupAtCommit(subid);\n+\n```\n\nCurrently subids will be recorded even if the subscription is not modified.\nI think LogicalRepWorkersWakeupAtCommit() should be called inside the if (update_tuple).\n\n02. 
LogicalRepWorkersWakeupAtCommit()\n\n```\n+\toldcxt = MemoryContextSwitchTo(TopTransactionContext);\n+\ton_commit_wakeup_workers_subids = lappend_oid(on_commit_wakeup_workers_subids,\n+\t\t\t\t\t\t\t\t\t\t\t\t subid);\n```\n\nIf the subscription is altered twice in the same transaction, the same subid will be recorded twice.\nI'm not sure whether it may be caused some issued, but list_member_oid() can be used to avoid that.\n\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n", "msg_date": "Tue, 22 Nov 2022 06:49:29 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "On Tuesday, November 22, 2022 1:39 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> On Tue, Nov 22, 2022 at 03:03:52AM +0000, Hayato Kuroda (Fujitsu) wrote:\n> > Just One comment: IIUC the statement \"ALTER SUBSCRIPTION\" can be\n> > executed inside the transaction. So if two subscriptions are altered\n> > in the same transaction, only one of them will awake. Is it expected\n> behavior?\n> >\n> > I think we can hold a suboid list and record oids when the\n> > subscription are altered, and then the backend process can consume all\n> > of list cells at the end of the transaction.\n> \n> I think you are correct. I did it this way in v2. 
I've also moved the bulk of\n> the logic to logical/worker.c.\nHi, thanks for updating.\n\n\nI just quickly had a look at your patch and had one minor question.\n\nWith this patch, when we execute alter subscription in a sub transaction\nand additionally rollback to it, is there any possibility that\nwe'll wake up the workers that don't need to do so ?\n\nI'm not sure if this brings about some substantial issue,\nbut just wondering if there is any need of improvement for this.\n\n\n\nBest Regards,\n\tTakamichi Osumi\n\n\n\n", "msg_date": "Tue, 22 Nov 2022 07:18:40 +0000", "msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "On Tuesday, November 22, 2022 2:49 PM Hayato Kuroda (Fujitsu) <kuroda.hayato@fujitsu.com>\n> \n> Dear Nathan,\n> \n> > I think you are correct. I did it this way in v2. I've also moved\n> > the bulk of the logic to logical/worker.c.\n> \n> Thanks for updating! It becomes better. Further comments:\n> \n> 01. AlterSubscription()\n> \n> ```\n> +\tLogicalRepWorkersWakeupAtCommit(subid);\n> +\n> ```\n> \n> Currently subids will be recorded even if the subscription is not modified.\n> I think LogicalRepWorkersWakeupAtCommit() should be called inside the if\n> (update_tuple).\n\nI think an exception would be REFRESH PULLICATION in which case update_tuple is\nfalse, but it seems better to wake up apply worker in this case as well,\nbecause the apply worker is also responsible to start table sync workers for\nnewly subscribed tables(in process_syncing_tables()).\n\nBesides, it seems not a must to wake up apply worker for ALTER SKIP TRANSACTION,\nAlthough there might be no harm for waking up in this case.\n\n> \n> 02. 
LogicalRepWorkersWakeupAtCommit()\n> \n> ```\n> +\toldcxt = MemoryContextSwitchTo(TopTransactionContext);\n> +\ton_commit_wakeup_workers_subids =\n> lappend_oid(on_commit_wakeup_workers_subids,\n> +\n> \t\t subid);\n> ```\n> \n> If the subscription is altered twice in the same transaction, the same subid will\n> be recorded twice.\n> I'm not sure whether it may be caused some issued, but list_member_oid() can\n> be used to avoid that.\n\n+1, list_append_unique_oid might be better.\n\nBest regards,\nHou zj\n\n\n", "msg_date": "Tue, 22 Nov 2022 07:25:36 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "On Tue, Nov 22, 2022 at 6:11 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> While working on avoiding unnecessary wakeups in logical/worker.c (as was\n> done for walreceiver.c in 05a7be9), I noticed that the tests began taking\n> much longer. This seems to be caused by the reduced frequency of calls to\n> maybe_reread_subscription() in LogicalRepApplyLoop().\n>\n\nI think it would be interesting to know why tests started taking more\ntime after a reduced frequency of calls to\nmaybe_reread_subscription(). IIRC, we anyway call\nmaybe_reread_subscription for each xact.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 22 Nov 2022 16:59:28 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "On Tue, Nov 22, 2022 at 07:25:36AM +0000, houzj.fnst@fujitsu.com wrote:\n> On Tuesday, November 22, 2022 2:49 PM Hayato Kuroda (Fujitsu) <kuroda.hayato@fujitsu.com>\n>> Thanks for updating! It becomes better. Further comments:\n>> \n>> 01. 
AlterSubscription()\n>> \n>> ```\n>> +\tLogicalRepWorkersWakeupAtCommit(subid);\n>> +\n>> ```\n>> \n>> Currently subids will be recorded even if the subscription is not modified.\n>> I think LogicalRepWorkersWakeupAtCommit() should be called inside the if\n>> (update_tuple).\n> \n> I think an exception would be REFRESH PULLICATION in which case update_tuple is\n> false, but it seems better to wake up apply worker in this case as well,\n> because the apply worker is also responsible to start table sync workers for\n> newly subscribed tables(in process_syncing_tables()).\n> \n> Besides, it seems not a must to wake up apply worker for ALTER SKIP TRANSACTION,\n> Although there might be no harm for waking up in this case.\n\nIn v3, I moved the call to LogicalRepWorkersWakeupAtCommit() to the end of\nthe function. This should avoid waking up workers in some cases where it's\nunnecessary (e.g., if ALTER SUBSCRIPTION ERRORs in a subtransaction), but\nthere are still cases where we'll wake up the workers unnecessarily. I\nthink this is unlikely to cause any real problems in practice.\n\n>> 02. 
LogicalRepWorkersWakeupAtCommit()\n>> \n>> ```\n>> +\toldcxt = MemoryContextSwitchTo(TopTransactionContext);\n>> +\ton_commit_wakeup_workers_subids =\n>> lappend_oid(on_commit_wakeup_workers_subids,\n>> +\n>> \t\t subid);\n>> ```\n>> \n>> If the subscription is altered twice in the same transaction, the same subid will\n>> be recorded twice.\n>> I'm not sure whether it may be caused some issued, but list_member_oid() can\n>> be used to avoid that.\n> \n> +1, list_append_unique_oid might be better.\n\nDone in v3.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 23 Nov 2022 12:50:27 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "On Tue, Nov 22, 2022 at 04:59:28PM +0530, Amit Kapila wrote:\n> On Tue, Nov 22, 2022 at 6:11 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> While working on avoiding unnecessary wakeups in logical/worker.c (as was\n>> done for walreceiver.c in 05a7be9), I noticed that the tests began taking\n>> much longer. This seems to be caused by the reduced frequency of calls to\n>> maybe_reread_subscription() in LogicalRepApplyLoop().\n> \n> I think it would be interesting to know why tests started taking more\n> time after a reduced frequency of calls to\n> maybe_reread_subscription(). IIRC, we anyway call\n> maybe_reread_subscription for each xact.\n\nAt the moment, commands like ALTER SUBSCRIPTION don't wake up the logical\nworkers for the target subscription, so the next call to\nmaybe_reread_subscription() may not happen for a while. Presently, we'll\nonly sleep up to a second in the apply loop, but with my new\nprevent-unnecessary-wakeups patch, we may sleep for much longer. 
This\ncauses wait_for_subscription_sync to take more time after some ALTER\nSUBSCRIPTION commands.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 23 Nov 2022 13:05:26 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "Dear Nathan,\n\nThank you for updating the patch!\n\n> In v3, I moved the call to LogicalRepWorkersWakeupAtCommit() to the end of\n> the function. This should avoid waking up workers in some cases where it's\n> unnecessary (e.g., if ALTER SUBSCRIPTION ERRORs in a subtransaction), but\n> there are still cases where we'll wake up the workers unnecessarily. I\n> think this is unlikely to cause any real problems in practice.\n\nI understood you could accept false-positive event to avoid missing true-negative\nlike ALTER SUBSCRIPTION REFRESH. +1.\n\n> >> 02. LogicalRepWorkersWakeupAtCommit()\n> >>\n> >> ```\n> >> +\toldcxt = MemoryContextSwitchTo(TopTransactionContext);\n> >> +\ton_commit_wakeup_workers_subids =\n> >> lappend_oid(on_commit_wakeup_workers_subids,\n> >> +\n> >> \t\t subid);\n> >> ```\n> >>\n> >> If the subscription is altered twice in the same transaction, the same subid will\n> >> be recorded twice.\n> >> I'm not sure whether it may be caused some issued, but list_member_oid() can\n> >> be used to avoid that.\n> >\n> > +1, list_append_unique_oid might be better.\n> \n> Done in v3.\n\nI have no comments for the v3 patch.\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n", "msg_date": "Thu, 24 Nov 2022 05:26:27 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "On Thu, Nov 24, 2022 at 05:26:27AM +0000, Hayato Kuroda (Fujitsu) wrote:\n> I have no comments for the v3 patch.\n\nThanks for reviewing! 
Does anyone else have thoughts on the patch?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sun, 27 Nov 2022 15:45:28 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "I spent some more time on the prevent-unnecessary-wakeups patch for\nlogical/worker.c that I've been alluding to in this thread, and I found a\nfew more places where we depend on the worker periodically waking up. This\nseems to be a common technique, so I'm beginning to wonder whether these\nchanges are worthwhile. I think there's a good chance it would become a\ngame of whac-a-mole.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 29 Nov 2022 20:10:28 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "On Wed, Nov 30, 2022 at 5:10 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> I spent some more time on the prevent-unnecessary-wakeups patch for\n> logical/worker.c that I've been alluding to in this thread, and I found a\n> few more places where we depend on the worker periodically waking up. This\n> seems to be a common technique, so I'm beginning to wonder whether these\n> changes are worthwhile. 
I think there's a good chance it would become a\n> game of whac-a-mole.\n\nAren't they all bugs, though, making our tests and maybe even real\nsystems slower than they need to be?\n\n\n", "msg_date": "Wed, 30 Nov 2022 17:23:16 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "On Wed, Nov 30, 2022 at 5:23 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Wed, Nov 30, 2022 at 5:10 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> > I spent some more time on the prevent-unnecessary-wakeups patch for\n> > logical/worker.c that I've been alluding to in this thread, and I found a\n> > few more places where we depend on the worker periodically waking up. This\n> > seems to be a common technique, so I'm beginning to wonder whether these\n> > changes are worthwhile. I think there's a good chance it would become a\n> > game of whac-a-mole.\n>\n> Aren't they all bugs, though, making our tests and maybe even real\n> systems slower than they need to be?\n\n(Which isn't to suggest that it's your job to fix them, but please do\nshare what you have if you run out of whack-a-mole steam, since we\nseem to have several people keen to finish those moles off.)\n\n\n", "msg_date": "Wed, 30 Nov 2022 17:27:40 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "Dear Nathan,\n\n> I spent some more time on the prevent-unnecessary-wakeups patch for\n> logical/worker.c that I've been alluding to in this thread, and I found a\n> few more places where we depend on the worker periodically waking up. This\n> seems to be a common technique, so I'm beginning to wonder whether these\n> changes are worthwhile. 
I think there's a good chance it would become a\n> game of whac-a-mole.\n\nI think at least this feature is needed for waking up workers that are slept due to the min_apply_delay.\nThe author supposed this patch and pinned our thread[1].\n\n[1]: https://www.postgresql.org/message-id/TYCPR01MB8373775ECC6972289AF8CB30ED0F9%40TYCPR01MB8373.jpnprd01.prod.outlook.com\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n", "msg_date": "Wed, 30 Nov 2022 04:48:03 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "On Wed, Nov 30, 2022 at 05:27:40PM +1300, Thomas Munro wrote:\n> On Wed, Nov 30, 2022 at 5:23 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> On Wed, Nov 30, 2022 at 5:10 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> > I spent some more time on the prevent-unnecessary-wakeups patch for\n>> > logical/worker.c that I've been alluding to in this thread, and I found a\n>> > few more places where we depend on the worker periodically waking up. This\n>> > seems to be a common technique, so I'm beginning to wonder whether these\n>> > changes are worthwhile. I think there's a good chance it would become a\n>> > game of whac-a-mole.\n>>\n>> Aren't they all bugs, though, making our tests and maybe even real\n>> systems slower than they need to be?\n\nYeah, you're right, it's probably worth proceeding with this particular\nthread even if we don't end up porting the suppress-unnecessary-wakeups\npatch to logical/worker.c.\n\n> (Which isn't to suggest that it's your job to fix them, but please do\n> share what you have if you run out of whack-a-mole steam, since we\n> seem to have several people keen to finish those moles off.)\n\nI don't mind fixing it! 
There are a couple more I'd like to track down\nbefore posting another revision.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 29 Nov 2022 21:04:41 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "On Tue, Nov 29, 2022 at 09:04:41PM -0800, Nathan Bossart wrote:\n> I don't mind fixing it! There are a couple more I'd like to track down\n> before posting another revision.\n\nOkay, here is a new version of the patch. This seems to clear up\neverything that I could find via the tests.\n\nThanks to this effort, I discovered that we need to include\nwal_retrieve_retry_interval in our wait time calculations after failed\ntablesyncs (for the suppress-unnecessary-wakeups patch). I'll make that\nchange and post that patch in a new thread.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 1 Dec 2022 16:21:30 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "On Thu, Dec 01, 2022 at 04:21:30PM -0800, Nathan Bossart wrote:\n> Okay, here is a new version of the patch. 
This seems to clear up\n> everything that I could find via the tests.\n\nI cleaned up the patch a bit.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 2 Dec 2022 11:21:01 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "Hi Nathan,\n\n@@ -410,6 +411,12 @@ ExecRenameStmt(RenameStmt *stmt)\n> stmt->newname);\n> table_close(catalog, RowExclusiveLock);\n>\n> + /*\n> + * Wake up the logical replication workers to handle this\n> + * change quickly.\n> + */\n> + LogicalRepWorkersWakeupAtCommit(address.objectId);\n\n\nIs it really necessary to wake logical workers up when renaming other than\nsubscription or publication? address.objectId will be a valid subid only\nwhen renaming a subscription.\n\n@@ -322,6 +323,9 @@ UpdateSubscriptionRelState(Oid subid, Oid relid, char\n> state,\n>\n> /* Cleanup. */\n> table_close(rel, NoLock);\n> +\n> + /* Wake up the logical replication workers to handle this change\n> quickly. */\n> + LogicalRepWorkersWakeupAtCommit(subid);\n\n\nI wonder why a wakeup call is needed every time a subscription relation is\nupdated.\nIt seems to me that there are two places where UpdateSubscriptionRelState\nis called and we need another worker to wake up:\n- When a relation is in SYNCWAIT state, it waits for the apply worker to\nwake up and change the relation state to CATCHUP. Then tablesync worker\nneeds to wake up to continue from CATCHUP state.\n- When the state is SYNCDONE and the apply worker has to wake up to change\nthe state to READY.\n\nI think we already call logicalrep_worker_wakeup_ptr wherever it's needed\nfor the above cases? 
What am I missing here?\n\nBest,\n--\nMelih Mutlu\nMicrosoft\n\n", "msg_date": "Tue, 6 Dec 2022 19:44:46 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "Thanks for reviewing!\n\nOn Tue, Dec 06, 2022 at 07:44:46PM +0300, Melih Mutlu wrote:\n> Is it really necessary to wake logical workers up when renaming other than\n> subscription or publication? address.objectId will be a valid subid only\n> when renaming a subscription.\n\nOops, that is a mistake. 
I only meant to wake up the workers for ALTER\nSUBSCRIPTION RENAME. I think I've fixed this in v6.\n\n> - When the state is SYNCDONE and the apply worker has to wake up to change\n> the state to READY.\n> \n> I think we already call logicalrep_worker_wakeup_ptr wherever it's needed\n> for the above cases? What am I missing here?\n\nIIUC we must restart all the apply workers for a subscription to enable\ntwo_phase mode. It looks like finish_sync_worker() only wakes up its own\napply worker. I moved this logic to where the sync worker marks the state\nas SYNCDONE and added a check that two_phase mode is pending. Even so,\nthere can still be unnecessary wakeups, but this adjustment should limit\nthem.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 6 Dec 2022 11:25:51 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "On Tue, Dec 06, 2022 at 11:25:51AM -0800, Nathan Bossart wrote:\n> On Tue, Dec 06, 2022 at 07:44:46PM +0300, Melih Mutlu wrote:\n>> - When the state is SYNCDONE and the apply worker has to wake up to change\n>> the state to READY.\n>> \n>> I think we already call logicalrep_worker_wakeup_ptr wherever it's needed\n>> for the above cases? What am I missing here?\n> \n> IIUC we must restart all the apply workers for a subscription to enable\n> two_phase mode. It looks like finish_sync_worker() only wakes up its own\n> apply worker. I moved this logic to where the sync worker marks the state\n> as SYNCDONE and added a check that two_phase mode is pending. Even so,\n> there can still be unnecessary wakeups, but this adjustment should limit\n> them.\n\nActually, that's not quite right. The sync worker will wake up the apply\nworker to change the state from SYNCDONE to READY. 
AllTablesyncsReady()\nchecks that all tables are READY, so we need to wake up all the workers\nwhen an apply worker changes the state to READY. Each worker will then\nevaluate whether to restart for two_phase mode.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 6 Dec 2022 13:29:54 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "Hi,\n\n\n> Actually, that's not quite right. The sync worker will wake up the apply\n> worker to change the state from SYNCDONE to READY. AllTablesyncsReady()\n> checks that all tables are READY, so we need to wake up all the workers\n> when an apply worker changes the state to READY. Each worker will then\n> evaluate whether to restart for two_phase mode.\n>\n\nRight. I didn't think about the two phase case thoroughly. Waking up all\napply workers can help.\n\nDo we also need to wake up all sync workers too? Even if not, I'm not\nactually sure whether doing that would harm anything though.\nJust asking since currently the patch wakes up all workers including sync\nworkers if any still exists.\n\nBest,\n--\nMelih Mutlu\nMicrosoft\n\n
", "msg_date": "Wed, 7 Dec 2022 14:07:11 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "On Wed, Dec 07, 2022 at 02:07:11PM +0300, Melih Mutlu wrote:\n> Do we also need to wake up all sync workers too? Even if not, I'm not\n> actually sure whether doing that would harm anything though.\n> Just asking since currently the patch wakes up all workers including sync\n> workers if any still exists.\n\nAfter sleeping on this, I think we can do better. IIUC we can simply check\nfor AllTablesyncsReady() at the end of process_syncing_tables_for_apply()\nand wake up the logical replication workers (which should just consist of\nsetting the current process's latch) if we are ready for two_phase mode.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 7 Dec 2022 10:11:45 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> After sleeping on this, I think we can do better. IIUC we can simply check\n> for AllTablesyncsReady() at the end of process_syncing_tables_for_apply()\n> and wake up the logical replication workers (which should just consist of\n> setting the current process's latch) if we are ready for two_phase mode.\n\nI independently rediscovered the need for something like this after\nwondering why the subscription/t/031_column_list.pl test seemed to\ntake so much longer than its siblings. 
I found that a considerable\namount of the elapsed time was wasted because we were waiting up to\na full second (NAPTIME_PER_CYCLE) for the logrep worker to notice\nthat something had changed in the local subscription state. At least\non my machine, it seems that the worst-case timing is reliably hit\nmultiple times during this test. Now admittedly, this is probably not\na significant problem in real-world usage; but it's sure annoying that\nit eats time during check-world.\n\nHowever, this patch seems to still be leaving quite a bit on the\ntable. Here's the timings I see for the subscription suite in HEAD\n(test is just \"time make check PROVE_FLAGS=--timer\" with an\nassert-enabled build):\n\n+++ tap check in src/test/subscription +++\n[18:07:38] t/001_rep_changes.pl ............... ok 6659 ms ( 0.00 usr 0.00 sys + 0.89 cusr 0.52 csys = 1.41 CPU)\n[18:07:45] t/002_types.pl ..................... ok 1572 ms ( 0.00 usr 0.00 sys + 0.70 cusr 0.27 csys = 0.97 CPU)\n[18:07:47] t/003_constraints.pl ............... ok 1436 ms ( 0.01 usr 0.00 sys + 0.74 cusr 0.25 csys = 1.00 CPU)\n[18:07:48] t/004_sync.pl ...................... ok 3007 ms ( 0.00 usr 0.00 sys + 0.75 cusr 0.31 csys = 1.06 CPU)\n[18:07:51] t/005_encoding.pl .................. ok 1468 ms ( 0.00 usr 0.00 sys + 0.74 cusr 0.21 csys = 0.95 CPU)\n[18:07:53] t/006_rewrite.pl ................... ok 1494 ms ( 0.00 usr 0.00 sys + 0.72 cusr 0.24 csys = 0.96 CPU)\n[18:07:54] t/007_ddl.pl ....................... ok 2005 ms ( 0.00 usr 0.00 sys + 0.73 cusr 0.24 csys = 0.97 CPU)\n[18:07:56] t/008_diff_schema.pl ............... ok 1746 ms ( 0.01 usr 0.00 sys + 0.70 cusr 0.28 csys = 0.99 CPU)\n[18:07:58] t/009_matviews.pl .................. ok 1878 ms ( 0.00 usr 0.00 sys + 0.71 cusr 0.24 csys = 0.95 CPU)\n[18:08:00] t/010_truncate.pl .................. ok 2999 ms ( 0.00 usr 0.00 sys + 0.77 cusr 0.38 csys = 1.15 CPU)\n[18:08:03] t/011_generated.pl ................. 
ok 1467 ms ( 0.00 usr 0.00 sys + 0.71 cusr 0.24 csys = 0.95 CPU)\n[18:08:04] t/012_collation.pl ................. skipped: ICU not supported by this build\n[18:08:04] t/013_partition.pl ................. ok 4787 ms ( 0.01 usr 0.00 sys + 1.29 cusr 0.71 csys = 2.01 CPU)\n[18:08:09] t/014_binary.pl .................... ok 2564 ms ( 0.00 usr 0.00 sys + 0.72 cusr 0.28 csys = 1.00 CPU)\n[18:08:12] t/015_stream.pl .................... ok 2531 ms ( 0.01 usr 0.00 sys + 0.73 cusr 0.27 csys = 1.01 CPU)\n[18:08:14] t/016_stream_subxact.pl ............ ok 1590 ms ( 0.00 usr 0.00 sys + 0.70 cusr 0.24 csys = 0.94 CPU)\n[18:08:16] t/017_stream_ddl.pl ................ ok 1610 ms ( 0.00 usr 0.00 sys + 0.72 cusr 0.25 csys = 0.97 CPU)\n[18:08:17] t/018_stream_subxact_abort.pl ...... ok 1827 ms ( 0.00 usr 0.00 sys + 0.73 cusr 0.24 csys = 0.97 CPU)\n[18:08:19] t/019_stream_subxact_ddl_abort.pl .. ok 1474 ms ( 0.00 usr 0.00 sys + 0.71 cusr 0.24 csys = 0.95 CPU)\n[18:08:21] t/020_messages.pl .................. ok 2423 ms ( 0.01 usr 0.00 sys + 0.74 cusr 0.25 csys = 1.00 CPU)\n[18:08:23] t/021_twophase.pl .................. ok 4799 ms ( 0.00 usr 0.00 sys + 0.82 cusr 0.39 csys = 1.21 CPU)\n[18:08:28] t/022_twophase_cascade.pl .......... ok 4346 ms ( 0.00 usr 0.00 sys + 1.12 cusr 0.54 csys = 1.66 CPU)\n[18:08:32] t/023_twophase_stream.pl ........... ok 3656 ms ( 0.01 usr 0.00 sys + 0.78 cusr 0.32 csys = 1.11 CPU)\n[18:08:36] t/024_add_drop_pub.pl .............. ok 3585 ms ( 0.00 usr 0.00 sys + 0.73 cusr 0.29 csys = 1.02 CPU)\n[18:08:39] t/025_rep_changes_for_schema.pl .... ok 3631 ms ( 0.00 usr 0.00 sys + 0.77 cusr 0.34 csys = 1.11 CPU)\n[18:08:43] t/026_stats.pl ..................... ok 4096 ms ( 0.00 usr 0.00 sys + 0.77 cusr 0.32 csys = 1.09 CPU)\n[18:08:47] t/027_nosuperuser.pl ............... ok 4824 ms ( 0.01 usr 0.00 sys + 0.77 cusr 0.39 csys = 1.17 CPU)\n[18:08:52] t/028_row_filter.pl ................ 
ok 5321 ms ( 0.00 usr 0.00 sys + 0.90 cusr 0.50 csys = 1.40 CPU)\n[18:08:57] t/029_on_error.pl .................. ok 3748 ms ( 0.00 usr 0.00 sys + 0.75 cusr 0.32 csys = 1.07 CPU)\n[18:09:01] t/030_origin.pl .................... ok 4496 ms ( 0.00 usr 0.00 sys + 1.09 cusr 0.45 csys = 1.54 CPU)\n[18:09:06] t/031_column_list.pl ............... ok 13802 ms ( 0.01 usr 0.00 sys + 1.00 cusr 0.69 csys = 1.70 CPU)\n[18:09:19] t/100_bugs.pl ...................... ok 5195 ms ( 0.00 usr 0.00 sys + 2.05 cusr 0.76 csys = 2.81 CPU)\n[18:09:25]\nAll tests successful.\nFiles=32, Tests=379, 107 wallclock secs ( 0.09 usr 0.02 sys + 26.10 cusr 10.98 csys = 37.19 CPU)\nResult: PASS\n\nreal 1m47.503s\nuser 0m27.068s\nsys 0m11.452s\n\nWith the v8 patch, I get:\n\n+++ tap check in src/test/subscription +++\n[18:11:15] t/001_rep_changes.pl ............... ok 5505 ms ( 0.01 usr 0.00 sys + 0.90 cusr 0.49 csys = 1.40 CPU)\n[18:11:21] t/002_types.pl ..................... ok 1574 ms ( 0.00 usr 0.00 sys + 0.71 cusr 0.26 csys = 0.97 CPU)\n[18:11:23] t/003_constraints.pl ............... ok 1442 ms ( 0.00 usr 0.00 sys + 0.71 cusr 0.28 csys = 0.99 CPU)\n[18:11:24] t/004_sync.pl ...................... ok 2087 ms ( 0.01 usr 0.00 sys + 0.74 cusr 0.30 csys = 1.05 CPU)\n[18:11:26] t/005_encoding.pl .................. ok 1465 ms ( 0.00 usr 0.00 sys + 0.71 cusr 0.23 csys = 0.94 CPU)\n[18:11:28] t/006_rewrite.pl ................... ok 1489 ms ( 0.00 usr 0.00 sys + 0.73 cusr 0.24 csys = 0.97 CPU)\n[18:11:29] t/007_ddl.pl ....................... ok 2007 ms ( 0.00 usr 0.00 sys + 0.73 cusr 0.23 csys = 0.96 CPU)\n[18:11:31] t/008_diff_schema.pl ............... ok 1644 ms ( 0.00 usr 0.00 sys + 0.72 cusr 0.27 csys = 0.99 CPU)\n[18:11:33] t/009_matviews.pl .................. ok 1878 ms ( 0.00 usr 0.00 sys + 0.70 cusr 0.25 csys = 0.95 CPU)\n[18:11:35] t/010_truncate.pl .................. ok 3006 ms ( 0.00 usr 0.00 sys + 0.79 cusr 0.37 csys = 1.16 CPU)\n[18:11:38] t/011_generated.pl ................. 
ok 1470 ms ( 0.00 usr 0.00 sys + 0.72 cusr 0.23 csys = 0.95 CPU)\n[18:11:39] t/012_collation.pl ................. skipped: ICU not supported by this build\n[18:11:39] t/013_partition.pl ................. ok 4656 ms ( 0.01 usr 0.00 sys + 1.30 cusr 0.69 csys = 2.00 CPU)\n[18:11:44] t/014_binary.pl .................... ok 2570 ms ( 0.00 usr 0.00 sys + 0.74 cusr 0.27 csys = 1.01 CPU)\n[18:11:46] t/015_stream.pl .................... ok 2535 ms ( 0.00 usr 0.00 sys + 0.74 cusr 0.26 csys = 1.00 CPU)\n[18:11:49] t/016_stream_subxact.pl ............ ok 1601 ms ( 0.00 usr 0.00 sys + 0.71 cusr 0.26 csys = 0.97 CPU)\n[18:11:51] t/017_stream_ddl.pl ................ ok 1608 ms ( 0.00 usr 0.00 sys + 0.70 cusr 0.26 csys = 0.96 CPU)\n[18:11:52] t/018_stream_subxact_abort.pl ...... ok 1834 ms ( 0.00 usr 0.00 sys + 0.72 cusr 0.26 csys = 0.98 CPU)\n[18:11:54] t/019_stream_subxact_ddl_abort.pl .. ok 1476 ms ( 0.00 usr 0.00 sys + 0.71 cusr 0.24 csys = 0.95 CPU)\n[18:11:55] t/020_messages.pl .................. ok 1489 ms ( 0.00 usr 0.00 sys + 0.73 cusr 0.24 csys = 0.97 CPU)\n[18:11:57] t/021_twophase.pl .................. ok 4289 ms ( 0.00 usr 0.00 sys + 0.82 cusr 0.38 csys = 1.20 CPU)\n[18:12:01] t/022_twophase_cascade.pl .......... ok 3835 ms ( 0.01 usr 0.00 sys + 1.17 cusr 0.49 csys = 1.67 CPU)\n[18:12:05] t/023_twophase_stream.pl ........... ok 3158 ms ( 0.00 usr 0.00 sys + 0.79 cusr 0.32 csys = 1.11 CPU)\n[18:12:08] t/024_add_drop_pub.pl .............. ok 2553 ms ( 0.00 usr 0.00 sys + 0.72 cusr 0.28 csys = 1.00 CPU)\n[18:12:11] t/025_rep_changes_for_schema.pl .... ok 2703 ms ( 0.01 usr 0.00 sys + 0.77 cusr 0.32 csys = 1.10 CPU)\n[18:12:13] t/026_stats.pl ..................... ok 4101 ms ( 0.00 usr 0.00 sys + 0.77 cusr 0.31 csys = 1.08 CPU)\n[18:12:18] t/027_nosuperuser.pl ............... ok 4822 ms ( 0.00 usr 0.00 sys + 0.80 cusr 0.36 csys = 1.16 CPU)\n[18:12:22] t/028_row_filter.pl ................ 
ok 4396 ms ( 0.00 usr 0.00 sys + 0.90 cusr 0.50 csys = 1.40 CPU)\n[18:12:27] t/029_on_error.pl .................. ok 4382 ms ( 0.00 usr 0.00 sys + 0.75 cusr 0.33 csys = 1.08 CPU)\n[18:12:31] t/030_origin.pl .................... ok 2735 ms ( 0.00 usr 0.00 sys + 1.10 cusr 0.40 csys = 1.50 CPU)\n[18:12:34] t/031_column_list.pl ............... ok 10281 ms ( 0.01 usr 0.00 sys + 1.01 cusr 0.60 csys = 1.62 CPU)\n[18:12:44] t/100_bugs.pl ...................... ok 5214 ms ( 0.00 usr 0.00 sys + 2.05 cusr 0.79 csys = 2.84 CPU)\n[18:12:49]\nAll tests successful.\nFiles=32, Tests=379, 94 wallclock secs ( 0.10 usr 0.02 sys + 26.21 cusr 10.72 csys = 37.05 CPU)\nResult: PASS\n\nreal 1m35.275s\nuser 0m27.177s\nsys 0m11.182s\n\nThat's better, but not by an impressive amount: there's still an\nannoyingly large amount of daylight between the CPU time expended\nand the elapsed time (and I'm not even considering the possibility\nthat some of that CPU time could be parallelized).\n\nI poked into it some more, and what I'm seeing now is traces like\n\n2022-12-13 18:12:35.936 EST [2547426] 031_column_list.pl LOG: statement: ALTER SUBSCRIPTION sub1 SET PUBLICATION pub2, pub3\n2022-12-13 18:12:35.941 EST [2547327] LOG: logical replication apply worker for subscription \"sub1\" will restart because of a parameter change\n2022-12-13 18:12:35.944 EST [2547429] 031_column_list.pl LOG: statement: SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r', 's');\n2022-12-13 18:12:36.048 EST [2547431] 031_column_list.pl LOG: statement: SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r', 's');\n2022-12-13 18:12:36.151 EST [2547433] 031_column_list.pl LOG: statement: SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r', 's');\n2022-12-13 18:12:36.255 EST [2547435] 031_column_list.pl LOG: statement: SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r', 's');\n2022-12-13 18:12:36.359 EST [2547437] 031_column_list.pl LOG: 
statement: SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r', 's');\n2022-12-13 18:12:36.443 EST [2547441] LOG: logical replication apply worker for subscription \"sub1\" has started\n2022-12-13 18:12:36.446 EST [2547443] LOG: logical replication table synchronization worker for subscription \"sub1\", table \"tab5\" has started\n2022-12-13 18:12:36.451 EST [2547443] LOG: logical replication table synchronization worker for subscription \"sub1\", table \"tab5\" has finished\n2022-12-13 18:12:36.463 EST [2547446] 031_column_list.pl LOG: statement: SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r', 's');\n\nBefore, there was up to 1 second (with multiple \"SELECT count(1) = 0\"\nprobes from the test script) between the ALTER SUBSCRIPTION command\nand the \"apply worker will restart\" log entry. That wait is pretty\nwell zapped, but instead now we're waiting hundreds of ms for the\n\"apply worker has started\" message.\n\nI've not chased it further than that, but I venture that the apply\nlauncher also needs a kick in the pants, and/or there needs to be\nan interlock to ensure that it doesn't wake until after the old\napply worker quits.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 13 Dec 2022 18:32:08 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "On Tue, Dec 13, 2022 at 06:32:08PM -0500, Tom Lane wrote:\n> Before, there was up to 1 second (with multiple \"SELECT count(1) = 0\"\n> probes from the test script) between the ALTER SUBSCRIPTION command\n> and the \"apply worker will restart\" log entry. 
That wait is pretty\n> well zapped, but instead now we're waiting hundreds of ms for the\n> \"apply worker has started\" message.\n> \n> I've not chased it further than that, but I venture that the apply\n> launcher also needs a kick in the pants, and/or there needs to be\n> an interlock to ensure that it doesn't wake until after the old\n> apply worker quits.\n\nThis is probably because the tests set wal_retrieve_retry_interval to\n500ms. Lowering that to 1ms in Cluster.pm seems to wipe out this\nparticular wait, and the total src/test/subscription test time drops from\n119 seconds to 95 seconds on my machine. This probably lowers the amount\nof test coverage we get on the wal_retrieve_retry_interval code paths, but\nif that's a concern, perhaps we should write a test specifically for\nwal_retrieve_retry_interval.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 13 Dec 2022 16:01:45 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> On Tue, Dec 13, 2022 at 06:32:08PM -0500, Tom Lane wrote:\n>> I've not chased it further than that, but I venture that the apply\n>> launcher also needs a kick in the pants, and/or there needs to be\n>> an interlock to ensure that it doesn't wake until after the old\n>> apply worker quits.\n\n> This is probably because the tests set wal_retrieve_retry_interval to\n> 500ms. Lowering that to 1ms in Cluster.pm seems to wipe out this\n> particular wait, and the total src/test/subscription test time drops from\n> 119 seconds to 95 seconds on my machine.\n\nThat's not really the direction we should be going in, though. 
Ideally\nthere should be *no* situation where we are waiting for a timeout to\nelapse for a process to wake up and notice it ought to do something.\nIf we have timeouts at all, they should be backstops for the possibility\nof a lost interrupt, and it should be possible to set them quite high\nwithout any visible impact on normal operation. (This gets back to\nthe business about minimizing idle power consumption, which Simon was\nbugging us about recently but that's been on the radar screen for years.)\n\nI certainly don't think that \"wake the apply launcher every 1ms\"\nis a sane configuration. Unless I'm missing something basic about\nits responsibilities, it should seldom need to wake at all in\nnormal operation.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 13 Dec 2022 19:20:14 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "On Tue, Dec 13, 2022 at 07:20:14PM -0500, Tom Lane wrote:\n> I certainly don't think that \"wake the apply launcher every 1ms\"\n> is a sane configuration. Unless I'm missing something basic about\n> its responsibilities, it should seldom need to wake at all in\n> normal operation.\n\nThis parameter appears to control how often the apply launcher starts new\nworkers. If it starts new workers in a loop iteration, it updates its\nlast_start_time variable, and it won't start any more workers until another\nwal_retrieve_retry_interval has elapsed. 
If no new workers need to be\nstarted, it only wakes up every 3 minutes.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 13 Dec 2022 16:41:05 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "On Tue, Dec 13, 2022 at 04:41:05PM -0800, Nathan Bossart wrote:\n> On Tue, Dec 13, 2022 at 07:20:14PM -0500, Tom Lane wrote:\n>> I certainly don't think that \"wake the apply launcher every 1ms\"\n>> is a sane configuration. Unless I'm missing something basic about\n>> its responsibilities, it should seldom need to wake at all in\n>> normal operation.\n> \n> This parameter appears to control how often the apply launcher starts new\n> workers. If it starts new workers in a loop iteration, it updates its\n> last_start_time variable, and it won't start any more workers until another\n> wal_retrieve_retry_interval has elapsed. If no new workers need to be\n> started, it only wakes up every 3 minutes.\n\nLooking closer, I see that wal_retrieve_retry_interval is used for three\npurposes. Its main purpose seems to be preventing busy-waiting in\nWaitForWALToBecomeAvailable(), as that's what's documented. But it's also\nused for logical replication. The apply launcher uses it as I've described\nabove, and the apply workers use it when launching sync workers. Unlike\nthe apply launcher, the apply workers store the last start time for each\ntable's sync worker and use that to determine whether to start a new one.\n\nMy first thought is that the latter two uses should be moved to a new\nparameter, and the apply launcher should store the last start time for each\napply worker like the apply workers do for the table-sync workers. 
In any\ncase, it probably makes sense to lower this parameter's value for testing\nso that tests that restart these workers frequently aren't waiting for so\nlong.\n\nI can put a patch together if this seems like a reasonable direction to go.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 14 Dec 2022 09:10:23 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> My first thought is that the latter two uses should be moved to a new\n> parameter, and the apply launcher should store the last start time for each\n> apply worker like the apply workers do for the table-sync workers. In any\n> case, it probably makes sense to lower this parameter's value for testing\n> so that tests that restart these workers frequently aren't waiting for so\n> long.\n\n> I can put a patch together if this seems like a reasonable direction to go.\n\nNo, I'm still of the opinion that waiting for the launcher to timeout\nbefore doing something is fundamentally wrong design. We should signal\nit when we want it to do something. That's not different from what\nyou're fixing about the workers; why don't you see that it's appropriate\nfor the launcher too?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 14 Dec 2022 12:42:32 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "On Wed, Dec 14, 2022 at 12:42:32PM -0500, Tom Lane wrote:\n> Nathan Bossart <nathandbossart@gmail.com> writes:\n>> My first thought is that the latter two uses should be moved to a new\n>> parameter, and the apply launcher should store the last start time for each\n>> apply worker like the apply workers do for the table-sync workers. 
In any\n>> case, it probably makes sense to lower this parameter's value for testing\n>> so that tests that restart these workers frequently aren't waiting for so\n>> long.\n> \n>> I can put a patch together if this seems like a reasonable direction to go.\n> \n> No, I'm still of the opinion that waiting for the launcher to timeout\n> before doing something is fundamentally wrong design. We should signal\n> it when we want it to do something. That's not different from what\n> you're fixing about the workers; why don't you see that it's appropriate\n> for the launcher too?\n\nI'm reasonably certain the launcher is already signaled like you describe.\nIt'll just wait to start new workers if it's been less than\nwal_retrieve_retry_interval milliseconds since the last time it started\nworkers.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 14 Dec 2022 09:45:35 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> I'm reasonably certain the launcher is already signaled like you describe.\n> It'll just wait to start new workers if it's been less than\n> wal_retrieve_retry_interval milliseconds since the last time it started\n> workers.\n\nOh. 
What in the world is the rationale for that?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 14 Dec 2022 13:23:18 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "On Wed, Dec 14, 2022 at 01:23:18PM -0500, Tom Lane wrote:\n> Nathan Bossart <nathandbossart@gmail.com> writes:\n>> I'm reasonably certain the launcher is already signaled like you describe.\n>> It'll just wait to start new workers if it's been less than\n>> wal_retrieve_retry_interval milliseconds since the last time it started\n>> workers.\n> \n> Oh. What in the world is the rationale for that?\n\nMy assumption is that this is meant to avoid starting workers as fast as\npossible if they repeatedly crash. I didn't see much discussion in the\noriginal logical replication thread [0], but I do see follow-up discussion\nabout creating a separate GUC for this [1] [2].\n\n[0] https://postgr.es/m/b8132323-b577-428c-b2aa-bf41a66b18e7%402ndquadrant.com\n[1] https://postgr.es/m/CAD21AoAjTTGm%2BOx70b2OGWvb77vPcRdYeRv3gkAWx76nXDo%2BEA%40mail.gmail.com\n[2] https://postgr.es/m/CAD21AoDCnyRJDUY%3DESVVe68AukvOP2dFomTeBFpAd1TiFbjsGg%40mail.gmail.com\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 14 Dec 2022 10:37:59 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> On Wed, Dec 14, 2022 at 01:23:18PM -0500, Tom Lane wrote:\n>> Oh. What in the world is the rationale for that?\n\n> My assumption is that this is meant to avoid starting workers as fast as\n> possible if they repeatedly crash.\n\nI can see the point of rate-limiting if the workers are failing to connect\nor crashing while trying to process data. 
But it's not very sane to\napply the same policy to an intentional worker exit-for-reconfiguration.\n\nMaybe we could have workers that are exiting for that reason set a\nflag saying \"please restart me without delay\"?\n\nA *real* fix would be to not exit at all, at least for reconfigurations\nthat don't change the connection parameters, but instead cope with\nrecomputing whatever needs recomputed in the workers' state. I can\nbelieve that that'd be a lot of work though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 14 Dec 2022 14:02:58 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "On Wed, Dec 14, 2022 at 02:02:58PM -0500, Tom Lane wrote:\n> Maybe we could have workers that are exiting for that reason set a\n> flag saying \"please restart me without delay\"?\n\nThat helps a bit, but there are still delays when starting workers for new\nsubscriptions. I think we'd need to create a new array in shared memory\nfor subscription OIDs that need their workers started immediately.\n\nI'm not totally sure this is worth the effort. These delays surface in the\ntests because the workers are started so frequently. In normal operation,\nthis is probably unusual, so the launcher would typically start new workers\nimmediately. But if you and/or others feel this is worthwhile, I don't\nmind working on the patch.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 14 Dec 2022 15:17:27 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "I tried setting wal_retrieve_retry_interval to 1ms for all TAP tests\n(similar to what was done in 2710ccd), and I noticed that the recovery\ntests consistently took much longer. 
Upon further inspection, it looks\nlike the same (or a very similar) race condition described in e5d494d's\ncommit message [0]. With some added debug logs, I see that all of the\ncallers of MaybeStartWalReceiver() complete before SIGCHLD is processed, so\nServerLoop() waits for a minute before starting the WAL receiver.\n\nA simple fix is to have DetermineSleepTime() take the WalReceiverRequested\nflag into consideration. The attached 0002 patch shortens the sleep time\nto 100ms if it looks like we are waiting on a SIGCHLD. I'm not certain\nthis is the best approach, but it seems to fix the tests.\n\nOn my machine, I see the following improvements in the tests (all units in\nseconds):\n HEAD patched (v9)\n check-world -j8 165 138\n subscription 120 75\n recovery 111 108\n\n[0] https://postgr.es/m/21344.1498494720%40sss.pgh.pa.us\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 15 Dec 2022 14:47:21 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "On Thu, Dec 15, 2022 at 02:47:21PM -0800, Nathan Bossart wrote:\n> I tried setting wal_retrieve_retry_interval to 1ms for all TAP tests\n> (similar to what was done in 2710ccd), and I noticed that the recovery\n> tests consistently took much longer. Upon further inspection, it looks\n> like the same (or a very similar) race condition described in e5d494d's\n> commit message [0]. With some added debug logs, I see that all of the\n> callers of MaybeStartWalReceiver() complete before SIGCHLD is processed, so\n> ServerLoop() waits for a minute before starting the WAL receiver.\n> \n> A simple fix is to have DetermineSleepTime() take the WalReceiverRequested\n> flag into consideration. The attached 0002 patch shortens the sleep time\n> to 100ms if it looks like we are waiting on a SIGCHLD. 
I'm not certain\n> this is the best approach, but it seems to fix the tests.\n\nThis seems to have somehow broken the archiving tests on Windows, so\nobviously I owe some better analysis here. I didn't see anything obvious\nin the logs, but I will continue to dig.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sun, 18 Dec 2022 15:36:07 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "On Sun, Dec 18, 2022 at 03:36:07PM -0800, Nathan Bossart wrote:\n> This seems to have somehow broken the archiving tests on Windows, so\n> obviously I owe some better analysis here. I didn't see anything obvious\n> in the logs, but I will continue to dig.\n\nOn Windows, WaitForWALToBecomeAvailable() seems to depend on the call to\nWaitLatch() for wal_retrieve_retry_interval to ensure that signals are\ndispatched (i.e., pgwin32_dispatch_queued_signals()). My first instinct is\nto just always call WaitLatch() in this code path, even if\nwal_retrieve_retry_interval milliseconds have already elapsed. The attached\n0003 does this.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sat, 31 Dec 2022 15:50:19 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "On Thu, Dec 15, 2022 at 4:47 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Wed, Dec 14, 2022 at 02:02:58PM -0500, Tom Lane wrote:\n> > Maybe we could have workers that are exiting for that reason set a\n> > flag saying \"please restart me without delay\"?\n>\n> That helps a bit, but there are still delays when starting workers for new\n> subscriptions. 
I think we'd need to create a new array in shared memory\n> for subscription OIDs that need their workers started immediately.\n>\n\nThat would be tricky because the list of subscription OIDs can be\nlonger than the workers. Can't we set a boolean variable\n(check_immediate or something like that) in LogicalRepCtxStruct and\nuse that to traverse the subscriptions? So, when any worker will\nrestart because of a parameter change, we can set the variable and\nsend a signal to the launcher. The launcher can then check this\nvariable to decide whether to start the missing workers for enabled\nsubscriptions.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 3 Jan 2023 11:03:32 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "On Wed, Dec 7, 2022 at 11:42 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Wed, Dec 07, 2022 at 02:07:11PM +0300, Melih Mutlu wrote:\n> > Do we also need to wake up all sync workers too? Even if not, I'm not\n> > actually sure whether doing that would harm anything though.\n> > Just asking since currently the patch wakes up all workers including sync\n> > workers if any still exists.\n>\n> After sleeping on this, I think we can do better. IIUC we can simply check\n> for AllTablesyncsReady() at the end of process_syncing_tables_for_apply()\n> and wake up the logical replication workers (which should just consist of\n> setting the current process's latch) if we are ready for two_phase mode.\n>\n\nHow just waking up will help with two_phase mode? 
For that, we need to\nrestart the apply worker as we are doing at the beginning of\nprocess_syncing_tables_for_apply().\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 3 Jan 2023 11:43:59 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "On Tue, Jan 03, 2023 at 11:03:32AM +0530, Amit Kapila wrote:\n> On Thu, Dec 15, 2022 at 4:47 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> On Wed, Dec 14, 2022 at 02:02:58PM -0500, Tom Lane wrote:\n>> > Maybe we could have workers that are exiting for that reason set a\n>> > flag saying \"please restart me without delay\"?\n>>\n>> That helps a bit, but there are still delays when starting workers for new\n>> subscriptions. I think we'd need to create a new array in shared memory\n>> for subscription OIDs that need their workers started immediately.\n> \n> That would be tricky because the list of subscription OIDs can be\n> longer than the workers. Can't we set a boolean variable\n> (check_immediate or something like that) in LogicalRepCtxStruct and\n> use that to traverse the subscriptions? So, when any worker will\n> restart because of a parameter change, we can set the variable and\n> send a signal to the launcher. The launcher can then check this\n> variable to decide whether to start the missing workers for enabled\n> subscriptions.\n\nMy approach was to add a variable to LogicalRepWorker that indicated\nwhether a worker needed to be restarted immediately. While this is a\nlittle weird because the workers array is treated as slots, it worked\nnicely for ALTER SUBSCRIPTION. However, this doesn't help at all for\nCREATE SUBSCRIPTION.\n\nIIUC you are suggesting just one variable that would bypass\nwal_retrieve_retry_interval for all subscriptions, not just those newly\naltered or created. 
This definitely seems like it would prevent delays,\nbut it would also cause wal_retrieve_retry_interval to be incorrectly\nbypassed for the other workers in some cases. Is this acceptable?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 3 Jan 2023 10:10:31 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "On Tue, Jan 03, 2023 at 11:43:59AM +0530, Amit Kapila wrote:\n> On Wed, Dec 7, 2022 at 11:42 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> After sleeping on this, I think we can do better. IIUC we can simply check\n>> for AllTablesyncsReady() at the end of process_syncing_tables_for_apply()\n>> and wake up the logical replication workers (which should just consiѕt of\n>> setting the current process's latch) if we are ready for two_phase mode.\n> \n> How just waking up will help with two_phase mode? For that, we need to\n> restart the apply worker as we are doing at the beginning of\n> process_syncing_tables_for_apply().\n\nRight. IIRC waking up causes the apply worker to immediately call\nprocess_syncing_tables_for_apply() again, which will then proc_exit(0) as\nappropriate. It might be possible to move the restart logic to the end of\nprocess_syncing_tables_for_apply() to avoid this extra wakeup. WDYT?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 3 Jan 2023 10:21:55 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "On Tue, Jan 3, 2023 at 11:51 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Tue, Jan 03, 2023 at 11:43:59AM +0530, Amit Kapila wrote:\n> > On Wed, Dec 7, 2022 at 11:42 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> >> After sleeping on this, I think we can do better. 
IIUC we can simply check\n> >> for AllTablesyncsReady() at the end of process_syncing_tables_for_apply()\n> >> and wake up the logical replication workers (which should just consist of\n> >> setting the current process's latch) if we are ready for two_phase mode.\n> >\n> > How just waking up will help with two_phase mode? For that, we need to\n> > restart the apply worker as we are doing at the beginning of\n> > process_syncing_tables_for_apply().\n>\n> Right. IIRC waking up causes the apply worker to immediately call\n> process_syncing_tables_for_apply() again, which will then proc_exit(0) as\n> appropriate.\n>\n\nBut we are already in apply worker and performing\nprocess_syncing_tables_for_apply(). This means the apply worker is not\nwaiting/sleeping, so what exactly are we trying to wake up?\n\n> It might be possible to move the restart logic to the end of\n> process_syncing_tables_for_apply() to avoid this extra wakeup. WDYT?\n>\n\nI am not sure if I understand the problem you are trying to solve with
I guess if we can do that then we can directly move the\nrestart logic to the end.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 4 Jan 2023 09:41:47 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "On Tue, Jan 3, 2023 at 11:40 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Tue, Jan 03, 2023 at 11:03:32AM +0530, Amit Kapila wrote:\n> > On Thu, Dec 15, 2022 at 4:47 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> >> On Wed, Dec 14, 2022 at 02:02:58PM -0500, Tom Lane wrote:\n> >> > Maybe we could have workers that are exiting for that reason set a\n> >> > flag saying \"please restart me without delay\"?\n> >>\n> >> That helps a bit, but there are still delays when starting workers for new\n> >> subscriptions. I think we'd need to create a new array in shared memory\n> >> for subscription OIDs that need their workers started immediately.\n> >\n> > That would be tricky because the list of subscription OIDs can be\n> > longer than the workers. Can't we set a boolean variable\n> > (check_immediate or something like that) in LogicalRepCtxStruct and\n> > use that to traverse the subscriptions? So, when any worker will\n> > restart because of a parameter change, we can set the variable and\n> > send a signal to the launcher. The launcher can then check this\n> > variable to decide whether to start the missing workers for enabled\n> > subscriptions.\n>\n> My approach was to add a variable to LogicalRepWorker that indicated\n> whether a worker needed to be restarted immediately. While this is a\n> little weird because the workers array is treated as slots, it worked\n> nicely for ALTER SUBSCRIPTION.\n>\n\nSo, are you planning to keep its in_use and subid flag as it is in\nlogicalrep_worker_cleanup()? 
Otherwise, without that it could be\nreused for some other subscription.\n\n> However, this doesn't help at all for\n> CREATE SUBSCRIPTION.\n>\n\nWhat if we maintain a hash table similar to 'last_start_times'\nmaintained in tablesync.c? It won't have entries for new\nsubscriptions, so for those we may not need to wait till\nwal_retrieve_retry_interval.\n\n> IIUC you are suggesting just one variable that would bypass\n> wal_retrieve_retry_interval for all subscriptions, not just those newly\n> altered or created. This definitely seems like it would prevent delays,\n> but it would also cause wal_retrieve_retry_interval to be incorrectly\n> bypassed for the other workers in some cases.\n>\n\nRight, but I guess it would be rare in practical cases that someone\nAltered/Created a subscription, and also some workers are restarted\ndue to errors/crashes as only in those cases launcher can restart the\nworker when it shouldn't. However, in that case, also, it won't\nrestart the apply worker again and again unless there are concurrent\nCreate/Alter Subscription operations going on. IIUC, currently also it\ncan always first time restart the worker immediately after ERROR/CRASH\nbecause we don't maintain last_start_time for each worker. 
I think\nthis is probably okay as we want to avoid repeated restarts after the\nERROR.\n\nBTW, now users also have a subscription option 'disable_on_error'\nwhich could also be used to avoid repeated restarts due to ERRORS.\n\n>\n Is this acceptable?\n>\n\nTo me, this sounds acceptable but if you and others don't think so\nthen we can try to develop some solution like per-worker-flag and a\nhash table as discussed in the earlier part of the email.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 4 Jan 2023 10:57:43 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "On Wed, Jan 04, 2023 at 09:41:47AM +0530, Amit Kapila wrote:\n> I am not sure if I understand the problem you are trying to solve with\n> this part of the patch. Are you worried that after we mark some of the\n> relation's state as READY, all the table syncs are in the READY state\n> but we will not immediately try to check the two_pahse stuff and\n> probably the apply worker may sleep before the next time it invokes\n> process_syncing_tables_for_apply()? \n\nYes.\n\n> If so, we probably also need to\n> ensure that table_states_valid is marked false probably via\n> invalidations so that we can get the latest state and then perform\n> this check. I guess if we can do that then we can directly move the\n> restart logic to the end.\n\nIMO this shows the advantage of just waking up the worker. 
It doesn't\nchange the apply worker's behavior besides making it more responsive.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 4 Jan 2023 09:33:04 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "On Wed, Jan 04, 2023 at 10:57:43AM +0530, Amit Kapila wrote:\n> On Tue, Jan 3, 2023 at 11:40 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> My approach was to add a variable to LogicalRepWorker that indicated\n>> whether a worker needed to be restarted immediately. While this is a\n>> little weird because the workers array is treated as slots, it worked\n>> nicely for ALTER SUBSCRIPTION.\n> \n> So, are you planning to keep its in_use and subid flag as it is in\n> logicalrep_worker_cleanup()? Otherwise, without that it could be\n> reused for some other subscription.\n\nI believe I did something like this in my proof-of-concept. I might have\nused the new flag as another indicator that the slot was still \"in use\".\nIn any case, you are right that we need to prevent the slot from being\nreused.\n\n> What if we maintain a hash table similar to 'last_start_times'\n> maintained in tablesync.c? It won't have entries for new\n> subscriptions, so for those we may not need to wait till\n> wal_retrieve_retry_interval.\n\nI proposed this upthread [0]. I still think it is a worthwhile change.\nRight now, if a worker needs to be restarted but another unrelated worker\nwas restarted less than wal_retrieve_retry_interval milliseconds ago, the\nlauncher waits to restart it. I think it makes more sense for each worker\nto have its own restart interval tracked.\n\n>> IIUC you are suggesting just one variable that would bypass\n>> wal_retrieve_retry_interval for all subscriptions, not just those newly\n>> altered or created. 
This definitely seems like it would prevent delays,\n>> but it would also cause wal_retrieve_retry_interval to be incorrectly\n>> bypassed for the other workers in some cases.\n>\n> Right, but I guess it would be rare in practical cases that someone\n> Altered/Created a subscription, and also some workers are restarted\n> due to errors/crashes as only in those cases launcher can restart the\n> worker when it shouldn't. However, in that case, also, it won't\n> restart the apply worker again and again unless there are concurrent\n> Create/Alter Subscription operations going on. IIUC, currently also it\n> can always first time restart the worker immediately after ERROR/CRASH\n> because we don't maintain last_start_time for each worker. I think\n> this is probably okay as we want to avoid repeated restarts after the\n> ERROR.\n\nThis line of thinking is why I felt that lowering\nwal_retrieve_retry_interval for the tests might be sufficient. Besides the\nfact that it revealed multiple bugs, I don't see the point in adding much\nmore complexity here. In practice, workers will usually start right away,\nunless of course there are other worker starts happening around the same\ntime. This consistently causes testing delays because the tests stress\nthese code paths, but I don't think what the tests are doing is a typical\nuse-case.\n\n From the discussion thus far, it sounds like the alternatives are to 1) add\na global flag that causes wal_retrieve_retry_interval to be bypassed for\nall workers or to 2) add a hash map in the launcher and a\nrestart_immediately flag in each worker slot. 
I'll go ahead and create a\npatch for 2 since it seems like the most complete solution, and we can\nevaluate whether the complexity seems appropriate.\n\n[0] https://postgr.es/m/20221214171023.GA689106%40nathanxps13\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 4 Jan 2023 10:12:19 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "On Wed, Jan 04, 2023 at 10:12:19AM -0800, Nathan Bossart wrote:\n> From the discussion thus far, it sounds like the alternatives are to 1) add\n> a global flag that causes wal_retrieve_retry_interval to be bypassed for\n> all workers or to 2) add a hash map in the launcher and a\n> restart_immediately flag in each worker slot. I'll go ahead and create a\n> patch for 2 since it seems like the most complete solution, and we can\n> evaluate whether the complexity seems appropriate.\n\nHere is a first attempt at adding a hash table to the launcher and a\nrestart_immediately flag in each worker slot. This provides a similar\nspeedup to lowering wal_retrieve_retry_interval to 1ms. I've noted a\ncouple of possible race conditions in comments, but none of them seemed\nparticularly egregious. 
Ideally, we'd put the hash table in shared memory\nso that other backends could adjust it directly, but IIUC that requires it\nto be a fixed size, and the number of subscriptions is virtually unbounded.\nThere might still be problems with the patch, but I'm hoping it at least\nhelps further the discussion about which approach to take.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 4 Jan 2023 16:49:06 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "On Wed, Jan 4, 2023 at 11:03 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Wed, Jan 04, 2023 at 09:41:47AM +0530, Amit Kapila wrote:\n> > I am not sure if I understand the problem you are trying to solve with\n> > this part of the patch. Are you worried that after we mark some of the\n> > relation's state as READY, all the table syncs are in the READY state\n> > but we will not immediately try to check the two_pahse stuff and\n> > probably the apply worker may sleep before the next time it invokes\n> > process_syncing_tables_for_apply()?\n>\n> Yes.\n>\n> > If so, we probably also need to\n> > ensure that table_states_valid is marked false probably via\n> > invalidations so that we can get the latest state and then perform\n> > this check. I guess if we can do that then we can directly move the\n> > restart logic to the end.\n>\n> IMO this shows the advantage of just waking up the worker. It doesn't\n> change the apply worker's behavior besides making it more responsive.\n>\n\nBut there doesn't appear to be any guarantee that the result for\nAllTablesyncsReady() will change between the time it is invoked\nearlier in the function and at the place you have it in the patch.\nThis is because the value of 'table_states_valid' may not have\nchanged. 
So, how is this supposed to work?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 5 Jan 2023 09:09:12 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "On Thu, Jan 05, 2023 at 09:09:12AM +0530, Amit Kapila wrote:\n> On Wed, Jan 4, 2023 at 11:03 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> On Wed, Jan 04, 2023 at 09:41:47AM +0530, Amit Kapila wrote:\n>> > If so, we probably also need to\n>> > ensure that table_states_valid is marked false probably via\n>> > invalidations so that we can get the latest state and then perform\n>> > this check. I guess if we can do that then we can directly move the\n>> > restart logic to the end.\n>>\n>> IMO this shows the advantage of just waking up the worker. It doesn't\n>> change the apply worker's behavior besides making it more responsive.\n> \n> But there doesn't appear to be any guarantee that the result for\n> AllTablesyncsReady() will change between the time it is invoked\n> earlier in the function and at the place you have it in the patch.\n> This is because the value of 'table_states_valid' may not have\n> changed. 
So, how is this supposed to work?\n\nThe call to CommandCounterIncrement() should set table_states_valid to\nfalse if needed.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 4 Jan 2023 20:12:37 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "On Wed, Jan 04, 2023 at 08:12:37PM -0800, Nathan Bossart wrote:\n> On Thu, Jan 05, 2023 at 09:09:12AM +0530, Amit Kapila wrote:\n>> But there doesn't appear to be any guarantee that the result for\n>> AllTablesyncsReady() will change between the time it is invoked\n>> earlier in the function and at the place you have it in the patch.\n>> This is because the value of 'table_states_valid' may not have\n>> changed. So, how is this supposed to work?\n> \n> The call to CommandCounterIncrement() should set table_states_valid to\n> false if needed.\n\nIn v12, I moved the restart for two_phase mode to the end of\nprocess_syncing_tables_for_apply() so that we don't need to rely on another\niteration of the loop.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 4 Jan 2023 20:46:22 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "On Thu, Jan 5, 2023 at 6:19 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Wed, Jan 04, 2023 at 10:12:19AM -0800, Nathan Bossart wrote:\n> > From the discussion thus far, it sounds like the alternatives are to 1) add\n> > a global flag that causes wal_retrieve_retry_interval to be bypassed for\n> > all workers or to 2) add a hash map in the launcher and a\n> > restart_immediately flag in each worker slot. 
I'll go ahead and create a\n> > patch for 2 since it seems like the most complete solution, and we can\n> > evaluate whether the complexity seems appropriate.\n>\n> Here is a first attempt at adding a hash table to the launcher and a\n> restart_immediately flag in each worker slot. This provides a similar\n> speedup to lowering wal_retrieve_retry_interval to 1ms. I've noted a\n> couple of possible race conditions in comments, but none of them seemed\n> particularly egregious. Ideally, we'd put the hash table in shared memory\n> so that other backends could adjust it directly, but IIUC that requires it\n> to be a fixed size, and the number of subscriptions is virtually unbounded.\n>\n\nTrue, if we want we can use dshash for this. The garbage collection\nmechanism used in the patch seems odd to me as that will remove/add\nentries to the hash table even when the corresponding subscription is\nnever dropped. Also, adding this garbage collection each time seems\nlike an overhead, especially for small values of\nwal_retrieve_retry_interval and a large number of subscriptions.\n\nAnother point is immediately after cleaning the worker info, trying to\nfind it again seems of no use. In logicalrep_worker_launch(), using\nboth in_use and restart_immediately to find an unused slot doesn't\nlook neat to me, we could probably keep the in_use flag intact if we\nwant to reuse the worker. 
But again after freeing the worker, keeping\nits associated slot allocated sounds odd to me.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 5 Jan 2023 10:57:58 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "On Thu, Jan 5, 2023 at 10:16 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Wed, Jan 04, 2023 at 08:12:37PM -0800, Nathan Bossart wrote:\n> > On Thu, Jan 05, 2023 at 09:09:12AM +0530, Amit Kapila wrote:\n> >> But there doesn't appear to be any guarantee that the result for\n> >> AllTablesyncsReady() will change between the time it is invoked\n> >> earlier in the function and at the place you have it in the patch.\n> >> This is because the value of 'table_states_valid' may not have\n> >> changed. So, how is this supposed to work?\n> >\n> > The call to CommandCounterIncrement() should set table_states_valid to\n> > false if needed.\n>\n> In v12, I moved the restart for two_phase mode to the end of\n> process_syncing_tables_for_apply() so that we don't need to rely on another\n> iteration of the loop.\n>\n\nThis should work but it is better to add a comment before calling\nCommandCounterIncrement() to indicate that this is for making changes\nto the relation state visible.\n\nThinking along similar lines, won't apply worker need to be notified\nof SUBREL_STATE_SYNCWAIT state change by the tablesync worker?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 5 Jan 2023 11:34:37 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "On Thu, Jan 05, 2023 at 11:34:37AM +0530, Amit Kapila wrote:\n> On Thu, Jan 5, 2023 at 10:16 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> In v12, I moved the restart for two_phase mode to the end of\n>> process_syncing_tables_for_apply() so that 
we don't need to rely on another\n>> iteration of the loop.\n> \n> This should work but it is better to add a comment before calling\n> CommandCounterIncrement() to indicate that this is for making changes\n> to the relation state visible.\n\nWill do.\n\n> Thinking along similar lines, won't apply worker need to be notified\n> of SUBREL_STATE_SYNCWAIT state change by the tablesync worker?\n\nwait_for_worker_state_change() should notify the apply worker in this case.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 5 Jan 2023 09:19:33 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "On Thu, Jan 05, 2023 at 10:57:58AM +0530, Amit Kapila wrote:\n> True, if we want we can use dshash for this.\n\nI'll look into this.\n\n> The garbage collection\n> mechanism used in the patch seems odd to me as that will remove/add\n> entries to the hash table even when the corresponding subscription is\n> never dropped.\n\nYeah, I think this deserves a comment. We can remove anything beyond\nwal_retrieve_retry_interval because the lack of a hash table entry is taken\nto mean that we can start the worker immediately. There might be a corner\ncase when wal_retrieve_retry_interval is concurrently updated, in which\ncase we'll effectively use the previous value for the worker. That doesn't\nseem too terrible to me.\n\nIt might be possible to remove this garbage collection completely if we use\ndshash, but I haven't thought through that approach completely yet.\n\n> Also, adding this garbage collection each time seems\n> like an overhead, especially for small values of\n> wal_retrieve_retry_interval and a large number of subscriptions.\n\nRight.\n\n> Another point is immediately after cleaning the worker info, trying to\n> find it again seems of no use. 
In logicalrep_worker_launch(), using\n> both in_use and restart_immediately to find an unused slot doesn't\n> look neat to me, we could probably keep the in_use flag intact if we\n> want to reuse the worker. But again after freeing the worker, keeping\n> its associated slot allocated sounds odd to me.\n\nYeah, this flag certainly feels hacky. With a shared hash table, we could\njust have backends remove the last-start-time entry directly, and we\nwouldn't need the flag.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 5 Jan 2023 09:29:24 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "On Thu, Jan 05, 2023 at 09:29:24AM -0800, Nathan Bossart wrote:\n> On Thu, Jan 05, 2023 at 10:57:58AM +0530, Amit Kapila wrote:\n>> True, if we want we can use dshash for this.\n> \n> I'll look into this.\n\nHere is an attempt at using dshash. This is quite a bit cleaner since we\ndon't need garbage collection or the flag in the worker slots. 
There is\nsome extra work required to set up the table, but it doesn't seem too bad.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 5 Jan 2023 16:00:27 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "On Thu, Jan 5, 2023 at 10:49 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Thu, Jan 05, 2023 at 11:34:37AM +0530, Amit Kapila wrote:\n> > On Thu, Jan 5, 2023 at 10:16 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> >> In v12, I moved the restart for two_phase mode to the end of\n> >> process_syncing_tables_for_apply() so that we don't need to rely on another\n> >> iteration of the loop.\n> >\n> > This should work but it is better to add a comment before calling\n> > CommandCounterIncrement() to indicate that this is for making changes\n> > to the relation state visible.\n>\n> Will do.\n>\n\nIsn't it better to move this part into a separate patch as this is\nuseful even without the main patch to improve wakeups?\n\n> > Thinking along similar lines, won't apply worker need to be notified\n> > of SUBREL_STATE_SYNCWAIT state change by the tablesync worker?\n>\n> wait_for_worker_state_change() should notify the apply worker in this case.\n>\n\nI think this is yet to be included in the patch, right?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 6 Jan 2023 10:30:18 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "I found some additional places that should remove the last-start time from\nthe hash table. 
I've added those in v14.\n\nOn Fri, Jan 06, 2023 at 10:30:18AM +0530, Amit Kapila wrote:\n> On Thu, Jan 5, 2023 at 10:49 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> On Thu, Jan 05, 2023 at 11:34:37AM +0530, Amit Kapila wrote:\n>> > On Thu, Jan 5, 2023 at 10:16 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> >> In v12, I moved the restart for two_phase mode to the end of\n>> >> process_syncing_tables_for_apply() so that we don't need to rely on another\n>> >> iteration of the loop.\n>> >\n>> > This should work but it is better to add a comment before calling\n>> > CommandCounterIncrement() to indicate that this is for making changes\n>> > to the relation state visible.\n>>\n>> Will do.\n> \n> Isn't it better to move this part into a separate patch as this is\n> useful even without the main patch to improve wakeups?\n\nI moved it to a separate patch in v14.\n\n>> > Thinking along similar lines, won't apply worker need to be notified\n>> > of SUBREL_STATE_SYNCWAIT state change by the tablesync worker?\n>>\n>> wait_for_worker_state_change() should notify the apply worker in this case.\n> \n> I think this is yet to be included in the patch, right?\n\nThis is already present on HEAD.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 5 Jan 2023 21:40:17 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> I found some additional places that should remove the last-start time from\n> the hash table. 
I've added those in v14.\n\nI've pushed 0001 and 0002, which seem pretty uncontroversial.\nAttached is a rebased 0003, just to keep the cfbot happy.\nI'm kind of wondering whether 0003 is worth the complexity TBH,\nbut in any case I ran out of time to look at it closely today.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 06 Jan 2023 17:31:26 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "On Fri, Jan 06, 2023 at 05:31:26PM -0500, Tom Lane wrote:\n> I've pushed 0001 and 0002, which seem pretty uncontroversial.\n\nThanks!\n\n> Attached is a rebased 0003, just to keep the cfbot happy.\n> I'm kind of wondering whether 0003 is worth the complexity TBH,\n> but in any case I ran out of time to look at it closely today.\n\nYeah. It's not as bad as I was expecting, but it does add a bit more\ncomplexity than is probably warranted. I'm not wedded to this approach.\n\nBTW I intend to start a new thread for the bugs I mentioned upthread that\nwere revealed by setting wal_retrieve_retry_interval to 1ms in the tests.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 6 Jan 2023 16:45:25 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "rebased for cfbot\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 9 Jan 2023 09:34:35 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "On Sat, Jan 7, 2023 at 6:15 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Fri, Jan 06, 2023 at 05:31:26PM -0500, Tom Lane wrote:\n>\n> > Attached is a rebased 0003, just to keep the cfbot happy.\n> > I'm kind of wondering whether 0003 is worth the 
complexity TBH,\n> > but in any case I ran out of time to look at it closely today.\n>\n> Yeah. It's not as bad as I was expecting, but it does add a bit more\n> complexity than is probably warranted.\n>\n\nPersonally, I think it is not as complex as we were initially thinking\nand does the job accurately unless we are missing something. So, +1 to\nproceed with this approach.\n\nI haven't looked in detail but isn't it better to explain somewhere in\nthe comments that it achieves to rate limit the restart of workers in\ncase of error and allows them to restart immediately in case of\nsubscription parameter change?\n\nAnother minor point: Don't we need to set the launcher's latch after\nremoving the entry from the hash table to avoid the launcher waiting\non the latch for a bit longer?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 10 Jan 2023 10:59:14 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "On Tue, Jan 10, 2023 at 10:59:14AM +0530, Amit Kapila wrote:\n> I haven't looked in detail but isn't it better to explain somewhere in\n> the comments that it achieves to rate limit the restart of workers in\n> case of error and allows them to restart immediately in case of\n> subscription parameter change?\n\nI expanded one of the existing comments to make this clear.\n\n> Another minor point: Don't we need to set the launcher's latch after\n> removing the entry from the hash table to avoid the launcher waiting\n> on the latch for a bit longer?\n\nThe launcher's latch should be set when the apply worker exits. The apply\nworker's notify_pid is set to the launcher, which means the launcher\nwill be sent SIGUSR1 on exit. The launcher's SIGUSR1 handler sets its\nlatch.\n\nOf course, if the launcher restarts, then the notify_pid will no longer be\naccurate. 
However, I see that workers also register a before_shmem_exit\ncallback that will send SIGUSR1 to the launcher_pid currently stored in\nshared memory. (I wonder if there is a memory ordering bug here.)\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 10 Jan 2023 09:43:45 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> On Tue, Jan 10, 2023 at 10:59:14AM +0530, Amit Kapila wrote:\n>> I haven't looked in detail but isn't it better to explain somewhere in\n>> the comments that it achieves to rate limit the restart of workers in\n>> case of error and allows them to restart immediately in case of\n>> subscription parameter change?\n\n> I expanded one of the existing comments to make this clear.\n\nI pushed v17 with some mostly-cosmetic changes, including more comments.\n\n> Of course, if the launcher restarts, then the notify_pid will no longer be\n> accurate. However, I see that workers also register a before_shmem_exit\n> callback that will send SIGUSR1 to the launcher_pid currently stored in\n> shared memory. (I wonder if there is a memory ordering bug here.)\n\nI think it's all close enough in reality. There are other issues in\nthis code, and I'm about to start a new thread about one I identified\nwhile testing this patch, but I think we're in good shape on this\nparticular point. 
I've marked the CF entry as committed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 22 Jan 2023 14:12:54 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "On Sun, Jan 22, 2023 at 02:12:54PM -0500, Tom Lane wrote:\n> I pushed v17 with some mostly-cosmetic changes, including more comments.\n\nThanks!\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 23 Jan 2023 09:25:45 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "On Monday, January 23, 2023 3:13 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\nHi,\n\n> \n> Nathan Bossart <nathandbossart@gmail.com> writes:\n> > On Tue, Jan 10, 2023 at 10:59:14AM +0530, Amit Kapila wrote:\n> >> I haven't looked in detail but isn't it better to explain somewhere\n> >> in the comments that it achieves to rate limit the restart of workers\n> >> in case of error and allows them to restart immediately in case of\n> >> subscription parameter change?\n> \n> > I expanded one of the existing comments to make this clear.\n> \n> I pushed v17 with some mostly-cosmetic changes, including more comments.\n\nI noticed one minor thing in this commit. \n\n-\nLogicalRepCtx->last_start_dsh = DSM_HANDLE_INVALID;\n-\n\nThe code takes the last_start_dsh as dsm_handle, but it seems it is a dsa_pointer.\n\" typedef dsa_pointer dshash_table_handle;\" This won’t cause any problem, but I feel\nIt would be easier to understand if we take it as dsa_pointer and use InvalidDsaPointer here,\nlike what he attached patch does. 
What do you think ?\n\nBest regards,\nHou zj", "msg_date": "Tue, 24 Jan 2023 02:55:07 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "On Tue, Jan 24, 2023 at 02:55:07AM +0000, houzj.fnst@fujitsu.com wrote:\n> I noticed one minor thing in this commit. \n> \n> -\n> LogicalRepCtx->last_start_dsh = DSM_HANDLE_INVALID;\n> -\n> \n> The code takes the last_start_dsh as dsm_handle, but it seems it is a dsa_pointer.\n> \" typedef dsa_pointer dshash_table_handle;\" This won’t cause any problem, but I feel\n> It would be easier to understand if we take it as dsa_pointer and use InvalidDsaPointer here,\n> like what he attached patch does. What do you think ?\n\nIMO ideally there should be a DSA_HANDLE_INVALID and DSHASH_HANDLE_INVALID\nfor use with dsa_handle and dshash_table_handle, respectively. But your\npatch does seem like an improvement.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 24 Jan 2023 09:13:29 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> IMO ideally there should be a DSA_HANDLE_INVALID and DSHASH_HANDLE_INVALID\n> for use with dsa_handle and dshash_table_handle, respectively. 
But your\n> patch does seem like an improvement.\n\nYeah, particularly given that dsa.h says\n\n/*\n * The handle for a dsa_area is currently implemented as the dsm_handle\n * for the first DSM segment backing this dynamic storage area, but client\n * code shouldn't assume that is true.\n */\ntypedef dsm_handle dsa_handle;\n\nbut then provides no way for client code to not be aware that a\ndsa_handle is a dsm_handle, if it needs to deal with \"invalid\" values.\nEither that comment needs to be rewritten or we need to invent some\nmore macros.\n\nI agree that the patch as given is an improvement on what was\ncommitted, but I wonder whether we shouldn't work a little harder\non cleaning this up more widely.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 24 Jan 2023 13:13:55 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "On Tue, Jan 24, 2023 at 01:13:55PM -0500, Tom Lane wrote:\n> Either that comment needs to be rewritten or we need to invent some\n> more macros.\n\nHere is a first attempt at a patch. I scanned through all the existing\nuses of InvalidDsaPointer and DSM_HANDLE_INVALID and didn't notice anything\nelse that needed adjusting.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 24 Jan 2023 10:42:17 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "At Tue, 24 Jan 2023 10:42:17 -0800, Nathan Bossart <nathandbossart@gmail.com> wrote in \n> On Tue, Jan 24, 2023 at 01:13:55PM -0500, Tom Lane wrote:\n> > Either that comment needs to be rewritten or we need to invent some\n> > more macros.\n> \n> Here is a first attempt at a patch. 
I scanned through all the existing\n> uses of InvalidDsaPointer and DSM_HANDLE_INVALID and didn't notice anything\n> else that needed adjusting.\n\nThere seems to be two cases for DSA_HANDLE_INVALID in dsa_get_handle\nand dsa_attach_in_place, one of which is Assert(), though.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 25 Jan 2023 16:12:00 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> At Tue, 24 Jan 2023 10:42:17 -0800, Nathan Bossart <nathandbossart@gmail.com> wrote in \n>> Here is a first attempt at a patch. I scanned through all the existing\n>> uses of InvalidDsaPointer and DSM_HANDLE_INVALID and didn't notice anything\n>> else that needed adjusting.\n\n> There seems to be two cases for DSA_HANDLE_INVALID in dsa_get_handle\n> and dsa_attach_in_place, one of which is Assert(), though.\n\nRight. I fixed some other infelicities and pushed it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 25 Jan 2023 11:49:27 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "On Wed, Jan 25, 2023 at 04:12:00PM +0900, Kyotaro Horiguchi wrote:\n> At Tue, 24 Jan 2023 10:42:17 -0800, Nathan Bossart <nathandbossart@gmail.com> wrote in \n>> Here is a first attempt at a patch. I scanned through all the existing\n>> uses of InvalidDsaPointer and DSM_HANDLE_INVALID and didn't notice anything\n>> else that needed adjusting.\n> \n> There seems to be two cases for DSA_HANDLE_INVALID in dsa_get_handle\n> and dsa_attach_in_place, one of which is Assert(), though.\n\nAh, sorry, I'm not sure how I missed this. 
Thanks for looking.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 25 Jan 2023 09:57:29 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" }, { "msg_contents": "On Wed, Jan 25, 2023 at 11:49:27AM -0500, Tom Lane wrote:\n> Right. I fixed some other infelicities and pushed it.\n\nThanks!\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 25 Jan 2023 09:57:46 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: wake up logical workers after ALTER SUBSCRIPTION" } ]
[ { "msg_contents": "Hi hackers,\n\nPlease find attached a patch proposal to $SUBJECT.\n\nThe idea has been proposed by Andres in [1] and can be seen as preparatory work for [1].\n\nThe patch introduces 2 new Macros, PGSTAT_DEFINE_REL_INT64_FIELD_ACCESSOR and PGSTAT_DEFINE_REL_TSTZ_FIELD_ACCESSOR.\n\nFor some functions (namely pg_stat_get_ins_since_vacuum(), pg_stat_get_dead_tuples(), pg_stat_get_mod_since_analyze(),\npg_stat_get_live_tuples(), pg_stat_get_last_autovacuum_time(), pg_stat_get_autovacuum_count(), pg_stat_get_last_vacuum_time(),\npg_stat_get_last_autoanalyze_time(), pg_stat_get_autoanalyze_count() and pg_stat_get_last_analyze_time()), I had to choose between renaming the function and the counter.\n\nI took the later option to avoid changing the linked views, tests....\n\nThis patch is also a step forward to \"cleaning\" the metrics/fields/functions naming (means having them match).\n\n[1]: https://www.postgresql.org/message-id/flat/f572abe7-a1bb-e13b-48c7-2ca150546822@gmail.com\n\nLooking forward to your feedback,\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 22 Nov 2022 08:09:22 +0100", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Generate pg_stat_get_* functions with Macros" }, { "msg_contents": "Overall, this change looks straightforward, and it saves a couple hundred\nlines.\n\nOn Tue, Nov 22, 2022 at 08:09:22AM +0100, Drouvot, Bertrand wrote:\n> +/* pg_stat_get_numscans */\n> +PGSTAT_DEFINE_REL_INT64_FIELD_ACCESSOR(pg_stat_get_, numscans);\n> +\n> +/* pg_stat_get_tuples_returned */\n> +PGSTAT_DEFINE_REL_INT64_FIELD_ACCESSOR(pg_stat_get_, tuples_returned);\n> +\n> +/* pg_stat_get_tuples_fetched */\n> +PGSTAT_DEFINE_REL_INT64_FIELD_ACCESSOR(pg_stat_get_, tuples_fetched);\n\nCan we hard-code the prefix in the macro? 
It looks like all of these use\nthe same one.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 2 Dec 2022 16:51:25 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Generate pg_stat_get_* functions with Macros" }, { "msg_contents": "Hi,\n\nOn 12/3/22 1:51 AM, Nathan Bossart wrote:\n> Overall, this change looks straightforward, and it saves a couple hundred\n> lines.\n> \n\nThanks for looking at it!\n\n> On Tue, Nov 22, 2022 at 08:09:22AM +0100, Drouvot, Bertrand wrote:\n>> +/* pg_stat_get_numscans */\n>> +PGSTAT_DEFINE_REL_INT64_FIELD_ACCESSOR(pg_stat_get_, numscans);\n>> +\n>> +/* pg_stat_get_tuples_returned */\n>> +PGSTAT_DEFINE_REL_INT64_FIELD_ACCESSOR(pg_stat_get_, tuples_returned);\n>> +\n>> +/* pg_stat_get_tuples_fetched */\n>> +PGSTAT_DEFINE_REL_INT64_FIELD_ACCESSOR(pg_stat_get_, tuples_fetched);\n> \n> Can we hard-code the prefix in the macro? It looks like all of these use\n> the same one.\n> \n\nGood point! Done in V2 attached.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sat, 3 Dec 2022 10:31:19 +0100", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Generate pg_stat_get_* functions with Macros" }, { "msg_contents": "On Sat, Dec 03, 2022 at 10:31:19AM +0100, Drouvot, Bertrand wrote:\n> On 12/3/22 1:51 AM, Nathan Bossart wrote:\n>> Can we hard-code the prefix in the macro? It looks like all of these use\n>> the same one.\n> \n> Good point! Done in V2 attached.\n\nThanks. I editorialized a bit in the attached v3. I'm not sure that my\nproposed names for the macros are actually an improvement. 
WDYT?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sat, 3 Dec 2022 12:16:17 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Generate pg_stat_get_* functions with Macros" }, { "msg_contents": "Hi,\n\nOn 12/3/22 9:16 PM, Nathan Bossart wrote:\n> On Sat, Dec 03, 2022 at 10:31:19AM +0100, Drouvot, Bertrand wrote:\n>> On 12/3/22 1:51 AM, Nathan Bossart wrote:\n>>> Can we hard-code the prefix in the macro? It looks like all of these use\n>>> the same one.\n>>\n>> Good point! Done in V2 attached.\n> \n> Thanks. I editorialized a bit in the attached v3. I'm not sure that my\n> proposed names for the macros are actually an improvement. WDYT?\n> \n\nThanks! I do prefer the macros definition ordering that you're proposing (that makes pgstatfuncs.c \"easier\" to read).\n\nAs far the names, I think it's better to replace \"TAB\" with \"REL\" (like in v4 attached): the reason is that those macros will be used in [1] for both tables and indexes stats (and so we'd have to replace \"TAB\" with \"REL\" in [1]).\nHaving \"REL\" already in place reduces the changes that will be needed in [1].\n\n[1]: https://www.postgresql.org/message-id/flat/f572abe7-a1bb-e13b-48c7-2ca150546822@gmail.com\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sun, 4 Dec 2022 06:07:37 +0100", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Generate pg_stat_get_* functions with Macros" }, { "msg_contents": "On Sun, Dec 04, 2022 at 06:07:37AM +0100, Drouvot, Bertrand wrote:\n> On 12/3/22 9:16 PM, Nathan Bossart wrote:\n>> Thanks. I editorialized a bit in the attached v3. I'm not sure that my\n>> proposed names for the macros are actually an improvement. WDYT?\n> \n> Thanks! 
I do prefer the macros definition ordering that you're proposing (that makes pgstatfuncs.c \"easier\" to read).\n> \n> As far the names, I think it's better to replace \"TAB\" with \"REL\" (like in v4 attached): the reason is that those macros will be used in [1] for both tables and indexes stats (and so we'd have to replace \"TAB\" with \"REL\" in [1]).\n> Having \"REL\" already in place reduces the changes that will be needed in [1].\n\nAlright. I marked this as ready-for-committer.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sun, 4 Dec 2022 09:32:07 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Generate pg_stat_get_* functions with Macros" }, { "msg_contents": "Hi,\n\nOn 12/4/22 6:32 PM, Nathan Bossart wrote:\n> On Sun, Dec 04, 2022 at 06:07:37AM +0100, Drouvot, Bertrand wrote:\n>> On 12/3/22 9:16 PM, Nathan Bossart wrote:\n>>> Thanks. I editorialized a bit in the attached v3. I'm not sure that my\n>>> proposed names for the macros are actually an improvement. WDYT?\n>>\n>> Thanks! I do prefer the macros definition ordering that you're proposing (that makes pgstatfuncs.c \"easier\" to read).\n>>\n>> As far the names, I think it's better to replace \"TAB\" with \"REL\" (like in v4 attached): the reason is that those macros will be used in [1] for both tables and indexes stats (and so we'd have to replace \"TAB\" with \"REL\" in [1]).\n>> Having \"REL\" already in place reduces the changes that will be needed in [1].\n> \n> Alright. 
I marked this as ready-for-committer.\n> \n\nThanks!\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 5 Dec 2022 08:27:15 +0100", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Generate pg_stat_get_* functions with Macros" }, { "msg_contents": "On Mon, Dec 05, 2022 at 08:27:15AM +0100, Drouvot, Bertrand wrote:\n> On 12/4/22 6:32 PM, Nathan Bossart wrote:\n>> Alright. I marked this as ready-for-committer.\n> \n> Thanks!\n\nWell, that's kind of nice:\n 5 files changed, 139 insertions(+), 396 deletions(-)\nAnd I like removing code, so..\n\nIn the same area, I am counting a total of 21 (?) pgstat routines for\ndatabases that rely on pgstat_fetch_stat_dbentry() while returning an\nint64. This would lead to more cleanup.\n--\nMichael", "msg_date": "Mon, 5 Dec 2022 16:44:13 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Generate pg_stat_get_* functions with Macros" }, { "msg_contents": "Hi,\n\nOn 12/5/22 8:44 AM, Michael Paquier wrote:\n> On Mon, Dec 05, 2022 at 08:27:15AM +0100, Drouvot, Bertrand wrote:\n>> On 12/4/22 6:32 PM, Nathan Bossart wrote:\n>>> Alright. I marked this as ready-for-committer.\n>>\n>> Thanks!\n> \n> Well, that's kind of nice:\n> 5 files changed, 139 insertions(+), 396 deletions(-)\n> And I like removing code, so..\n> \n\nThanks for looking at it!\n\n> In the same area, I am counting a total of 21 (?) pgstat routines for\n> databases that rely on pgstat_fetch_stat_dbentry() while returning an\n> int64. This would lead to more cleanup.\n> --\n\n\nYeah, good point, thanks!\n\nI'll look at the \"databases\" ones but I think in a separate patch. The reason is that the current one is preparatory work for [1].\nMeans, once the current patch is committed, working on [1] and \"cleaning\" the databases one could be done in parallel. 
Sounds good to you?\n\n\n[1]: https://commitfest.postgresql.org/41/3984/\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 5 Dec 2022 09:11:43 +0100", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Generate pg_stat_get_* functions with Macros" }, { "msg_contents": "On Mon, Dec 05, 2022 at 09:11:43AM +0100, Drouvot, Bertrand wrote:\n> Means, once the current patch is committed, working on [1] and\n> \"cleaning\" the databases one could be done in parallel. Sounds good\n> to you? \n\nDoing that in a separate patch is fine by me.\n--\nMichael", "msg_date": "Mon, 5 Dec 2022 17:16:56 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Generate pg_stat_get_* functions with Macros" }, { "msg_contents": "On Mon, Dec 05, 2022 at 05:16:56PM +0900, Michael Paquier wrote:\n> Doing that in a separate patch is fine by me.\n\nI have applied the patch for the tab entries, then could not resist\npoking at the parts for the db entries. This leads to more reduction\nthan the other one actually, as of:\n 4 files changed, 169 insertions(+), 447 deletions(-)\n\nLike the previous one, the functions have the same names and the field\nnames are updated to fit in the picture. Thoughts?\n--\nMichael", "msg_date": "Tue, 6 Dec 2022 11:45:10 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Generate pg_stat_get_* functions with Macros" }, { "msg_contents": "On Tue, Dec 06, 2022 at 11:45:10AM +0900, Michael Paquier wrote:\n> I have applied the patch for the tab entries, then could not resist\n> poking at the parts for the db entries. 
This leads to more reduction\n> than the other one actually, as of:\n> 4 files changed, 169 insertions(+), 447 deletions(-)\n> \n> Like the previous one, the functions have the same names and the field\n> names are updated to fit in the picture. Thoughts?\n\nI might alphabetize the functions, but otherwise it looks good to me.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 5 Dec 2022 19:54:45 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Generate pg_stat_get_* functions with Macros" }, { "msg_contents": "Hi,\n\nOn 12/6/22 3:45 AM, Michael Paquier wrote:\n> On Mon, Dec 05, 2022 at 05:16:56PM +0900, Michael Paquier wrote:\n>> Doing that in a separate patch is fine by me.\n> \n> I have applied the patch for the tab entries, then could not resist\n> poking at the parts for the db entries. This leads to more reduction\n> than the other one actually, as of:\n> 4 files changed, 169 insertions(+), 447 deletions(-)\n> \n> Like the previous one, the functions have the same names and the field\n> names are updated to fit in the picture. Thoughts?\n\nThanks! 
For this one (the INT64 case) the fields renaming are not strictly mandatory as we could add the \"n_\" in the macro itself, something like:\n\n+#define PG_STAT_GET_DBENTRY_INT64(stat) \\\n+Datum \\\n+CppConcat(pg_stat_get_db_,stat)(PG_FUNCTION_ARGS) \\\n+{ \\\n+ Oid dbid = PG_GETARG_OID(0); \\\n+ int64 result; \\\n+ PgStat_StatDBEntry *dbentry; \\\n+ \\\n+ if ((dbentry = pgstat_fetch_stat_dbentry(dbid)) == NULL) \\\n+ result = 0; \\\n+ else \\\n+ result = (int64) (dbentry->CppConcat(n_,stat)); \\\n+ \\\n+ PG_RETURN_INT64(result); \\\n+}\n\nFields renaming was mandatory in the previous ones as there was already a mix of with/without \"n_\" in the existing fields names.\n\nThat said, I think it's better to rename the fields as you did (to be \"consistent\" on the naming between relation/db stats), so the patch LGTM.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 6 Dec 2022 05:28:47 +0100", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Generate pg_stat_get_* functions with Macros" }, { "msg_contents": "Hi,\n\nOn 12/6/22 3:45 AM, Michael Paquier wrote:\n> On Mon, Dec 05, 2022 at 05:16:56PM +0900, Michael Paquier wrote:\n>> Doing that in a separate patch is fine by me.\n> \n> I have applied the patch for the tab entries,\n\nOops, I missed this part when reading the email the first time and just saw the patch has been committed.\n\nSo, thanks for having applied the patch!\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 6 Dec 2022 06:04:58 +0100", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Generate pg_stat_get_* functions with Macros" }, { "msg_contents": "On Tue, Dec 6, 2022 at 8:15 AM Michael Paquier 
<michael@paquier.xyz> wrote:\n>\n> On Mon, Dec 05, 2022 at 05:16:56PM +0900, Michael Paquier wrote:\n> > Doing that in a separate patch is fine by me.\n>\n> I have applied the patch for the tab entries, then could not resist\n> poking at the parts for the db entries. This leads to more reduction\n> than the other one actually, as of:\n> 4 files changed, 169 insertions(+), 447 deletions(-)\n>\n> Like the previous one, the functions have the same names and the field\n> names are updated to fit in the picture. Thoughts?\n\nLikewise, is there a plan to add function generation macros for\npg_stat_get_bgwriter, pg_stat_get_xact and so on?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 6 Dec 2022 12:23:55 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Generate pg_stat_get_* functions with Macros" }, { "msg_contents": "On Tue, Dec 06, 2022 at 12:23:55PM +0530, Bharath Rupireddy wrote:\n> Likewise, is there a plan to add function generation macros for\n> pg_stat_get_bgwriter, pg_stat_get_xact and so on?\n\nYes, I saw that and we could do it, but I did not get as much\nenthusiastic in terms of code reduction.\n--\nMichael", "msg_date": "Tue, 6 Dec 2022 17:01:22 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Generate pg_stat_get_* functions with Macros" }, { "msg_contents": "On Tue, Dec 06, 2022 at 05:28:47AM +0100, Drouvot, Bertrand wrote:\n> Fields renaming was mandatory in the previous ones as there was\n> already a mix of with/without \"n_\" in the existing fields names.\n> \n> That said, I think it's better to rename the fields as you did (to\n> be \"consistent\" on the naming between relation/db stats), so the\n> patch LGTM.\n\nYeah, PgStat_StatDBEntry is the last one using this style, so I have\nkept my change with the variables 
renamed rather than painting more\nCppConcat()s. The functions are still named the same as the original\nones.\n--\nMichael", "msg_date": "Wed, 7 Dec 2022 09:16:14 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Generate pg_stat_get_* functions with Macros" }, { "msg_contents": "This series of patches has caused buildfarm member wrasse to\nstart complaining about \"empty declarations\":\n\n wrasse | 2022-12-09 21:08:33 | \"/export/home/nm/farm/studio64v12_6/HEAD/pgsql.build/../pgsql/src/backend/utils/adt/pgstatfuncs.c\", line 56: warning: syntax error: empty declaration\n wrasse | 2022-12-09 21:08:33 | \"/export/home/nm/farm/studio64v12_6/HEAD/pgsql.build/../pgsql/src/backend/utils/adt/pgstatfuncs.c\", line 59: warning: syntax error: empty declaration\n wrasse | 2022-12-09 21:08:33 | \"/export/home/nm/farm/studio64v12_6/HEAD/pgsql.build/../pgsql/src/backend/utils/adt/pgstatfuncs.c\", line 62: warning: syntax error: empty declaration\n\n[ etc etc ]\n\nPresumably it could be silenced by removing the semicolons after\nthe new macro calls:\n\n/* pg_stat_get_analyze_count */\nPG_STAT_GET_RELENTRY_INT64(analyze_count);\n\n/* pg_stat_get_autoanalyze_count */\nPG_STAT_GET_RELENTRY_INT64(autoanalyze_count);\n\n/* pg_stat_get_autovacuum_count */\nPG_STAT_GET_RELENTRY_INT64(autovacuum_count);\n\nI wondered if that would confuse pgindent, but a quick check\nsays no. (The blank lines in between may be helping.)\n\nWhile I'm nitpicking, I think that the way you've set up the\nmacro definitions is a bit dangerous:\n\n#define PG_STAT_GET_RELENTRY_INT64(stat) \\\nDatum \\\nCppConcat(pg_stat_get_,stat)(PG_FUNCTION_ARGS) \\\n{ \\\n... \\\n PG_RETURN_INT64(result); \\\n} \\\n\nThe backslash after the last right brace means that the line\nfollowing that is part of the macro body. This does no harm as\nlong as said line is blank ... 
but I think it's a foot-gun\nwaiting to bite somebody, because visually you'd think the macro\nends with the brace. So I'd leave off that last backslash.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 09 Dec 2022 21:43:56 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Generate pg_stat_get_* functions with Macros" }, { "msg_contents": "On Fri, Dec 09, 2022 at 09:43:56PM -0500, Tom Lane wrote:\n> Presumably it could be silenced by removing the semicolons after\n> the new macro calls:\n> \n> /* pg_stat_get_analyze_count */\n> PG_STAT_GET_RELENTRY_INT64(analyze_count);\n> \n> /* pg_stat_get_autoanalyze_count */\n> PG_STAT_GET_RELENTRY_INT64(autoanalyze_count);\n> \n> /* pg_stat_get_autovacuum_count */\n> PG_STAT_GET_RELENTRY_INT64(autovacuum_count);\n> \n> I wondered if that would confuse pgindent, but a quick check\n> says no. (The blank lines in between may be helping.)\n\nIndeed. Will fix.\n\n> The backslash after the last right brace means that the line\n> following that is part of the macro body. This does no harm as\n> long as said line is blank ... but I think it's a foot-gun\n> waiting to bite somebody, because visually you'd think the macro\n> ends with the brace. So I'd leave off that last backslash.\n\nWill address this one as well for all the macro definitions. Thanks!\n--\nMichael", "msg_date": "Sat, 10 Dec 2022 12:52:16 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Generate pg_stat_get_* functions with Macros" }, { "msg_contents": "On Fri, Dec 09, 2022 at 09:43:56PM -0500, Tom Lane wrote:\n> Presumably it could be silenced by removing the semicolons after\n> the new macro calls:\n\n> The backslash after the last right brace means that the line\n> following that is part of the macro body. This does no harm as\n> long as said line is blank ... 
but I think it's a foot-gun\n> waiting to bite somebody, because visually you'd think the macro\n> ends with the brace. So I'd leave off that last backslash.\n\nIndeed. Patch attached.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 9 Dec 2022 19:55:45 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Generate pg_stat_get_* functions with Macros" }, { "msg_contents": "On Fri, Dec 09, 2022 at 07:55:45PM -0800, Nathan Bossart wrote:\n> Indeed. Patch attached.\n\nYep, thanks. I have exactly the same thing brewing in one of my\nstaging branches.\n--\nMichael", "msg_date": "Sat, 10 Dec 2022 13:07:29 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Generate pg_stat_get_* functions with Macros" }, { "msg_contents": "Hi,\n\nOn 12/10/22 4:55 AM, Nathan Bossart wrote:\n> On Fri, Dec 09, 2022 at 09:43:56PM -0500, Tom Lane wrote:\n>> Presumably it could be silenced by removing the semicolons after\n>> the new macro calls:\n> \n>> The backslash after the last right brace means that the line\n>> following that is part of the macro body. This does no harm as\n>> long as said line is blank ... but I think it's a foot-gun\n>> waiting to bite somebody, because visually you'd think the macro\n>> ends with the brace. So I'd leave off that last backslash.\n> \n> Indeed. Patch attached.\n> \n\nOh right. Thanks Tom for the explanations and Nathan/Michael for the fix.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sat, 10 Dec 2022 09:06:06 +0100", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Generate pg_stat_get_* functions with Macros" } ]
[ { "msg_contents": "Hello.\n\nI noticed that $SUBJECT. \"spurious\" here means locks on rows\nthat are seemingly not qualified by the query condition (that is, EPQ\nfailure).\n\nIt doesn't seem to be a bug to me (or it\nseems just inevitable). But that doesn't seem to be described either\nin the doc. If I'm right here, don't we need something like this?\n\n\ndiff --git a/doc/src/sgml/ref/select.sgml b/doc/src/sgml/ref/select.sgml\nindex 1f9538f2fe..97253dedc6 100644\n--- a/doc/src/sgml/ref/select.sgml\n+++ b/doc/src/sgml/ref/select.sgml\n@@ -200,6 +200,9 @@ TABLE [ ONLY ] <replaceable class=\"parameter\">table_name</replaceable> [ * ]\n <command>SELECT</command> statement locks the selected rows\n against concurrent updates. (See <xref linkend=\"sql-for-update-share\"/>\n below.)\n+ Note that concurrent updates may cause some unmatched rows to be locked.\n+ SELECT statements with SKIP LOCKED may miss some rows that are not\n+ returned by concurrent FOR UPDATE queries.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 22 Nov 2022 16:23:23 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "FOR UPDATE may leave spurious locks" } ]
[ { "msg_contents": "Hi hackers,\n\nWhen running tpcc on sysbench with high concurrency (96 threads, scale factor 5) we realized that a fix for visibility check (introduced in PG-14.5) causes sysbench to fail in 1 out of 70 runs.\nThe error is the following:\n\nSQL error, errno = 0, state = 'XX000': new multixact has more than one updating member\n\nAnd it is caused by the following statement:\n\nUPDATE warehouse1\n SET w_ytd = w_ytd + 234\n WHERE w_id = 3;\n\nThe commit that fixes the visibility check is the following:\nhttps://github.com/postgres/postgres/commit/e24615a0057a9932904317576cf5c4d42349b363\n\nWe reverted this commit and tpcc does not fail anymore, proving that this change is problematic.\nSteps to reproduce:\n1. Install sysbench\n https://github.com/akopytov/sysbench\n2. Install percona sysbench TPCC\n https://github.com/Percona-Lab/sysbench-tpcc\n3. Run percona sysbench -- prepare\n # sysbench-tpcc/tpcc.lua --pgsql-host=localhost --pgsql-port=5432 --pgsql-user={USER} --pgsql-password={PASSWORD} --pgsql-db=test_database --db-driver=pgsql --tables=1 --threads=96 --scale=5 --time=60 prepare\n4. Run percona sysbench -- run\n # sysbench-tpcc/tpcc.lua --pgsql-host=localhost --pgsql-port=5432 --pgsql-user={USER} --pgsql-password={PASSWORD} --pgsql-db=test_database --db-driver=pgsql --tables=1 --report-interval=1 --rand-seed=1 --threads=96 --scale=5 --time=60 run\n\nWe tested on a machine with 2 NUMA nodes, 16 physical cores per node, and 2 threads per core, resulting in 64 threads total. The total memory is 376GB.\nAttached please find the configuration file we used (postgresql.conf).\n\nThis commit was supposed to fix a race condition during the visibility check. 
Please let us know whether you are aware of this issue and if there is a quick fix.\nAny input is highly appreciated.\n\nThanks,\nDimos\n[ServiceNow]", "msg_date": "Tue, 22 Nov 2022 11:38:14 +0000", "msg_from": "Dimos Stamatakis <dimos.stamatakis@servicenow.com>", "msg_from_op": true, "msg_subject": "Fix for visibility check on 14.5 fails on tpcc with high concurrency" }, { "msg_contents": "Hello Dimos\n\nOn 2022-Nov-22, Dimos Stamatakis wrote:\n\n> When running tpcc on sysbench with high concurrency (96 threads, scale\n> factor 5) we realized that a fix for visibility check (introduced in\n> PG-14.5) causes sysbench to fail in 1 out of 70 runs.\n> The error is the following:\n> \n> SQL error, errno = 0, state = 'XX000': new multixact has more than one updating member\n\nOuch.\n\nI did not remember any reports of this. Searching I found this recent\none:\nhttps://postgr.es/m/17518-04e368df5ad7f2ee@postgresql.org\n\nHowever, the reporter there says they're using 12.10, and according to\nsrc/tools/git_changelog the commit appeared only in 12.12:\n\nAuthor: Heikki Linnakangas <heikki.linnakangas@iki.fi>\nBranch: master Release: REL_15_BR [adf6d5dfb] 2022-06-27 08:21:08 +0300\nBranch: REL_14_STABLE Release: REL_14_5 [e24615a00] 2022-06-27 08:24:30 +0300\nBranch: REL_13_STABLE Release: REL_13_8 [7ba325fd7] 2022-06-27 08:24:35 +0300\nBranch: REL_12_STABLE Release: REL_12_12 [af530898e] 2022-06-27 08:24:36 +0300\nBranch: REL_11_STABLE Release: REL_11_17 [b49889f3c] 2022-06-27 08:24:37 +0300\nBranch: REL_10_STABLE Release: REL_10_22 [4822b4627] 2022-06-27 08:24:38 +0300\n\n Fix visibility check when XID is committed in CLOG but not in procarray.\n [...]\n\n\nThinking further, one problem in tracking this down is that at this\npoint the multixact in question is *being created*, so we don't have a\nWAL trail we could trace through.\n\nI suggest that we could improve that elog() so that it includes the\nmembers of the multixact in question, which could help us 
better\nunderstand what is going on.\n\n> This commit was supposed to fix a race condition during the visibility\n> check. Please let us know whether you are aware of this issue and if\n> there is a quick fix.\n\nI don't think so.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 23 Nov 2022 10:18:13 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Fix for visibility check on 14.5 fails on tpcc with high\n concurrency" }, { "msg_contents": "On 2022-Nov-23, Alvaro Herrera wrote:\n\n> I suggest that we could improve that elog() so that it includes the\n> members of the multixact in question, which could help us better\n> understand what is going on.\n\nSomething like the attached. It would result in output like this:\nWARNING: new multixact has more than one updating member: 0 2[17378 (keysh), 17381 (nokeyupd)]\n\nThen it should be possible to trace (in pg_waldump output) the\noperations of each of the transactions that have any status in the\nmultixact that includes some form of \"upd\".\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Just treat us the way you want to be treated + some extra allowance\n for ignorance.\" (Michael Brusser)", "msg_date": "Wed, 23 Nov 2022 11:53:51 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Fix for visibility check on 14.5 fails on tpcc with high\n concurrency" }, { "msg_contents": "On Wed, Nov 23, 2022 at 2:54 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> Something like the attached. 
It would result in output like this:\n> WARNING: new multixact has more than one updating member: 0 2[17378 (keysh), 17381 (nokeyupd)]\n>\n> Then it should be possible to trace (in pg_waldump output) the\n> operations of each of the transactions that have any status in the\n> multixact that includes some form of \"upd\".\n\nThat seems very useful.\n\nSeparately, I wonder if it would make sense to add additional\ndefensive checks to FreezeMultiXactId() for this. There is an\nassertion that should catch the presence of multiple updaters in a\nsingle Multi when it looks like we have to generate a new Multi to\ncarry the XID members forward (typically something we only need to do\nduring a VACUUM FREEZE). We could at least make that\n\"Assert(!TransactionIdIsValid(update_xid));\" line into a defensive\n\"can't happen\" ereport(). It couldn't hurt, at least -- we already\nhave a similar relfrozenxid check nearby, added after the \"freeze the\ndead\" bug was fixed.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 23 Nov 2022 08:14:08 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Fix for visibility check on 14.5 fails on tpcc with high\n concurrency" }, { "msg_contents": "On 2022-Nov-23, Peter Geoghegan wrote:\n\n> On Wed, Nov 23, 2022 at 2:54 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > Something like the attached. It would result in output like this:\n> > WARNING: new multixact has more than one updating member: 0 2[17378 (keysh), 17381 (nokeyupd)]\n> >\n> > Then it should be possible to trace (in pg_waldump output) the\n> > operations of each of the transactions that have any status in the\n> > multixact that includes some form of \"upd\".\n> \n> That seems very useful.\n\nOkay, pushed to all branches.\n\n> Separately, I wonder if it would make sense to add additional\n> defensive checks to FreezeMultiXactId() for this. 
There is an\n> assertion that should catch the presence of multiple updaters in a\n> single Multi when it looks like we have to generate a new Multi to\n> carry the XID members forward (typically something we only need to do\n> during a VACUUM FREEZE). We could at least make that\n> \"Assert(!TransactionIdIsValid(update_xid));\" line into a defensive\n> \"can't happen\" ereport(). It couldn't hurt, at least -- we already\n> have a similar relfrozenxid check nearby, added after the \"freeze the\n> dead\" bug was fixed.\n\nHmm, agreed. I'll see about that separately.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"I'm always right, but sometimes I'm more right than other times.\"\n (Linus Torvalds)\n\n\n", "msg_date": "Thu, 24 Nov 2022 10:49:48 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Fix for visibility check on 14.5 fails on tpcc with high\n concurrency" }, { "msg_contents": "Thanks for your feedback!\nI applied the patch to print more information about the error. 
Here’s what I got:\n\n2022-11-23 20:33:03 UTC [638 test_database]: [5458] ERROR: new multixact has more than one updating member: 0 2[248477 (nokeyupd), 248645 (nokeyupd)]\n2022-11-23 20:33:03 UTC [638 test_database]: [5459] STATEMENT: UPDATE warehouse1\n SET w_ytd = w_ytd + 498\n WHERE w_id = 5\n\nI then inspected the WAL and I found the log records for these 2 transactions:\n\n…\nrmgr: MultiXact len (rec/tot): 54/ 54, tx: 248477, lsn: 0/66DB82A8, prev 0/66DB8260, desc: CREATE_ID 133 offset 265 nmembers 2: 248477 (nokeyupd) 248500 (keysh)\nrmgr: Heap len (rec/tot): 70/ 70, tx: 248477, lsn: 0/66DB82E0, prev 0/66DB82A8, desc: HOT_UPDATE off 20 xmax 133 flags 0x20 IS_MULTI EXCL_LOCK ; new off 59 xmax 132, blkref #0: rel 1663/16384/16385 blk 422\nrmgr: Transaction len (rec/tot): 34/ 34, tx: 248477, lsn: 0/66DBA710, prev 0/66DBA6D0, desc: ABORT 2022-11-23 20:33:03.712298 UTC\n…\nrmgr: Transaction len (rec/tot): 34/ 34, tx: 248645, lsn: 0/66DBB060, prev 0/66DBB020, desc: ABORT 2022-11-23 20:33:03.712388 UTC\n\nAttached please find the relevant portion of the WAL.\nThanks for your help on this!\n\nDimos\n[ServiceNow]\n\n\nFrom: Alvaro Herrera <alvherre@alvh.no-ip.org>\nDate: Thursday, 24. November 2022 at 10:49\nTo: Peter Geoghegan <pg@bowt.ie>\nCc: Dimos Stamatakis <dimos.stamatakis@servicenow.com>, pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>\nSubject: Re: Fix for visibility check on 14.5 fails on tpcc with high concurrency\n[External Email]\n\nOn 2022-Nov-23, Peter Geoghegan wrote:\n\n> On Wed, Nov 23, 2022 at 2:54 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > Something like the attached. 
It would result in output like this:\n> > WARNING: new multixact has more than one updating member: 0 2[17378 (keysh), 17381 (nokeyupd)]\n> >\n> > Then it should be possible to trace (in pg_waldump output) the\n> > operations of each of the transactions that have any status in the\n> > multixact that includes some form of \"upd\".\n>\n> That seems very useful.\n\nOkay, pushed to all branches.\n\n> Separately, I wonder if it would make sense to add additional\n> defensive checks to FreezeMultiXactId() for this. There is an\n> assertion that should catch the presence of multiple updaters in a\n> single Multi when it looks like we have to generate a new Multi to\n> carry the XID members forward (typically something we only need to do\n> during a VACUUM FREEZE). We could at least make that\n> \"Assert(!TransactionIdIsValid(update_xid));\" line into a defensive\n> \"can't happen\" ereport(). It couldn't hurt, at least -- we already\n> have a similar relfrozenxid check nearby, added after the \"freeze the\n> dead\" bug was fixed.\n\nHmm, agreed. I'll see about that separately.\n\n--\nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/<https://www.EnterpriseDB.com>\n\"I'm always right, but sometimes I'm more right than other times.\"\n(Linus Torvalds)", "msg_date": "Thu, 24 Nov 2022 14:23:33 +0000", "msg_from": "Dimos Stamatakis <dimos.stamatakis@servicenow.com>", "msg_from_op": true, "msg_subject": "Re: Fix for visibility check on 14.5 fails on tpcc with high\n concurrency" }, { "msg_contents": "On 2022-Nov-24, Dimos Stamatakis wrote:\n\n> Thanks for your feedback!\n> I applied the patch to print more information about the error. 
Here’s what I got:\n> \n> 2022-11-23 20:33:03 UTC [638 test_database]: [5458] ERROR: new multixact has more than one updating member: 0 2[248477 (nokeyupd), 248645 (nokeyupd)]\n> 2022-11-23 20:33:03 UTC [638 test_database]: [5459] STATEMENT: UPDATE warehouse1\n> SET w_ytd = w_ytd + 498\n> WHERE w_id = 5\n> \n> I then inspected the WAL and I found the log records for these 2 transactions:\n> \n> …\n> rmgr: MultiXact len (rec/tot): 54/ 54, tx: 248477, lsn: 0/66DB82A8, prev 0/66DB8260, desc: CREATE_ID 133 offset 265 nmembers 2: 248477 (nokeyupd) 248500 (keysh)\n> rmgr: Heap len (rec/tot): 70/ 70, tx: 248477, lsn: 0/66DB82E0, prev 0/66DB82A8, desc: HOT_UPDATE off 20 xmax 133 flags 0x20 IS_MULTI EXCL_LOCK ; new off 59 xmax 132, blkref #0: rel 1663/16384/16385 blk 422\n> rmgr: Transaction len (rec/tot): 34/ 34, tx: 248477, lsn: 0/66DBA710, prev 0/66DBA6D0, desc: ABORT 2022-11-23 20:33:03.712298 UTC\n> …\n> rmgr: Transaction len (rec/tot): 34/ 34, tx: 248645, lsn: 0/66DBB060, prev 0/66DBB020, desc: ABORT 2022-11-23 20:33:03.712388 UTC\n\nAh, it seems clear enough: the transaction that aborted after having\nupdated the tuple, is still considered live when doing the second\nupdate. 
That sounds wrong.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Puedes vivir sólo una vez, pero si lo haces bien, una vez es suficiente\"\n\n\n", "msg_date": "Thu, 24 Nov 2022 18:34:49 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Fix for visibility check on 14.5 fails on tpcc with high\n concurrency" }, { "msg_contents": "On 2022-Nov-24, Alvaro Herrera wrote:\n\n> On 2022-Nov-24, Dimos Stamatakis wrote:\n> \n> > rmgr: MultiXact len (rec/tot): 54/ 54, tx: 248477, lsn: 0/66DB82A8, prev 0/66DB8260, desc: CREATE_ID 133 offset 265 nmembers 2: 248477 (nokeyupd) 248500 (keysh)\n> > rmgr: Heap len (rec/tot): 70/ 70, tx: 248477, lsn: 0/66DB82E0, prev 0/66DB82A8, desc: HOT_UPDATE off 20 xmax 133 flags 0x20 IS_MULTI EXCL_LOCK ; new off 59 xmax 132, blkref #0: rel 1663/16384/16385 blk 422\n> > rmgr: Transaction len (rec/tot): 34/ 34, tx: 248477, lsn: 0/66DBA710, prev 0/66DBA6D0, desc: ABORT 2022-11-23 20:33:03.712298 UTC\n> > …\n> > rmgr: Transaction len (rec/tot): 34/ 34, tx: 248645, lsn: 0/66DBB060, prev 0/66DBB020, desc: ABORT 2022-11-23 20:33:03.712388 UTC\n> \n> Ah, it seems clear enough: the transaction that aborted after having\n> updated the tuple, is still considered live when doing the second\n> update. That sounds wrong.\n\nHmm, if a transaction is aborted but its Xid is after latestCompletedXid,\nit would be reported as still running. AFAICS that is only modified\nwith ProcArrayLock held in exclusive mode, and only read with it held in\nshare mode, so this should be safe.\n\nI can see nothing else ATM.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"I must say, I am absolutely impressed with what pgsql's implementation of\nVALUES allows me to do. It's kind of ridiculous how much \"work\" goes away in\nmy code. 
Too bad I can't do this at work (Oracle 8/9).\" (Tom Allison)\n http://archives.postgresql.org/pgsql-general/2007-06/msg00016.php\n\n\n", "msg_date": "Thu, 24 Nov 2022 19:23:59 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Fix for visibility check on 14.5 fails on tpcc with high\n concurrency" }, { "msg_contents": "So does this mean there is no race condition in this case and that this error is redundant?\n\nThanks,\nDimos\n[ServiceNow]\n\n\nFrom: Alvaro Herrera <alvherre@alvh.no-ip.org>\nDate: Thursday, 24. November 2022 at 19:24\nTo: Dimos Stamatakis <dimos.stamatakis@servicenow.com>\nCc: Peter Geoghegan <pg@bowt.ie>, simon.riggs@enterprisedb.com <simon.riggs@enterprisedb.com>, pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>\nSubject: Re: Fix for visibility check on 14.5 fails on tpcc with high concurrency\n[External Email]\n\nOn 2022-Nov-24, Alvaro Herrera wrote:\n\n> On 2022-Nov-24, Dimos Stamatakis wrote:\n>\n> > rmgr: MultiXact len (rec/tot): 54/ 54, tx: 248477, lsn: 0/66DB82A8, prev 0/66DB8260, desc: CREATE_ID 133 offset 265 nmembers 2: 248477 (nokeyupd) 248500 (keysh)\n> > rmgr: Heap len (rec/tot): 70/ 70, tx: 248477, lsn: 0/66DB82E0, prev 0/66DB82A8, desc: HOT_UPDATE off 20 xmax 133 flags 0x20 IS_MULTI EXCL_LOCK ; new off 59 xmax 132, blkref #0: rel 1663/16384/16385 blk 422\n> > rmgr: Transaction len (rec/tot): 34/ 34, tx: 248477, lsn: 0/66DBA710, prev 0/66DBA6D0, desc: ABORT 2022-11-23 20:33:03.712298 UTC\n> > …\n> > rmgr: Transaction len (rec/tot): 34/ 34, tx: 248645, lsn: 0/66DBB060, prev 0/66DBB020, desc: ABORT 2022-11-23 20:33:03.712388 UTC\n>\n> Ah, it seems clear enough: the transaction that aborted after having\n> updated the tuple, is still considered live when doing the second\n> update. That sounds wrong.\n\nHmm, if a transaction is aborted but its Xid is after latestCompletedXid,\nit would be reported as still running. 
AFAICS that is only modified\nwith ProcArrayLock held in exclusive mode, and only read with it held in\nshare mode, so this should be safe.\n\nI can see nothing else ATM.\n\n--\nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/<https://www.EnterpriseDB.com>\n\"I must say, I am absolutely impressed with what pgsql's implementation of\nVALUES allows me to do. It's kind of ridiculous how much \"work\" goes away in\nmy code. Too bad I can't do this at work (Oracle 8/9).\" (Tom Allison)\nhttp://archives.postgresql.org/pgsql-general/2007-06/msg00016.php<http://archives.postgresql.org/pgsql-general/2007-06/msg00016.php>", "msg_date": "Fri, 25 Nov 2022 15:46:08 +0000", "msg_from": "Dimos Stamatakis <dimos.stamatakis@servicenow.com>", "msg_from_op": true, "msg_subject": "Re: Fix for visibility check on 14.5 fails on tpcc with high\n concurrency" }, { "msg_contents": "On 2022-Nov-25, Dimos Stamatakis wrote:\n\n> So does this mean there is no race condition in this case and that\n> this error is redundant?\n\nNo, it means I believe a bug
exists but that I haven't spent enough time\r\non it to understand what it is.\r\n\r\n\r\n\r\nGreat! Please keep me posted and let me know if you need any more evidence to debug. 😊\r\n\r\nThanks,\r\nDimos\r\n\n\n\n\n\n\n\n\n\n[External Email]\n\r\nOn 2022-Nov-25, Dimos Stamatakis wrote:\n\r\n> So does this mean there is no race condition in this case and that\r\n> this error is redundant?\n\r\nNo, it means I believe a bug exists but that I haven't spent enough time\r\non it to understand what it is.\n \n \n \nGreat! Please keep me posted and let me know if you need any more evidence to debug.\r\n😊\n \nThanks,\nDimos", "msg_date": "Tue, 29 Nov 2022 18:21:54 +0000", "msg_from": "Dimos Stamatakis <dimos.stamatakis@servicenow.com>", "msg_from_op": true, "msg_subject": "Re: Fix for visibility check on 14.5 fails on tpcc with high\n concurrency" }, { "msg_contents": "Hi hackers,\n\nI was wondering whether there are any updates on the bug in visibility check introduced in version 14.5.\n\nMany thanks,\nDimos\n[ServiceNow]\n\n\n\n\n\n\n\n\n\nHi hackers,\n \nI was wondering whether there are any updates on the bug in visibility check introduced in version 14.5.\n \nMany thanks,\nDimos\n[ServiceNow]", "msg_date": "Wed, 26 Apr 2023 15:24:37 +0000", "msg_from": "Dimos Stamatakis <dimos.stamatakis@servicenow.com>", "msg_from_op": true, "msg_subject": "Re: Fix for visibility check on 14.5 fails on tpcc with high\n concurrency" } ]
[ { "msg_contents": "Hi,\n\nMy buildfarm animal grassquit just showed an odd failure [1] in REL_11_STABLE:\n\nok 10 - standby is in recovery\n# Running: pg_ctl -D /mnt/resource/bf/build/grassquit/REL_11_STABLE/pgsql.build/src/bin/pg_ctl/tmp_check/t_003_promote_standby2_data/pgdata promote\nwaiting for server to promote....pg_ctl: control file appears to be corrupt\nnot ok 11 - pg_ctl promote of standby runs\n\n# Failed test 'pg_ctl promote of standby runs'\n# at /mnt/resource/bf/build/grassquit/REL_11_STABLE/pgsql.build/../pgsql/src/test/perl/TestLib.pm line 474.\n\n\nI didn't find other references to this kind of failure. Nor has the error\nre-occurred on grassquit.\n\n\nI don't immediately see a way for this message to be hit that's not indicating\na bug somewhere. We should be updating the control file in an atomic way and\nread it in an atomic way.\n\n\nThe failure has to be happening in wait_for_postmaster_promote(), because the\nstandby2 is actually successfully promoted.\n\nGreetings,\n\nAndres Freund\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grassquit&dt=2022-11-22%2016%3A33%3A57\n\n\n", "msg_date": "Tue, 22 Nov 2022 17:42:24 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "odd buildfarm failure - \"pg_ctl: control file appears to be corrupt\"" }, { "msg_contents": "On Tue, Nov 22, 2022 at 05:42:24PM -0800, Andres Freund wrote:\n> The failure has to be happening in wait_for_postmaster_promote(), because the\n> standby2 is actually successfully promoted.\n\nThat's the one under -fsanitize=address. 
It really smells to me like\na bug with a race condition all over it.\n--\nMichael", "msg_date": "Wed, 23 Nov 2022 15:12:41 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: odd buildfarm failure - \"pg_ctl: control file appears to be\n corrupt\"" }, { "msg_contents": "On 2022-Nov-22, Andres Freund wrote:\n\n> ok 10 - standby is in recovery\n> # Running: pg_ctl -D /mnt/resource/bf/build/grassquit/REL_11_STABLE/pgsql.build/src/bin/pg_ctl/tmp_check/t_003_promote_standby2_data/pgdata promote\n> waiting for server to promote....pg_ctl: control file appears to be corrupt\n> not ok 11 - pg_ctl promote of standby runs\n> \n> # Failed test 'pg_ctl promote of standby runs'\n> # at /mnt/resource/bf/build/grassquit/REL_11_STABLE/pgsql.build/../pgsql/src/test/perl/TestLib.pm line 474.\n\nThis triggered me on this proposal I saw yesterday\nhttps://postgr.es/m/02fe0063-bf77-90d0-3cf5-e9fe7c2a487b@postgrespro.ru\nI think trying to store more stuff in pg_control is dangerous and we\nshould resist it.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 23 Nov 2022 10:02:04 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: odd buildfarm failure - \"pg_ctl: control file appears to be\n corrupt\"" }, { "msg_contents": "On Wed, Nov 23, 2022 at 2:42 PM Andres Freund <andres@anarazel.de> wrote:\n> The failure has to be happening in wait_for_postmaster_promote(), because the\n> standby2 is actually successfully promoted.\n\nI assume this is ext4. Presumably anything that reads the\ncontrolfile, like pg_ctl, pg_checksums, pg_resetwal,\npg_control_system(), ... by reading without interlocking against\nwrites could see garbage. 
I have lost track of the versions and the\nthread, but I worked out at some point by experimentation that this\nonly started relatively recently for concurrent read() and write(),\nbut always happened with concurrent pread() and pwrite(). The control\nfile uses the non-p variants which didn't mash old/new data like\ngrated cheese under concurrency due to some implementation detail, but\nnow does.\n\n\n", "msg_date": "Wed, 23 Nov 2022 23:03:45 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: odd buildfarm failure - \"pg_ctl: control file appears to be\n corrupt\"" }, { "msg_contents": "On Wed, Nov 23, 2022 at 11:03 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Wed, Nov 23, 2022 at 2:42 PM Andres Freund <andres@anarazel.de> wrote:\n> > The failure has to be happening in wait_for_postmaster_promote(), because the\n> > standby2 is actually successfully promoted.\n>\n> I assume this is ext4. Presumably anything that reads the\n> controlfile, like pg_ctl, pg_checksums, pg_resetwal,\n> pg_control_system(), ... by reading without interlocking against\n> writes could see garbage. I have lost track of the versions and the\n> thread, but I worked out at some point by experimentation that this\n> only started relatively recently for concurrent read() and write(),\n> but always happened with concurrent pread() and pwrite(). The control\n> file uses the non-p variants which didn't mash old/new data like\n> grated cheese under concurrency due to some implementation detail, but\n> now does.\n\nAs for what to do about it, some ideas:\n\n1. Use advisory range locking. (This would be an advisory version of\nwhat many other filesystems do automatically, AFAIK. Does Windows\nhave a thing like POSIX file locking, or need it here?)\n2. Retry after a short time on checksum failure. The probability is\nalready miniscule, and becomes pretty close to 0 if we read thrice\n100ms apart.\n3. 
Some scheme that involves renaming the file into place. (That\nmight be a pain on Windows; it only works for the relmap thing because\nall readers and writers are in the backend and use an LWLock to avoid\nsilly handle semantics.)\n4. ???\n\nFirst thought is that 2 is appropriate level of complexity for this\nrare and stupid problem.\n\n\n", "msg_date": "Thu, 24 Nov 2022 10:59:26 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: odd buildfarm failure - \"pg_ctl: control file appears to be\n corrupt\"" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Wed, Nov 23, 2022 at 11:03 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> I assume this is ext4. Presumably anything that reads the\n>> controlfile, like pg_ctl, pg_checksums, pg_resetwal,\n>> pg_control_system(), ... by reading without interlocking against\n>> writes could see garbage. I have lost track of the versions and the\n>> thread, but I worked out at some point by experimentation that this\n>> only started relatively recently for concurrent read() and write(),\n>> but always happened with concurrent pread() and pwrite(). The control\n>> file uses the non-p variants which didn't mash old/new data like\n>> grated cheese under concurrency due to some implementation detail, but\n>> now does.\n\nUgh.\n\n> As for what to do about it, some ideas:\n> 2. Retry after a short time on checksum failure. The probability is\n> already miniscule, and becomes pretty close to 0 if we read thrice\n> 100ms apart.\n\n> First thought is that 2 is appropriate level of complexity for this\n> rare and stupid problem.\n\nYeah, I was thinking the same. 
A variant could be \"repeat until\nwe see the same calculated checksum twice\".\n\n\t\t\tregards, tom lane\n\n\n\n\n", "msg_date": "Wed, 23 Nov 2022 17:05:03 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: odd buildfarm failure - \"pg_ctl: control file appears to be\n corrupt\"" }, { "msg_contents": "On Thu, Nov 24, 2022 at 11:05 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > On Wed, Nov 23, 2022 at 11:03 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > As for what to do about it, some ideas:\n> > 2. Retry after a short time on checksum failure. The probability is\n> > already miniscule, and becomes pretty close to 0 if we read thrice\n> > 100ms apart.\n>\n> > First thought is that 2 is appropriate level of complexity for this\n> > rare and stupid problem.\n>\n> Yeah, I was thinking the same. A variant could be \"repeat until\n> we see the same calculated checksum twice\".\n\nHmm. While writing a comment to explain why that's good enough, I\nrealised it's not really true for a standby that control file writes\nare always expected to be far apart in time. XLogFlush->\nUpdateMinRecoveryPoint() could coincide badly with our N attempts for\nany small N and for any nap time, which I think makes your idea better\nthan mine.\n\nWith some cartoon-level understanding of what's going on (to wit: I\nthink the kernel just pins the page but doesn't use a page-level\ncontent lock or range lock, so what you're seeing is raw racing memcpy\ncalls and unsynchronised cache line effects), I guess you'd be fairly\nlikely to make \"progress\" in seeing more new data even if you didn't\nsleep in between, but who knows. So I have a 10ms sleep to make\nprogress very likely; given your algorithm it doesn't matter if you\ndidn't make all the progress, just some. 
Since this is reachable from\nSQL, I think we also need a CFI call so you can't get uninterruptibly\nstuck here?\n\nI wrote a stupid throw-away function to force a write.  If you have an\next4 system to hand (xfs, zfs, apfs, ufs, others don't suffer from\nthis) you can do:\n\n do $$ begin for i in 1..100000000 loop perform\npg_update_control_file(); end loop; end; $$;\n\n... while you also do:\n\n select pg_control_system();\n \\watch 0.001\n\n... and you'll soon see:\n\nERROR:  calculated CRC checksum does not match value stored in file\n\nThe attached draft patch fixes it.", "msg_date": "Thu, 24 Nov 2022 14:02:54 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: odd buildfarm failure - \"pg_ctl: control file appears to be\n corrupt\"" }, { "msg_contents": "On Thu, Nov 24, 2022 at 2:02 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> ... and you'll soon see:\n>\n> ERROR:  calculated CRC checksum does not match value stored in file\n\nI forgot to mention: this reproducer only seems to work if fsync =\noff. 
I don't know why, but I recall that was true also for bug\n#17064.\n\n\n", "msg_date": "Thu, 24 Nov 2022 14:59:04 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: odd buildfarm failure - \"pg_ctl: control file appears to be\n corrupt\"" }, { "msg_contents": "Hello!\n\nOn 24.11.2022 04:02, Thomas Munro wrote:\n> On Thu, Nov 24, 2022 at 11:05 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Thomas Munro <thomas.munro@gmail.com> writes:\n> \n> ERROR:  calculated CRC checksum does not match value stored in file\n> \n> The attached draft patch fixes it.\n\nTried to catch this error on my PC, but failed to do it within a reasonable time.\nThe 1s interval is probably too long for me.\nIt seems there is a more reliable way to reproduce this bug with the 0001 patch applied:\nAt the first backend:\n\ndo $$ begin loop perform pg_update_control_file(); end loop; end; $$;\n\nAt the second one:\n\ndo $$ begin loop perform pg_control_system(); end loop; end; $$;\n\nIt will fail almost immediately with:\n\"ERROR:  calculated CRC checksum does not match value stored in file\"\nboth with fsync = off and fsync = on.\nChecked it out for master and REL_11_STABLE.\n\nAlso checked for a few hours that the patch 0002 fixes this error,\nbut there are some questions to its logical structure.\nThe equality between the previous and newly calculated crc is checked only\nif the last crc calculation was wrong, i.e. not equal to the value stored in the file.\nIt is very unlikely that in this case the previous and new crc can match, so, in fact,\nthe loop will spin until crc is calculated correctly. In other words,\nthis algorithm is practically equivalent to an infinite loop of reading from a file\nand calculating crc while(EQ_CRC32C(crc, ControlFile->crc) != true).\nBut then it can be simplified significantly by removing the checksum equality checks and\nthe bool first_try, and by limiting the maximum number of iterations\nwith some constant in the e.g. 
for loop to avoid a theoretically possible freeze.\n\nOr maybe use the variant suggested by Tom Lane, i.e., as far as I understand,\nrepeat the file_open-read-close-calculate_crc sequence twice without a pause between\nthem and check both calculated crcs for equality. If they match, exit and return\nthe bool result of comparing the last calculation with the value from the file;\nif not, take a pause and repeat everything from the beginning.\nIn this case, no need to check *crc_ok_p inside get_controlfile()\nas it was in the present version; I think it's more logically correct\nsince this variable is intended for top-level functions and the logic\ninside get_controlfile() should not depend on its value.\n\nAlso found a warning in the 0001 patch for the master branch. On my PC gcc gives:\n\nxlog.c:2507:1: warning: no previous prototype for ‘pg_update_control_file’ [-Wmissing-prototypes]\n 2507 | pg_update_control_file()\n\nFixed it by adding #include \"utils/fmgrprotos.h\" to xlog.c and\nPG_FUNCTION_ARGS to pg_update_control_file().\n\nWith the best wishes,\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Tue, 31 Jan 2023 04:09:58 +0300", "msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>", "msg_from_op": false, "msg_subject": "Re: odd buildfarm failure - \"pg_ctl: control file appears to be\n corrupt\"" }, { "msg_contents": "On Tue, Jan 31, 2023 at 2:10 PM Anton A. Melnikov <aamelnikov@inbox.ru> wrote:\n> Also checked for a few hours that the patch 0002 fixes this error,\n> but there are some questions to its logical structure.\n\nHi Anton,\n\nThanks for looking!\n\n> The equality between the previous and newly calculated crc is checked only\n> if the last crc calculation was wrong, i.e not equal to the value stored in the file.\n> It is very unlikely that in this case the previous and new crc can match, so, in fact,\n> the loop will spin until crc is calculated correctly. 
In the other words,\n> this algorithm is practically equivalent to an infinite loop of reading from a file\n> and calculating crc while(EQ_CRC32C(crc, ControlFile->crc) != true).\n\nMaybe it's unlikely that two samples will match while running that\ntorture test, because it's overwriting the file as fast as it can.\nBut the idea is that a real system isn't writing the control file most\nof the time.\n\n> But then it can be simplified significantly by removing checksums equality checks,\n> bool fist_try and by limiting the maximum number of iterations\n> with some constant in the e.g. for loop to avoid theoretically possible freeze.\n\nYeah, I was thinking that we should also put a limit on the loop, just\nto be cautious.\n\nPrimary servers write the control file very infrequently.  Standbys\nmore frequently, while writing data out, maybe every few seconds on a\nbusy system writing out a lot of data.  UpdateMinRecoveryPoint() makes\nsome effort to avoid updating the file too often.  You definitely see\nbursts of repeated flushes that might send this thing in a loop for a\nwhile if the timings were exactly wrong, but usually with periodic\ngaps; I haven't really studied the expected behaviour too closely.\n\n> Or maybe use the variant suggested by Tom Lane, i.e, as far as i understand,\n> repeat the file_open-read-close-calculate_crc sequence twice without a pause between\n> them and check the both calculated crcs for the equality. If they match, exit and return\n> the bool result of comparing between the last calculation with the value from the file,\n> if not, take a pause and repeat everything from the beginning.\n\nHmm.  Would it be good enough to do two read() calls with no sleep in\nbetween?  How sure are we that a concurrent write will manage to\nchange at least one bit that our second read can see?  I guess it's\nlikely, but maybe hypervisors, preemptible kernels, I/O interrupts or\na glitch in the matrix could decrease our luck? 
I really don't know.\nSo I figured I should add a small sleep between the reads to change\nthe odds in our favour. But we don't want to slow down all reads of\nthe control file with a sleep, do we? So I thought we should only\nbother doing this slow stuff if the actual CRC check fails, a low\nprobability event.\n\nClearly there is an element of speculation or superstition here. I\ndon't know what else to do if both PostgreSQL and ext4 decided not to\nadd interlocking. Maybe we should rethink that. How bad would it\nreally be if control file access used POSIX file locking? I mean, the\nwriter is going to *fsync* the file, so it's not like one more wafer\nthin system call is going to hurt too much.\n\n\n", "msg_date": "Tue, 31 Jan 2023 17:09:05 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: odd buildfarm failure - \"pg_ctl: control file appears to be\n corrupt\"" }, { "msg_contents": "On Tue, Jan 31, 2023 at 5:09 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Clearly there is an element of speculation or superstition here. I\n> don't know what else to do if both PostgreSQL and ext4 decided not to\n> add interlocking. Maybe we should rethink that. How bad would it\n> really be if control file access used POSIX file locking? I mean, the\n> writer is going to *fsync* the file, so it's not like one more wafer\n> thin system call is going to hurt too much.\n\nHere's an experimental patch for that alternative. I wonder if\nsomeone would want to be able to turn it off for some reason -- maybe\nsome NFS problem? It's less back-patchable, but maybe more\nprincipled?\n\nI don't know if Windows suffers from this type of problem.\nUnfortunately its equivalent functionality LockFile() looks non-ideal\nfor this purpose: if your program crashes, the documentation is very\nvague on when exactly it is released by the OS, but it's not\nimmediately on process exit. 
That seems non-ideal for a control file\nyou might want to access again very soon after a crash, to be able to\nrecover.\n\nA thought in passing: if UpdateMinRecoveryPoint() performance is an\nissue, maybe we should figure out how to use fdatasync() instead of\nfsync().", "msg_date": "Wed, 1 Feb 2023 00:38:33 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: odd buildfarm failure - \"pg_ctl: control file appears to be\n corrupt\"" }, { "msg_contents": "Hi, Thomas!\n\nThere are two variants of the patch now.\n\n1) As for the first workaround:\n\nOn 31.01.2023 07:09, Thomas Munro wrote:\n> \n> Maybe it's unlikely that two samples will match while running that\n> torture test, because it's overwriting the file as fast as it can.\n> But the idea is that a real system isn't writing the control file most\n> of the time.\n> \n........\n> Yeah, I was thinking that we should also put a limit on the loop, just\n> to be cautious.\n\nAt first I didn’t understand that the equality condition with the previous\ncalculated crc and the current one at the second+ attempts was intended\nfor the case when the pg_control file is really corrupted.\n\nIndeed, by making a few debugging variables and running the torture test,\nI found that there were ~4000 crc errors (~0.0003%) in ~125 million reads from the file,\nand there was no case when the crc error appeared twice in a row.\nSo the second and moreover the third successive occurrence of a crc error\ncan be neglected, and for this workaround a simpler and maybe clearer\nalgorithm seems possible.\nFor instance:\n\nfor(try = 0 ; try < 3; try++)\n{\n open, read from and close pg_control;\n calculate crc;\n\n *crc_ok_p = EQ_CRC32C(crc, ControlFile->crc);\n\n if(*crc_ok_p)\n break;\n}\n\n2) As for the second variant of the patch with POSIX locks:\n\nOn 31.01.2023 14:38, Thomas Munro wrote:\n> On Tue, Jan 31, 2023 at 5:09 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> Clearly 
there is an element of speculation or superstition here.  I\n>> don't know what else to do if both PostgreSQL and ext4 decided not to\n>> add interlocking.  Maybe we should rethink that.  How bad would it\n>> really be if control file access used POSIX file locking?  I mean, the\n>> writer is going to *fsync* the file, so it's not like one more wafer\n>> thin system call is going to hurt too much.\n> \n> Here's an experimental patch for that alternative.  I wonder if\n> someone would want to be able to turn it off for some reason -- maybe\n> some NFS problem?  It's less back-patchable, but maybe more\n> principled?\n\nIt looks very strange to me that there may be cases where the cluster data\nis stored in NFS. Can't figure out how this can be useful.\n\nI think this variant of the patch is a normal solution\nto the problem, not a workaround. Found no problems on Linux.\n+1 for this variant.\n\nMight add a custom error message for EDEADLK\nsince it is absent in errcode_for_file_access()?\n\n> I don't know if Windows suffers from this type of problem.\n> Unfortunately its equivalent functionality LockFile() looks non-ideal\n> for this purpose: if your program crashes, the documentation is very\n> vague on when exactly it is released by the OS, but it's not\n> immediately on process exit.  That seems non-ideal for a control file\n> you might want to access again very soon after a crash, to be able to\n> recover.\n\nUnfortunately I've not had time to reproduce the problem and apply this patch on\nWindows yet but I'm going to do it soon on Windows 10. If a crc error\noccurs there, then we might use the workaround from the first\nvariant of the patch.\n\n> A thought in passing: if UpdateMinRecoveryPoint() performance is an\n> issue, maybe we should figure out how to use fdatasync() instead of\n> fsync().\n\nMaybe choose it in accordance with GUC wal_sync_method?\n\n\nSincerely yours,\n\n-- \nAnton A. 
Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Wed, 1 Feb 2023 07:04:09 +0300", "msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>", "msg_from_op": false, "msg_subject": "Re: odd buildfarm failure - \"pg_ctl: control file appears to be\n corrupt\"" }, { "msg_contents": "On Wed, Feb 1, 2023 at 5:04 PM Anton A. Melnikov <aamelnikov@inbox.ru> wrote:\n> On 31.01.2023 14:38, Thomas Munro wrote:\n> > Here's an experimental patch for that alternative. I wonder if\n> > someone would want to be able to turn it off for some reason -- maybe\n> > some NFS problem? It's less back-patchable, but maybe more\n> > principled?\n>\n> It looks very strange to me that there may be cases where the cluster data\n> is stored in NFS. Can't figure out how this can be useful.\n\nHeh. There are many interesting failure modes, but people do it. I\nguess my more general question when introducing any new system call\ninto this code is how some unusual system I can't even access is going\nto break. Maybe some obscure filesystem will fail with EOPNOTSUPP, or\ntake 5 seconds and then fail because there is no lock server\nconfigured or whatever, so that's why I don't think we can back-patch\nit, and we probably need a way to turn it off.\n\n> i think this variant of the patch is a normal solution\n> of the problem, not workaround. Found no problems on Linux.\n> +1 for this variant.\n\nI prefer it too.\n\n> Might add a custom error message for EDEADLK\n> since it absent in errcode_for_file_access()?\n\nAh, good thought. 
I think it shouldn't happen™, so it's OK that\nerrcode_for_file_access() would classify it as ERRCODE_INTERNAL_ERROR.\n\nOther interesting errors are:\n\nENOLCK: system limits exceeded; PANIC seems reasonable\nEOPNOTSUPP: this file doesn't support locking (seen on FreeBSD man\npages, not on POSIX)\n\n> > I don't know if Windows suffers from this type of problem.\n> > Unfortunately its equivalent functionality LockFile() looks non-ideal\n> > for this purpose: if your program crashes, the documentation is very\n> > vague on when exactly it is released by the OS, but it's not\n> > immediately on process exit. That seems non-ideal for a control file\n> > you might want to access again very soon after a crash, to be able to\n> > recover.\n>\n> Unfortunately i've not had time to reproduce the problem and apply this patch on\n> Windows yet but i'm going to do it soon on windows 10. If a crc error\n> will occur there, then we might use the workaround from the first\n> variant of the patch.\n\nThank you for investigating. I am afraid to read your results.\n\nOne idea would be to proceed with LockFile() for Windows, with a note\nsuggesting you file a bug with your OS vendor if you ever need it to\nget unstuck. Googling this subject, I read that MongoDB used to\nsuffer from stuck lock files, until an OS bug report led to recent\nversions releasing locks more promptly. I find that sufficiently\nscary that I would want to default the feature to off on Windows, even\nif your testing shows that it does really need it.\n\n> > A thought in passing: if UpdateMinRecoveryPoint() performance is an\n> > issue, maybe we should figure out how to use fdatasync() instead of\n> > fsync().\n>\n> May be choose it in accordance with GUC wal_sync_method?\n\nHere's a patch like that. I don't know if it's a good idea for\nwal_sync_method to affect other kinds of files or not, but, then, it\nalready does (fsync_writethough changes this behaviour). 
I would\nonly want to consider this if we also stop choosing \"open_datasync\" as\na default on at least Windows.", "msg_date": "Wed, 1 Feb 2023 19:45:34 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: odd buildfarm failure - \"pg_ctl: control file appears to be\n corrupt\"" }, { "msg_contents": "Hi, Thomas!\n\nThanks for your rapid answer and sorry for my delay with reply.\n\nOn 01.02.2023 09:45, Thomas Munro wrote:\n>> Might add a custom error message for EDEADLK\n>> since it absent in errcode_for_file_access()?\n> \n> Ah, good thought. I think it shouldn't happen™, so it's OK that\n> errcode_for_file_access() would classify it as ERRCODE_INTERNAL_ERROR.\n\nYes, i also think that is impossible since the lock is taken on\nthe entire file, so ERRCODE_INTERNAL_ERROR will be right here.\n\n> Other interesting errors are:\n> \n> ENOLCK: system limits exceeded; PANIC seems reasonable\n> EOPNOTSUPP: this file doesn't support locking (seen on FreeBSD man\n> pages, not on POSIX)\n\nAgreed that ENOLCK is a PANIC or at least FATAL. Maybe it's even better\nto do it FATAL to allow other backends to survive?\nAs for EOPNOTSUPP, maybe make a fallback to the workaround from the\nfirst variant of the patch? (In my previous letter i forgot the pause\nafter break;, of cause)\n\n>>> I don't know if Windows suffers from this type of problem.\n>>> Unfortunately its equivalent functionality LockFile() looks non-ideal\n>>> for this purpose: if your program crashes, the documentation is very\n>>> vague on when exactly it is released by the OS, but it's not\n>>> immediately on process exit. That seems non-ideal for a control file\n>>> you might want to access again very soon after a crash, to be able to\n>>> recover.\n>>\n>> Unfortunately i've not had time to reproduce the problem and apply this patch on\n>> Windows yet but i'm going to do it soon on windows 10. 
If a crc error\n>> will occur there, then we might use the workaround from the first\n>> variant of the patch.\n> \n> Thank you for investigating. I am afraid to read your results.\n\nFirst of all it seemed to me that is not a problem at all since msdn\nguarantees sector-by-sector atomicity.\n\"Physical Sector: The unit for which read and write operations to the device\nare completed in a single operation. This is the unit of atomic write...\"\nhttps://learn.microsoft.com/en-us/windows/win32/fileio/file-buffering\nhttps://learn.microsoft.com/en-us/windows/win32/api/fileapi/nf-fileapi-writefile\n(Of course, only if the 512 bytes lays from the beginning of the file with a zero\noffset, but this is our case. The current size of ControlFileData is\n296 bytes at offset = 0.)\n\nI tried to verify this fact experimentally and was very surprised.\nUnfortunately it crashed in an hour during torture test:\n2023-02-13 15:10:52.675 MSK [5704] LOG: starting PostgreSQL 16devel, compiled by Visual C++ build 1929, 64-bit\n2023-02-13 15:10:52.768 MSK [5704] LOG: database system is ready to accept connections\n@@@@@@ sizeof(ControlFileData) = 296\n.........\n2023-02-13 16:10:41.997 MSK [9380] ERROR: calculated CRC checksum does not match value stored in file\n\nBut fortunately, this only happens when fsync=off.\nAlso i did several experiments with fsync=on and found more appropriate behavior:\nThe stress test with sizeof(ControlFileData) = 512+8 = 520 bytes failed in a 4,5 hours,\nbut the other one with ordinary sizeof(ControlFileData) = 296 not crashed in more than 12 hours.\n\nSeems in that case the behavior corresponds to msdn. So if it is possible\nto use fsync() under windows when the GUC fsync is off it maybe a solution\nfor this problem. If so there is no need to lock the pg_control file under windows at all.\n\n>> May be choose it in accordance with GUC wal_sync_method?\n> \n> Here's a patch like that. 
I don't know if it's a good idea for\n> wal_sync_method to affect other kinds of files or not, but, then, it\n> already does (fsync_writethough changes this behaviour). \n\n+1. Looks like it needs a little fix:\n\n+++ b/src/common/controldata_utils.c\n@@ -316,7 +316,7 @@ update_controlfile(const char *DataDir,\n if (pg_fsync(fd) != 0)\n ereport(PANIC,\n (errcode_for_file_access(),\n- errmsg(\"could not fdatasync file \\\"%s\\\": %m\",\n+ errmsg(\"could not fsync file \\\"%s\\\": %m\",\n ControlFilePath)));\n\nAnd it may be combined with 0001-Lock-pg_control-while-reading-or-writing.patch\n\n> I would\n> only want to consider this if we also stop choosing \"open_datasync\" as\n> a default on at least Windows.\n\nI didn't quite understand this point. Could you clarify it for me, please? If the performance\nof UpdateMinRecoveryPoint() wasn't a problem we could just use fsync in all platforms.\n \n\nSincerely yours,\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Tue, 14 Feb 2023 06:38:22 +0300", "msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>", "msg_from_op": false, "msg_subject": "Re: odd buildfarm failure - \"pg_ctl: control file appears to be\n corrupt\"" }, { "msg_contents": "Hi, Thomas!\n\nOn 14.02.2023 06:38, Anton A. 
Melnikov wrote:\n> Also i did several experiments with fsync=on and found more appropriate behavior:\n> The stress test with sizeof(ControlFileData) = 512+8 = 520 bytes failed in a 4,5 hours,\n> but the other one with ordinary sizeof(ControlFileData) = 296 not crashed in more than 12 hours.\n\nNonetheless it crashed after 18 hours:\n\n2023-02-13 18:07:21.476 MSK [7640] LOG: starting PostgreSQL 16devel, compiled by Visual C++ build 1929, 64-bit\n2023-02-13 18:07:21.483 MSK [7640] LOG: listening on IPv6 address \"::1\", port 5432\n2023-02-13 18:07:21.483 MSK [7640] LOG: listening on IPv4 address \"127.0.0.1\", port 5432\n2023-02-13 18:07:21.556 MSK [1940] LOG: database system was shut down at 2023-02-13 18:07:12 MSK\n2023-02-13 18:07:21.590 MSK [7640] LOG: database system is ready to accept connections\n@@@@@@@@@@@@@ sizeof(ControlFileData) = 296\n2023-02-13 18:12:21.545 MSK [9532] LOG: checkpoint starting: time\n2023-02-13 18:12:21.583 MSK [9532] LOG: checkpoint complete: wrote 3 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.003 s, sync=0.009 s, total=0.038 s; sync files=2, longest=0.005 s, average=0.005 s; distance=0 kB, estimate=0 kB; lsn=0/17AC388, redo lsn=0/17AC350\n2023-02-14 12:12:21.738 MSK [8676] ERROR: calculated CRC checksum does not match value stored in file\n2023-02-14 12:12:21.738 MSK [8676] CONTEXT: SQL statement \"SELECT pg_control_system()\"\n\tPL/pgSQL function inline_code_block line 1 at PERFORM\n2023-02-14 12:12:21.738 MSK [8676] STATEMENT: do $$ begin loop perform pg_control_system(); end loop; end; $$;\n\n\nSo all of the following is incorrect:\n\n> Seems in that case the behavior corresponds to msdn. So if it is possible\n> to use fsync() under windows when the GUC fsync is off it maybe a solution\n> for this problem. If so there is no need to lock the pg_control file under windows at all.\n\nand cannot be a solution.\n\n\nSincerely yours,\n\n-- \nAnton A. 
Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Tue, 14 Feb 2023 18:52:20 +0300", "msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>", "msg_from_op": false, "msg_subject": "Re: odd buildfarm failure - \"pg_ctl: control file appears to be\n corrupt\"" }, { "msg_contents": "On Tue, Feb 14, 2023 at 4:38 PM Anton A. Melnikov <aamelnikov@inbox.ru> wrote:\n> First of all it seemed to me that is not a problem at all since msdn\n> guarantees sector-by-sector atomicity.\n> \"Physical Sector: The unit for which read and write operations to the device\n> are completed in a single operation. This is the unit of atomic write...\"\n> https://learn.microsoft.com/en-us/windows/win32/fileio/file-buffering\n> https://learn.microsoft.com/en-us/windows/win32/api/fileapi/nf-fileapi-writefile\n> (Of course, only if the 512 bytes lays from the beginning of the file with a zero\n> offset, but this is our case. The current size of ControlFileData is\n> 296 bytes at offset = 0.)\n\nThere are two kinds of atomicity that we rely on for the control file today:\n\n* atomicity on power loss (= device property, in case of overwrite filesystems)\n* atomicity of concurrent reads and writes (= VFS or kernel buffer\npool interlocking policy)\n\nI assume that documentation is talking about the first thing (BTW, I\nsuspect that documentation is also now wrong in one special case: NTFS\nfilesystems mounted on DAX devices don't have sectors or sector-based\natomicity unless you turn on BTT and slow them down[1]; that might\neventually be something for PostgreSQL to think about, and it applies\nto other OSes too).\n\nWith this patch we would stop relying on the second thing. Supposedly\nPOSIX requires read/write atomicity, and many file systems offer it in\na strong form (internal range locking) or maybe a weak accidental form\n(page-level locking). 
Since some extremely popular implementations\njust don't care, and Windows isn't even a POSIX, we just have to do it\nourselves.\n\nBTW there are at least two other places where PostgreSQL already knows\nthat concurrent reads and writes are possibly non-atomic (and we also\ndon't even try to get the alignment right, making the question moot):\npg_basebackup, which enables full_page_writes implicitly if you didn't\nhave the GUC on already, and pg_rewind, which refuses to run if you\nhaven't enabled full_page_writes explicitly (as discussed on\nanother thread recently; that's an annoying difference, and also an\nannoying behaviour if you know your storage doesn't really need it!)\n\n> I tried to verify this fact experimentally and was very surprised.\n> Unfortunately it crashed in an hour during torture test:\n> 2023-02-13 15:10:52.675 MSK [5704] LOG:  starting PostgreSQL 16devel, compiled by Visual C++ build 1929, 64-bit\n> 2023-02-13 15:10:52.768 MSK [5704] LOG:  database system is ready to accept connections\n> @@@@@@ sizeof(ControlFileData) = 296\n> .........\n> 2023-02-13 16:10:41.997 MSK [9380] ERROR:  calculated CRC checksum does not match value stored in file\n\nThanks.  I'd seen reports of this in discussions on the 'net, but\nthose had no authoritative references or supporting test results.  The\nfact that fsync made it take longer (in your following email) makes\nsense and matches Linux.  It inserts a massive pause in the middle of\nthe experiment loop, affecting the probabilities.\n\nTherefore, we need a solution for Windows too.  I tried to write the\nequivalent code, in the attached.  I learned a few things: Windows\nlocks are mandatory (that is, if you lock a file, reads/writes can\nfail, unlike Unix).  Combined with async release, that means we\nprobably also need to lock the file in xlog.c, when reading it in\nxlog.c:ReadControlFile() (see comments). 
Before I added that, on a CI\nrun, I found that the read in there would sometimes fail, and adding\nthe locking fixed that. I am a bit confused, though, because I\nexpected that to be necessary only if someone managed to crash while\nholding the write lock, which the CI tests shouldn't do. Do you have\nany ideas?\n\nWhile contemplating what else a mandatory file lock might break, I\nremembered that basebackup.c also reads the control file. Hrmph. Not\naddressed yet; I guess it might need to acquire/release around\nsendFile(sink, XLOG_CONTROL_FILE, ...)?\n\n> > I would\n> > only want to consider this if we also stop choosing \"open_datasync\" as\n> > a default on at least Windows.\n>\n> I didn't quite understand this point. Could you clarify it for me, please? If the performance\n> of UpdateMinRecoveryPoint() wasn't a problem we could just use fsync in all platforms.\n\nThe level of durability would be weakened on Windows. On all systems\nexcept Linux and FreeBSD, we default to wal_sync_method=open_datasync,\nso then we would start using FILE_FLAG_WRITE_THROUGH for the control\nfile too. We know from reading and experimentation that\nFILE_FLAG_WRITE_THROUGH doesn't flush the drive cache on consumer\nWindows hardware, but its fdatasync-like thing does[2]. I have not\nthought too hard about the consequences of using different durability\nlevels for different categories of file, but messing with write\nordering can't be good for crash recovery, so I'd rather increase WAL\ndurability than decrease control file durability. If a Windows user\ncomplains that it makes their fancy non-volatile cache slow down, they\ncan always adjust the settings in PostgreSQL, their OS, or their\ndrivers etc. 
I think we should just make fdatasync the default on all\nsystems.\n\nHere's a patch like that.\n\n[1] https://learn.microsoft.com/en-us/windows-server/storage/storage-spaces/persistent-memory-direct-access\n[2] https://www.postgresql.org/message-id/flat/CA%2BhUKG%2Ba-7r4GpADsasCnuDBiqC1c31DAQQco2FayVtB9V3sQw%40mail.gmail.com#460bfa5a6b571cc98c575d23322e0b25", "msg_date": "Fri, 17 Feb 2023 16:21:14 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: odd buildfarm failure - \"pg_ctl: control file appears to be\n corrupt\"" }, { "msg_contents": "On Fri, Feb 17, 2023 at 4:21 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> While contemplating what else a mandatory file lock might break, I\n> remembered that basebackup.c also reads the control file. Hrmph. Not\n> addressed yet; I guess it might need to acquire/release around\n> sendFile(sink, XLOG_CONTROL_FILE, ...)?\n\nIf we go this way, I suppose, in theory at least, someone with\nexternal pg_backup_start()-based tools might also want to hold the\nlock while copying pg_control. Otherwise they might fail to open it\non Windows (where that patch uses a mandatory lock) or copy garbage on\nLinux (as they can today, I assume), with non-zero probability -- at\nleast when copying files from a hot standby. Or backup tools might\nwant to get the file contents through some entirely different\nmechanism that does the right interlocking (whatever that might be,\nmaybe inside the server). Perhaps this is not so much the localised\nsystems programming curiosity I thought it was, and has implications\nthat'd need to be part of the documented low-level backup steps. It\nmakes me like the idea a bit less. It'd be good to hear from backup\ngurus what they think about that.\n\nOne cute hack I thought of to make the file lock effectively advisory\non Windows is to lock a byte range *past the end* of the file (the\ndocumentation says you can do that). 
That shouldn't stop programs\nthat want to read the file without locking and don't know/care about\nour scheme (that is, pre-existing backup tools that haven't considered\nthis problem and remain oblivious or accept the (low) risk of torn\nreads), but will block other participants in our scheme.\n\nIf we went back to the keep-rereading-until-it-stops-changing model,\nthen an external backup tool would need to be prepared to do that too,\nin theory at least. Maybe some already do something like that?\n\nOr maybe the problem is/was too theoretical before...\n\n\n", "msg_date": "Wed, 22 Feb 2023 13:10:35 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: odd buildfarm failure - \"pg_ctl: control file appears to be\n corrupt\"" }, { "msg_contents": "Hi, Thomas!\n\nOn 17.02.2023 06:21, Thomas Munro wrote:\n> There are two kinds of atomicity that we rely on for the control file today:\n> \n> * atomicity on power loss (= device property, in case of overwrite filesystems)\n> * atomicity of concurrent reads and writes (= VFS or kernel buffer\n> pool interlocking policy)\n> \n> I assume that documentation is talking about the first thing \n\nI think this is true, as documentation says about write operations only,\nbut not about the read ones.\n\n> (BTW, I\n> suspect that documentation is also now wrong in one special case: NTFS\n> filesystems mounted on DAX devices don't have sectors or sector-based\n> atomicity unless you turn on BTT and slow them down[1]; that might\n> eventually be something for PostgreSQL to think about, and it applies\n> to other OSes too).\n\nVery interesting find! For instance, the volume of Intel® Optane™ Persistent Memory\nalready reaches 512GB and can be potentially used for cluster data. 
As the\nfirst step it would be good to understand what Microsoft means by\nlarge memory pages, what size are they talking about, that is, where\nis the reasonable boundary for using BTT; i suppose, this will help choose\nwhether to use BTT or have to write own DAX-aware code.\n\n> With this patch we would stop relying on the second thing. Supposedly\n> POSIX requires read/write atomicity, and many file systems offer it in\n> a strong form (internal range locking) or maybe a weak accidental form\n> (page-level locking). Since some extremely popular implementations\n> just don't care, and Windows isn't even a POSIX, we just have to do it\n> ourselves.\n\nYes. indeed. But unless it's an atomic or transactional filesystem. In\nsuch a case there is almost nothing to do. Another thing is that\nit seems such a systems do not exist in reality although it has been\ndiscussed many times I've googled some information on this topic e.g\n[1], [2], [3] but all of project were abandoned or deprecated as\nMicrosoft's own development.\n\n> BTW there are at least two other places where PostgreSQL already knows\n> that concurrent reads and writes are possibly non-atomic (and we also\n> don't even try to get the alignment right, making the question moot):\n> pg_basebackup, which enables full_page_writes implicitly if you didn't\n> have the GUC on already, and pg_rewind, which refuses to run if you\n> haven't enabled full_page_writes explicitly (as recently discussed on\n> another thread recently; that's an annoying difference, and also an\n> annoying behaviour if you know your storage doesn't really need it!)\n\nIt seems a good topic for a separate thread patch. Would you provide a\nlink to the thread you mentioned please?\n \n> Therefore, we need a solution for Windows too. I tried to write the\n> equivalent code, in the attached. I learned a few things: Windows\n> locks are mandatory (that is, if you lock a file, reads/writes can\n> fail, unlike Unix). 
Combined with async release, that means we\n> probably also need to lock the file in xlog.c, when reading it in\n> xlog.c:ReadControlFile() (see comments). Before I added that, on a CI\n> run, I found that the read in there would sometimes fail, and adding\n> the locking fixed that. I am a bit confused, though, because I\n> expected that to be necessary only if someone managed to crash while\n> holding the write lock, which the CI tests shouldn't do. Do you have\n> any ideas?\n\nUnfortunately, no ideas so far. Was it a pg_control CRC or I/O errors?\nMaybe logs of such a fail were saved somewhere? I would like to see\nthem if possible.\n\n> While contemplating what else a mandatory file lock might break, I\n> remembered that basebackup.c also reads the control file. Hrmph. Not\n> addressed yet; I guess it might need to acquire/release around\n> sendFile(sink, XLOG_CONTROL_FILE, ...)?\n\nHere, possibly pass a boolean flag into sendFile()? When it is true,\nthen take a lock after OpenTransientFile() and release it before\nCloseTransientFile() if under Windows.\n\nThere are also two places where read or write from/to the pg_control\noccur. These are functions WriteControlFile() in xlog.c and\nread_controlfile() in pg_resetwal.c.\nFor the second case, locking definitely not necessary as\nthe server is stopped. For the first case seems too as BootStrapXLOG()\nwhere WriteControlFile() will be called inside must be called\nonly once on system install.\n\nSince i've smoothly moved on to the code review here there is a\nsuggestion at your discretion to add error messages to get_controlfile()\nand update_controlfile() if unlock_controlfile() fails.\n\n> I think we should just make fdatasync the default on all\n> systems.\n\nAgreed. 
And maybe choose UPDATE_CONTROLFILE_FDATASYNC as the default\ncase in UpdateControlFile(), since fdatasync is now the default on all\nsystems and its metadata are of no interest?\n\n\nOn 22.02.2023 03:10, Thomas Munro wrote:\n> If we go this way, I suppose, in theory at least, someone with\n> external pg_backup_start()-based tools might also want to hold the\n> lock while copying pg_control. Otherwise they might fail to open it\n> on Windows (where that patch uses a mandatory lock) \n\nAs for external pg_backup_start()-based tools: if somebody wants to take the\nlock while copying pg_control, I suppose that is a normal case. They may have to wait\na bit until we release the lock, as in lock_controlfile(). Moreover,\nthis is a very good thing to want, as it guarantees the integrity of pg_control,\nprovided that they use F_SETLKW rather than the non-waiting F_SETLK.\n\nBacking up locked files is commonplace on Windows, and all standard\nbackup tools can do it well via VSS (volume shadow copy), including\nthe built-in Windows backup tool. Just in case, I tested it experimentally.\nDuring the torture test, a first attempt to copy pg_control predictably caught:\nError: Cannot read C:\\pgbins\\master\\data\\global\\pg_control!\n\"The process cannot access the file because another process has locked a portion of the file.\"\nBut a copy using Windows' own backup utility succeeded with VSS.\n\n> > One cute hack I thought of to make the file lock effectively advisory\n> on Windows is to lock a byte range *past the end* of the file (the\n> documentation says you can do that). That shouldn't stop programs\n> that want to read the file without locking and don't know/care about\n> our scheme (that is, pre-existing backup tools that haven't considered\n> this problem and remain oblivious or accept the (low) risk of torn\n> reads), but will block other participants in our scheme.\n\nA very interesting idea. 
It makes sense when someone is using external\nbackup tools that cannot work with VSS. But the use of such tools\nunder Windows seems highly doubtful to me, since it would not allow backing up\nmany other applications or the Windows system itself.\nLet me join your suggestion that it'd be good to hear from backup\ngurus what they think about that.\n\n> or copy garbage on\n> Linux (as they can today, I assume), with non-zero probability -- at\n> least when copying files from a hot standby.\n> Or backup tools might\n> want to get the file contents through some entirely different\n> mechanism that does the right interlocking (whatever that might be,\n> maybe inside the server). Perhaps this is not so much the localised\n> systems programming curiosity I thought it was, and has implications\n> that'd need to be part of the documented low-level backup steps. It\n> makes me like the idea a bit less. It'd be good to hear from backup\n> gurus what they think about that.\n> If we went back to the keep-rereading-until-it-stops-changing model,\n> then an external backup tool would need to be prepared to do that too,\n> in theory at least. Maybe some already do something like that?\n> \n> Or maybe the problem is/was too theoretical before...\n\nAs far as I understand, this problem has always existed, but the probability of\nhitting it is extremely small in practice, which is directly pointed out in\nthe docs [4]:\n\"So while it is theoretically a weak spot, pg_control does not seem\nto be a problem in practice.\"\n\n> Here's a patch like that.\n\nIn this patch, the problem is solved for the live database and\nmaybe remains for some possible cases of an external backup. 
On the whole,\nI think it can be stated that this is a sensible step forward.\n\nJust like last time, I ran a long stress test under Windows with the current patch.\nThere were no errors for more than 3 days, even with fsync=off.\n\n[1] https://lwn.net/Articles/789600/\n[2] https://github.com/ut-osa/txfs\n[3] https://en.wikipedia.org/wiki/Transactional_NTFS\n[4] https://www.postgresql.org/docs/devel/wal-internals.html\n\n\nWith the best wishes!\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n\n\n\n", "msg_date": "Fri, 24 Feb 2023 13:12:47 +0300", "msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>", "msg_from_op": false, "msg_subject": "Re: odd buildfarm failure - \"pg_ctl: control file appears to be\n corrupt\"" }, { "msg_contents": "On Fri, Feb 24, 2023 at 11:12 PM Anton A. Melnikov <aamelnikov@inbox.ru> wrote:\n> On 17.02.2023 06:21, Thomas Munro wrote:\n> > BTW there are at least two other places where PostgreSQL already knows\n> > that concurrent reads and writes are possibly non-atomic (and we also\n> > don't even try to get the alignment right, making the question moot):\n> > pg_basebackup, which enables full_page_writes implicitly if you didn't\n> > have the GUC on already, and pg_rewind, which refuses to run if you\n> > haven't enabled full_page_writes explicitly (as recently discussed on\n> > another thread recently; that's an annoying difference, and also an\n> > annoying behaviour if you know your storage doesn't really need it!)\n>\n> It seems a good topic for a separate thread patch. Would you provide a\n> link to the thread you mentioned please?\n\nhttps://www.postgresql.org/message-id/flat/367d01a7-90bb-9b70-4cda-248e81cc475c%40cosium.com\n\n> > Therefore, we need a solution for Windows too. I tried to write the\n> > equivalent code, in the attached. I learned a few things: Windows\n> > locks are mandatory (that is, if you lock a file, reads/writes can\n> > fail, unlike Unix). 
Combined with async release, that means we\n> > probably also need to lock the file in xlog.c, when reading it in\n> > xlog.c:ReadControlFile() (see comments). Before I added that, on a CI\n> > run, I found that the read in there would sometimes fail, and adding\n> > the locking fixed that. I am a bit confused, though, because I\n> > expected that to be necessary only if someone managed to crash while\n> > holding the write lock, which the CI tests shouldn't do. Do you have\n> > any ideas?\n>\n> Unfortunately, no ideas so far. Was it a pg_control CRC or I/O errors?\n> Maybe logs of such a fail were saved somewhere? I would like to see\n> them if possible.\n\nI think it was this one:\n\nhttps://cirrus-ci.com/task/5004082033721344\n\nFor example, see subscription/011_generated which failed like this:\n\n2023-02-16 06:57:25.724 GMT [5736][not initialized] PANIC: could not\nread file \"global/pg_control\": Permission denied\n\nThat was fixed after I changed it to also do locking in xlog.c\nReadControlFile(), in the version you tested. There must be something\nI don't understand going on, because that cluster wasn't even running\nbefore: it had just been created by initdb.\n\nBut, anyway, I have a new idea that makes that whole problem go away\n(though I'd still like to understand what happened there):\n\nWith the \"pseudo-advisory lock for Windows\" idea from the last email,\nwe don't need to bother with file system level locking in many places.\nJust in controldata_utils.c, for FE/BE interlocking (that is, we don't\nneed to use that for interlocking of concurrent reads and writes that\nare entirely in the backend, because we already have an LWLock that we\ncould use more consistently). Changes:\n\n1. xlog.c mostly uses no locking\n2. basebackup.c now acquires ControlFileLock\n3. only controldata_utils.c uses the new file locking, for FE-vs-BE interlocking\n4. 
lock past the end (pseudo-advisory locking for Windows)\n\nNote that when we recover from a basebackup or pg_backup_start()\nbackup, we use the backup label to find a redo start location in the\nWAL (see read_backup_label()), BUT we still read the copied pg_control\nfile (one that might be too \"new\", so we don't use its redo pointer).\nSo it had better not be torn, or the recovery will fail. So, in this\nversion I protected that sendFile() with ControlFileLock. But...\n\nIsn't that a bit strange? To go down this path we would also need to\ndocument the need to copy the control file with the file locked to\navoid a rare failure, in the pg_backup_start() documentation. That's\nannoying (I don't even know how to do it with easy shell commands;\nmaybe we should provide a program that locks and cats the file...?).\nCould we make better use of the safe copy that we have in the log?\nThen the pg_backup_start() subproblem would disappear. Conceptually,\nthat'd be just like the way we use FPI for data pages copied during a\nbackup. I'm not sure about any of that, though, it's just an idea,\nnot tested.\n\n> > Or maybe the problem is/was too theoretical before...\n>\n> As far as i understand, this problem has always been, but the probability of\n> this is extremely small in practice, which is directly pointed in\n> the docs [4]:\n> \"So while it is theoretically a weak spot, pg_control does not seem\n> to be a problem in practice.\"\n\nI guess that was talking about power loss atomicity again? Knowledge\nof the read/write atomicity problem seems to be less evenly\ndistributed (and I think it became more likely in Linux > 3.something;\nand the Windows situation possibly hadn't been examined by anyone\nbefore).\n\n> > Here's a patch like that.\n>\n> In this patch, the problem is solved for the live database and\n> maybe remains for some possible cases of an external backup. 
In a whole,\n> i think it can be stated that this is a sensible step forward.\n>\n> Just like last time, i ran a long stress test under windows with current patch.\n> There were no errors for more than 3 days even with fsync=off.\n\nThanks!", "msg_date": "Sat, 4 Mar 2023 10:39:20 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: odd buildfarm failure - \"pg_ctl: control file appears to be\n corrupt\"" }, { "msg_contents": "Hi, Thomas!\n\nOn 04.03.2023 00:39, Thomas Munro wrote:\n>> It seems a good topic for a separate thread patch. Would you provide a\n>> link to the thread you mentioned please?\n> \n> https://www.postgresql.org/message-id/flat/367d01a7-90bb-9b70-4cda-248e81cc475c%40cosium.com\n\nThanks! The important words there:\n>> But I fail to see how full_page_writes prevents this since it only act on writes\n\n> It ensures the damage is later repaired during WAL replay. Which can only\n> happen if the WAL contains the necessary information to do so - the full page\n> writes.\n\nTogether with the docs about restoring corrupted pg_control in the\npg_resetwal utility description these words made me think about whether\nto save the contents of pg_control at the beginning and end of\ncheckpoint into WAL and teach pg_resetwal to read them? It would be like\na periodic backup of the pg_control to a safe place.\nThis thought has nothing to do with this patch and this thread, and, in general,\ni'm not sure if it has any practical meaning, and whether, on the contrary, it\nmay lead to some difficulties. If it seems that there is a sense, then it\nwill be possible to think further about it.\n\n> For example, see subscription/011_generated which failed like this:\n> \n> 2023-02-16 06:57:25.724 GMT [5736][not initialized] PANIC: could not\n> read file \"global/pg_control\": Permission denied\n> \n> That was fixed after I changed it to also do locking in xlog.c\n> ReadControlFile(), in the version you tested. 
There must be something\n> I don't understand going on, because that cluster wasn't even running\n> before: it had just been created by initdb.\n> \n> But, anyway, I have a new idea that makes that whole problem go away\n> (though I'd still like to understand what happened there):\n\nSeems to be it's a race between the first reading of the pg_control in PostmasterMain()\nin LocalProcessControlFile(false) and the second one in SubPostmasterMain() here:\n\t/*\n\t * (re-)read control file, as it contains config. The postmaster will\n\t * already have read this, but this process doesn't know about that.\n\t */\n\tLocalProcessControlFile(false);\n\nwhich crashes according to the crash log: crashlog-postgres.exe_19a0_2023-02-16_06-57-26-675.txt\n\n> With the \"pseudo-advisory lock for Windows\" idea from the last email,\n> we don't need to bother with file system level locking in many places.\n> Just in controldata_utils.c, for FE/BE interlocking (that is, we don't\n> need to use that for interlocking of concurrent reads and writes that\n> are entirely in the backend, because we already have an LWLock that we\n> could use more consistently). Changes:\n> \n> 1. xlog.c mostly uses no locking\n> 2. basebackup.c now acquires ControlFileLock\n> 3. only controldata_utils.c uses the new file locking, for FE-vs-BE interlocking\n> 4. lock past the end (pseudo-advisory locking for Windows)\n\nAlthough the changes in 1. contributes to the problem described above again,\nbut 4. fixes this. And i did not find any other places where ReadControlFile()\ncan be called in different processes.That's all ok.\nThanks to 4., now it is not necessary to use VSS to copy the pg_control file,\nit can be copied in a common way even during the torture test. 
This is very good.\nI really like the idea with LWLock where possible.\nIn general, i think that these changes make the patch more lightweight and fast.\n\nAlso i ran tests for more than a day in stress mode with fsync=off under windows\nand Linux and found no problems. Patch-tester also passes without errors.\n\nI would like to move this patch to RFC, since I don’t see any problems\nboth in the code and in the tests, but the pg_backup_start() subproblem confuses me.\nMaybe move it to a separate patch in a distinct thread?\nAs there are a number of suggestions and questions to discuss such as:\n\n\n> Note that when we recover from a basebackup or pg_backup_start()\n> backup, we use the backup label to find a redo start location in the\n> WAL (see read_backup_label()), BUT we still read the copied pg_control\n> file (one that might be too \"new\", so we don't use its redo pointer).\n> So it had better not be torn, or the recovery will fail. So, in this\n> version I protected that sendFile() with ControlFileLock. But...\n> \n> Isn't that a bit strange? To go down this path we would also need to\n> document the need to copy the control file with the file locked to\n> avoid a rare failure, in the pg_backup_start() documentation. That's\n> annoying (I don't even know how to do it with easy shell commands;\n> maybe we should provide a program that locks and cats the file...?).\n\nVariant with separate utility looks good, with the recommendation\nin the doc to use it for the pg_control coping.\n\nAlso seems it is possible to make a function available in psql, such as\nexport_pg_control('dst_path') with the destination path as argument\nand call it before pg_backup_stop().\nOr pass the pg_control destination path to the pg_backup_stop() as extra argument.\nOr save pg_control to a predetermined location during pg_backup_stop() and specify\nin the docs that one need to copy it from there. 
I suppose that we have the right\nto ask the user to perform some manual manipulations here like with backup_label.\n\n> Could we make better use of the safe copy that we have in the log?\n> Then the pg_backup_start() subproblem would disappear. Conceptually,\n> that'd be just like the way we use FPI for data pages copied during a\n> backup. I'm not sure about any of that, though, it's just an idea,\n> not tested.\n\nSorry, i didn't understand the question about log. Would you explain me\nplease what kind of log did you mention and where can i look this\nsafe copy creation in the code?\n \nWith the best wishes,\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Wed, 8 Mar 2023 06:43:47 +0300", "msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>", "msg_from_op": false, "msg_subject": "Re: odd buildfarm failure - \"pg_ctl: control file appears to be\n corrupt\"" }, { "msg_contents": "On Wed, Mar 8, 2023 at 4:43 PM Anton A. Melnikov <aamelnikov@inbox.ru> wrote:\n> On 04.03.2023 00:39, Thomas Munro wrote:\n> > Could we make better use of the safe copy that we have in the log?\n> > Then the pg_backup_start() subproblem would disappear. Conceptually,\n> > that'd be just like the way we use FPI for data pages copied during a\n> > backup. I'm not sure about any of that, though, it's just an idea,\n> > not tested.\n>\n> Sorry, i didn't understand the question about log. Would you explain me\n> please what kind of log did you mention and where can i look this\n> safe copy creation in the code?\n\nSorry, I was confused; please ignore that part. We don't have a copy\nof the control file anywhere else. 
(Perhaps we should, but that could\nbe a separate topic.)\n\n\n", "msg_date": "Wed, 8 Mar 2023 17:28:07 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: odd buildfarm failure - \"pg_ctl: control file appears to be\n corrupt\"" }, { "msg_contents": "On 08.03.2023 07:28, Thomas Munro wrote:\n> Sorry, I was confused; please ignore that part. We don't have a copy\n> of the control file anywhere else. (Perhaps we should, but that could\n> be a separate topic.)\n\nThat’s all right! Fully agreed that this is a possible separate topic.\n\nSincerely yours,\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Thu, 9 Mar 2023 01:10:57 +0300", "msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>", "msg_from_op": false, "msg_subject": "Re: odd buildfarm failure - \"pg_ctl: control file appears to be\n corrupt\"" }, { "msg_contents": "This patch no longer applies and needs a rebase.\n\nGiven where we are in the commitfest, do you think this patch has the potential\nto go in or should it be moved?\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 20 Jul 2023 16:37:08 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: odd buildfarm failure - \"pg_ctl: control file appears to be\n corrupt\"" }, { "msg_contents": "This is a frustrating thread, because despite the last patch solving\nmost of the problems we discussed, it doesn't address the\nlow-level-backup procedure in a nice way. We'd have to tell users\nthey have to flock that file, or add a new step \"pg_controldata --raw\n> pg_control\", which seems weird when they already have a connection\nto the server.\n\nMaybe it just doesn't matter if eg the pg_controldata program can\nspuriously fail if pointed at a running system, and I was being too\ndogmatic trying to fix even that. Maybe we should just focus on\nfixing backups. 
Even there, I am beginning to suspect we are solving\nthis problem in the wrong place when a higher level change could\nsimplify the problem away.\n\nIdea for future research: Perhaps pg_backup_stop()'s label-file\noutput should include the control file image (suitably encoded)? Then\nthe recovery-from-label code could completely ignore the existing\ncontrol file, and overwrite it using that copy. It's already\npartially ignoring it, by using the label file's checkpoint LSN\ninstead of the control file's. Perhaps the captured copy could\ninclude the correct LSN already, simplifying that code, and the low\nlevel backup procedure would not need any additional steps or caveats.\nNo more atomicity problem for low-level-backups... but probably not\nsomething we would back-patch, for such a rare failure mode.\n\nHere's a new minimal patch that solves only the bugs in basebackup +\nthe simple SQL-facing functions that read the control file, by simply\nacquiring ControlFileLock in the obvious places. This should be\nsimple enough for back-patching?\n\nPerhaps we could use the control file image from server memory, but\nthat requires us to be certain that its CRC is always up to date.\nThat seems to be true, but I didn't want to require it for this, and\nit doesn't seem important for non-performance-critical code.\n\nThoughts?\n\nAs for the other topics that came up in this thread, I kicked the\nwal_sync_method thing out to its own thread[1]. (There was a logical\nchain connecting these topics: \"can I add file lock system calls\nhere?\" -> \"well if someone is going to complain that it's performance\ncritical then why are we using unnecessarily slow pg_fsync()?\" ->\n\"well if we change that to pg_fdatasync() we have to address known\nweakness/kludge on macOS first\". 
I don't like the flock stuff\nanymore, but I do want to fix the known macOS problem independently.\nHereby disentangled.)\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BF0EL4Up6yVYbbcWse4xKaqW4wc2xpw67Pq9FjmByWVg%40mail.gmail.com", "msg_date": "Sat, 22 Jul 2023 12:51:58 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: odd buildfarm failure - \"pg_ctl: control file appears to be\n corrupt\"" }, { "msg_contents": "Greetings,\n\n(Adding David Steele into the CC on this one...)\n\n* Thomas Munro (thomas.munro@gmail.com) wrote:\n> This is a frustrating thread, because despite the last patch solving\n> most of the problems we discussed, it doesn't address the\n> low-level-backup procedure in a nice way. We'd have to tell users\n> they have to flock that file, or add a new step \"pg_controldata --raw\n> > pg_control\", which seems weird when they already have a connection\n> to the server.\n\nflock'ing the file strikes me as dangerous to ask external tools to do\ndue to the chances of breaking the running system if they don't do it\nright. I'm generally open to the idea of having the backup tool have to\ndo more complicated work to be correct but that just seems likely to\ncause issues. Also- haven't looked yet, but I'm not sure that's even\npossible if your backup tool is running as a user who only has read\naccess to the data directory? I don't want us to give up on that\nfeature.\n\n> Maybe it just doesn't matter if eg the pg_controldata program can\n> spuriously fail if pointed at a running system, and I was being too\n> dogmatic trying to fix even that. Maybe we should just focus on\n> fixing backups. Even there, I am beginning to suspect we are solving\n> this problem in the wrong place when a higher level change could\n> simplify the problem away.\n\nFor a running system.. perhaps pg_controldata should be connecting to\nthe database and calling functions there? 
Or just complain that the\nsystem is online and tell the user to do that?\n\n> Idea for future research: Perhaps pg_backup_stop()'s label-file\n> output should include the control file image (suitably encoded)? Then\n> the recovery-from-label code could completely ignore the existing\n> control file, and overwrite it using that copy. It's already\n> partially ignoring it, by using the label file's checkpoint LSN\n> instead of the control file's. Perhaps the captured copy could\n> include the correct LSN already, simplifying that code, and the low\n> level backup procedure would not need any additional steps or caveats.\n> No more atomicity problem for low-level-backups... but probably not\n> something we would back-patch, for such a rare failure mode.\n\nI like this general direction and wonder if we could possibly even push\na bit harder on it: have the backup_label include the control file's\ncontents in some form that is understood and then tell tools to *not*\ncopy the running pg_control file ... and maybe even complain if a\npg_control file exists when we detect that backup_label has the control\nfile's image. We've certainly had problems in the past where people\nwould just nuke the backup_label file, even though they were restoring\nfrom a backup, because they couldn't start the system up since their\nrestore command wasn't set up properly or their WAL archive was missing.\n\nBeing able to get rid of the control file being in the backup at all\nwould make it harder for someone to get to a corrupted-but-running\nsystem and that seems like it's a good thing.\n\n> Here's a new minimal patch that solves only the bugs in basebackup +\n> the simple SQL-facing functions that read the control file, by simply\n> acquiring ControlFileLock in the obvious places. 
This should be\n> simple enough for back-patching?\n\nI don't particularly like fixing this in a way that only works for\npg_basebackup and means that the users of other backup tools don't have\na solution to this problem. What are we supposed to tell users of\npgBackRest when they see this fix for pg_basebackup in the next set of\nminor releases and they ask us if we've addressed this risk?\n\nWe might be able to accept the 're-read on CRC failure' approach, if it\nwere being used for pg_controldata and we documented that external\ntools and especially backup tools using the low-level API are required\nto check the CRC and to re-read on failure if accessing a running\nsystem.\n\nWhile it's not ideal, maybe we could get away with changing the contents\nof the backup_label as part of a back-patch? The documentation, at\nleast on a quick look, says to copy the second field from pg_backup_stop\ninto a backup_label file but doesn't say anything about what those\ncontents are or if they can change. That would at least address the\nconcern of backups ending up with a corrupt pg_control file and not\nbeing able to be restored, even if tools aren't updated to verify the\nCRC or similar. Of course, that's a fair bit of code churn for a\nbackpatch, which I certainly understand as being risky. If we can't\nback-patch that, it might still be the way to go moving forward, while\nalso telling tools to check the CRC. 
(I'm not going to try to figure\nout some back-patchable pretend solution for this for shell scripts that\npretend to be able to backup running PG databases; this issue is\nprobably the least of their issues anyway...)\n\nA couple of interesting notes on this though- pgBackRest doesn't only\nread the pg_control file at backup time, we also check it at\narchive_command time, to make sure that the system ID and version that\nare in the control file match up with the information in the WAL file\nwe're getting ready to archive and that those match with the system ID\nand version of the repo/stanza into which we are pushing the WAL file.\nWe do read the control file on the replica but that isn't the one we\nactually push into the repo as part of a backup- that's always the one\nwe read from the primary (we don't currently support 'backup just from\nthe replica').\n\nComing out of our discussion regarding this, we're likely to move\nforward on the check-CRC-and-re-read approach for the next pgBackRest\nrelease. If PG provides a better solution for us to use, great, but\ngiven that this has been shown to happen, we're not intending to wait\naround for PG to provide us with a better fix.\n\nThanks,\n\nStephen", "msg_date": "Mon, 24 Jul 2023 14:04:37 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: odd buildfarm failure - \"pg_ctl: control file appears to be\n corrupt\"" }, { "msg_contents": "On Fri, Jul 21, 2023 at 8:52 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Idea for future research: Perhaps pg_backup_stop()'s label-file\n> output should include the control file image (suitably encoded)? Then\n> the recovery-from-label code could completely ignore the existing\n> control file, and overwrite it using that copy. It's already\n> partially ignoring it, by using the label file's checkpoint LSN\n> instead of the control file's. 
Perhaps the captured copy could\n> include the correct LSN already, simplifying that code, and the low\n> level backup procedure would not need any additional steps or caveats.\n> No more atomicity problem for low-level-backups... but probably not\n> something we would back-patch, for such a rare failure mode.\n\nI don't really know what the solution is, but this is a general\nproblem with the low-level backup API, and I think it sucks pretty\nhard. Here, we're talking about the control file, but the same problem\nexists with the data files. We try to work around that but it's all\nhacks. Unless your backup tool has special magic powers of some kind,\nyou can't take a backup using either pg_basebackup or the low-level\nAPI and then check that individual blocks have valid checksums, or\nthat they have sensible, interpretable contents, because they might\nnot. (Yeah, I know we have code to verify checksums during a base\nbackup, but as discussed elsewhere, it doesn't work.) It's also why we\nhave to force full-page write on during a backup. But the whole thing\nis nasty because you can't really verify anything about the backup you\njust took. It may be full of gibberish blocks but don't worry because,\nif all goes well, recovery will fix it. But you won't really know\nwhether recovery actually does fix it. You just kind of have to cross\nyour fingers and hope.\n\nIt's unclear to me how we could do better, especially when using the\nlow-level API. BASE_BACKUP could read via shared_buffers instead of\nthe FS, and I think that might be a good idea if we can defend\nadequately against cache poisoning, but with the low-level API someone\nmay just be calling a FS-level snapshot primitive. 
Unless we're\nprepared to pause all writes while that happens, I don't know how to\ndo better.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 24 Jul 2023 16:17:56 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: odd buildfarm failure - \"pg_ctl: control file appears to be\n corrupt\"" }, { "msg_contents": "On Tue, Jul 25, 2023 at 6:04 AM Stephen Frost <sfrost@snowman.net> wrote:\n> * Thomas Munro (thomas.munro@gmail.com) wrote:\n> > Here's a new minimal patch that solves only the bugs in basebackup +\n> > the simple SQL-facing functions that read the control file, by simply\n> > acquiring ControlFileLock in the obvious places. This should be\n> > simple enough for back-patching?\n>\n> I don't particularly like fixing this in a way that only works for\n> pg_basebackup and means that the users of other backup tools don't have\n> a solution to this problem. What are we supposed to tell users of\n> pgBackRest when they see this fix for pg_basebackup in the next set of\n> minor releases and they ask us if we've addressed this risk?\n>\n> We might be able to accept the 're-read on CRC failure' approach, if it\n> were being used for pg_controldata and we documented that external\n> tools and especially backup tools using the low-level API are required\n> to check the CRC and to re-read on failure if accessing a running\n> system.\n\nHi Stephen, David, and thanks for looking. Alright, let's try that idea out.\n\n0001 + 0002. Acquire the lock in the obvious places in the backend,\nto completely exclude the chance of anything going wrong for the easy\ncases, including pg_basebackup. (This is the v4 from yesterday).\nAnd...\n\n0003. Document this problem and how to detect it, in the\nlow-level-backup section. Better words welcome. And...\n\n0004. Retries for front-end programs, using the idea suggested by Tom\n(to wit: if it fails, retry until it fails with the same CRC value\ntwice). 
It's theoretically imperfect, but it's highly unlikely to\nfail in practice and the best we can do without file locks or a\nconnection to the server AFAICT. (This is the first patch I posted,\nadjusted to give up after 10 (?) loops, and not to bother at all in\nbackend code since that takes ControlFileLock with the 0001 patch).\n\n> While it's not ideal, maybe we could get away with changing the contents\n> of the backup_label as part of a back-patch? The documentation, at\n> least on a quick look, says to copy the second field from pg_backup_stop\n> into a backup_label file but doesn't say anything about what those\n> contents are or if they can change. That would at least address the\n> concern of backups ending up with a corrupt pg_control file and not\n> being able to be restored, even if tools aren't updated to verify the\n> CRC or similar. Of course, that's a fair bit of code churn for a\n> backpatch, which I certainly understand as being risky. If we can't\n> back-patch that, it might still be the way to go moving forward, while\n> also telling tools to check the CRC. (I'm not going to try to figure\n> out some back-patchable pretend solution for this for shell scripts that\n> pretend to be able to backup running PG databases; this issue is\n> probably the least of their issues anyway...)\n\nI think you're probably right that anyone following those instructions\nwould be OK if we just back-patched such a thing, but it all seems a\nlittle too much to me. +1 to looking into that for v17 (and I guess\nmaybe someone could eventually argue for back-patching much later with\nexperience). I'm sure other solutions are possible too... 
other\nplaces to put a safe atomic copy of the control file could include: in\nthe WAL, or in extra files (pg_control.XXX) in the data directory +\nsome infallible way for recovery to choose which one to start up from.\nOr something.", "msg_date": "Tue, 25 Jul 2023 12:25:25 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: odd buildfarm failure - \"pg_ctl: control file appears to be\n corrupt\"" }, { "msg_contents": "On Tue, Jul 25, 2023 at 8:18 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> (Yeah, I know we have code to verify checksums during a base\n> backup, but as discussed elsewhere, it doesn't work.)\n\nBTW the code you are referring to there seems to think 4KB\npage-halves are atomic; not sure if that's imagining page-level\nlocking in ancient Linux (?), or imagining default setvbuf() buffer\nsize observed with some specific implementation of fread(), or\nconfusing power-failure-sector-based atomicity with concurrent access\natomicity, or something else, but for the record what we actually see\nin this scenario on ext4 is the old/new page contents mashed together\non much smaller boundaries (maybe cache lines), caused by duelling\nconcurrent memcpy() to/from, independent of any buffer/page-level\nimplementation details we might have been thinking of with that code.\nMakes me wonder if it's even technically sound to examine the LSN.\n\n> It's also why we\n> have to force full-page write on during a backup. But the whole thing\n> is nasty because you can't really verify anything about the backup you\n> just took. It may be full of gibberish blocks but don't worry because,\n> if all goes well, recovery will fix it. But you won't really know\n> whether recovery actually does fix it. You just kind of have to cross\n> your fingers and hope.\n\nWell, not without also scanning the WAL for FPIs, anyway...
And\nconceptually, that's why I think we probably want an 'FPI' of the\ncontrol file somewhere.\n\n\n", "msg_date": "Tue, 25 Jul 2023 13:36:03 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: odd buildfarm failure - \"pg_ctl: control file appears to be\n corrupt\"" }, { "msg_contents": "While chatting to Robert and Andres about all this, a new idea came\nup. Or, rather, one of the first ideas that was initially rejected,\nnow resurrected to try out a suggestion of Andres’s on how to\nde-pessimise it. Unfortunately, it also suffers from Windows-specific\nproblems that I originally mentioned at the top of this thread but\nhad since repressed. Arrrghgh.\n\nFirst, the good news:\n\nWe could write out a whole new control file, and durable_rename() it\ninto place. We don’t want to do that in general, because we don’t\nwant to slow down UpdateMinRecoveryPoint(). The new concept is to do\nthat only if a backup is in progress. That requires a bit of\ninterlocking with backup start/stop (ie when runningBackups is\nchanging in shmem, we don’t want to overlap with UpdateControlFile()'s\ndecision on how to do it). Here is a patch to try that out. No more\nweasel wording needed for the docs; basebackup and low-level file\nsystem backup should always see an atomic control file (and\noccasionally also copy a harmless pg_control.tmp file). 
Then we only\nneed the gross retry-until-stable hack for front-end programs.\n\nAnd the bad news:\n\nIn my catalogue-of-Windows-weirdness project[1], I learned in v3-0003 that:\n\n+ fd = open(path, O_CREAT | O_EXCL | O_RDWR | PG_BINARY, 0777);\n+ PG_EXPECT_SYS(fd >= 0, \"touch name1.txt\");\n+ PG_REQUIRE_SYS(fd < 0 || close(fd) == 0);\n+\n+ fd = open(path2, O_RDWR | PG_BINARY, 0777);\n+ PG_EXPECT_SYS(fd >= 0, \"open name2.txt\");\n+ make_path(path2, \"name2.txt\");\n+#ifdef WIN32\n+\n+ /*\n+ * Windows can't rename over an open non-unlinked file, even with\n+ * have_posix_unlink_semantics.\n+ */\n+ pgwin32_dirmod_loops = 2; /* minimize looping to fail fast in testing */\n+ PG_EXPECT_SYS(rename(path, path2) == -1,\n+ \"Windows: can't rename name1.txt -> name2.txt while name2.txt is open\");\n+ PG_EXPECT_EQ(errno, EACCES);\n+ PG_EXPECT_SYS(unlink(path) == 0, \"unlink name1.txt\");\n+#else\n+ PG_EXPECT_SYS(rename(path, path2) == 0,\n+ \"POSIX: can rename name1.txt -> name2.txt while name2.txt is open\");\n+#endif\n+ PG_EXPECT_SYS(close(fd) == 0);\n\nLuckily the code in dirmod.c:pgrename() should retry lots of times if\na concurrent transient opener/reader comes along, so I think that\nshould be OK in practice (but if backups_r_us.exe holds the file open\nfor 10 seconds while we're trying to rename it, I assume we'll PANIC);\ncall that problem #1. What is slightly more disturbing is the clue in\nthe \"Cygwin cleanup\" thread[2] that rename() can fail to be 100%\natomic, so that a concurrent call to open() can fail with ENOENT (cf.\nthe POSIX requirement \"... a link named new shall remain visible to\nother processes throughout the renaming operation and refer either to\nthe file referred to by new or old ...\"). 
Call that problem #2, a\nproblem that already causes us rare breakage (for example: could not\nopen file \"pg_logical/snapshots/0-14FE6B0.snap\").\n\nI know that problem #1 can be fixed by applying v3-0004 from [1] but\nthat leads to impossible decisions like revoking support for non-NTFS\nfilesystems as discussed in that thread, and we certainly couldn't\nback-patch that anyway. I assume problem #2 can too.\n\nThat makes me want to *also* acquire ControlFileLock, for base\nbackup's read of pg_control. Even though it seems redundant with the\nrename() trick (the rename() trick should be enough for low-level\n*and* basebackup on ext4), it would at least avoid the above\nWindowsian pathologies during base backups.\n\nI'm sorry for the patch/idea-churn in this thread. It's like\nWhac-a-Mole. Blasted non-POSIX-compliant moles. New patches\nattached. Are they getting better?\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BajSQ_8eu2AogTncOnZ5me2D-Cn66iN_-wZnRjLN%2Bicg%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/flat/CA%2BhUKG%2Be13wK0PBX5Z63CCwWm7MfRQuwBRabM_3aKWSko2AUww%40mail.gmail.com", "msg_date": "Wed, 26 Jul 2023 16:06:31 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: odd buildfarm failure - \"pg_ctl: control file appears to be\n corrupt\"" }, { "msg_contents": "Hi Thomas,\n\nOn 7/26/23 06:06, Thomas Munro wrote:\n> While chatting to Robert and Andres about all this, a new idea came\n> up. Or, rather, one of the first ideas that was initially rejected,\n> now resurrected to try out a suggestion of Andres’s on how to\n> de-pessimise it. Unfortunately, it also suffers from Windows-specific\n> problems that I originally mentioned at the top of this thread but\n> had since repressed. Arrrghgh.\n> \n> First, the good news:\n> \n> We could write out a whole new control file, and durable_rename() it\n> into place. 
We don’t want to do that in general, because we don’t\n> want to slow down UpdateMinRecoveryPoint(). The new concept is to do\n> that only if a backup is in progress. That requires a bit of\n> interlocking with backup start/stop (ie when runningBackups is\n> changing in shmem, we don’t want to overlap with UpdateControlFile()'s\n> decision on how to do it). Here is a patch to try that out. No more\n> weasel wording needed for the docs; basebackup and low-level file\n> system backup should always see an atomic control file (and\n> occasionally also copy a harmless pg_control.tmp file). Then we only\n> need the gross retry-until-stable hack for front-end programs.\n\nI like the approach in these patches better than the last patch set. My \nonly concern would be possible performance regression on standbys (when \ndoing backup from standby) since pg_control can be written very \nfrequently to update min recovery point.\n\nI've made a first pass through the patches and they look generally \nreasonable (and back patch-able).\n\nOne thing:\n\n+ sendFileWithContent(sink, XLOG_CONTROL_FILE,\n+ (char *) control_file, sizeof(*control_file),\n+ &manifest);\n\nI wonder if we should pad pg_control out to 8k so it remains the same \nsize as now? Postgres doesn't care, but might look odd to users, and is \narguably a change in behavior that should not be back patched.\n\n> And the bad news:\n\nProvided we can reasonably address the Windows issues this seems to be \nthe way to go.\n\nRegards,\n-David\n\n\n", "msg_date": "Thu, 27 Jul 2023 10:18:47 +0200", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: odd buildfarm failure - \"pg_ctl: control file appears to be\n corrupt\"" }, { "msg_contents": "Hello!\n\nOn 26.07.2023 07:06, Thomas Munro wrote:\n> New patches\n> attached. Are they getting better?\n\nIt seems to me that it is worth focusing efforts on the second part of the patch,\nas the most in demand. 
And try to commit it first.\n\nAnd seems there is a way to simplify it by adding a parameter to get_controlfile() that will return calculated\ncrc and moving the repetition logic level up.\n\nThere is a proposed algorithm in alg_level_up.pdf attached.\n\n[Excuse me, for at least three days i will be in a place where there is no available Internet. \\\nSo will be able to read this thread no earlier than August 2 evening]\n\nWith the best wishes,\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Sun, 30 Jul 2023 22:22:49 +0300", "msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>", "msg_from_op": false, "msg_subject": "Re: odd buildfarm failure - \"pg_ctl: control file appears to be\n corrupt\"" }, { "msg_contents": "Sorry, attached the wrong version of the file. Here is the right one.\n\nSincerely yours,\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Sun, 30 Jul 2023 22:30:50 +0300", "msg_from": "\"Anton A. 
Melnikov\" <aamelnikov@inbox.ru>", "msg_from_op": false, "msg_subject": "Re: odd buildfarm failure - \"pg_ctl: control file appears to be\n corrupt\"" }, { "msg_contents": "I'm planning to push 0002 (retries in frontend programs, which is\nwhere this thread began) and 0004 (add missing locks to SQL\nfunctions), including back-patches as far as 12, in a day or so.\n\nI'll abandon the others for now, since we're now thinking bigger[1]\nfor backups, side stepping the problem.\n\n[1] https://www.postgresql.org/message-id/flat/1330cb48-4e47-03ca-f2fb-b144b49514d8%40pgmasters.net\n\n\n", "msg_date": "Thu, 12 Oct 2023 12:25:34 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: odd buildfarm failure - \"pg_ctl: control file appears to be\n corrupt\"" }, { "msg_contents": "On Thu, Oct 12, 2023 at 12:25:34PM +1300, Thomas Munro wrote:\n> I'm planning to push 0002 (retries in frontend programs, which is\n> where this thread began) and 0004 (add missing locks to SQL\n> functions), including back-patches as far as 12, in a day or so.\n> \n> I'll abandon the others for now, since we're now thinking bigger[1]\n> for backups, side stepping the problem.\n\nFWIW, 0003 looks like a low-risk improvement seen from here, so I'd be\nOK to use it at least for now on HEAD before seeing where the other\ndiscussions lead. 
0004 would be OK if applied to v11, as well, but I\nalso agree that it is not a big deal to let this branch be as it is\nnow at this stage if you feel strongly this way.\n--\nMichael", "msg_date": "Thu, 12 Oct 2023 10:10:04 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: odd buildfarm failure - \"pg_ctl: control file appears to be\n corrupt\"" }, { "msg_contents": "\n\nOn 10/11/23 21:10, Michael Paquier wrote:\n> On Thu, Oct 12, 2023 at 12:25:34PM +1300, Thomas Munro wrote:\n>> I'm planning to push 0002 (retries in frontend programs, which is\n>> where this thread began) and 0004 (add missing locks to SQL\n>> functions), including back-patches as far as 12, in a day or so.\n>>\n>> I'll abandon the others for now, since we're now thinking bigger[1]\n>> for backups, side stepping the problem.\n> \n> FWIW, 0003 looks like a low-risk improvement seen from here, so I'd be\n> OK to use it at least for now on HEAD before seeing where the other\n> discussions lead. 0004 would be OK if applied to v11, as well, but I\n> also agree that it is not a big deal to let this branch be as it is\n> now at this stage if you feel strongly this way.\n\nAgreed on 0002 and 0004, though I don't really think a back patch of \n0004 to 11 is necessary. I'd feel differently if there was a single \nfield report of this issue.\n\nI would prefer to hold off on applying 0003 to HEAD until we see how [1] \npans out.\n\nHaving said that, I have a hard time seeing [1] as being something we \ncould back patch. The manipulation of backup_label is simple enough, but \nstarting a cluster without pg_control is definitely going to change some \nthings. Also, the requirement that backup software skip copying \npg_control after a minor release is not OK.\n\nIf we do back patch 0001 is 0003 really needed? 
Surely if 0001 works \nwith other backup software it would work fine for pg_basebackup.\n\nRegards,\n-David\n\n[1] \nhttps://www.postgresql.org/message-id/flat/1330cb48-4e47-03ca-f2fb-b144b49514d8%40pgmasters.net\n\n\n", "msg_date": "Thu, 12 Oct 2023 09:58:29 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: odd buildfarm failure - \"pg_ctl: control file appears to be\n corrupt\"" }, { "msg_contents": "On 10/12/23 09:58, David Steele wrote:\n>> On Thu, Oct 12, 2023 at 12:25:34PM +1300, Thomas Munro wrote:\n>>> I'm planning to push 0002 (retries in frontend programs, which is\n>>> where this thread began) and 0004 (add missing locks to SQL\n>>> functions), including back-patches as far as 12, in a day or so.\n>>>\n>>> I'll abandon the others for now, since we're now thinking bigger[1]\n>>> for backups, side stepping the problem.\n>>\n>> FWIW, 0003 looks like a low-risk improvement seen from here, so I'd be\n>> OK to use it at least for now on HEAD before seeing where the other\n>> discussions lead.  0004 would be OK if applied to v11, as well, but I\n>> also agree that it is not a big deal to let this branch be as it is\n>> now at this stage if you feel strongly this way.\n> \n> Agreed on 0002 and 0004, though I don't really think a back patch of \n> 0004 to 11 is necessary. I'd feel differently if there was a single \n> field report of this issue.\n> \n> I would prefer to hold off on applying 0003 to HEAD until we see how [1] \n> pans out.\n> \n> Having said that, I have a hard time seeing [1] as being something we \n> could back patch. The manipulation of backup_label is simple enough, but \n> starting a cluster without pg_control is definitely going to change some \n> things. 
Also, the requirement that backup software skip copying \n> pg_control after a minor release is not OK.\n> \n\nAfter some more thought, I think we could massage the \"pg_control in \nbackup_label\" method into something that could be back patched, with \nmore advanced features (e.g. error on backup_label and pg_control both \npresent on initial cluster start) saved for HEAD.\n\nRegards,\n-David\n\n\n", "msg_date": "Thu, 12 Oct 2023 10:41:39 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: odd buildfarm failure - \"pg_ctl: control file appears to be\n corrupt\"" }, { "msg_contents": "On Thu, Oct 12, 2023 at 10:41:39AM -0400, David Steele wrote:\n> After some more thought, I think we could massage the \"pg_control in\n> backup_label\" method into something that could be back patched, with more\n> advanced features (e.g. error on backup_label and pg_control both present on\n> initial cluster start) saved for HEAD.\n\nI doubt that anything changed in this area would be in the\nbackpatchable zone, particularly as it would involve protocol changes\nwithin the replication commands, so I'd recommend to focus on HEAD.\nBackward-compatibility is not much of a conern as long as the backend\nis involved. The real problem here would be on the frontend side and\nhow much effort we should try to put in maintaining the code of\npg_basebackup compatible with older backends.\n--\nMichael", "msg_date": "Fri, 13 Oct 2023 08:15:56 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: odd buildfarm failure - \"pg_ctl: control file appears to be\n corrupt\"" }, { "msg_contents": "On 10/12/23 19:15, Michael Paquier wrote:\n> On Thu, Oct 12, 2023 at 10:41:39AM -0400, David Steele wrote:\n>> After some more thought, I think we could massage the \"pg_control in\n>> backup_label\" method into something that could be back patched, with more\n>> advanced features (e.g. 
error on backup_label and pg_control both present on\n>> initial cluster start) saved for HEAD.\n> \n> I doubt that anything changed in this area would be in the\n> backpatchable zone, particularly as it would involve protocol changes\n> within the replication commands, so I'd recommend to focus on HEAD.\n\nI can't see why there would be any protocol changes, but perhaps I am \nmissing something?\n\nOne thing that does have to change, however, is the ordering of \nbackup_label in the base tar file. Right now it is at the beginning but \nwe need it to be at the end like pg_control is now.\n\nI'm working up a POC patch now and hope to have something today or \ntomorrow. I think it makes sense to at least have a look at an \nalternative solution before going forward.\n\nRegards,\n-David\n\n\n\n", "msg_date": "Fri, 13 Oct 2023 10:40:44 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: odd buildfarm failure - \"pg_ctl: control file appears to be\n corrupt\"" }, { "msg_contents": "On 10/13/23 10:40, David Steele wrote:\n> On 10/12/23 19:15, Michael Paquier wrote:\n>> On Thu, Oct 12, 2023 at 10:41:39AM -0400, David Steele wrote:\n>>> After some more thought, I think we could massage the \"pg_control in\n>>> backup_label\" method into something that could be back patched, with \n>>> more\n>>> advanced features (e.g. error on backup_label and pg_control both \n>>> present on\n>>> initial cluster start) saved for HEAD.\n>>\n>> I doubt that anything changed in this area would be in the\n>> backpatchable zone, particularly as it would involve protocol changes\n>> within the replication commands, so I'd recommend to focus on HEAD.\n> \n> I can't see why there would be any protocol changes, but perhaps I am \n> missing something?\n> \n> One thing that does have to change, however, is the ordering of \n> backup_label in the base tar file. 
Right now it is at the beginning but \n> we need it to be at the end like pg_control is now.\n\nWell, no protocol changes, but overall this does not seem like a \ndirection that would be even remotely back patch-able. See [1] for details.\n\nFor back branches that puts us back to committing some form of 0001 and \n0003. I'm still worried about the performance implications of 0001 on a \nstandby when in backup mode, but I don't have any better ideas.\n\nIf we do commit 0001 and 0003 to the back branches I'd still like to \nhold off on HEAD to see if we can do something better there.\n\nRegards,\n-David\n\n[1] \nhttps://www.postgresql.org/message-id/e05f20f9-6f91-9a70-efab-9a2ae472e65d%40pgmasters.net\n\n\n", "msg_date": "Sat, 14 Oct 2023 11:42:23 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: odd buildfarm failure - \"pg_ctl: control file appears to be\n corrupt\"" }, { "msg_contents": "I pushed the retry-loop-in-frontend-executables patch and the\nmissing-locking-in-SQL-functions patch yesterday. That leaves the\nbackup ones, which I've rebased and attached, no change. It sounds\nlike we need some more healthy debate about that backup label idea\nthat would mean we don't need these (two birds with one stone), so\nI'll just leave these here to keep the cfbot happy in the meantime.", "msg_date": "Tue, 17 Oct 2023 11:45:21 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: odd buildfarm failure - \"pg_ctl: control file appears to be\n corrupt\"" }, { "msg_contents": "On Mon, Oct 16, 2023 at 6:48 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> I pushed the retry-loop-in-frontend-executables patch and the\n> missing-locking-in-SQL-functions patch yesterday. That leaves the\n> backup ones, which I've rebased and attached, no change. 
It sounds\n> like we need some more healthy debate about that backup label idea\n> that would mean we don't need these (two birds with one stone), so\n> I'll just leave these here to keep the cfbot happy in the meantime.\n\n0002 has no comments at all, and the commit message is not specific\nenough for me to understand what problem it fixes. I suggest adding\nsome comments and fixing the commit message. I'm also not very sure\nwhether the change to the signature of sendFileWithContent is really\nthe best way to deal with the control file maybe containing a zero\nbyte ... but I'm quite sure that if we're going to do it that way, it\nneeds a comment. But maybe we should do something else that would\nrequire less explanation, like having the caller always pass the\nlength.\n\nRegarding 0001, the way you've hacked up update_controlfile() doesn't\nfill me with joy. It's nice if code that is common to the frontend and\nthe backend does the same thing in both cases rather than, say, having\nan extra argument that only works in one case but not the other. I bet\nthis could be refactored to make it nicer, e.g. have one function that\ntakes an exact pathname at which the control file is to be written and\nthen other functions that use it as a subroutine.\n\nPersonally, I think the general idea of 0001 is better than any\ncompeting proposal on the table. In the case of pg_basebackup, we\ncould fix the server to perform appropriate locking around reading the\ncontrol file, so that the version sent to the client doesn't get torn.\nBut if a backup is made by calling pg_backup_start() and copying\n$PGDATA, that isn't going to work. 
To fix that, we need to either make\nthe backup procedure more complicated and essentially treat the\ncontrol file as a special case, or we need to do something like this.\nI think this is better; but as you mention, opinions vary on that.\n\nLife would be a lot easier here if we could get rid of the low-level\nbackup API and just have pg_basebackup DTWT, but that seems like a\ncompletely non-viable proposal.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 17 Oct 2023 13:47:23 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: odd buildfarm failure - \"pg_ctl: control file appears to be\n corrupt\"" }, { "msg_contents": "On Tue, Oct 17, 2023 at 10:50 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> Life would be a lot easier here if we could get rid of the low-level\n> backup API and just have pg_basebackup DTWT, but that seems like a\n> completely non-viable proposal.\n>\n\nYeah, my contribution to this area [1] is focusing on the API because I\nfigured we've provided it and should do our best to have it do as much as\npossible for the dba or third-parties that build tooling on top of it.\n\nI kinda think that adding a pg_backup_metadata directory that\npg_backup_start|stop can use may help here. I'm wondering whether those\nfunctions provide enough control guarantees that pg_control's\n\"in_backup=true|false\" flag proposed in that thread is reliable enough when\ncopied to the root directory in the backup. 
I kinda feel that so long as\nthe flag is reliable it should be possible for the signal file processing\ncode to implement whatever protocol we need.\n\nDavid J.\n\n[1]\nhttps://www.postgresql.org/message-id/CAKFQuwbpz4s8XP_%2BKhsif2eFaC78wpTbNbevUYBmjq-UCeNL7Q%40mail.gmail.com", "msg_date": "Tue, 17 Oct 2023 11:04:56 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: odd buildfarm failure - \"pg_ctl: control file appears to be\n corrupt\"" }, { "msg_contents": "On Tue, 17 Oct 2023 at 04:18, Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> I pushed the retry-loop-in-frontend-executables patch and the\n> missing-locking-in-SQL-functions patch yesterday. That leaves the\n> backup ones, which I've rebased and attached, no change. 
It sounds\n> like we need some more healthy debate about that backup label idea\n> that would mean we don't need these (two birds with one stone), so\n> I'll just leave these here to keep the cfbot happy in the meantime.\n\nI have changed the status of this to \"Waiting on Author\" as Robert's\ncomments have not yet been handled. Feel free to post an updated\nversion and change the status accordingly.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Thu, 11 Jan 2024 19:50:46 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: odd buildfarm failure - \"pg_ctl: control file appears to be\n corrupt\"" }, { "msg_contents": "On Thu, 11 Jan 2024 at 19:50, vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Tue, 17 Oct 2023 at 04:18, Thomas Munro <thomas.munro@gmail.com> wrote:\n> >\n> > I pushed the retry-loop-in-frontend-executables patch and the\n> > missing-locking-in-SQL-functions patch yesterday. That leaves the\n> > backup ones, which I've rebased and attached, no change. It sounds\n> > like we need some more healthy debate about that backup label idea\n> > that would mean we don't need these (two birds with one stone), so\n> > I'll just leave these here to keep the cfbot happy in the meantime.\n>\n> I have changed the status of this to \"Waiting on Author\" as Robert's\n> comments have not yet been handled. Feel free to post an updated\n> version and change the status accordingly.\n\nThe patch which you submitted has been awaiting your attention for\nquite some time now. As such, we have moved it to \"Returned with\nFeedback\" and removed it from the reviewing queue. Depending on\ntiming, this may be reversible. 
Kindly address the feedback you have\nreceived, and resubmit the patch to the next CommitFest.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Thu, 1 Feb 2024 23:49:39 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: odd buildfarm failure - \"pg_ctl: control file appears to be\n corrupt\"" } ]
[ { "msg_contents": "I have psql working with readline (roughly) in windows 10!\nin my attempt to test it...\n\n>> 1..0 # SKIP IO::Pty is needed to run this test\n\nI would like to run these tests to see how far off I am...\n(Randomly typing sql and squealing like a child has its limits)\n\nI have built this using VS 2022 Community Edition.\n\nThe quick search failed to find an obvious answer.\nOne note in one of the strawberry .pm files read:\n>ptys are not supported yet under Win32, but will be emulated...\n\nThanks in advance!", "msg_date": "Wed, 23 Nov 2022 01:31:09 -0500", "msg_from": "Kirk Wolak <wolakk@gmail.com>", "msg_from_op": true, "msg_subject": "Help running 010_tab_completion.pl on windows" } ]
[ { "msg_contents": "A little while ago we discussed briefly over in the meson thread whether \nwe could remove the postmaster symlink [0]. The meson build system \ncurrently does not install a postmaster symlink. (AFAICT, the MSVC \nbuild system does not either.) So if we want to elevate the meson build \nsystem, we either need to add the postmaster symlink, or remove it from \nthe other build system(s) as well. Seeing that it's been deprecated for \na long time, I propose we just remove it. See attached patches.\n\n\n[0]: \nhttps://www.postgresql.org/message-id/bfdf03c8-c24f-c5b1-474e-4c9a96210f46@enterprisedb.com", "msg_date": "Wed, 23 Nov 2022 08:52:43 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "drop postmaster symlink" }, { "msg_contents": "On 11/23/22 02:52, Peter Eisentraut wrote:\n> A little while ago we discussed briefly over in the meson thread whether\n> we could remove the postmaster symlink [0]. The meson build system\n> currently does not install a postmaster symlink. (AFAICT, the MSVC\n> build system does not either.) So if we want to elevate the meson build\n> system, we either need to add the postmaster symlink, or remove it from\n> the other build system(s) as well. Seeing that it's been deprecated for\n> a long time, I propose we just remove it. 
See attached patches.\n\nI am a big +1 on removing the symlink, however it is worth pointing out \nthat the PGDG RPMs still use the symlink in the included systemd service \nfile:\n\n8<----------\nExecStart=/usr/pgsql-15/bin/postmaster -D ${PGDATA}\n8<----------\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Wed, 23 Nov 2022 09:18:30 -0500", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: drop postmaster symlink" }, { "msg_contents": "Hi,\n\nOn Wed, 2022-11-23 at 09:18 -0500, Joe Conway wrote:\n> I am a big +1 on removing the symlink, however it is worth pointing\n> out \n> that the PGDG RPMs still use the symlink in the included systemd\n> service \n> file:\n> \n> 8<----------\n> ExecStart=/usr/pgsql-15/bin/postmaster -D ${PGDATA}\n\n\n...and it helps us to find the \"main\" process a bit easily.\n\nRegards,\n-- \nDevrim Gündüz\nOpen Source Solution Architect, Red Hat Certified Engineer\nTwitter: @DevrimGunduz , @DevrimGunduzTR", "msg_date": "Wed, 23 Nov 2022 14:28:32 +0000", "msg_from": "Devrim =?ISO-8859-1?Q?G=FCnd=FCz?= <devrim@gunduz.org>", "msg_from_op": false, "msg_subject": "Re: drop postmaster symlink" }, { "msg_contents": "Devrim =?ISO-8859-1?Q?G=FCnd=FCz?= <devrim@gunduz.org> writes:\n> ...and it helps us to find the \"main\" process a bit easily.\n\nHmm, that's a nontrivial point perhaps. It's certain that this\nwill break some other people's start scripts too. 
On the whole,\nis it really that hard to add the symlink to the meson build?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 23 Nov 2022 10:07:49 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: drop postmaster symlink" }, { "msg_contents": "Hi,\n\nOn 2022-11-23 10:07:49 -0500, Tom Lane wrote:\n> Devrim =?ISO-8859-1?Q?G=FCnd=FCz?= <devrim@gunduz.org> writes:\n> > ...and it helps us to find the \"main\" process a bit easily.\n> \n> Hmm, that's a nontrivial point perhaps. It's certain that this\n> will break some other people's start scripts too.\n\nOTOH, postmaster has been deprecated for ~15 years.\n\n\n> On the whole, is it really that hard to add the symlink to the meson build?\n\nNo. Meson has a builtin command for it, just not in the meson version we're\ncurrently requiring. We can create the symlink ourselves instead. The problem\nis just detecting systems where we can't symlink and what to fall back to\nthere.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 23 Nov 2022 11:50:10 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: drop postmaster symlink" }, { "msg_contents": "On Wed, Nov 23, 2022 at 2:50 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-11-23 10:07:49 -0500, Tom Lane wrote:\n> > Devrim =?ISO-8859-1?Q?G=FCnd=FCz?= <devrim@gunduz.org> writes:\n> > > ...and it helps us to find the \"main\" process a bit easily.\n> >\n> > Hmm, that's a nontrivial point perhaps. It's certain that this\n> > will break some other people's start scripts too.\n>\n> OTOH, postmaster has been deprecated for ~15 years.\n\nYeah. Also, I don't think it's generally too hard to find the parent\nprocess anyway, because at least on my system, the other ones end up\nwith ps display that looks like \"postgres: logical replication\nlauncher\" or whatever. 
The main process doesn't set the ps status\ndisplay, so that's the only one that shows a full path to the\nexecutable in the ps status, which is how I usually spot it. That has\nthe advantage that it doesn't matter which name was used to launch it,\ntoo.\n\nI don't actually care very much whether we get rid of the postmaster\nsymlink or not, but if we aren't going to, we should stop calling it\ndeprecated. If 15 years isn't enough time to remove it, what ever will\nbe? I tend to think it's fairly pointless and perhaps also a bit\nconfusing, because the product is postgres not postmaster and people\ncan reasonably expect the binary name to match the product name. But\nif we keep it, I don't think anything too dire will happen, either.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 23 Nov 2022 15:10:05 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: drop postmaster symlink" }, { "msg_contents": "On 11/23/22 15:10, Robert Haas wrote:\n> On Wed, Nov 23, 2022 at 2:50 PM Andres Freund <andres@anarazel.de> wrote:\n>> On 2022-11-23 10:07:49 -0500, Tom Lane wrote:\n>> > Devrim =?ISO-8859-1?Q?G=FCnd=FCz?= <devrim@gunduz.org> writes:\n>> > > ...and it helps us to find the \"main\" process a bit easily.\n>> >\n>> > Hmm, that's a nontrivial point perhaps. It's certain that this\n>> > will break some other people's start scripts too.\n>>\n>> OTOH, postmaster has been deprecated for ~15 years.\n> \n> Yeah. Also, I don't think it's generally too hard to find the parent\n> process anyway, because at least on my system, the other ones end up\n> with ps display that looks like \"postgres: logical replication\n> launcher\" or whatever. The main process doesn't set the ps status\n> display, so that's the only one that shows a full path to the\n> executable in the ps status, which is how I usually spot it. 
That has\n> the advantage that it doesn't matter which name was used to launch it,\n> too.\n\nSame here\n\n> I don't actually care very much whether we get rid of the postmaster\n> symlink or not, but if we aren't going to, we should stop calling it\n> deprecated. If 15 years isn't enough time to remove it, what ever will\n> be? I tend to think it's fairly pointless and perhaps also a bit\n> confusing, because the product is postgres not postmaster and people\n> can reasonably expect the binary name to match the product name. But\n> if we keep it, I don't think anything too dire will happen, either.\n\nFWIW, the reason I took note of the postmaster symlink in the first \nplace a few years ago was because selinux treats execution of programs \nfrom symlinks differently than from actual files.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Wed, 23 Nov 2022 15:32:17 -0500", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: drop postmaster symlink" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-11-23 10:07:49 -0500, Tom Lane wrote:\n>> On the whole, is it really that hard to add the symlink to the meson build?\n\n> No. Meson has a builtin command for it, just not in the meson version we're\n> currently requiring. We can create the symlink ourselves instead. The problem\n> is just detecting systems where we can't symlink and what to fall back to\n> there.\n\nThis isn't a hill I want to die on, either way. 
But \"it's a bit\nmore complicated in meson\" seems like a poor reason for changing\nthe user-visible installed fileset.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 23 Nov 2022 15:48:04 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: drop postmaster symlink" }, { "msg_contents": "Hi,\n\nOn 2022-11-23 15:48:04 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-11-23 10:07:49 -0500, Tom Lane wrote:\n> >> On the whole, is it really that hard to add the symlink to the meson build?\n>\n> > No. Meson has a builtin command for it, just not in the meson version we're\n> > currently requiring. We can create the symlink ourselves instead. The problem\n> > is just detecting systems where we can't symlink and what to fall back to\n> > there.\n>\n> This isn't a hill I want to die on, either way. But \"it's a bit\n> more complicated in meson\" seems like a poor reason for changing\n> the user-visible installed fileset.\n\nI wouldn't even have thought about proposing dropping the symlink if it hadn't\nbeen deprecated forever, and I suspect Peter wouldn't have either...\n\nI think this is a bit more more complicated than \"changing the user-visible\ninstalled fileset\" because we didn't have logic to create 'postmaster' on\nwindows before, afaik the only OS without reliable symlink support.\n\nAnyway, my current thinking is to have dumb OS dependent behaviour and create\na symlink everywhere but windows, where we'd just copy the file.\n\nOr we could just continue to not install 'postmaster' on windows, because of\nthe potential confusion that 'postmaster.exe' differing from 'postgres.exe'\ncould cause.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 23 Nov 2022 13:08:05 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: drop postmaster symlink" }, { "msg_contents": "> On 23 Nov 2022, at 21:10, Robert Haas <robertmhaas@gmail.com> 
wrote:\n\n> I don't actually care very much whether we get rid of the postmaster\n> symlink or not, but if we aren't going to, we should stop calling it\n> deprecated. If 15 years isn't enough time to remove it, what ever will\n> be?\n\n+1. If we actively add support for something then it isn't really all that\ndeprecated IMHO.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 24 Nov 2022 01:15:03 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: drop postmaster symlink" }, { "msg_contents": "Hello,\n\nThis is a review of Peter's 2 patches. I see only 1 small problem.\n\n +++\n\nLooking at the documentation, a \"postmaster\" in the glossary is\ndefined as the controlling process. This works; it needs to be called\nsomething. There is still a postmaster.pid (etc.) in the data\ndirectory.\n\nThe word \"postmaster\" (case insensitive) shows up 84 times in the\ndocumentation. I looked at all of these. \n\nI see a possible problem at line 1,412 of runtime.sgml, the \"Linux\nMemory Overcommit\" section. It talks about the postmaster's startup\nscript invoking the postmaster. It might, possibly, be better to say\nthat \"postgres\" is invoked, that being the name of the invoked program.\nThere's a similar usage at line 1,416. On the other hand, the existing\ntext makes it quite clear what is going on and there's a lot to be said\nfor consistently using the word \"postmaster\". I mention this only in\ncase somebody deems it significant. Perhaps there's a way to use\nmarkup, identifying \"postgres\" as a program with a name in the file\nsystem, to make things more clear. Most likely, nobody will care.\n\nIn doc/src/sgml/ref/allfiles.sgml at line 222 there is an ENTITY\ndefined which references the deleted postmaster.sgml file. 
Since I\ndid a maintainer-clean, and the patch deletes the postmaster.sgml\nfile, and I don't see any references to the entity in the docs, I\nbelieve that this line should be removed. (In other words, I don't\nthink this file is automatically maintained.)\n\nAfter applying the patches the documentation seems to build just fine.\n\nI have not run contrib/start-scripts/{freebsd,linux}, but the patch\nlooks fine and the result of applying the patch looks fine, and the\npatch is a one-line simple replacement of \"postmaster\" with \"postgres\"\nso I expect no problems.\n\nThe modification to src/port/path.c is in a comment, so it will surely\nnot affect anything at runtime. And the changed comment looks right\nto me.\n\nThere's thousands of lines of comments in the code where \"postmaster\"\n(case insensitive) shows up after applying the patches. I've not\nreviewed any of these. Or looked for odd variable names, or\nreferences in the code to the postmaster binary - by name, etc. I'm\nnot sure where it would make sense to look for such problems.\n\nAside from the \"allfiles.sgml\" problem, I see no reason why these 2\npatches should not be applied. As mentioned by others, I don't have\nstrong feelings about getting rid of the postmaster symlink. But I\ndid pick this patch to review because I remember, once upon a time,\nbeing slightly confused by a \"postmaster\" in the process list.\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n", "msg_date": "Sat, 7 Jan 2023 16:59:42 -0600", "msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>", "msg_from_op": false, "msg_subject": "Re: drop postmaster symlink" }, { "msg_contents": "\"Karl O. Pinc\" <kop@karlpinc.com> writes:\n> This is a review of Peter's 2 patches. I see only 1 small problem.\n\n> Looking at the documentation, a \"postmaster\" in the glossary is\n> defined as the controlling process. This works; it needs to be called\n> something. 
There is still a postmaster.pid (etc.) in the data\n> directory.\n\n> The word \"postmaster\" (case insensitive) shows up 84 times in the\n> documentation. I looked at all of these. \n\nHmm ... I thought this patch was about getting rid of the\nadmittedly-obsolete installed symlink. If it's trying to get rid of\nthe \"postmaster\" terminology for our parent process, I'm very strongly\nagainst that, either as regards to code or documentation.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 07 Jan 2023 18:38:25 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: drop postmaster symlink" }, { "msg_contents": "On 1/7/23 18:38, Tom Lane wrote:\n> \"Karl O. Pinc\" <kop@karlpinc.com> writes:\n>> This is a review of Peter's 2 patches. I see only 1 small problem.\n> \n>> Looking at the documentation, a \"postmaster\" in the glossary is\n>> defined as the controlling process. This works; it needs to be called\n>> something. There is still a postmaster.pid (etc.) in the data\n>> directory.\n> \n>> The word \"postmaster\" (case insensitive) shows up 84 times in the\n>> documentation. I looked at all of these. \n> \n> Hmm ... I thought this patch was about getting rid of the\n> admittedly-obsolete installed symlink.\n\nThat was my understanding too.\n\n> If it's trying to get rid of the \"postmaster\" terminology for our\n> parent process, I'm very strongly against that, either as regards to\n> code or documentation.\n\n+1\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Sat, 7 Jan 2023 19:33:38 -0500", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: drop postmaster symlink" }, { "msg_contents": "On Sat, 07 Jan 2023 18:38:25 -0500\nTom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> \"Karl O. Pinc\" <kop@karlpinc.com> writes:\n> > This is a review of Peter's 2 patches. 
I see only 1 small problem.\n> > \n> \n> > Looking at the documentation, a \"postmaster\" in the glossary is\n> > defined as the controlling process. This works; it needs to be\n> > called something. There is still a postmaster.pid (etc.) in the\n> > data directory. \n> \n> > The word \"postmaster\" (case insensitive) shows up 84 times in the\n> > documentation. I looked at all of these. \n> \n> Hmm ... I thought this patch was about getting rid of the\n> admittedly-obsolete installed symlink. If it's trying to get rid of\n> the \"postmaster\" terminology for our parent process, I'm very strongly\n> against that, either as regards to code or documentation.\n\nNo. It's about getting rid of the symlink. I was grepping around\nto look for use of the symlink, started with the glossary just\nto be sure, and saw that \"postmaster\" is consistently (I think)\nused to refer to the controlling, parent, process. And wrote\ndown what I was doing and what I found as I went along. And then\nsent out my notes.\n\nSorry for the confusion.\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n", "msg_date": "Sat, 7 Jan 2023 19:56:08 -0600", "msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>", "msg_from_op": false, "msg_subject": "Re: drop postmaster symlink" }, { "msg_contents": "On Sat, 7 Jan 2023 19:33:38 -0500\nJoe Conway <mail@joeconway.com> wrote:\n\n> On 1/7/23 18:38, Tom Lane wrote:\n> > \"Karl O. Pinc\" <kop@karlpinc.com> writes: \n> >> This is a review of Peter's 2 patches. I see only 1 small\n> >> problem. \n\nThe small problem is a reference to a deleted file.\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n", "msg_date": "Sat, 7 Jan 2023 19:57:09 -0600", "msg_from": "\"Karl O. 
Pinc\" <kop@karlpinc.com>", "msg_from_op": false, "msg_subject": "Re: drop postmaster symlink" }, { "msg_contents": "On Sat, 7 Jan 2023 19:56:08 -0600\n\"Karl O. Pinc\" <kop@karlpinc.com> wrote:\n\n> On Sat, 07 Jan 2023 18:38:25 -0500\n> Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> > \"Karl O. Pinc\" <kop@karlpinc.com> writes: \n> > > This is a review of Peter's 2 patches. I see only 1 small\n> > > problem. ...\n\n> > Hmm ... I thought this patch was about getting rid of the\n> > admittedly-obsolete installed symlink. ...\n\n> No. It's about getting rid of the symlink. \n\nThe only way I could think of to review a patch\nthat removes something is to report all the places\nI looked where a reference to the symlink might be.\nThen report what I found each place I looked and\nreport if there's a problem, or might be.\n\nThat way the committer knows where I didn't look if there's\nmore that needs removing.\n\nApologies that this was not clear.\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n", "msg_date": "Sat, 7 Jan 2023 22:29:35 -0600", "msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>", "msg_from_op": false, "msg_subject": "Re: drop postmaster symlink" }, { "msg_contents": "On Sat, 7 Jan 2023 22:29:35 -0600\n\"Karl O. Pinc\" <kop@karlpinc.com> wrote:\n\n> The only way I could think of to review a patch\n> that removes something is to report all the places\n> I looked where a reference to the symlink might be.\n\nI forgot to report that I also tried a `make install`\nand 'make uninstall`, with no problems.\n\nFWIW, I suspect the include/backend/postmaster/ directory\nwould today be named include/backend/postgres/. Now\nwould be the time to change the name, but I don't see\nit being worth the work. 
And while such a change\nwouldn't break pg, that kind of change would break something\nfor somebody.\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n", "msg_date": "Sun, 8 Jan 2023 14:17:00 -0600", "msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>", "msg_from_op": false, "msg_subject": "Re: drop postmaster symlink" }, { "msg_contents": "On 23.11.22 21:32, Joe Conway wrote:\n>> Yeah. Also, I don't think it's generally too hard to find the parent\n>> process anyway, because at least on my system, the other ones end up\n>> with ps display that looks like \"postgres: logical replication\n>> launcher\" or whatever. The main process doesn't set the ps status\n>> display, so that's the only one that shows a full path to the\n>> executable in the ps status, which is how I usually spot it. That has\n>> the advantage that it doesn't matter which name was used to launch it,\n>> too.\n\nI think it is a problem that one of the most widely used packagings of \nPostgreSQL uses techniques that are directly contradicting the \nPostgreSQL documentation and are also inconsistent with other widely \nused packagings. Users might learn this \"trick\" but then can't reuse it \nelsewhere, and conversely those who come from other systems might not be \nable to reuse their scripts. 
That is annoying.\n\n> FWIW, the reason I took note of the postmaster symlink in the first \n> place a few years ago was because selinux treats execution of programs \n> from symlinks differently than from actual files.\n\nThis is another such case, where knowledge about selinux configuration \ncannot be transported between Linux distributions.\n\nI almost feel that issues like this make a stronger case for removing \nthe postmaster symlink than if it hadn't actually been in use, since the \nremoval would serve to unify the landscape for the benefit of users.\n\n\n", "msg_date": "Thu, 12 Jan 2023 18:00:59 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: drop postmaster symlink" }, { "msg_contents": "On 1/12/23 12:00, Peter Eisentraut wrote:\n> On 23.11.22 21:32, Joe Conway wrote:\n>>> Yeah. Also, I don't think it's generally too hard to find the parent\n>>> process anyway, because at least on my system, the other ones end up\n>>> with ps display that looks like \"postgres: logical replication\n>>> launcher\" or whatever. The main process doesn't set the ps status\n>>> display, so that's the only one that shows a full path to the\n>>> executable in the ps status, which is how I usually spot it. That has\n>>> the advantage that it doesn't matter which name was used to launch it,\n>>> too.\n> \n> I think it is a problem that one of the most widely used packagings of\n> PostgreSQL uses techniques that are directly contradicting the\n> PostgreSQL documentation and are also inconsistent with other widely\n> used packagings. Users might learn this \"trick\" but then can't reuse it\n> elsewhere, and conversely those who come from other systems might not be\n> able to reuse their scripts. 
That is annoying.\n> \n>> FWIW, the reason I took note of the postmaster symlink in the first \n>> place a few years ago was because selinux treats execution of programs \n>> from symlinks differently than from actual files.\n> \n> This is another such case, where knowledge about selinux configuration\n> cannot be transported between Linux distributions.\n> \n> I almost feel that issues like this make a stronger case for removing\n> the postmaster symlink than if it hadn't actually been in use, since the\n> removal would serve to unify the landscape for the benefit of users.\n\nTo be clear, I am completely in agreement with you about removing the \nsymlink. I just wanted to be sure Devrim was alerted because I knew he \nhad a strong opinion on this topic ;-)\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Thu, 12 Jan 2023 13:35:09 -0500", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: drop postmaster symlink" }, { "msg_contents": "Hi,\n\nOn Thu, 2023-01-12 at 13:35 -0500, Joe Conway wrote:\n> To be clear, I am completely in agreement with you about removing the\n> symlink. I just wanted to be sure Devrim was alerted because I knew\n> he had a strong opinion on this topic ;-)\n\nRed Hat's own packages, thus their users may be unhappy about that,\ntoo. They also call postmaster directly.\n\nRegsards,\n-- \nDevrim Gündüz\nOpen Source Solution Architect, Red Hat Certified Engineer\nTwitter: @DevrimGunduz , @DevrimGunduzTR\n\n\n", "msg_date": "Thu, 12 Jan 2023 19:11:09 +0000", "msg_from": "Devrim =?ISO-8859-1?Q?G=FCnd=FCz?= <devrim@gunduz.org>", "msg_from_op": false, "msg_subject": "Re: drop postmaster symlink" }, { "msg_contents": "On 12.01.23 20:11, Devrim Gündüz wrote:\n> On Thu, 2023-01-12 at 13:35 -0500, Joe Conway wrote:\n>> To be clear, I am completely in agreement with you about removing the\n>> symlink. 
I just wanted to be sure Devrim was alerted because I knew\n>> he had a strong opinion on this topic ;-)\n> \n> Red Hat's own packages, thus their users may be unhappy about that,\n> too. They also call postmaster directly.\n\nDevrim,\n\nApart from your concerns, it appears there is consensus for making this \nchange. The RPM packaging scripts can obviously be fixed easily for \nthis. Do you have an objection to making this change?\n\n\n\n", "msg_date": "Wed, 25 Jan 2023 08:54:27 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: drop postmaster symlink" }, { "msg_contents": "Hi,\n\nOn Wed, 2023-01-25 at 08:54 +0100, Peter Eisentraut wrote:\n> \n> Apart from your concerns, it appears there is consensus for making\n> this change.  The RPM packaging scripts can obviously be fixed\n> easily for this.  Do you have an objection to making this change?\n\nI'm inclined to create the symlink in the RPMs to make users' lives\n(and my life) easier. So, no objection from here.\n\nRegards,\n\n-- \nDevrim Gündüz\nOpen Source Solution Architect, Red Hat Certified Engineer\nTwitter: @DevrimGunduz , @DevrimGunduzTR\n\n\n", "msg_date": "Wed, 25 Jan 2023 13:00:03 +0000", "msg_from": "Devrim =?ISO-8859-1?Q?G=FCnd=FCz?= <devrim@gunduz.org>", "msg_from_op": false, "msg_subject": "Re: drop postmaster symlink" }, { "msg_contents": "Hello,\n\nSomehow I missed the email changing the status of this back\nto \"needs review\".\n\nBuried in\nhttps://www.postgresql.org/message-id/20230107165942.748ccf4e%40slate.karlpinc.com\nis the one change I see that should be made.\n\n> In doc/src/sgml/ref/allfiles.sgml at line 222 there is an ENTITY\n> defined which references the deleted postmaster.sgml file.\n\nThis line needs to be removed and the\n0002-Don-t-install-postmaster-symlink-anymore.patch \nupdated. 
(Unless there's some magic going on\nwith the various allfiles.sgml files of which I am\nnot aware.)\n\nIf this is fixed I see no other problems.\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n", "msg_date": "Wed, 25 Jan 2023 18:03:25 -0600", "msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>", "msg_from_op": false, "msg_subject": "Re: drop postmaster symlink" }, { "msg_contents": "On 26.01.23 01:03, Karl O. Pinc wrote:\n> Buried in\n> https://www.postgresql.org/message-id/20230107165942.748ccf4e%40slate.karlpinc.com\n> is the one change I see that should be made.\n> \n>> In doc/src/sgml/ref/allfiles.sgml at line 222 there is an ENTITY\n>> defined which references the deleted postmaster.sgml file.\n> \n> This line needs to be removed and the\n> 0002-Don-t-install-postmaster-symlink-anymore.patch\n> updated. (Unless there's some magic going on\n> with the various allfiles.sgml files of which I am\n> not aware.)\n> \n> If this is fixed I see no other problems.\n\nGood find. Committed with this additional change.\n\n\n\n", "msg_date": "Thu, 26 Jan 2023 12:17:46 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: drop postmaster symlink" }, { "msg_contents": "On Wed, 25 Jan 2023 18:03:25 -0600\n\"Karl O. Pinc\" <kop@k\n\n> Buried in\n> https://www.postgresql.org/message-id/20230107165942.748ccf4e%40slate.karlpinc.com\n> is the one change I see that should be made.\n> \n> > In doc/src/sgml/ref/allfiles.sgml at line 222 there is an ENTITY\n> > defined which references the deleted postmaster.sgml file. \n> \n> This line needs to be removed and the\n> 0002-Don-t-install-postmaster-symlink-anymore.patch \n> updated. 
(Unless there's some magic going on\n> with the various allfiles.sgml files of which I am\n> not aware.)\n> \n> If this is fixed I see no other problems.\n\nBuried in the same email, and I apologize for not mentioning\nthis, is one other bit of documentation text that might\nor might not need attention. \n\n> I see a possible problem at line 1,412 of runtime.sgml\n\nThis says:\n\n in the postmaster's startup script just before invoking the postmaster.\n\nDepending on how this is read, it could be interpreted to mean\nthat a \"postmaster\" binary is invoked. It might be more clear\nto write: ... just before invoking <command>postgres</command>.\n\nOr it might not be worth bothering about; at this point, probably\nnot, but I thought you might want the heads-up anyway.\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n", "msg_date": "Thu, 26 Jan 2023 12:36:38 -0600", "msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>", "msg_from_op": false, "msg_subject": "Re: drop postmaster symlink" }, { "msg_contents": "On 26.01.23 19:36, Karl O. Pinc wrote:\n>> I see a possible problem at line 1,412 of runtime.sgml\n> This says:\n> \n> in the postmaster's startup script just before invoking the postmaster.\n> \n> Depending on how this is read, it could be interpreted to mean\n> that a \"postmaster\" binary is invoked. It might be more clear\n> to write: ... just before invoking <command>postgres</command>.\n> \n> Or it might not be worth bothering about; at this point, probably\n> not, but I thought you might want the heads-up anyway.\n\nGood find. I have adjusted that, and a few more nearby.\n\n\n\n", "msg_date": "Fri, 27 Jan 2023 08:47:37 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: drop postmaster symlink" } ]
[ { "msg_contents": "I was thinking some more about the recent fix to multi-row VALUES\nhandling in the rewriter (b8f2687fdc), and I realised that there is\nanother bug in the way DEFAULT values are handled:\n\nIn RewriteQuery(), the code assumes that in a multi-row INSERT query,\nthe VALUES RTE will be the only thing in the query's fromlist. That's\ntrue for the original query, but it's not necessarily the case for\nproduct queries, if the rule action performs a multi-row insert,\nleading to a new VALUES RTE that the DEFAULT-handling code might fail\nto process. For example:\n\nCREATE TABLE foo(a int);\nINSERT INTO foo VALUES (1);\n\nCREATE TABLE foo_log(t timestamptz DEFAULT now(), a int, c text);\nCREATE RULE foo_r AS ON UPDATE TO foo\n DO ALSO INSERT INTO foo_log VALUES (DEFAULT, old.a, 'old'),\n (DEFAULT, new.a, 'new');\n\nUPDATE foo SET a = 2 WHERE a = 1;\n\nERROR: unrecognized node type: 43\n\nThere's a similar example to this in the regression tests, but it\ndoesn't test DEFAULT-handling.\n\nIt's also possible for the current code to cause the same VALUES RTE\nto be rewritten multiple times, when recursing into product queries\n(if the rule action doesn't add any more stuff to the query's\nfromlist). That turns out to be harmless, because the second time\nround it will no longer contain any defaults, but it's technically\nincorrect, and certainly a waste of cycles.\n\nSo I think what the code needs to do is examine the targetlist, and\nidentify the VALUES RTE that the current query is using as a source,\nand rewrite just that RTE (so any original VALUES RTE is rewritten at\nthe top level, and any VALUES RTEs from rule actions are rewritten\nwhile recursing, and none are rewritten more than once), as in the\nattached patch.\n\nWhile at it, I noticed an XXX code comment questioning whether any of\nthis applies to MERGE. 
The answer is \"no\", because MERGE actions don't\nallow multi-row inserts, so I think it's worth updating that comment\nto make that clearer.\n\nRegards,\nDean", "msg_date": "Wed, 23 Nov 2022 12:43:47 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": true, "msg_subject": "Another multi-row VALUES bug" }, { "msg_contents": "Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> In RewriteQuery(), the code assumes that in a multi-row INSERT query,\n> the VALUES RTE will be the only thing in the query's fromlist. That's\n> true for the original query, but it's not necessarily the case for\n> product queries, if the rule action performs a multi-row insert,\n> leading to a new VALUES RTE that the DEFAULT-handling code might fail\n> to process. For example:\n\n> CREATE TABLE foo(a int);\n> INSERT INTO foo VALUES (1);\n\n> CREATE TABLE foo_log(t timestamptz DEFAULT now(), a int, c text);\n> CREATE RULE foo_r AS ON UPDATE TO foo\n> DO ALSO INSERT INTO foo_log VALUES (DEFAULT, old.a, 'old'),\n> (DEFAULT, new.a, 'new');\n\n> UPDATE foo SET a = 2 WHERE a = 1;\n\n> ERROR: unrecognized node type: 43\n\nUgh.\n\n> So I think what the code needs to do is examine the targetlist, and\n> identify the VALUES RTE that the current query is using as a source,\n> and rewrite just that RTE (so any original VALUES RTE is rewritten at\n> the top level, and any VALUES RTEs from rule actions are rewritten\n> while recursing, and none are rewritten more than once), as in the\n> attached patch.\n\nHmm ... this patch does not feel any more principled or future-proof\nthan what it replaces, because now instead of making assumptions\nabout what's in the jointree, you're making assumptions about what's\nin the targetlist. I wonder if there is some other way to identify\nthe target VALUES RTE.\n\nLooking at the parsetree in gdb, I see that in this example the\nVALUES RTE is still the first entry in the fromlist, it's just not\nthe only one there. 
So I wonder whether it'd be sufficient to do\n\n- if (list_length(parsetree->jointree->fromlist) == 1)\n+ if (parsetree->jointree->fromlist != NIL)\n\nI'm not 100% sure that product-query rewriting would always produce\na FROM-list in this order, but I think it might be true.\n\nAnother idea is to identify the VALUES RTE before we start rewriting,\nand pass that information on. That should be pretty bulletproof,\nbut of course more invasive.\n\nOr ... maybe we should perform this particular step before we build\nproduct queries? Just because we stuck it into QueryRewrite\noriginally doesn't mean that's the right place.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 23 Nov 2022 10:30:45 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Another multi-row VALUES bug" }, { "msg_contents": "On Wed, 23 Nov 2022 at 15:30, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> > So I think what the code needs to do is examine the targetlist, and\n> > identify the VALUES RTE that the current query is using as a source,\n> > and rewrite just that RTE (so any original VALUES RTE is rewritten at\n> > the top level, and any VALUES RTEs from rule actions are rewritten\n> > while recursing, and none are rewritten more than once), as in the\n> > attached patch.\n>\n> Hmm ... this patch does not feel any more principled or future-proof\n> than what it replaces, because now instead of making assumptions\n> about what's in the jointree, you're making assumptions about what's\n> in the targetlist.\n\nTrue, but it's consistent with what rewriteValuesRTE() does -- it has\nto examine the targetlist to work out how items in the VALUES lists\nare mapped to attributes of the target relation.\n\n> I wonder if there is some other way to identify\n> the target VALUES RTE.\n>\n> Looking at the parsetree in gdb, I see that in this example the\n> VALUES RTE is still the first entry in the fromlist, it's just not\n> the only one there. 
So I wonder whether it'd be sufficient to do\n>\n> - if (list_length(parsetree->jointree->fromlist) == 1)\n> + if (parsetree->jointree->fromlist != NIL)\n>\n> I'm not 100% sure that product-query rewriting would always produce\n> a FROM-list in this order, but I think it might be true.\n\nNo, the test case using rule r3 is a counter-example. In that case,\nthe product query has 2 VALUES RTEs, both of which appear in the\nfromlist, and it's the second one that needs rewriting when it\nrecurses into the product query.\n\nIn fact, looking at what rewriteRuleAction() does, the relevant VALUES\nRTE will be the last or last-but-one entry in the fromlist, depending\non whether the rule action refers to OLD. Relying on a particular\nordering of the fromlist seems quite fragile though.\n\n> Another idea is to identify the VALUES RTE before we start rewriting,\n> and pass that information on. That should be pretty bulletproof,\n> but of course more invasive.\n>\n> Or ... maybe we should perform this particular step before we build\n> product queries? Just because we stuck it into QueryRewrite\n> originally doesn't mean that's the right place.\n\nHmm, I'm not quite sure how that would work. Possibly we could\nidentify the VALUES RTE while building the product query, but that\nlooks pretty messy.\n\nRegards,\nDean\n\n\n", "msg_date": "Wed, 23 Nov 2022 18:43:58 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Another multi-row VALUES bug" }, { "msg_contents": "Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> On Wed, 23 Nov 2022 at 15:30, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Hmm ... 
this patch does not feel any more principled or future-proof\n>> than what it replaces, because now instead of making assumptions\n>> about what's in the jointree, you're making assumptions about what's\n>> in the targetlist.\n\n> True, but it's consistent with what rewriteValuesRTE() does -- it has\n> to examine the targetlist to work out how items in the VALUES lists\n> are mapped to attributes of the target relation.\n\nThat argument seems a little circular, because rewriteValuesRTE\nis taking it on faith that it's told the correct RTE to modify.\n\n>> I'm not 100% sure that product-query rewriting would always produce\n>> a FROM-list in this order, but I think it might be true.\n\n> No, the test case using rule r3 is a counter-example. In that case,\n> the product query has 2 VALUES RTEs, both of which appear in the\n> fromlist, and it's the second one that needs rewriting when it\n> recurses into the product query.\n\nAh, right. I wonder if somehow we could just make one pass over\nall the VALUES RTEs, and process each one as needed? The problem\nis to identify the relevant target relation, I guess.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 23 Nov 2022 13:56:53 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Another multi-row VALUES bug" }, { "msg_contents": "On Wed, 23 Nov 2022 at 18:56, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I wonder if somehow we could just make one pass over\n> all the VALUES RTEs, and process each one as needed? The problem\n> is to identify the relevant target relation, I guess.\n>\n\nI have been thinking about that some more, but I think it would be\npretty difficult to achieve.\n\nPart of the problem is that the targetlist processing and VALUES RTE\nprocessing are quite closely coupled (because of things like GENERATED\nALWAYS columns). 
Both rewriteTargetListIU() and rewriteValuesRTE()\nrely on being passed the VALUES RTE that the targetlist is reading\nfrom, and rewriteValuesRTE() then relies on extra information returned\nby rewriteTargetListIU().\n\nAlso, there's the way that DEFAULTs from updatable views work, which\nmeans that the DEFAULTs in a VALUES RTE won't necessarily all come\nfrom the same target relation.\n\nSo I think it would be much harder to do the VALUES RTE processing\nanywhere other than where it's being done right now, and even if it\ncould be done elsewhere, it would be a very invasive change, and\ntherefore hard to back-patch.\n\nThat, of course, leaves the problem of identifying the right VALUES\nRTE to process.\n\nA different way to do this, without relying on the contents of the\ntargetlist, is to note that, while processing a product query, what we\nreally want to do is ignore any VALUES RTEs from the original query,\nsince they will have already been processed. There should then never\nbe more than one VALUES RTE left to process -- the one from the rule\naction.\n\nThis can be done by exploiting the fact that in product queries, the\nrtable always consists of the rtable from the original query followed\nby the rtable from the rule action, so we just need to ignore the\nright number of RTEs at the start of the rtable. 
Of course that would\nbreak if we ever changed the way rewriteRuleAction() worked, but at\nleast it only depends on that one other place in the code, which has\nbeen stable for a long time, so the risk of future breakage seems\nmanagable.\n\nRegards,\nDean", "msg_date": "Mon, 28 Nov 2022 10:29:41 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Another multi-row VALUES bug" }, { "msg_contents": "Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> A different way to do this, without relying on the contents of the\n> targetlist, is to note that, while processing a product query, what we\n> really want to do is ignore any VALUES RTEs from the original query,\n> since they will have already been processed. There should then never\n> be more than one VALUES RTE left to process -- the one from the rule\n> action.\n\n> This can be done by exploiting the fact that in product queries, the\n> rtable always consists of the rtable from the original query followed\n> by the rtable from the rule action, so we just need to ignore the\n> right number of RTEs at the start of the rtable. Of course that would\n> break if we ever changed the way rewriteRuleAction() worked, but at\n> least it only depends on that one other place in the code, which has\n> been stable for a long time, so the risk of future breakage seems\n> managable.\n\nThis looks like a good solution. I didn't actually test the patch,\nbut it passes an eyeball check. I like the fact that we can verify\nthat we find only one candidate VALUES RTE.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 28 Nov 2022 13:52:39 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Another multi-row VALUES bug" } ]
[ { "msg_contents": "While playing around with rules and MERGE, I noticed that there is a\nbug in the way that it detects whether the target table has rules ---\nit uses rd_rel->relhasrules, which can be incorrect, since it might be\nset for a table that doesn't currently have rules, but did in the\nrecent past.\n\nSo it actually needs to examine rd_rules. Technically, I think that it\nwould be sufficient to just test whether rd_rules is non-NULL, but I\nthink it's more robust and readable to check rd_rules->numLocks, as in\nthe attached patch.\n\nRegards,\nDean", "msg_date": "Wed, 23 Nov 2022 12:49:44 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": true, "msg_subject": "Bug in MERGE's test for tables with rules" }, { "msg_contents": "Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> While playing around with rules and MERGE, I noticed that there is a\n> bug in the way that it detects whether the target table has rules ---\n> it uses rd_rel->relhasrules, which can be incorrect, since it might be\n> set for a table that doesn't currently have rules, but did in the\n> recent past.\n\n> So it actually needs to examine rd_rules. Technically, I think that it\n> would be sufficient to just test whether rd_rules is non-NULL, but I\n> think it's more robust and readable to check rd_rules->numLocks, as in\n> the attached patch.\n\n+1 for the code change. Not quite sure the added test case is worth\nthe cycles.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 23 Nov 2022 10:32:47 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bug in MERGE's test for tables with rules" }, { "msg_contents": "On Wed, 23 Nov 2022 at 15:32, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Not quite sure the added test case is worth the cycles.\n>\n\nNo, probably not, for such a trivial change.\n\nPushed to HEAD and 15, without the test. 
Thanks for looking!\n\nRegards,\nDean\n\n\n", "msg_date": "Fri, 25 Nov 2022 13:39:59 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Bug in MERGE's test for tables with rules" } ]
[ { "msg_contents": "Hi,\n\nWhile working on something else, I noticed that each WAL insert lock\ntracks its own last important WAL record's LSN (lastImportantAt) and\nboth the bgwriter and checkpointer later computes the max\nvalue/server-wide last important WAL record's LSN via\nGetLastImportantRecPtr(). While doing so, each WAL insertion lock is\nacquired in exclusive mode in a for loop. This seems like too much\noverhead to me. I quickly coded a patch (attached herewith) that\ntracks the server-wide last important WAL record's LSN in\nXLogCtlInsert (lastImportantPos) protected with a spinlock and gets\nrid of lastImportantAt from each WAL insert lock. I ran pgbench with a\nsimple insert [1] and the results are below. While the test was run,\nthe GetLastImportantRecPtr() was called 4-5 times.\n\n# of clients HEAD PATCHED\n1 83 82\n2 159 157\n4 303 302\n8 576 570\n16 1104 1095\n32 2055 2041\n64 2286 2295\n128 2270 2285\n256 2302 2253\n512 2205 2290\n768 2224 2180\n1024 2109 2150\n2048 1941 1936\n4096 1856 1848\n\nIt doesn't seem to hurt (for this use-case) anyone, however there\nmight be some benefit if bgwriter and checkpointer come in the way of\nWAL inserters. With the patch, the extra exclusive lock burden on WAL\ninsert locks is gone. Since the amount of work the WAL inserters do\nunder the new spinlock is very minimal (updating\nXLogCtlInsert->lastImportantPos), it may not be an issue. 
Also, it's\nworthwhile to look at the existing comment [2], which doesn't talk\nabout the performance impact of having a lock.\n\nThoughts?\n\n[1]\n./configure --prefix=$PWD/inst/ CFLAGS=\"-O3\" > install.log && make -j\n8 install > install.log 2>&1 &\ncd inst/bin\n./pg_ctl -D data -l logfile stop\nrm -rf data logfile insert.sql\nfree -m\nsudo su -c 'sync; echo 3 > /proc/sys/vm/drop_caches'\nfree -m\n./initdb -D data\n./pg_ctl -D data -l logfile start\n./psql -d postgres -c 'ALTER SYSTEM SET shared_buffers = \"8GB\";'\n./psql -d postgres -c 'ALTER SYSTEM SET max_wal_size = \"32GB\";'\n./psql -d postgres -c 'ALTER SYSTEM SET max_connections = \"4096\";'\n./psql -d postgres -c 'ALTER SYSTEM SET bgwriter_delay = \"10ms\";'\n./pg_ctl -D data -l logfile restart\n./pgbench -i -s 1 -d postgres\n./psql -d postgres -c \"ALTER TABLE pgbench_accounts DROP CONSTRAINT\npgbench_accounts_pkey;\"\ncat << EOF >> insert.sql\n\\set aid random(1, 10 * :scale)\n\\set delta random(1, 100000 * :scale)\nINSERT INTO pgbench_accounts (aid, bid, abalance) VALUES (:aid, :aid, :delta);\nEOF\nfor c in 1 2 4 8 16 32 64 128 256 512 768 1024 2048 4096; do echo -n\n\"$c \";./pgbench -n -M prepared -U ubuntu postgres -b simple-update\n-c$c -j$c -T5 2>&1|grep '^tps'|awk '{print $3}';done\n\n[2]\n * records. 
Tracking the WAL activity directly in WALInsertLock has the\n * advantage of not needing any additional locks to update the value.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 23 Nov 2022 19:12:07 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Decouple last important WAL record LSN from WAL insert locks" }, { "msg_contents": "Hi,\n\nOn 2022-11-23 19:12:07 +0530, Bharath Rupireddy wrote:\n> While working on something else, I noticed that each WAL insert lock\n> tracks its own last important WAL record's LSN (lastImportantAt) and\n> both the bgwriter and checkpointer later computes the max\n> value/server-wide last important WAL record's LSN via\n> GetLastImportantRecPtr(). While doing so, each WAL insertion lock is\n> acquired in exclusive mode in a for loop. This seems like too much\n> overhead to me.\n\nGetLastImportantRecPtr() should be a very rare operation, so it's fine for it\nto be expensive. The important thing is for the maintenance of the underlying\ndata to be very cheap.\n\n\n> I quickly coded a patch (attached herewith) that\n> tracks the server-wide last important WAL record's LSN in\n> XLogCtlInsert (lastImportantPos) protected with a spinlock and gets\n> rid of lastImportantAt from each WAL insert lock.\n\nThat strikes me as a very bad idea. 
It adds another point of contention to a\nvery very hot code path, to make a very rare code path cheaper.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 26 Nov 2022 13:13:36 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Decouple last important WAL record LSN from WAL insert locks" }, { "msg_contents": "On Sun, Nov 27, 2022 at 2:43 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-11-23 19:12:07 +0530, Bharath Rupireddy wrote:\n> > While working on something else, I noticed that each WAL insert lock\n> > tracks its own last important WAL record's LSN (lastImportantAt) and\n> > both the bgwriter and checkpointer later computes the max\n> > value/server-wide last important WAL record's LSN via\n> > GetLastImportantRecPtr(). While doing so, each WAL insertion lock is\n> > acquired in exclusive mode in a for loop. This seems like too much\n> > overhead to me.\n>\n> GetLastImportantRecPtr() should be a very rare operation, so it's fine for it\n> to be expensive. The important thing is for the maintenance of the underlying\n> data to be very cheap.\n>\n> > I quickly coded a patch (attached herewith) that\n> > tracks the server-wide last important WAL record's LSN in\n> > XLogCtlInsert (lastImportantPos) protected with a spinlock and gets\n> > rid of lastImportantAt from each WAL insert lock.\n>\n> That strikes me as a very bad idea. It adds another point of contention to a\n> very very hot code path, to make a very rare code path cheaper.\n\nThanks for the response. I agree that GetLastImportantRecPtr() gets\ncalled rarely, however, what concerns me is that it's taking all the\nWAL insertion locks when it gets called.\n\nIs tracking lastImportantPos as pg_atomic_uint64 in XLogCtlInsert any\nbetter than an explicit spinlock? 
I think it's better on platforms\nwhere atomics are supported, however, it boils down to using a spin\nlock on the platforms where atomics aren't supported.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 28 Nov 2022 11:42:19 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Decouple last important WAL record LSN from WAL insert locks" }, { "msg_contents": "Hi,\n\nOn 2022-11-28 11:42:19 +0530, Bharath Rupireddy wrote:\n> On Sun, Nov 27, 2022 at 2:43 AM Andres Freund <andres@anarazel.de> wrote:\n> > On 2022-11-23 19:12:07 +0530, Bharath Rupireddy wrote:\n> > > While working on something else, I noticed that each WAL insert lock\n> > > tracks its own last important WAL record's LSN (lastImportantAt) and\n> > > both the bgwriter and checkpointer later computes the max\n> > > value/server-wide last important WAL record's LSN via\n> > > GetLastImportantRecPtr(). While doing so, each WAL insertion lock is\n> > > acquired in exclusive mode in a for loop. This seems like too much\n> > > overhead to me.\n> >\n> > GetLastImportantRecPtr() should be a very rare operation, so it's fine for it\n> > to be expensive. The important thing is for the maintenance of the underlying\n> > data to be very cheap.\n> >\n> > > I quickly coded a patch (attached herewith) that\n> > > tracks the server-wide last important WAL record's LSN in\n> > > XLogCtlInsert (lastImportantPos) protected with a spinlock and gets\n> > > rid of lastImportantAt from each WAL insert lock.\n> >\n> > That strikes me as a very bad idea. It adds another point of contention to a\n> > very very hot code path, to make a very rare code path cheaper.\n> \n> Thanks for the response. 
I agree that GetLastImportantRecPtr() gets\n> called rarely, however, what concerns me is that it's taking all the\n> WAL insertion locks when it gets called.\n\nSo what? It's far from the only operation doing so. And in contrast to most of\nthe other places (c.f. WALInsertLockAcquireExclusive()) it only takes one of\nthem at a time.\n\n\n> Is tracking lastImportantPos as pg_atomic_uint64 in XLogCtlInsert any\n> better than an explicit spinlock? I think it's better on platforms\n> where atomics are supported, however, it boils down to using a spin\n> lock on the platforms where atomics aren't supported.\n\nA central atomic in XLogCtlInsert would be better than a spinlock protected\nvariable, but still bad.
We do *not* want to have more central state that\n> needs to be manipulated, we want *less*.\n\nAgreed.\n\n> If we wanted to optimize this - and I haven't seen any evidence it's worth\n> doing so - we should just optimize the lock acquisitions in\n> GetLastImportantRecPtr() away. *Without* centralizing the state.\n\nHm. I can think of converting lastImportantAt from XLogRecPtr to\npg_atomic_uint64 and letting it stay within the WALInsertLock\nstructure. This prevents torn-reads and also avoids WAL insertion lock\nacquire-release cycles in GetLastImportantRecPtr(). Please see the\nattached patch herewith.\n\nIf this idea is worth it, I would like to bring this and the other\nthread [1] that converts insertingAt to atomic and modifies other WAL\ninsert locks related code under one roof and start a new thread. BTW,\nthe patch at [1] seems to be showing a good benefit for\nhigh-concurrent inserts with small records.\n\nThoughts?\n\n[1] https://www.postgresql.org/message-id/CALj2ACWkWbheFhkPwMw83CUpzHFGXSV_HXTBxG9%2B-PZ3ufHE%3DQ%40mail.gmail.com\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 29 Nov 2022 13:00:16 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Decouple last important WAL record LSN from WAL insert locks" } ]
[ { "msg_contents": "Some modest cleanups I've accumulated.", "msg_date": "Wed, 23 Nov 2022 11:24:36 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "code cleanups" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> Some modest cleanups I've accumulated.\n\nHmm ...\n\nI don't especially care for either 0001 or 0002, mainly because\nI do not agree that this is good style:\n\n-\tbool\t\tnulls[PG_STAT_GET_RECOVERY_PREFETCH_COLS];\n+\tbool\t\tnulls[PG_STAT_GET_RECOVERY_PREFETCH_COLS] = {0};\n\nIt causes the code to be far more in bed than I like with the assumption\nthat we're initializing to physical zeroes. The explicit loop method\ncan be trivially adjusted to initialize to \"true\" or some other value;\nat least for bool arrays, that's true of memset'ing as well. But this,\nif you decide you need something other than zeroes, is a foot-gun.\nIn particular, someone whose C is a bit weak might mistakenly think that\n\n\tbool\t\tnulls[PG_STAT_GET_RECOVERY_PREFETCH_COLS] = {true};\n\nwill set all the array elements to true. Nor is there a plausible\nargument that this is more efficient. So I don't care for this approach\nand I don't want to adopt it.\n\n0003: I agree with getting rid of the duplicated code, but did you go\nfar enough? Isn't the code just above those parent checks also pretty\nredundant? It would be more intellectually consistent to move the full\nresponsibility for setting acl_ok into a subroutine. This shows in\nthe patch as you have it because the header comment for\nrecheck_parent_acl is completely out-of-context.\n\n0004: Right, somebody injected code in a poorly chosen place\n(yet another victim of the \"add at the end\" anti-pattern).\n\n0005: No particular objection, but it's not much of an improvement\neither. It seems maybe a shade less consistent with the following\nline.\n\n0006: These changes will cause fetching of one more source byte than\nwas fetched before. 
I'm not sure that's safe, so I don't think this\nis an improvement.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 23 Nov 2022 12:52:57 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: code cleanups" }, { "msg_contents": "On Wed, Nov 23, 2022 at 12:53 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> at least for bool arrays, that's true of memset'ing as well. But this,\n> if you decide you need something other than zeroes, is a foot-gun.\n> In particular, someone whose C is a bit weak might mistakenly think that\n>\n> bool nulls[PG_STAT_GET_RECOVERY_PREFETCH_COLS] = {true};\n>\n> will set all the array elements to true. Nor is there a plausible\n> argument that this is more efficient. So I don't care for this approach\n> and I don't want to adopt it.\n\nI don't really know what the argument is for the explicit initializer\nstyle, but I think this argument against it is pretty weak.\n\nIt should be more than fine to assume that anyone who is hacking on\nPostgreSQL is proficient in C. It's true that there might be some\npeople who aren't, or who aren't familiar with the limitations of the\ninitializer construct, and I include myself in that latter category. I\ndon't think it was part of C when I learned C. But if we don't possess\nthe collective expertise as a project to bring people who have missed\nthese details of the C programming language up to speed, we should\njust throw in the towel now and go home.\n\nHacking on PostgreSQL is HARD and it relies on knowing FAR more than\njust the basics of how to code in C. 
Put differently, if you can't\neven figure out how C works, you have no chance of doing anything very\ninteresting with the PostgreSQL code base, because you're going to\nhave to figure out a lot more than the basics of the implementation\nlanguage to make a meaningful contribution.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 23 Nov 2022 13:05:54 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: code cleanups" }, { "msg_contents": "On Thu, Nov 24, 2022 at 12:53 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > Some modest cleanups I've accumulated.\n\n> 0004: Right, somebody injected code in a poorly chosen place\n> (yet another victim of the \"add at the end\" anti-pattern).\n\nI've pushed 0004.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Thu, Nov 24, 2022 at 12:53 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:>> Justin Pryzby <pryzby@telsasoft.com> writes:> > Some modest cleanups I've accumulated.> 0004: Right, somebody injected code in a poorly chosen place> (yet another victim of the \"add at the end\" anti-pattern).I've pushed 0004.--John NaylorEDB: http://www.enterprisedb.com", "msg_date": "Tue, 20 Dec 2022 14:20:20 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: code cleanups" } ]
[ { "msg_contents": "Hello,\n\n\nI have questions regarding the distinct operation and would be glad if \nsomeone could help me out.\n\nConsider the following table (mytable):\n\nid, name\n\n1, A\n\n1, A\n\n2, B\n\n3, A\n\n1, A\n\nIf we do /select avg(id) over (partition by name) from mytable/, \nthe partition logic goes like this:\n\nfor A: 1, 1, 3, 1\n\nIf we want to implement something like /select avg(distinct id) \nover (partition by name) from mytable/\n\nand remove duplicates by storing the last datum of the aggregate column (id) and \ncomparing it with the current value, it fails, because the aggregate column \nis not sorted within the partition.\n\nQuestions:\n\n1. Is sorting a prerequisite for finding distinct values?\n\n2. Is it okay to sort the aggregate column (within the partition) for distinct \nto work in the case of a window function?\n\n3. Does an alternative way exist to handle this scenario (because here \nsort is of no use in the aggregation)?\n\n\nThanks\n\n\n-- \nRegards,\nAnkit Kumar Pandey", "msg_date": "Wed, 23 Nov 2022 23:48:20 +0530", "msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>", "msg_from_op": true, "msg_subject": "Questions regarding distinct operation implementation" }, { "msg_contents": "On 23/11/22 23:48, Ankit Kumar Pandey wrote:\n>\n> Hello,\n>\n>\n> I have questions regarding the distinct operation and would be glad if \n> someone could help me out.\n>\n> Consider the following table (mytable):\n>\n> id, name\n>\n> 1, A\n>\n> 1, A\n>\n> 2, B\n>\n> 3, A\n>\n> 1, A\n>\n> If we do /select avg(id) over (partition by name) from mytable/, \n> the partition logic goes like this:\n>\n> for A: 1, 1, 3, 1\n>\n> If we want to implement something like /select avg(distinct id) \n> over (partition by name) from mytable/\n>\n> and remove duplicates by storing the last datum of the aggregate column (id) \n> and comparing it with the current value, it fails, because the aggregate \n> column is not sorted within the partition.\n>\n> Questions:\n>\n> 1. Is sorting a prerequisite for finding distinct values?\n>\n> 2. Is it okay to sort the aggregate column (within the partition) for distinct \n> to work in the case of a window function?\n>\n> 3. Does an alternative way exist to handle this scenario (because here \n> sort is of no use in the aggregation)?\n>\n>\n> Thanks\n>\n>\n> -- \n> Regards,\n> Ankit Kumar Pandey\n\nHi,\n\nAfter a little more digging, I can see that aggregation in window \nfunctions is of a running type; it would be a bit more effective if a \nlookup hashtable were created where every value in the current aggregate \ncolumn gets inserted. 
Whenever the frame moves ahead, a lookup is performed \nfor the presence of a duplicate.\n\nFrom a performance standpoint, this might be a bad idea though.\n\nPlease let me know any opinions on this.\n\n-- \nRegards,\nAnkit Kumar Pandey", "msg_date": "Thu, 24 Nov 2022 23:27:11 +0530", "msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Questions regarding distinct operation implementation" }, { "msg_contents": "On Fri, 25 Nov 2022 at 06:57, Ankit Kumar Pandey <itsankitkp@gmail.com> wrote:\n> Please let me know any opinions on this.\n\nI think if you're planning on working on this then step 1 would have\nto be checking the SQL standard to see which set of rows it asks\nimplementations to consider for duplicate checks when deciding if the\ntransition should be performed or not. Having not looked, I don't\nknow if this is the entire partition or just the rows in the current\nframe.\n\nDepending on what you want, an alternative today would be to run a\nsubquery to uniquify the rows the way you want and then do the window\nfunction stuff.\n\nDavid\n\n\n", "msg_date": "Fri, 25 Nov 2022 09:44:51 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Questions regarding distinct operation implementation" }, { "msg_contents": "\nOn 25/11/22 02:14, David Rowley wrote:\n> On Fri, 25 Nov 2022 at 06:57, Ankit Kumar Pandey <itsankitkp@gmail.com> wrote:\n>> Please let me know any opinions on this.\n> I think if you're planning on working on this then step 1 would have\n> to be checking the SQL standard to see which set of rows it asks\n> implementations to consider for duplicate checks when deciding if the\n> transition should be performed or not. 
Having not looked, I don't\n> know if this is the entire partition or just the rows in the current\n> frame.\n>\n> Depending on what you want, an alternative today would be to run a\n> subquery to uniquify the rows the way you want and then do the window\n> function stuff.\n>\n> David\nThanks David, these are excellent pointers, I will look into SQL \nstandard first and so on.\n\n-- \nRegards,\nAnkit Kumar Pandey\n\n\n\n", "msg_date": "Fri, 25 Nov 2022 11:00:33 +0530", "msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Questions regarding distinct operation implementation" }, { "msg_contents": "On 25/11/22 11:00, Ankit Kumar Pandey wrote:\n>\n> On 25/11/22 02:14, David Rowley wrote:\n>> On Fri, 25 Nov 2022 at 06:57, Ankit Kumar Pandey \n>> <itsankitkp@gmail.com> wrote:\n>>> Please let me know any opinions on this.\n>> I think if you're planning on working on this then step 1 would have\n>> to be checking the SQL standard to see which set of rows it asks\n>> implementations to consider for duplicate checks when deciding if the\n>> transition should be performed or not.  
Having not looked, I don't\n>> know if this is the entire partition or just the rows in the\n>> current\n>> frame.\n>>\n>> Depending on what you want, an alternative today would be to run a\n>> subquery to uniquify the rows the way you want and then do the window\n>> function stuff.\n>>\n>> David\n> Thanks David, these are excellent pointers, I will look into SQL \n> standard first and so on.\n>\nHi,\n\nLooking further into it, I am now a bit clearer about the expected behaviour of \nDISTINCT in window aggregates (although I couldn't get my hands on the SQL \nstandard, as it is not in the public domain, DISTINCT in window aggregates \nis supported by Oracle and I am using it as a reference).\n\nFor table (mytable):\n\nid, name\n\n1, A\n\n1, A\n\n10, B\n\n3, A\n\n1, A\n\n\n/select avg(distinct id) over (partition by name)/ from mytable (in \noracle db) yields:\n\n2\n\n2\n\n2\n\n2\n\n10\n\n\nFrom this, it is seen that distinct is taken across all rows in the \npartition.\n\nI also thought of using a subquery approach: /select avg(id) over \n(partition by name) from (select distinct(id), name from mytable)/,\n\nbut this obviously doesn't yield the right answer, because the result should \ncontain the same number of rows as the input. This implies we need to find \nthe partition first and then remove duplicates within the partition.\n\nCan we avoid any ordering/sort until the existing logic finds that a value is in \nthe frame (so as to respect any /order by/ clause given by the user), and once \nit is determined that the tuple is in the frame, skip the tuple if it is a \nduplicate? If the aforementioned approach is right, the question is how do we \ncheck whether it is a duplicate? Should we create a lookup table (as tuples \ncoming to advance_windowaggregate can be in arbitrary order)? 
Or any \nother approach would be better?\n\nAny opinion on this will be appreciated.\n\n-- \nRegards,\nAnkit Kumar Pandey", "msg_date": "Fri, 2 Dec 2022 00:40:46 +0530", "msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Questions regarding distinct operation implementation" }, { "msg_contents": "On Fri, 2 Dec 2022 at 08:10, Ankit Kumar Pandey <itsankitkp@gmail.com> wrote:\n> select avg(distinct id) over (partition by name) from mytable (in oracle db) yields:\n> 2\n> 2\n> 2\n> 2\n> 10\n>\n> From this, it is seen distinct is taken across the all rows in the partition.\n\nDue to the lack of ORDER BY clause, all rows in the partition are in\nthe window frame at once. The question is, what *should* happen if\nyou add an ORDER BY.\n\nLooking at the copy of the standard that I have, I see nothing\nexplicitly mentioned about aggregates with DISTINCT used as window\nfunctions, however, I do see in the Window Function section:\n\n\"The window aggregate functions compute an <aggregate function>\n(COUNT, SUM, AVG, etc.), the same as\na group aggregate function, except that the computation aggregates\nover the window frame of a row rather than\nover a group of a grouped table. The hypothetical set functions are\nnot permitted as window aggregate functions.\"\n\nSo you could deduce that the DISTINCT would also need to be applied\nover the frame too.\n\nThe question is, what do you want to make work? 
If you're not worried\nabout supporting DISTINCT when there is an ORDER BY clause and the\nframe options are effectively ROWS BETWEEN UNBOUNDED PRECEDING AND\nUNBOUNDED FOLLOWING, then it's going to be much easier to make work.\nYou never need to worry about rows dropping out of visibility in the\nframe. Simply all rows in the partition are in the frame.\n\nYou do need to be careful as, if I remember correctly, we do support\nsome non-standard things here. I believe the standard requires an\nORDER BY when specifying frame options. I think we didn't see any\ntechnical reason to apply that limitation, so didn't. That means in\nPostgres, you can do things like:\n\nselect avg(id) over (partition by name ROWS BETWEEN CURRENT ROW AND 3\nFOLLOWING) from mytable;\n\nbut that's unlikely to work on most other databases without adding an ORDER BY.\n\nSo if you are going to limit this to only being supported without an\nORDER BY, then you'll need to ensure that the specified frame options\ndon't cause your code to break. I'm unsure, but this might be a case\nof checking for FRAMEOPTION_NONDEFAULT unless it's\nFRAMEOPTION_START_UNBOUNDED_PRECEDING|FRAMEOPTION_END_UNBOUNDED_FOLLOWING.\nYou'll need to study that a bit more than I just did though.\n\nOne way to make that work might be to add code to\neval_windowaggregates() around the call to advance_windowaggregate(),\nyou can see the row being aggregated is set by:\n\nwinstate->tmpcontext->ecxt_outertuple = agg_row_slot;\n\nwhat you'd need to do here is change the code so that you put all the\nrows to aggregate into a tuplesort then sort them by the distinct\ncolumn and instead, feed the tuplesort rows to\nadvance_windowaggregate(). You'd need to add code similar to what is\nin process_ordered_aggregate_single() in nodeAgg.c to have the\nduplicate consecutive rows skipped.\n\nJust a word of warning on this. This is a hugely complex area of\nPostgres. 
If I was you, I'd make sure and spend quite a bit of time\nreading nodeWindowAgg.c and likely much of nodeAgg.c. Any changes we\naccept in that area are going to have to be very carefully done. Make\nsure you're comfortable with the code before doing too much. It would\nbe very easy to end up with a giant mess if you try to do this without\nfully understanding the implications of your changes. Also, you'll\nneed to show you've not regressed the performance of the existing\nfeatures with the code you've added.\n\nGood luck!\n\nDavid\n\n\n", "msg_date": "Fri, 2 Dec 2022 10:37:33 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Questions regarding distinct operation implementation" }, { "msg_contents": "On Thu, Dec 1, 2022 at 2:37 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n>\n> The question is, what do you want to make work? If you're not worried\n> about supporting DISTINCT when there is an ORDER BY clause and the\n> frame options are effectively ROWS BETWEEN UNBOUNDED PRECEDING AND\n> UNBOUNDED FOLLOWING, then it's going to be much easier to make work.\n> You never need to worry about rows dropping out of visibility in the\n> frame. Simply all rows in the partition are in the frame.\n>\n\nI would definitely want the ability to have the output ordered and distinct\nat the same time.\n\narray_agg(distinct col) over (order by whatever)\n\nConceptually this seems like it can be trivially accomplished with a simple\nlookup table, the key being the distinct column(s) and the value being a\ncounter - with the entry being removed when the counter goes to zero\n(decreases happening each time a row goes out of scope). The main concern,\nI suspect, isn't implementation ability, it is speed and memory consumption.\n\nI would expect the distinct output to be identical to the non-distinct\noutput except for duplicates removed. 
Using array_agg as an example makes\nseeing the distinction quite easy.\n\nThinking over the above a bit more, is something like this possible?\n\narray_agg(distinct col order by col) over (order by whatever)\n\ni.e., can we add order by within the aggregate to control its internal\nordering separately from the ordering needed for the window framing?\n\nDavid J.\n", "msg_date": "Thu, 1 Dec 2022 14:51:58 -0700", "msg_from": "\"David G. 
Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Questions regarding distinct operation implementation" }, { "msg_contents": "On 02/12/22 00:40, Ankit Kumar Pandey wrote:\n>\n>\n> On 25/11/22 11:00, Ankit Kumar Pandey wrote:\n>>\n>> On 25/11/22 02:14, David Rowley wrote:\n>>> On Fri, 25 Nov 2022 at 06:57, Ankit Kumar Pandey \n>>> <itsankitkp@gmail.com> wrote:\n>>>> Please let me know any opinions on this.\n>>> I think if you're planning on working on this then step 1 would have\n>>> to be checking the SQL standard to see which set of rows it asks\n>>> implementations to consider for duplicate checks when deciding if the\n>>> transition should be performed or not.  Having not looked, I don't\n>>> know if this is the entire partition or just the rows in the current\n>>> frame.\n>>>\n>>> Depending on what you want, an alternative today would be to run a\n>>> subquery to uniquify the rows the way you want and then do the window\n>>> function stuff.\n>>>\n>>> David\n>> Thanks David, these are excellent pointers, I will look into SQL \n>> standard first and so on.\n>>\n> Hi,\n>\n> Looking further into it, I am bit clear about expectations of having \n> distinct in Windows Aggregates (although I couldn't got hands on SQL \n> standard as it is not in public domain but distinct in windows \n> aggregate is supported by Oracle and I am using it as reference).\n>\n> For table (mytable):\n>\n> id, name\n>\n> 1, A\n>\n> 1, A\n>\n> 10, B\n>\n> 3, A\n>\n> 1, A\n>\n>\n> /select avg(distinct id) over (partition by name)/ from mytable (in \n> oracle db) yields:\n>\n> 2\n>\n> 2\n>\n> 2\n>\n> 2\n>\n> 10\n>\n>\n> From this, it is seen distinct is taken across the all rows in the \n> partition.\n>\n> I also thought of using a subquery approach: /select avg(id) over \n> (partition by name) from (select distinct(id), name from mytable)/\n>\n> but this obviously doesn't yield right answer because result should \n> contain same number of rows as input. 
This implies we need to find \n> partition first and then remove duplicates within the partition.\n>\n> Can we avoid any ordering/sort until existing logic finds if value is \n> in frame (so as to respect any /order by/ clause given by user), and \n> once it is determined that tuple is in frame, skip the tuple if it is \n> a duplicate? If aforementioned approach is right, question is how do \n> we check if it is duplicate? Should we create a lookup table (as \n> tuples coming to advance_windowaggregate can be in arbitrary order)? \n> Or any other approach would be better?\n>\n> Any opinion on this will be appreciated.\n>\n> -- \n> Regards,\n> Ankit Kumar Pandey\n\nHi,\n\nI am still looking at this but unable to move ahead as I am not able to \nuse prior use-cases (normal aggregates) to implement distinct in window \nfunction because they both differ in design (and window function is bit \nunique in general). One approach (among others) that I thought was that \nduring spool_tuples, rescan tuplestore and add new tuples only if they \nare not already present. This is not very efficient because of multiple \nread operation on tuplestore, only for checking if tuple already exists \nand other issues (like tuplestore_in_memory forcing entire partition to \nget spooled in one go) etc.\n\nAny ideas will be much appreciated.\n\nThanks.\n\n-- \nRegards,\nAnkit Kumar Pandey\n\n\n\n\n\n\n\n\nOn 02/12/22 00:40, Ankit Kumar Pandey\n wrote:\n\n\n\n\n\nOn 25/11/22 11:00, Ankit Kumar Pandey\n wrote:\n\n \n On 25/11/22 02:14, David Rowley wrote: \nOn Fri, 25 Nov 2022 at 06:57, Ankit\n Kumar Pandey <itsankitkp@gmail.com>\n wrote: \nPlease let me know any opinions on\n this. \n\n I think if you're planning on working on this then step 1\n would have \n to be checking the SQL standard to see which set of rows it\n asks \n implementations to consider for duplicate checks when deciding\n if the \n transition should be performed or not.  
Having not looked, I\n don't \n know if this is the entire partition or just the rows in the\n current \n frame. \n\n Depending on what you want, an alternative today would be to\n run a \n subquery to uniquify the rows the way you want and then do the\n window \n function stuff. \n\n David \n\n Thanks David, these are excellent pointers, I will look into SQL\n standard first and so on. \n\n\nHi,\nLooking further into it, I am bit clear about expectations of\n having distinct in Windows Aggregates (although I couldn't got\n hands on SQL standard as it is not in public domain but distinct\n in windows aggregate is supported by Oracle and I am using it as\n reference).\nFor table (mytable):\nid, name\n1, A\n1, A\n10, B\n3, A\n1, A\n\n\nselect avg(distinct id) over (partition by name) from\n mytable (in oracle db) yields:\n2\n2\n2\n2\n10\n\n\nFrom this, it is seen distinct is taken across the all rows in\n the partition.\nI also thought of using a subquery approach: select avg(id)\n over (partition by name) from (select distinct(id), name from\n mytable)\nbut this obviously doesn't yield right answer because result\n should contain same number of rows as input. This implies we\n need to find partition first and then remove duplicates within\n the partition.\n\nCan we avoid any ordering/sort until existing logic finds if\n value is in frame (so as to respect any order by clause\n given by user), and once it is determined that tuple is in\n frame, skip the tuple if it is a duplicate? If aforementioned\n approach is right, question is how do we check if it is\n duplicate? Should we create a lookup table (as tuples coming to\n advance_windowaggregate can be in arbitrary order)? Or any other\n approach would be better?\n\nAny opinion on this will be appreciated. 
\n\n-- \nRegards,\nAnkit Kumar Pandey\n\nHi,\nI am still looking at this but unable to move ahead as I am not\n able to use prior use-cases (normal aggregates) to implement\n distinct in window function because they both differ in design\n (and window function is bit unique in general). One approach\n (among others) that I thought was that during spool_tuples, rescan\n tuplestore and add new tuples only if they are not already\n present. This is not very efficient because of multiple read\n operation on tuplestore, only for checking if tuple already exists\n and other issues (like tuplestore_in_memory forcing entire\n partition to get spooled in one go) etc.\n\nAny ideas will be much appreciated.\nThanks.\n\n-- \nRegards,\nAnkit Kumar Pandey", "msg_date": "Sat, 3 Dec 2022 01:40:01 +0530", "msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Questions regarding distinct operation implementation" }, { "msg_contents": "\nOn 02/12/22 03:07, David Rowley wrote:\n> On Fri, 2 Dec 2022 at 08:10, Ankit Kumar Pandey <itsankitkp@gmail.com> wrote:\n>> select avg(distinct id) over (partition by name) from mytable (in oracle db) yields:\n>> 2\n>> 2\n>> 2\n>> 2\n>> 10\n>>\n>> From this, it is seen distinct is taken across the all rows in the partition.\n> Due to the lack of ORDER BY clause, all rows in the partition are in\n> the window frame at once. The question is, what *should* happen if\n> you add an ORDER BY.\n>\n> Looking at the copy of the standard that I have, I see nothing\n> explicitly mentioned about aggregates with DISTINCT used as window\n> functions, however, I do see in the Window Function section:\n>\n> \"The window aggregate functions compute an <aggregate function>\n> (COUNT, SUM, AVG, etc.), the same as\n> a group aggregate function, except that the computation aggregates\n> over the window frame of a row rather than\n> over a group of a grouped table. 
The hypothetical set functions are\n> not permitted as window aggregate functions.\"\n>\n> So you could deduce that the DISTINCT would also need to be applied\n> over the frame too.\n>\n> The question is, what do you want to make work? If you're not worried\n> about supporting DISTINCT when there is an ORDER BY clause and the\n> frame options are effectively ROWS BETWEEN UNBOUNDED PRECEDING AND\n> UNBOUNDED FOLLOWING, then it's going to be much easier to make work.\n> You never need to worry about rows dropping out of visibility in the\n> frame. Simply all rows in the partition are in the frame.\n>\n> You do need to be careful as, if I remember correctly, we do support\n> some non-standard things here. I believe the standard requires an\n> ORDER BY when specifying frame options. I think we didn't see any\n> technical reason to apply that limitation, so didn't. That means in\n> Postgres, you can do things like:\n>\n> select avg(id) over (partition by name ROWS BETWEEN CURRENT ROW AND 3\n> FOLLOWING) from mytable;\n>\n> but that's unlikely to work on most other databases without adding an ORDER BY.\n>\n> So if you are going to limit this to only being supported without an\n> ORDER BY, then you'll need to ensure that the specified frame options\n> don't cause your code to break. I'm unsure, but this might be a case\n> of checking for FRAMEOPTION_NONDEFAULT unless it's\n> FRAMEOPTION_START_UNBOUNDED_PRECEDING|FRAMEOPTION_END_UNBOUNDED_FOLLOWING.\n> You'll need to study that a bit more than I just did though.\n>\n> One way to make that work might be to add code to\n> eval_windowaggregates() around the call to advance_windowaggregate(),\n> you can see the row being aggregated is set by:\n>\n> winstate->tmpcontext->ecxt_outertuple = agg_row_slot;\n>\n> what you'd need to do here is change the code so that you put all the\n> rows to aggregate into a tuplesort then sort them by the distinct\n> column and instead, feed the tuplesort rows to\n> advance_windowaggregate(). 
You'd need to add code similar to what is\n> in process_ordered_aggregate_single() in nodeAgg.c to have the\n> duplicate consecutive rows skipped.\n>\n> Just a word of warning on this. This is a hugely complex area of\n> Postgres. If I was you, I'd make sure and spend quite a bit of time\n> reading nodeWindowAgg.c and likely much of nodeAgg.c. Any changes we\n> accept in that area are going to have to be very carefully done. Make\n> sure you're comfortable with the code before doing too much. It would\n> be very easy to end up with a giant mess if you try to do this without\n> fully understanding the implications of your changes. Also, you'll\n> need to show you've not regressed the performance of the existing\n> features with the code you've added.\n>\n> Good luck!\n>\n> David\n\nThanks a lot David, this is of an immense help. I will go through \nmentioned pointers, biggest being that this is complex piece and will \ntake its due course.\n\nThanks again\n\n-- \nRegards,\nAnkit Kumar Pandey\n\n\n\n", "msg_date": "Sat, 3 Dec 2022 02:03:16 +0530", "msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Questions regarding distinct operation implementation" }, { "msg_contents": "\nOn 02/12/22 03:21, David G. Johnston wrote:\n>  The main concern, I suspect, isn't implementation ability, it is \n> speed and memory consumption.\n\nHi David,\n\nShouldn't this be an acceptable tradeoff if someone wants to perform \nextra operation in plain old aggregates? 
Although I am not sure how much \nthis extra memory and compute usage is considered acceptable.\n\n-- \nRegards,\nAnkit Kumar Pandey\n\n\n\n", "msg_date": "Sat, 3 Dec 2022 02:12:55 +0530", "msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Questions regarding distinct operation implementation" }, { "msg_contents": "On Sat, 3 Dec 2022 at 20:36, Ankit Kumar Pandey <itsankitkp@gmail.com> wrote:\n> Shouldn't this be an acceptable tradeoff if someone wants to perform\n> extra operation in plain old aggregates? 
\nI am not sure if I understand this, does it means at given time, do allocation for only one distinct aggregate\ninstead of all, in case of multiple aggregates using distinct?\n\n-- \nRegards,\nAnkit Kumar Pandey", "msg_date": "Sun, 4 Dec 2022 01:27:40 +0530", "msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Questions regarding distinct operation implementation" }, { "msg_contents": "On Sun, 4 Dec 2022 at 08:57, Ankit Kumar Pandey <itsankitkp@gmail.com> wrote:\n> On 04/12/22 00:50, David Rowley wrote:\n>> providing you can code it in such a way that you only\n>> allocate one of these at once, i.e not allocate one per DISTINCT\n>> aggregate all at once.\n>\n> I am not sure if I understand this, does it means at given time, do allocation for only one distinct aggregate\n> instead of all, in case of multiple aggregates using distinct?\n\nIf you were to limit this to only working with the query you mentioned\nin [1], i.e PARTITION BY without an ORDER BY, then you only need to\naggregate once per partition per aggregate and you only need to do\nthat once all of the tuples for the partition are in the tuplestore.\nIt seems to me like you could add all the records to a tuplesort and\nthen 
sort by the DISTINCT column then aggregate everything except for\nconsecutive duplicates. You can then aggregate any other aggregates\nwhich share the same DISTINCT column, otherwise, you just destroy the\ntuplesort and rinse and repeat for the next aggregate.\n\nTo make this work when rows can exit the window frame seems\nsignificantly harder. Likely a hash table would be a better data\nstructure to remove records from, but then how are you going to spill\nthe hash table to disk when it reaches work_mem? As David J mentions,\nit seems like you'd need a hash table with a counter to track how many\ntimes a given value appears and only remove it from the table once\nthat counter reaches 0. Unsure how you're going to constrain that to\nnot use more than work_mem though.\n\nAre there any other databases which support DISTINCT window aggregate\nwith an ORDER BY in the window clause?\n\nDavid\n\n[1] https://postgr.es/m/b10d2b78-a07e-e520-0cfc-e19f0ec685b2@gmail.com\n\n\n", "msg_date": "Sun, 4 Dec 2022 09:57:44 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Questions regarding distinct operation implementation" }, { "msg_contents": "\nOn 04/12/22 02:27, David Rowley wrote:\n>\n> To make this work when rows can exit the window frame seems\n> significantly harder. Likely a hash table would be a better data\n> structure to remove records from, but then how are you going to spill\n> the hash table to disk when it reaches work_mem? As David J mentions,\n> it seems like you'd need a hash table with a counter to track how many\n> times a given value appears and only remove it from the table once\n> that counter reaches 0. 
Unsure how you're going to constrain that to\n> not use more than work_mem though.\n>\nInteresting problem. Hashtables created in normal aggregates (AGG_HASHED \nmode) may provide some reference, as they have hashagg_spill_tuple, but I \nam not sure of any prior implementation of a hashtable with a counter and \nspill. My major concern is: if we go through the tuplesort route (without the \norder by case), would we get handicapped in the future if we want order by or \nmore features?\n\n> Are there any other databases which support DISTINCT window aggregate\n> with an ORDER BY in the window clause?\n>\nOracle supports distinct window aggregates, albeit without an order by \nclause. The rest of the databases I've tried (mysql/sqlserver express) \ndon't even support distinct in window aggregates, so those get ruled out \nas well.\n\n> If you were to limit this to only working with the query you mentioned\n> in [1], i.e PARTITION BY without an ORDER BY, then you only need to\n> aggregate once per partition per aggregate and you only need to do\n> that once all of the tuples for the partition are in the tuplestore.\n> It seems to me like you could add all the records to a tuplesort and\n> then sort by the DISTINCT column then aggregate everything except for\n> consecutive duplicates. 
You can then aggregate any other aggregates\n> which share the same DISTINCT column, otherwise, you just destroy the\n> tuplesort and rinse and repeat for the next aggregate.\nThis looks like way to go that would ensure main use case of portability \nfrom Oracle.\n\n> If you were to limit this to only working with the query you mentioned\n> in [1], i.e PARTITION BY without an ORDER BY,\n\nI need to dig deeper into order by case.\n\n-- \nRegards,\nAnkit Kumar Pandey\n\n\n\n", "msg_date": "Sun, 4 Dec 2022 19:04:24 +0530", "msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Questions regarding distinct operation implementation" }, { "msg_contents": "On 12/4/22 14:34, Ankit Kumar Pandey wrote:\n> \n> On 04/12/22 02:27, David Rowley wrote:\n>>\n> \n>> If you were to limit this to only working with the query you mentioned\n>> in [1], i.e PARTITION BY without an ORDER BY, then you only need to\n>> aggregate once per partition per aggregate and you only need to do\n>> that once all of the tuples for the partition are in the tuplestore.\n>> It seems to me like you could add all the records to a tuplesort and\n>> then sort by the DISTINCT column then aggregate everything except for\n>> consecutive duplicates. 
You can then aggregate any other aggregates\n>> which share the same DISTINCT column, otherwise, you just destroy the\n>> tuplesort and rinse and repeat for the next aggregate.\n >\n> This looks like way to go that would ensure main use case of portability \n> from Oracle.\n\nThe goal should not be portability from Oracle, but adherence to the \nstandard.\n-- \nVik Fearing\n\n\n\n", "msg_date": "Sun, 4 Dec 2022 17:55:06 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: Questions regarding distinct operation implementation" }, { "msg_contents": "\nOn 04/12/22 22:25, Vik Fearing wrote:\n> On 12/4/22 14:34, Ankit Kumar Pandey wrote:\n>\n>> This looks like way to go that would ensure main use case of \n>> portability from Oracle.\n>\n> The goal should not be portability from Oracle, but adherence to the \n> standard.\nYes, Vik. You are right. Wrong remark from my side.\n\n-- \nRegards,\nAnkit Kumar Pandey\n\n\n\n", "msg_date": "Sun, 4 Dec 2022 22:39:05 +0530", "msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Questions regarding distinct operation implementation" }, { "msg_contents": "On Mon, 5 Dec 2022 at 02:34, Ankit Kumar Pandey <itsankitkp@gmail.com> wrote:\n> Interesting problem, Hashtables created in normal aggregates (AGG_HASHED\n> mode) may provide some reference as they have hashagg_spill_tuple but I\n> am not sure of any prior implementation of hashtable with counter and\n> spill.\n\nI'm unsure if there's such guidance that can be gleaned from studying\nnodeAgg.c. IIRC, that works by deciding up-front how many partitions\nwe're going to split the hash key space into and then writing out\ntuples to files based on \"hashkey MOD number-of-partitions\". 
At the\nend of that, you can just aggregate tuples one partition at a time.\nAll groups are in the same file/partition.\n\nThe reason this does not seem useful to your case is that you need to\nbe able to quickly look up a given Datum or set of Datums to check if\nthey are unique or not. For that, you'd need to reload the hash table\nevery time your lookup lands on a different partition of the hashkey\nspace. I fail to see how that could ever be fast unless there happened\nto only be 1 partition. To make that worse, when a tuple goes out of\nthe frame and the counter that's tracking how many times the Datum(s)\nappeared reaches 0, you need to write the entire file out again minus\nthat tuple. Let's say you're window function is on a column which is\ndistinct or *very* close to it and the given window is moving the\nwindow frame forward 1 tuple per input tuple. If each subsequent Datum\nhashes to a different partition, then you're going to need to load the\nfile for that hash key space to check if that Datum has already been\nseen, then you're going to have to evict that tuple from the file as\nit moves out of frame, so that means reading and writing that entire\nfile per input tuple consumed. That'll perform very poorly! It's\npossible that you could maybe speed it up a bit with some lossy hash\ntable that sits atop of this can only tell you if the given key\ndefinitely does *not* exists. You'd then be able to just write that\ntuple out to the partition and you'd not have to read or write out the\nfile again. It's going to slow down to a crawl when the lossy table\ncontains too many false positives though.\n\n> Major concern is, if we go through tuplesort route (without order\n> by case), would we get handicapped in future if we want order by or more\n> features?\n\nYeah, deciding that before you go down one of the paths is going to be\nimportant. 
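For concreteness, the value-to-counter bookkeeping being discussed can be sketched in a few lines of Python. This is purely illustrative (invented names, a plain in-memory dict, and a SUM-style invertible aggregate) — it deliberately ignores the hard part described above, namely keeping the table within work_mem and spilling to disk:

```python
from collections import defaultdict

def moving_distinct_sums(values, frame_size):
    """SUM(DISTINCT v) over a trailing frame of frame_size rows."""
    counts = defaultdict(int)  # value -> occurrences currently in frame
    distinct_sum = 0           # running aggregate over distinct values
    out = []
    for i, v in enumerate(values):
        if counts[v] == 0:          # first copy entering the frame
            distinct_sum += v       # ...contributes to the aggregate
        counts[v] += 1
        if i >= frame_size:         # a row exits the frame
            old = values[i - frame_size]
            counts[old] -= 1
            if counts[old] == 0:    # counter reached 0: value is gone
                del counts[old]
                distinct_sum -= old
        if i >= frame_size - 1:     # frame is full: emit a result
            out.append(distinct_sum)
    return out
```

Evicting a value only when its counter reaches zero is what makes duplicates inside the frame safe; the open question in this thread is how to do the same bookkeeping once the table has to spill to disk.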
I imagine the reason that you've not found another database\nthat supports DISTINCT window functions in a window with an ORDER BY\nclause is that it's very hard to make it in a way where it performs\nwell in all cases.\n\nMaybe another way to go about it that will give you less lock-in if we\ndecide to make ORDER BY work later would be to design some new\ntuple-store-like data structure that can be defined with a lookup key\nso you could ask it if a given key is stored and it would return the\nanswer quickly without having to trawl through all stored tuples. It\nwould also need to support the same positional lookups as tuplestore\ndoes today so that all evicting-tuples-from-the-window-frame stuff\nworks as it does today. If you made something like that, then the\nchanges required in nodeWindowAgg.c would be significantly reduced.\nYou'd also just have 1 work_mem limit to abide by instead of having to\nconsider sharing that between a tuplestore and a hashtable/tuplesort.\n\nMaybe as step 1, you could invent keyedtuplestore.c and consume\ntuplestore's functions but layer on the lossy hashtable idea that I\nmentioned above. That'd have to be more than just a bloom filter as\nyou need a payload of the count of tuples matching the given hashkey\nMOD nbuckets. If you then had a function like\nkeyedtuplestore_key_definately_does_not_exist() (can't think of a\nbetter name now) then you can just lookup the lossy table and if there\nare 0 tuples at that lossy bucket, then you can\nkeyedtuplestore_puttupleslot() from nodeWindowAgg.c.\nkeyedtuplestore_key_definately_does_not_exist() would have to work\nmuch harder if there were >0 tuples with the same lossy hashkey. You'd\nneed to trawl through the tuples and check each one. Perhaps that\ncould be tuned a bit so if you get too many collisions then the lossy\ntable could be rehashed to a larger size. 
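As a rough illustration of that lossy-counting idea (hypothetical names and sizing — the real thing would be C inside a keyedtuplestore, and the rehashing and work_mem accounting are omitted here):

```python
class LossyCounter:
    """Counts per hash(key) MOD nbuckets; can prove absence, not presence."""

    def __init__(self, nbuckets=1024):
        self.buckets = [0] * nbuckets

    def _slot(self, key):
        return hash(key) % len(self.buckets)

    def add(self, key):              # tuple enters the window frame
        self.buckets[self._slot(key)] += 1

    def remove(self, key):           # tuple exits the window frame
        self.buckets[self._slot(key)] -= 1

    def definitely_does_not_exist(self, key):
        # A zero bucket proves no stored tuple hashes here; a nonzero
        # bucket only means "maybe", so the caller must still scan the
        # actual stored tuples that collide in this bucket.
        return self.buckets[self._slot(key)] == 0
```

With too few buckets the "maybe" answers dominate and every lookup degenerates into scanning tuples, so the filter only pays off while it can stay sparse.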
It's going to fall flat on\nits face, performance-wise, when the hash table can't be made larger\ndue to work_mem constraints.\n\nAnyway, that's a lot of only partially thought-through ideas above. If\nyou're working on a patch like this, you should expect to have to\nrewrite it a dozen or 2 times as new ideas arrive. If you're good at\nusing the fact that the new patch is better than the old one as\nmotivation to continue, then you're onto an attitude that is\nPostgreSQL-community-proof :-) (thankfully) we're often not good at\n\"let's just commit it now and make it better later\".\n\nDavid\n\n\n", "msg_date": "Tue, 6 Dec 2022 11:24:32 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Questions regarding distinct operation implementation" } ]